Let's assume I want to add two decimal numbers and print the result on screen, for example 12345678 + 343567. I know the work is done on values held in registers, built from logic gates such as "AND", but my question is: how does the computer know what this number (12345678) looks like in binary? For example, on my microcontroller it takes 1 clock cycle (135 ns) to load an 8-bit value into a register, and the same amount of time to add R1 to R2. So how is it possible that this is done so quickly, converting an entered decimal number to its binary form and storing it in a register in 1 clock cycle?
Also, if the CPU uses IEEE 754 notation it has to do many more operations. This may be an easy and silly question, but I cannot understand it. Can someone please explain to me how the computer knows so quickly which logic gates to drive, and which not to, in order to build the binary representation of a decimal number?
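For context, when a decimal number really does arrive at run time (e.g. typed in as text), the conversion to binary is not done in one cycle; it is an ordinary loop over the digits. Below is a minimal sketch in C, assuming ASCII input; the helper name `parse_decimal` is made up for illustration and is roughly what library routines like `atoi` do internally.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: convert a decimal text string to a binary integer.
 * Each digit costs a multiply-by-10 and an add, so a number like
 * "12345678" takes several instructions per character, not one cycle. */
static uint32_t parse_decimal(const char *s)
{
    uint32_t value = 0;
    while (*s >= '0' && *s <= '9') {
        value = value * 10 + (uint32_t)(*s - '0');  /* shift left one decimal place, add digit */
        s++;
    }
    return value;
}

int main(void)
{
    uint32_t a = parse_decimal("12345678");
    uint32_t b = parse_decimal("343567");
    printf("%" PRIu32 " + %" PRIu32 " = %" PRIu32 "\n", a, b, a + b);  /* printf converts back to decimal text */
    return 0;
}
```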
That is a load-literal-value instruction. The actual machine instruction (as stored in the computer's memory) contains the value in binary form. Only when you write out the instruction in mnemonic form is the value in decimal, but it could also be expressed in any radix you choose (e.g. hexadecimal). Any conversion of the source code is done by the assembler. Loading a literal value is not "input"; it is a known value already in memory as a constant. So yes, you are missing something important. – sawdust Jul 26 '19 at 01:32
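To illustrate the comment's point that a literal is already binary by the time the program runs, here is a small sketch (my own example, not from the original post) that dumps the bytes of the constant 12345678 exactly as the compiler stored them at build time:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* The compiler/assembler converted the decimal text "12345678"
     * to binary when the program was built; at run time only the
     * bits exist, so no conversion happens when it is loaded. */
    uint32_t x = 12345678;                 /* stored as 0x00BC614E */
    const uint8_t *p = (const uint8_t *)&x;

    for (size_t i = 0; i < sizeof x; i++)
        printf("byte %zu = 0x%02X\n", i, p[i]);  /* byte order depends on endianness */

    return 0;
}
```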