The latest version of this encoding is [IEEE 754|https://en.wikipedia.org/wiki/IEEE_754], which defines an encoding called floating-point numbers that reserves some bits for the "significand" (the number before the exponent) and other bits for the exponent value. This encoding scheme is not perfect. For example, it is not possible to represent all possible numbers accurately with a fixed number of bits, nor is it possible to represent irrational numbers such as 1/7 accurately. This means that simple real-number arithmetic like 0.1 + 0.2 doesn't produce the expected 0.3, but rather 0.30000000000000004. This level of precision is good enough for applications that only need approximate values, but scientific applications requiring more precision, such as experiments with particle accelerators, which often need hundreds of digits of precision, have required entirely different encodings.
First, 1/7 is not irrational. It is perfectly rational.
Observations
The bigger problem is that neither 0.1 nor 0.2 is represented exactly in systems where the base of the exponent is not commensurate with 10. 1/10 and 2/10 already fail, as do 1/3, 1/5, 1/6, 1/7, and 1/9. This is a big problem for people who use Excel spreadsheets and don't understand why the presumed decimal-currency arithmetic can produce unexpected results. (It gets even more exciting in the representation of date-times, and in why astronomers do it differently even for terrestrial date-times.) Of course, Python, Java, and C Language programmers can encounter the same problems.
So there usually isn't an exact 0.1 or 0.2 there to start with, even if that is what the entered text says, and even though the values often appear to be faithfully represented when the internal value is output to the same precision.
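To make that concrete, here is a quick sketch in Python (chosen only because it makes the exact stored values easy to print; Java and C behave the same way):

    from decimal import Decimal

    print(0.1 + 0.2)          # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)   # False

    # Decimal(float) shows the exact binary64 value behind each literal:
    print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
    print(Decimal(0.2))  # 0.200000000000000011102230246251565404236316680908203125

    # Printed back at the entered precision, both look faithful:
    print(format(0.1, ".1f"), format(0.2, ".1f"))  # 0.1 0.2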
Suggestion
For concreteness, I would talk about binary floating-point and then give examples of the notation where the exponent base is 2. That's so significand and exponent are understood in concrete terms. Then you can illustrate limitations that result from the nature of the exponent and any limitation on the range of significands.
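For example, if the illustrations were done in Python, float.hex() prints a double in exactly those terms, a hexadecimal significand and a base-2 exponent (the p suffix):

    print((0.5).hex())  # 0x1.0000000000000p-1 : exactly 1 * 2**-1
    print((3.0).hex())  # 0x1.8000000000000p+1 : exactly 1.5 * 2**1
    print((0.1).hex())  # 0x1.999999999999ap-4 : the repeating 9s show 1/10 never terminates in base 2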
(PS: The limitations are also relevant in computer graphics and digital typesetting.)
(PPS: Edited to be specific about binary and base 2 because IEEE 754 has a decimal flavor.)
(PPPS: It broke my brain to confirm that 0x0.1p-4 is indeed the value 1/256.)
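For anyone else who wants to check that without breaking their own brain: 0x0.1 is 1/16, and p-4 scales by another 2**-4 = 1/16, so the value is 1/16 * 1/16 = 1/256. In Python, float.fromhex makes the confirmation a one-liner:

    print(float.fromhex("0x0.1p-4"))            # 0.00390625
    print(float.fromhex("0x0.1p-4") == 1/256)   # True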