oktalist said:
f(x) = 1 - 10^(-x)
f(0) = 0
f(1) = 0.9
f(2) = 0.99
f(3) = 0.999
lim (x→∞) f(x) = 1
And the hidden zero thing is rubbish.
Your function only approaches 1 in the limit; it never actually gets there. This is because n/∞ only approaches zero in the limit; it never actually gets there either. For convenience we often ignore that subtlety, because it is below our error tolerance, but when we need to be more mathematically precise we cannot make that assertion.
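Here's a quick numerical sketch of that point (plain Python; the tolerance and the sample values of x are arbitrary choices on my part):

```python
# f(x) = 1 - 10^(-x): the gap to 1 keeps shrinking and eventually drops below
# whatever error tolerance we pick, but it is nonzero at every finite x.
# (The loop stops at 15 because ordinary floats round the gap away soon after.)
tolerance = 1e-9

for x in range(0, 16, 3):
    f = 1 - 10 ** (-x)
    gap = abs(1 - f)
    print(x, f, gap, gap < tolerance)
```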
oktalist said:
It's not a trick. 0.999... = 1.
Yes, it is a trick because it is ignoring the underlying cardinality.
oktalist said:
Infinity doesn't really have a size, as such.
Infinity very much has a size, and that size can be different in different cases. That is why you can take the limit of a numerator and a denominator that both go to infinity and yet obtain a finite ratio: they are different-sized infinities.
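For reference, here is the kind of ratio I mean, with the numerator and denominator each going to infinity while the quotient comes out finite (the particular functions are just an arbitrary illustration, checked with sympy):

```python
import sympy as sp

x = sp.symbols("x")
num = 2 * x + 1
den = x + 3

# Each piece diverges on its own...
print(sp.limit(num, x, sp.oo), sp.limit(den, x, sp.oo))  # oo oo
# ...but their ratio has a finite limit.
print(sp.limit(num / den, x, sp.oo))                     # 2
```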
oktalist said:
There is no "at the end". Recurring decimals are endless.
It doesn't matter how endless they are; you can always add one more digit to make a larger infinity. That's what happens when you multiply by 10.
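You can watch the shift happen for finite decimals with Python's decimal module, which keeps track of digit positions (the loop range is just for illustration):

```python
from decimal import Decimal

for n in range(1, 6):
    x = Decimal("0." + "9" * n)   # a finite run of n nines
    # Multiplying by 10 shifts each digit up one place; Decimal shows the
    # vacated position at the end as a trailing 0 (e.g. 0.999 -> 9.990).
    print(x, 10 * x)
```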
oktalist said:
So 1 ≠ 1.0 because they have a different number of digits? ("hidden zero")
Numerically, yes, that is precisely true. The number 1 does not have the same precision as 1.0, and the representation 1.0 is, at best, an approximation of the number 1. As I said above, for most practical purposes the difference is below our error tolerance, so we don't care. But when we need to be more mathematically precise, we do care.
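For what it's worth, that value-versus-recorded-precision distinction is exactly what Python's decimal module keeps track of: the two forms compare equal as values but carry different exponents, i.e. different recorded precision.

```python
from decimal import Decimal

print(Decimal("1") == Decimal("1.0"))  # True: equal in value
print(Decimal("1").as_tuple())         # DecimalTuple(sign=0, digits=(1,), exponent=0)
print(Decimal("1.0").as_tuple())       # DecimalTuple(sign=0, digits=(1, 0), exponent=-1)
```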
oktalist said:
As a result, 10*x - x =/= 9.0000rep; there is a hidden 1 all the way at the last digit.
There is no "last digit".
EDIT: I should have pointed out that cardinality is the size of a set, and it can be used to deal with infinite sets like 0.999rep.
0.999... is not a set. It's a number.
The digits that we use to represent numbers are sets. Each element in the set represents a particular fraction multiplied by some factor, and the arithmetic operations with which we are familiar perform transformations on those elements, which can themselves be sets. When you multiply x = 0.999rep by 10, you shift all the elements upward and then have to add an extra empty-set element at the very end to represent the digit that was vacated by the multiplication. Now, when you do the subtraction, the very last element of 0.999rep is matched against an empty-set element rather than another 9, which is why you don't get exactly 9 but something infinitesimally smaller than 9.
This is not necessarily nonsense. It is something that happens precisely because infinity is not the same size everywhere; it doesn't always have the same meaning. It is just like the fact that I cannot label all the real numbers with integers, because there is a greater infinity of real numbers than there is of integers. They are not the same-sized infinities. That is the cardinality difference.
EDIT: Correction. You don't get something infinitesimally larger than 9; you get something infinitesimally smaller than 9.
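Here is the shift-and-subtract described above, run on finite truncations (decimal module again; the loop range is arbitrary). For a finite run of n nines, the last 9 lines up against the vacated position and 10x - x comes out just under 9:

```python
from decimal import Decimal

for n in range(1, 6):
    x = Decimal("0." + "9" * n)       # finite truncation: n nines
    # The shift leaves a trailing 0 in the vacated place, so the subtraction
    # gives 9 - 9*10**(-n): 8.1, 8.91, 8.991, ... always just under 9.
    print(n, x, 10 * x, 10 * x - x)
```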