You don't really need maths to understand this - just some background understanding of concepts like "accuracy" and "infinity"; basically, a conceptual understanding of what "numbers" are.
First step: Accuracy
Numbers are not absolute - ignoring what they are supposed to represent, they are written relative to a "unit", the base (compare: binary base vs. decimal base). Depending on the base, some values can be expressed with *theoretically* ideal accuracy (in practice, that will never work... nothing in practice can be done with perfect accuracy, and nothing needs to be). Values that do not land exactly on the base have to be expressed as fractions... but the problem is that for some values, the expansion repeats - for example, 1/3 in decimal is 0.333... forever, while in base 3 it is simply 0.1. So even that way, you cannot reach that "ideal accuracy"...
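To see this base-dependence concretely, here is a minimal Python sketch (the expand function, its arguments and the digit cutoff are my own illustration, not something established above):

def expand(numerator, denominator, base, max_digits=20):
    # Long division of numerator/denominator in the given base.
    # Returns the digits after the point, and whether the expansion
    # terminated exactly within max_digits.
    digits = []
    remainder = numerator % denominator
    for _ in range(max_digits):
        if remainder == 0:
            return digits, True    # terminated: exact in this base
        remainder *= base
        digits.append(remainder // denominator)
        remainder %= denominator
    return digits, False           # cut off: keeps repeating in this base

print(expand(1, 3, 10))  # 1/3 in decimal: ([3, 3, 3, ...], False) - repeats
print(expand(1, 3, 3))   # 1/3 in base 3:  ([1], True) - exactly 0.1
print(expand(1, 10, 2))  # 1/10 in binary: ([0, 0, 0, 1, 1, 0, 0, 1, ...], False) - repeats

The same value can be exact in one base and endlessly repeating in another - the "problem" sits in the notation, not in the value.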
UNLESS you cheat.
What is infinity? Well, in short, infinity is not a value. Let me repeat this, because this is an annoyingly common myth:
Infinity is not a value. What infinity is, is a looped function... for those who understand a bit about programming, infinity looks a bit like this:
1: n = n + 1
2: GOTO 1
The important thing to notice is that the loop doesn't end, and therefore will never return a value - unless we stop it at some point, which will NOT return infinity, but just the highest precision reached at the point of the break. Or to phrase it more ironically: infinity is a function that never returns infinity, because it never finishes. The "infinite accuracy" is just a theoretical symbol - it has no corresponding phenomenon (this is why mathematical points do not exist).
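For those who prefer a more modern notation, the same idea would look like this in Python (my own restatement of the GOTO snippet above):

def infinity():
    n = 0
    while True:    # the loop never ends...
        n = n + 1
    return n       # ...so this line is never reached, and no value is ever returned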
What does that mean for the 0.99999... = 1 question? Well, notice the "..."? That's just saying "append another 9 forever" (infinity). After n nines you have 1 - 1/10^n, so each extra 9 shrinks the gap to 1 by a factor of 10 - but at no finite step does the gap become zero. In theory, if "forever" could ever be reached (a contradiction), the result would be 1. It just will never happen.
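You can watch the gap shrink with a quick sketch (using Python's exact Fraction type, so no float rounding sneaks into the picture):

from fractions import Fraction

# After n nines, the truncated value is 1 - 1/10^n; the gap to 1
# shrinks tenfold per digit but never reaches 0 at any finite n.
for n in (1, 2, 5, 10):
    value = 1 - Fraction(1, 10**n)
    print(n, float(value), float(1 - value))
    # 1   0.9            0.1
    # 2   0.99           0.01
    # 5   0.99999        1e-05
    # 10  0.9999999999   1e-10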
So, while 0.99999... conceptually lets you express "infinitely close to 1", it logically makes no sense.
But in practice, this will not matter. Because in practical applications, you won't ever precisely hit "1" anyway... whether you use "0.999..." in your design or "1". All that matters in practice is that the accuracy is high enough - and a "0.999..." function that is stopped once it has reached enough precision will do just that.
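Here is what such a stopped function could look like - a minimal sketch, assuming a caller-chosen tolerance (the name almost_one and the tolerance parameter are my own invention):

def almost_one(tolerance):
    # Keep appending 9s - the stopped version of the infinite loop above -
    # until the value is within `tolerance` of 1, then return it.
    # (tolerance should stay well above float precision, e.g. >= 1e-12.)
    value = 0.0
    digit = 0.9
    while 1.0 - value > tolerance:
        value += digit   # append one more 9
        digit /= 10.0
    return value

print(almost_one(1e-6))  # something like 0.9999999 - "close enough" by the chosen tolerance

Whether the function conceptually computes "0.999..." or "1" makes no difference to the caller: both stop at the same finite value once the tolerance is met.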