Sunday, November 12, 2006

0.000... > 0

When I was in high school, I learned that 0.999... = 1. I found it shocking at first, but after thinking about it, I realized that the proof was airtight. But recently, the "controversy" has reared its head again on the internet--here, here, and here (as a poll no less, since the best way to find mathematical truths is by quorum).

At first I read the threads with amusement, but gradually the counter-arguments began to convert me. I now realize not only that 0.999... ≠ 1, but also that 0.000... ≠ 0. It simply follows from 1 − 0.999... = 0.000...: since 0.999... ≠ 1, then 0.000... ≠ 0. And furthermore, all the brilliant proofs for the former also apply to the latter.

I have assembled below a list of said proofs which I've slightly modified to prove that 0.000... > 0. Enjoy!

I now understand how this conclusion is reached, but unlike what the article suggests, I have no problem thinking in the infinite. I have no problem with the 'concept' of 0.000~ as a forever continuing sequence of digits. I accept that for all practical purposes 0.000~ might as well be 0 and that math solutions calculate it to be 0. I also accept that it is impossible to have 0.000~ of anything (you cannot hold an infinity). But this does not stop 0.000~ (as a logical concept) forever being > 0.

On to the main issue: 0.0000000~ infinite 0s is NOT equal to 0, because 0.0000000~ infinite 0s is not a number. The concept of an infinite number of 0s is meaningless (or at least ill-defined) in this context because infinity is not a number. It is more of a process than anything else, a notion of never quite finishing something.
However, we can talk intelligently about a sequence:
{.0, .00, .000, ... }
in which the nth term is equal to sum(0/(10^i), i=1..n). We can examine its behavior as n tends to infinity.
It just so happens that this sequence behaves nicely enough that we can tell how it will behave in the long term. It will converge to 0. Give me a tolerance, and I can find you a term in the sequence which is within this tolerance of 0, as are all subsequent terms in the sequence.
The limit is equal to 0, but the sequence is not. A sequence is not a number, and cannot be equated to one.
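(For reference, that tolerance argument can be written out explicitly; the ε notation here is mine, not the commenter's. The nth term of the sequence is

0/10 + 0/100 + ... + 0/10^n = 0

so for any tolerance ε > 0, every term from n = 1 onward satisfies |term − 0| = 0 < ε. The limit of the sequence {.0, .00, .000, ...} is therefore exactly 0.)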

We hold 1/3 = 0.333~
but since 0.333~ − 0.333~ = 0.000~, and 0.000~ ≠ 0.0, and 1/3 − 1/3 = 0/1, then surely 0.333~ ≠ 1/3.
Confusing fractions and decimals just highlights the failings of decimal math. 0.000~ does not equal 0.0. If it did, 0.000~ would simply not exist as a notion. Its very existence speaks of a continually present piece, the very piece that would not render it 0.0. It keeps approaching emptiness by continually adding another decimal place populated by a 0, which does nothing to diminish the fact that you need to add yet more to it to make it the true 0.0, and so on to infinity.
There is obviously an error in the assumption that 1/3 = 0.333~, or else this highlights the fact that decimal cannot render 1/3 accurately, because 0.000~ ≠ 0.0.
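(For reference, the subtraction that comment leans on, written out; the side-by-side layout is mine:

0.333333... − 0.333333... = 0.000000...

while on the fraction side, 1/3 − 1/3 = 0/1 = 0. The two results agree precisely when 0.000... = 0, which is the claim under dispute.)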

Ah, I see the problem... It's just a rounding error built into the nature of decimal math. There is no easy way to represent a value that is halfway between 0.000~ and 0.0 in decimal, because the math isn't set up to deal with that. Thus, when everything shakes out, the rounding error occurs (the apparent disparity between fractions and decimals).

No, it does not. By its very nature, 0.000000000000rec is always just slightly greater than 0.0, thus they are not equal.
But for practical purposes it is safe to conclude equivalency, as long as you remember that they are not in reality equivalent.

0.00000~ is infinitely close to 0.
For practical purposes (and mathematically) it is 0.
But is it really the same as 0?
I don't know.

0.00000~ is not by definition equal to 0. This only works in certain fields of numbers.

What worries me about this proof is that it assumes that 0.0000~ can sensibly be multiplied by 10 to give 00.0000~ with the same number of 0s after the decimal point. Surely this is cheating? In effect, an extra 0 has been sneaked in, so that when the lower number is subtracted, the 0s disappear.
The other problem I have is that no matter how many 0s there are after the decimal point, adding an extra 0 only ever takes you 0/10 of the remaining distance towards unity... so even an infinite number of 0s will still leave you with a smidgen, albeit one that is infinitely small (still a smidgen nevertheless).
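(For reference, the proof being objected to here is the standard multiply-by-10 argument, transposed from the 0.999... case. Written out with x = 0.000...:

x = 0.000...
10x = 00.000... = 0.000...
10x − x = 0.000... − 0.000... = 0
9x = 0
x = 0.)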

In reality, I think 0.0..recurring is 0.
But if the 'concept' of infinity exists, then as a 'concept' .0 recurring is not 0.
From what I know, the sum-to-infinity formula was meant to bridge the concept of infinity into reality (to make it practical), that is, to provide limits.
It's like the "if I draw one line that is 6 inches and another that is 12, conceptually they are made up of the same number of infinitesimally small points" argument, but these 'points' don't actually exist in reality.
I forget the name of the guy who came up with the hare and tortoise analogy (Zeno), about how the hare would not be able to beat the tortoise who had a head start, as the hare had to pass an infinite number of infinitesimally small points before he'd even reach the tortoise.
He used that as 'proof' that reality didn't 'exist' rather than what was 'obvious' to me (when I heard it) - that infinity didn't exist in reality.
So my conclusion is that 0.0 recurring is conceptually the infinitesimally small value numerically after the value 0. (If anyone disagrees, then what is the closest value to 0 that isn't 0 and is greater than 0, mathematically?)
In reality, it is 0 due to requirements of limits.
Can anyone prove the sum to infinity formula from 'first principles'?
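(Since the commenter asks: the sum-to-infinity formula can be derived from first principles for a geometric series with first term a and common ratio r, |r| < 1. This is the standard textbook derivation, not anything specific to this thread:

S_n = a + ar + ar^2 + ... + ar^(n−1)
rS_n = ar + ar^2 + ... + ar^n
S_n − rS_n = a − ar^n
S_n = a(1 − r^n)/(1 − r)

Since r^n → 0 as n → ∞ when |r| < 1, S_n → a/(1 − r). For the series 0/10 + 0/100 + 0/1000 + ..., a = 0, so the sum to infinity is 0.)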

Okay, non-math-geek, here. Isn't there some difference between a number that can be expressed with a single digit and one that requires an INFINITE number of symbols to name it? I've always imagined that infinity stretches out on either side of the number line, but also dips down between all the integers. Isn't .0000etc in one of those infinite dips?

Haha not only are there holes in your logic, but there are holes in your mathematics.
First of all, by definition the number .00000000... cannot and never will be an integer. An integer is a whole number. .00000000... is not, obviously, hence the ...
The ... is also a sad attempt at recreating the concept of infinity. I only say concept because you can't actually represent infinity on a piece of paper, except by the symbol ∞. I found a few definitions of infinity, most of which sound like this: "that which is free from any possible limitation." What is a number line? A limitation. For a concrete number, which .0000000... is not. (Because it's continuing infinitely, no?)
Also, by your definition, an irrational number is a number that cannot be accurately portrayed as a fraction. Show me the one fraction (not addition of infinite fractions) that can represent .00000000...
You can't, can you?
Additionally, all of your calculations have infinitely repeating decimals which you very kindly shortened up for us (which you can't do, because again, you can't represent the concept of infinity on paper or even in HTML). If you had stopped the numbers where you did, the numbers would have rounded and the calculation would indeed equal 0.
Bottom line is, you will never EVER get 0/1 to equal .0000000... You people think you can hide behind elementary algebra to fool everyone, but in reality, you're only fooling yourselves. Infinity: The state or quality of being infinite, unlimited by space or time, without end, without beginning or end. Not even your silly blog can refute that.

When you write out .00000000... you are giving it a limit. Once your fingers stopped typing 0s and started typing periods, you gave infinity a limit. At no time did any of your equations include ∞ as a term.
In any case, Dr. Math, a person who agrees with your .000000 repeating nonsense, also contradicts himself on the same website: "The very sentence "1/infinity = 0" has no meaning. Why? Because "infinity" is a concept, NOT a number. It is a concept that means "limitlessness." As such, it cannot be used with any mathematical operators. The symbols of +, -, x, and / are arithmetic operators, and we can only use them for numbers."
Wait, did I see a fraction that equals .00000 repeating? No I didn't. Because it doesn't exist.
And your claim that I have to find a number halfway between .0000 repeating and 0 is absurd. That's like me having you graph the function y = 1/x and having you tell me the point at which the line crosses either axis. You can't. There is no point at which the line crosses the axis because, infinitely, the line approaches zero but will never get there. The same holds true for .0000 repeating. No matter how many 0s you add, infinitely, it will NEVER equal zero.
Also, can I see that number line with .000000000000... plotted on it? That would be fascinating, and another way to prove your point.
And is .00000000... an integer? I thought an integer was a whole number, which .00000000... obviously is not.
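(The y = 1/x analogy above can be stated precisely; the notation is standard, not the commenter's: for every finite x > 0 we have 1/x > 0, and yet lim (x→∞) 1/x = 0. The function never reaches its limit, but the limit itself is exactly 0, not merely "infinitely close" to 0.)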

Even with my poor mathematical skills I can see very clearly that while 0 may be approximately equal to 0.000000000... ("to infinity and beyond!"), this certainly does not mean that 0 equals 0.000000000...
It's a matter of perspective and granularity: if you have low granularity then of course the two numbers appear to be the same; on closer inspection they are not.

I'm no mathematics professor, and my minor in mathematics from college is more than a decade old, but you cannot treat a number going out to infinity as if it were a regular number, which is what is being attempted here. It's kind of an "apples" and "oranges" comparison, since you cannot really add "infinity" to a number.
Yes, any number going out to an infinite number of decimal points will converge upon the next number in the sequence (e.g., .000000... will converge so closely to 0 that it will eventually become indistinguishable from 0, but it will not *be* 0).
The whole topic is more of a "hey, isn't this a cool thing in mathematics that really makes you think?" than "let's actually teach something here."

.00000... equals 0 only if you round down! It will always be incrementing by 1/millionth, 1/billionth, or 1/zillionth of a place (depending on how far a human actually counts). If we go out infinitely, there is still something extra, no matter how small, that keeps .0000000... from actually being 0.

I don't agree, actually. I do believe in a sort of indefinable and infinitely divisible amount of space between numbers ... especially if we break into the physical world ... like ... how small is the smallest thing? an electron? what is that made up of? and what is that made up of? Is there a thing that is just itself and isn't MADE UP OF SMALLER THINGS? It's really hard to think about ... but I think it's harder to believe that there is one final smallest thing than it is to believe that everything, even very small things, are made up of smaller things.
And thus ... .0000 repeating does not equal zero. It doesn't equal anything. It's just an expression of the idea that we can't cut an even break right there. Sort of like thirds. You cannot cut the number 1 evenly into thirds. You just can't. It's not divisible by 3. But we want to be able to divide it into thirds, so we express it in this totally abstract way by writing 1/3, or .3333 repeating. But if .0000 repeating adds up to 0, then what does .33333 repeating add up to? And don't say 1/3, because 1/3 isn't a number ... it's an idea.
That's my rationale.

The problem is with imagining infinite numbers.
When you multiply .000... by 10, there is one less digit in the infinite result, which is 0.000... minus 0.000...0. It is almost impossible, in my opinion, to represent .000... × 10 graphically in a calculation, hence the confusion.
I know it is crazy to think of the last digit of an infinite number, but infinite numbers are crazy in themselves.

Through proofs, yes, and through certain definitions, you have "proven" that .0 repeating equals 0.
But in the realm of logic, and by another definition, you are wrong. .0 repeating is not an integer by the definition of an integer, and 0 most certainly is an integer. Mathematically, algebraically... whatever, they have the same value, but that doesn't mean they are the same number.
I'm moving away from "hard" mathematics and more into the paradoxical realm. Have you ever heard of Zeno's paradoxes? I think that's the most relevant counter-argument to this topic. Your "infinity" argument works against you in this respect. While you can never come up with a value that you can represent mathematically on paper to subtract from .000... to equal zero, or to come up with an average of the two, that doesn't mean that it doesn't conceptually exist. "Infinity" is just as intangible as whatever that missing value is.
But really, in the end, this is all just mathematical semantics. By proof they are equal to each other, but otherwise they are not the same number.

It is obvious to me that you do not understand the concept of infinity. Please brush up on it before you continue to teach math beyond an elementary school level. The problem with your logic is that .0 repeating is not an integer; it is an estimation of a number. While .0 repeating and 0 behave identically in any and all algebraic situations, the two numbers differ fundamentally by an infinitely small amount. Therefore, to say that .0 repeating and 0 are the same is not correct. As you continue .0000000... out to infinity, the number becomes infinitely close to 0, but it absolutely never becomes 0, so your statement that .000 repeating = 0 is not correct.

I wrote a short computer program to solve this.
CODE:
Try
    If 0 = 0.0000000000... Then
        Print True
    Else
        Print False
    End If
Catch ex As Exception
    Print ex.Message
End Try
The result: "Error: Can not convert theoretical values into real world values."
There you have it folks! End of discussion.
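For anyone who wants to run something real: the snippet above is pseudocode and will not compile as written. A minimal sketch along the same lines in Python (the truncation depths of 1, 10, and 50 digits are my arbitrary choice) compares finite truncations of 0.000... against 0:

CODE:
# Compare finite truncations of 0.000... against 0.
from decimal import Decimal

for digits in (1, 10, 50):
    truncation = Decimal("0." + "0" * digits)  # e.g. Decimal("0.000")
    print(digits, truncation == 0)

Every comparison prints True; no exception about "theoretical values" is raised.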

If you could show me a mathematical proof that 1 + 1 = 3, that does not mean 1 + 1 = 3, it means there is something wrong with the laws of our math in general.
We know instinctively that 0 does not equal 0.000000...
If you can use math to show differently, then that proves not that 0 = 0.00000... but that there is something wrong with your math, or the laws of our math itself.
Thus, every proof shown in these discussions that tried to show 0 = 0.000... is wrong.
0 != 0.000...
The problem here is that usually only math teachers understand the problem well enough to explain it, and unfortunately they are also the least likely candidates to step out of the box and dare consider that the laws of math they swear by are actually at fault.

2 comments:

The Science Pundit said...

For a thorough explanation of the actual mathematical proofs that 0.999... = 1, see the Wiki page.

MikeMac said...

How about this:

Assume 0.000... > 0. The real line segment (0, 1) contains all real numbers > 0 and < 1, so 0.000... ∈ (0, 1). Because this is an open interval in the real line, it contains neither of its endpoints, and every point in it is an interior point. That means that 0.000... must have a neighborhood
U = (0.000... − λ, 0.000... + λ)

which is a subset of (0,1) like so:

0.000... − λ < 0.000... < 0.000... + λ

(where λ is some arbitrary positive number)

However, that means that there is some number explicitly less than 0.000... yet explicitly greater than 0. This number is a mathematical absurdity, as it cannot be any number of finite length, nor can it be a number consisting of any digits other than 0.

So, let's take the case that λ = 0.000...:
Looking at the other side of our neighborhood: 0.000... + 0.000.... This is 2*0.000..., which is ultimately 0.000... (just as 2*0.111... = 0.222...).

So now we have an open interval (0,0.000...) which must contain 0.000..., a definitional contradiction.

Therefore, 0.000... = 0.
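A compact restatement of the same contradiction (my paraphrase of the argument above, using the same λ trick): assume x = 0.000... > 0 and take λ = x. The neighborhood (x − λ, x + λ) = (0, 2x) must contain its own center x. But doubling 0.000... changes no digit, so 2x = x, and the neighborhood is the open interval (0, x), which cannot contain its endpoint x. The assumption x > 0 therefore fails, and 0.000... = 0.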