Thursday, November 30, 2006

A Simple Turing Pattern

It all started back in September when Discovery Institute hack Casey Luskin attacked science blogger Chris Mooney, author of The Republican War on Science. Then a couple of weeks ago he went after science blogger Carl Zimmer, the fantastic writer whose work appears in the New York Times. Among the inanities he spewed was a defense of imperfection by comparing ID to a Ford Pinto.

"Was the Ford Pinto, with all its imperfections revealed in crash tests, not designed?"

This statement goes against the whole design argument: is God a poor engineer who didn't heed Murphy's Law?

As ridiculous as that analogy is, Karmen at Chaotic Utopia glommed on to a doozy that all the other science bloggers had missed.

The article called evolution a "simple" process. In our experience, does a "simple" process generate the type of vast complexity found throughout biology?

I can see how this must've really irked Karmen since one of her regular features is Friday Fractals. You see, fractals are complex patterns generated from simple algorithms.



I'm afraid my fractals aren't quite as good as Karmen's since I made mine with the free software GIMP. The point remains that a fractal is a perfect example of a "complex design" that's generated by a few simple instructions.
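You don't even need fractal software to see how few instructions it takes. Here's a minimal Python sketch (a toy of my own, not anything from Karmen's posts) that draws an ASCII Mandelbrot set by iterating one rule, z = z*z + c, over and over:

    # mandelbrot.py -- one simple rule (z = z*z + c) iterated over and over.
    # A minimal sketch: grid size, iteration cap, and ASCII shading are
    # arbitrary choices of mine.
    WIDTH, HEIGHT, MAX_ITER = 80, 40, 50
    SHADES = " .:-=+*#%@"

    for row in range(HEIGHT):
        line = ""
        for col in range(WIDTH):
            # Map this character cell to a point c in the complex plane.
            c = complex(-2.0 + 3.0 * col / WIDTH, -1.2 + 2.4 * row / HEIGHT)
            z = 0j
            for i in range(MAX_ITER):
                z = z * z + c              # the entire "algorithm"
                if abs(z) > 2:             # escaped: c is outside the set
                    break
            line += SHADES[min(i // 5, len(SHADES) - 1)] if abs(z) > 2 else "@"
        print(line)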

The fun continues. Mark Chu-Carroll of Good Math, Bad Math expatiated upon the theme by bringing cellular automata (CA) into the mix.

For the simplest example of this, line up a bunch of little tiny machines in a row. Each machine has an LED on top. The LED can be either on, or off. Once every second, all of the CAs simultaneously look at their neighbors to the left and to the right, and decide whether to turn their LED on or off based on whether their neighbors' lights are on or off. Here's a table describing one possible set of rules for the decision about whether to turn the LED on or off.

Current State   Left Neighbor   Right Neighbor   New State
On              On              On               Off
On              On              Off              On
On              Off             On               On
On              Off             Off              On
Off             On              On               On
Off             On              Off              Off
Off             Off             On               On
Off             Off             Off              Off
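If you want to watch those LEDs blink for yourself, here's a minimal Python sketch of MarkCC's machines (my own transcription of his rule table; the row width and the starting pattern are arbitrary choices):

    # MarkCC's row of LED machines as a one-dimensional cellular automaton.
    # Key: (current, left, right) -> new state; True means the LED is on.
    RULES = {
        (True,  True,  True):  False,
        (True,  True,  False): True,
        (True,  False, True):  True,
        (True,  False, False): True,
        (False, True,  True):  True,
        (False, True,  False): False,
        (False, False, True):  True,
        (False, False, False): False,
    }

    def step(cells):
        """All machines look at their neighbors simultaneously (edges wrap)."""
        n = len(cells)
        return [RULES[(cells[i], cells[(i - 1) % n], cells[(i + 1) % n])]
                for i in range(n)]

    # Start with one lit LED in the middle and watch the pattern unfold.
    cells = [False] * 31
    cells[15] = True
    for _ in range(16):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)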


There you have two examples of "complex designs" spawned by "simple processes." Before I bring up a third, I should mention that MarkCC made a point that the above CA is Turing complete. Nice segue, since the next image will be a Turing Pattern. This "design" is so named because it derives from the principles laid out in the great mathematician Alan Turing's 1952 paper The Chemical Basis of Morphogenesis. In it, Turing demonstrates how "complex" natural patterns such as a leopard's spots (or any embryological development) can be generated from simple chemical interactions. This ScienceDaily article describes it thus:

Based on purely theoretical considerations, Turing proposed a reaction and diffusion mechanism between two chemical substances. Using mathematics, he proved that such a simple system could produce a multitude of patterns. If one substance, the activator, produces itself and an inhibitor, while the inhibitor breaks down or inhibits the activator, a spontaneous distribution pattern of substances in the form of stripes and patches can be created. An essential requirement for this is that the inhibitor can be distributed faster through diffusion than the activator, thereby stabilizing the irregular distribution. This kind of dynamic could determine the arrangement of periodic body structures and the pattern of fur markings.
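Turing's mechanism is simple enough to simulate in a few dozen lines of code. Here's a minimal Python sketch of a reaction-diffusion system; fair warning, it's the well-known Gray-Scott model, which may not be the exact variant behind any particular tool, and the parameter values are just common demo choices:

    # A reaction-diffusion sketch in the spirit of Turing's paper. This is
    # the Gray-Scott model, one well-studied activator/substrate system;
    # all the parameter values below are just common demo choices.
    import numpy as np

    N, F, K, DU, DV = 128, 0.037, 0.060, 0.16, 0.08

    u = np.ones((N, N))                      # substrate, fed at rate F
    v = np.zeros((N, N))                     # self-catalyzing activator
    v[N//2-5:N//2+5, N//2-5:N//2+5] = 0.5    # seed a small square of activator
    u += 0.02 * np.random.random((N, N))     # noise breaks the symmetry

    def laplacian(a):
        """Diffusion: compare each cell with its four neighbors (edges wrap)."""
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

    for _ in range(5000):
        uvv = u * v * v                      # activator consumes substrate
        u += DU * laplacian(u) - uvv + F * (1 - u)
        v += DV * laplacian(v) + uvv - (F + K) * v

    # v now holds a spotted/striped pattern; dump a coarse ASCII rendering.
    print("\n".join("".join("#" if x > 0.2 else "." for x in row)
                    for row in v[::4, ::2]))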

I generated the following image using the Turing Pattern plug-in for GIMP.




The kicker is that the above-mentioned ScienceDaily article is entitled Control Mechanism For Biological Pattern Formation Decoded, and it's about how biologists and mathematicians in Freiburg—hence the 'German flag' color scheme on my Turing Pattern—have found an example in nature of just what Turing predicted.

Biologists from the Max Planck Institute of Immunobiology in Freiburg, in collaboration with theoretical physicists and mathematicians at the University of Freiburg, have for the first time supplied experimental proof of the Turing hypothesis of pattern formation. They succeeded in identifying substances which determine the distribution of hair follicles in mice. Taking a system biological approach, which linked experimental results with mathematical models and computer simulations, they were able to show that proteins in the WNT and DKK family play a crucial role in controlling the spatial arrangement of hair follicles and satisfy the theoretical requirements of the Turing hypothesis of pattern formation. In accordance with the predictions of the mathematical model, the density and arrangement of the hair follicles change with increased or reduced expression of the WNT and DKK proteins.

There you go, Mr. Luskin: an example from natural biology of a simple process generating vast complexity. To your Woo, I say Schwiiing!

Tuesday, November 28, 2006

Spiral coolness

This is just too cool! (via Chaotic Utopia)

Kissing Mirror Neurons

On my return trip from Thanksgiving vacation, I had the pleasure of taking DC's Metro to Union Station. At some point early in the trip, four college-aged girls boarded the train. I naturally noticed this because they were all hotties (two of them were super-hotties). I got a bit curious when I noticed that they formed two pairs that were unusually close. Could it be??

Nah, probably just my imagination; besides, it's rude to stare. So I went back to reading my magazine. But they weren't about to let me do that--they were being noisy. And every time I looked up, my suspicions were bolstered. That's when I saw the blatant Public Display of Affection: "All right, lesbians!" Not staring was more difficult now as was holding back my excitement. At the next stop they got off the Metro and my ride got mundane again.

A famous comedienne (sorry I can't remember which one) once commented on how she didn't understand men's obsessions with lesbians. After all, lesbianism is the ultimate dismissal of masculinity; it should logically be threatening to men. But it's not. Why not?

That's actually a pretty interesting question. In a rational world, men wouldn't get turned on by girl-on-girl action, but believe me, they do. For a long time, my explanation for this derived from my rudimentary knowledge of evolutionary psychology: males are out to spread their seed, so they see a lesbian coupling as an opportunity to jump in and procreate. Females, on the other hand, want a man who will help rear their children, so homosexual men are a bad investment.

This hypothesis started to unravel for me, though. It seemed that every woman I brought the subject up with was not only cool with having gay male companions, but would jump at the opportunity to go party at a gay bar. I realize that this is anecdotal and that their motives might not in fact be voyeuristic (but their mannerisms somehow gave me that déjà vu feeling of "All right, lesbians!"). This was seriously undermining my EP hypothesis; I needed something new.

On the Amtrak train back to Philly (with the "METRO incident" still fresh on my mind) I read an article about mirror neurons. Everything just clicked together and now I had my new pet hypothesis.

A mirror neuron is a neuron which fires both when an animal performs an action and when the animal observes the same action performed by another (especially conspecific) animal. Thus, the neuron "mirrors" the behavior of another animal, as though the observer were itself performing the action. These neurons have been observed in primates, including humans, and in some birds.

Mirror neurons were first discovered by Giacomo Rizzolatti and other Italian neuroscientists, who found them in monkeys whose brains were wired up with electrodes; they were later confirmed to exist in humans (recent research suggests that humans are particularly well-endowed with mirror neurons). The interesting thing about mirror neurons is that they seem to be sensitive to intent. For example, in the monkey experiments, when the simian watched a hand pick up an object, the same neurons fired as when the monkey itself picked up that object; but when it watched a hand pretend to pick up a non-existent object, the neurons didn't fire. This pattern held even when the monkey's view was obscured by a screen. In other words, when the monkey knew there was an object behind the screen, its (mirror) neurons fired when it watched the hand go behind the screen to pick up the object; but they failed to fire when the monkey knew there was nothing behind the screen.

It stands to reason that we have mirror neurons for kissing. The same neurons that fire when we kiss someone should also fire when we watch others kissing. And I would expect that if you're the kind of person who is aroused by kissing (I'll go ahead and aver that that covers the vast majority of humanity), watching others kiss should trigger some of those same feelings.

But how does this explain men's particular fascination with lesbians? My answer is "the Necker cube effect." The Necker cube is an optical illusion. It consists of 12 interconnected lines drawn on a flat surface. The human brain wants to see it in three dimensions and so adds depth to it. But it doesn't end there; there are two possible 3D configurations: one with the lower square up front and one with the upper square up front. Since both are possible, and since the brain can't "see" them simultaneously, it flips back and forth. I usually see the lower square up front first; then it starts to flip-flop back and forth.



Perhaps a more appropriate optical illusion is the "two ladies or one" illusion (are the two ladies about to kiss?) ;-)



One of my favorites, though, is the Lyondell cube. Below is my foam Lyondell cube. It is just a cube with a smaller cube cut out of one of its corners. But if you look at it from the right angle, the missing corner becomes a solid cube budding out from the main cube--then it reverts back to a hole. The effect is quite eerie when you hold the cube and wiggle and wobble it in your hand. Just freaky!

Animated Lyondell Cube

My hypothesis is that when watching lesbians kiss, men's kissing mirror neurons are activated, but then, just like the Necker cube, they start to flip back and forth between which girl is activating the mirror neurons (and this adds extra excitement).

Since I came up with this hypothesis on the fly, I realize that
A) It may be total bunk, and/or
B) Someone else may have already come up with the same idea.

However I find it intriguing enough to just go with it.

On that note I'll leave you with a short YouTube video. (I should probably insert an "adult content" warning here, but if you're the type who is offended by two consenting adults kissing, then you're probably also offended by my posts on religion, which means that this weblog is not for you.)



And if my hypothesis is correct, I certainly wouldn't want to slight any straight females or gay males who may stumble upon this post.

Belated Congratulations!

I'm a bit late doing this post (although I did leave a comment when it was fresh), but congratulations on the engagement of two excellent science bloggers (physics bloggers, no less).

Jennifer Ouellette of Cocktail Party Physics is one of my favorite bloggers because she's such a pleasure to read (I might just have to buy The Physics of the Buffyverse), and it doesn't hurt that she has me on her blogroll. (Of course I still don't have a blogroll myself, but when I get around to it, she'll be there.)

Sean Carroll of Cosmic Variance is also an awesome physics blogger. I must confess that I'm not as big a reader of CV as I am of CPP. (although how can you not love photographic evidence of Russell's teapot?)

Love found on the internet between two sciencephiles. What could be better?

Congratulations!

Meme propagation experiment

There's a meme going around the net (via) and there's an experiment seeing how fast it spreads. It goes thus:
1. Please link to this post by Acephalous (as I'm doing)
2. Ask your readers to do the same (if you haven't already, remember, it's for SCIENCE!)
3. Ping Technorati. (and spell it correctly)

I am always willing to do my part for science. Be on the lookout for my upcoming experiment where I'll need my readers to send me money ;-)

Sieg Heil, Mein Furry!

Yesterday I came across an interesting site while browsing the internets. It's a website called Cats That Look Like Hitler. I guess you can find anything on the internet. My favorite Kitler is Frodo.



Although I must tip my hat to Charlie--the costume had me rolling on the floor.



What's next? Dogs that look like Saddam? Gerbils that look like Kim Jong Il? Personally, I'll just stick to the world leader/animal resemblance that is at the forefront right now.



Read the comment by the artist Chris Savido.

Sunday, November 19, 2006

Paper Art

I first saw this on A Blog Around The Clock. Now it seems someone has put the images together into a video slideshow. These were all made with just a single sheet of paper and scissors. Pretty cool!

Sunday, November 12, 2006

0.000... > 0

When I was in high school, I learned that 0.999... = 1. I found it shocking at first, but after thinking about it, I realized that the proof was airtight. But recently, the "controversy" has reared its head again on the internet--here, here, and here (as a poll no less, since the best way to find mathematical truths is by quorum).
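The proof, by the way, still checks out. Here's a minimal sketch using Python's sympy library (my own choice of tool, not anything from those threads) that evaluates the infinite sum behind 0.999... exactly:

    # "0.999..." is shorthand for the infinite sum 9/10 + 9/100 + 9/1000 + ...
    # sympy evaluates the limit exactly, with no floating-point rounding.
    from sympy import Sum, Symbol, oo

    n = Symbol("n", integer=True, positive=True)
    nines = Sum(9 * 10**(-n), (n, 1, oo)).doit()
    print(nines)      # -> 1, exactly
    print(1 - nines)  # -> 0, so "0.000..." is exactly 0 as well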

At first I read the threads with amusement, but gradually the counter-arguments began to convert me. I now realize not only that 0.999... ≠ 1, but also that 0.000... ≠ 0. It simply follows: since 1 - 0.999... = 0.000..., if 0.999... ≠ 1, then 0.000... ≠ 0. And furthermore, all the brilliant proofs for the former also apply to the latter.

I have assembled below a list of said proofs which I've slightly modified to prove that 0.000... > 0. Enjoy!

I now understand how this conclusion is reached. but unlike how the article suggests I have no problem in thinking in the infinite. I have no problem with the 'concept' of 0.000~ as a forever continuing sequence of digits. I accept that in all practical purposes 0.000~ might as well be 0 and that math solutions calculate it to be 0. I also accept that it is impossible to have 0.000~ of anything (you cannot hold an infinity). But this does not stop 0.000~ (as a logical concept) forever being >0.

On to the main issue: 0.0000000~ infinite 0s is NOT equal to 0, because 0.0000000~infinite 0s is not a number. The concept of an infinite number of 0s is meaningless (or at least ill-defined) in this context because infinity is not a number. It is more of a process than anything else, a notion of never quite finishing something.
However, we can talk intelligently about a sequence:
{.0, .00, .000, ... }
in which the nth term is equal to sum(0/(10^i), i=1..n). We can examine its behavior as n tends to infinity.
It just so happens that this sequence behaves nicely enough that we can tell how it will behave in the long term. It will converge to 0. Give me a tolerance, and I can find you a term in the sequence which is within this tolerance to 0, and so too will all subsequent terms in the sequence.
The limit is equal to 0, but the sequence is not. A sequence is not a number, and cannot be equated to one.

We hold 1/3 = 0.333~
but as 0.333~ - 0.333~ = 0.000~ and 0.000~ ≠ 0.0 and 1/3 - 1/3 = 0/1 then surely 0.333~ ≠ 1/3.
Confusing fractions and decimal just highlights the failings of decimal math. 0.000~ does not equal 0.0. If it did, the 0.000~ would simply not exist as a notion. It’s very existence speaks of a continually present piece. The very piece that would not render it 0.0. It keeps approaching emptyness by continually adding another decimal place populated by a 0, which does nothing to diminish the fact that you need to add yet more to it to make it the true 0.0 and so on to infinity.
There is obviously an error in the assumption that 1/3 = 0.333~ or that it highlights the fact that decimal can not render 1/3 accurately. Because 0.000~ ≠ 0.0

Ah I see the problem.. It's just a rounding error built into the nature of decimal Math. there is no easy way to represent a value that is half way between 0.000~ and 0.0 in decimal because the math isn’t set up to deal with that. Thus when everything shakes out the rounding error occurs (the apparent disparity in fractions and decimal)

No it does not. by it's very nature 0.000000000000rec is always just slightly greater than 0.0 thus they are not equal.
But for practical purposes then it is safe to conclude equivalency as long as you remember that they are not in reality equivalent.

0.00000~ is infinitely close to 0.
For practical purposes (and mathematically) it is 0.
But is it really the same as 0?
I don't know.

0.00000~ is not per definition equal to 0. This only works in certain fields of numbers.

What worries me about this proof is that it assumes that 0.0000~ can sensibly be multiplied by 10 to give 00.0000~ with the same number of 0s after the decimal point. Surely this is cheating? In effect, an extra 0 has been sneaked in, so that when the lower number is subtracted, the 0s disappear.
The other problem I have is that no matter how many 0s there are after the decimal point, adding an extra 0 only ever takes you 0/10 of the remaining distance towards unity... so even an infinite number of 0s will still leave you with a smidgen, albeit one that is infinitely small (still a smidgen nevertheless).

In reality,I think 0.0..recurring is 0.
But if the 'concept' of infinity exists, then as a 'concept' .0 recurring is not 0.
From what I know, the sum to infinity formula was to bridge the concept of infinity into reality (to make it practical), that is to provide limits.*
It's like the "if i draw 1 line that is 6 inches and another that is 12, conceptually they are made up of the same number of infinitesimally small points" but these 'points' actually dont exist in reality.
Forgot the guy who came up with the hare and tortoise analogy, about how the hare would not be able to beat the tortoise who had a head-start - as the hare had to pass an infinite number of infinitesimally small points before he'd even reach the tortoise.
He used that as 'proof' that reality didn't 'exist' rather than what was 'obvious' to me (when I heard it) - that infinity didn't exist in reality.
So my conclusion is 0.0 recurring is conceptually the infinitesimal small value numerically after the value 0. (If anyone disagrees, then what is the closest value to 0 that isn't 0 and is greater than 0(mathematically)?)
In reality, it is 0 due to requirements of limits.
Can anyone prove the sum to inifinity formula from 'first prinicipals'?

Okay, non-math-geek, here. Isn't there some difference between a number that can be expressed with a single digit and one that requires an INFINITE number of symbols to name it? I've always imagined that infinity stretches out on either side of the number line, but also dips down between all the integers. Isn't .0000etc in one of those infinite dips?

Haha not only are there holes in your logic, but there are holes in your mathematics.
First of all, by definition the number .00000000... cannot and never will be an integer. An integer is a whole number. .00000000... is not, obviously, hence the ...
The ... is also a sad attempt at recreating the concept of infinity. I only say concept because you can't actually represent infinity on a piece of paper. Except by the symbol ∞. I found a few definitions of infinity, most of them sound like this: "that which is free from any possible limitation." What is a number line? A limitation. For a concrete number which .0000000... is not. (Because it's continuing infinitely, no?)
Also, by your definition, an irrational number is a number that cannot be accurately portrayed as a fraction. Show me the one fraction (not addition of infinite fractions) that can represent .00000000...
You can't, can you?
Additionally, all of your calculations have infinitely repeating decimals which you very kindly shortened up for us (which you can't do, because again, you can't represent the concept of infinity on paper or even in html). If you had stopped the numbers where you did, the numbers would have rounded and the calculation would indeed, equal 0.
Bottom line is, you will never EVER get 0/1 to equal .0000000... You people think you can hide behind elementary algebra to fool everyone, but in reality, you're only fooling yourselves. Infinity: The state or quality of being infinite, unlimited by space or time, without end, without beginning or end. Not even your silly blog can refute that.

When you write out .00000000... you are giving it a limit. Once your fingers stopped typing 0s and started typing periods, you gave infinity a limit. At no time did any of your equations include ∞ as a term.
In any case, Dr. Math, a person who agrees with your .000000 repeating nonsense, also contradicts himself on the same website: "The very sentence "1/infinity = 0" has no meaning. Why? Because "infinity" is a concept, NOT a number. It is a concept that means "limitlessness." As such, it cannot be used with any mathematical operators. The symbols of +, -, x, and / are arithmetic operators, and we can only use them for numbers."
Wait, did I see a fraction that equals .00000 repeating? No I didn't. Because it doesn't exist.
And for your claim that I have to find a number halfway between .0000 repeating and 0 is absurd. That's like me having you graph the function y=1/x and having you tell me the point at which the line crosses either axis. You can't. There is no point at which the line crosses the axis because, infinitely, the line approaches zero but will never get there. Same holds true for .0000 repeating. No matter how many 0s you add, infinitely, it will NEVER equal zero.
Also, can I see that number line with .000000000000... plotted on it? That would be fascinating, and another way to prove your point.
And is .00000000... an integer? I thought an integer was a whole number, which .00000000... obviously is not.

Even with my poor mathematical skills I can see very clearly that while 0 may be approximately equal to 0.000000000... ("to infinity and beyond!"); this certainly does not mean that 0 equals 0.000000000...
It's a matter of perspective and granularity, if you have low granularity then of course the 2 numbers appear to be the same; at closer inspection they are not.

I'm no mathematics professor, and my minor in mathematics from college is beyond a decade old, but you cannot treat a number going out to infinity as if it were a regular number, which is what is trying to be done here. Kind of the "apples" and "oranges" comparison since you cannot really add "infinity" to a number.
Yes, any number going out to an infinite number of decimal points will converge upon the next number in the sequence (eg: .000000... will converge so closely to 0 that it will eventually become indistinguishable from 0 but it will not *be* 0).
The whole topic is more of a "hey, isn't this a cool thing in mathematics that really makes you think?" than "let's actually teach something here."

.00000... equals 0 only if you round down! It will always be incrementing 1/millionth, 1/billionth, or 1/zillionth of a place, (depending on how far you a human actually counts). If we go out infinitely, there is still something extra, no matter how small, that keeps .0000000... for actually being 0.

I don't agree, actually. I do believe in a sort of indefinable and infinitely divisible amount of space between numbers ... especially if we break into the physical world ... like ... how small is the smallest thing? an electron? what is that made up of? and what is that made up of? Is there a thing that is just itself and isn't MADE UP OF SMALLER THINGS? It's really hard to think about ... but I think it's harder to believe that there is one final smallest thing than it is to believe that everything, even very small things, are made up of smaller things.
And thus ... .0000 repeating does not equal zero. It doesn't equal anything. It's just an expression of the idea that we can't cut an even break right there. Sort of like thirds. You cannot cut the number 1 evenly into thirds. You just can't. It's not divisible by 3. But we want to be able to divide it into thirds, so we express it in this totally abstract way by writing 1/3, or .3333 repeating. But, if .0000 repeating adds up to 0, than what does .33333 repeating add up to? and don't say 1/3, because 1/3 isn't a number ... it's an idea.
That's my rational.

The problem is with imagining infinite numbers.
When you multiply .000... with 10 there is one less digit on the infinite number of result which is 0.000 .... minus 0.000...0. It is almost impossible in my opinion to represent graphically .000..x10 in calculation, hence confusion.
I know it is crazy to think of last number of infinite number but infinite numbers are crazy itself.

Through proofs, yes, you have "proven" that .0 repeating equals 0 and also through certain definitions.
But in the realm of logic and another definition you are wrong. .0 repeating is not an integer by the definition of an integer, and 0 most certainly is an integer. Mathematically, algebraicly...whatever, they have the same value, but that doesn't mean they are the same number.
I'm getting more out of "hard" mathematics and more into the paradoxical realm. Have you ever heard of Zeno's paradoxes? I think that's the most relevant counter-argument to this topic. Your "infinity" argument works against you in this respect. While you can never come up with a value that you can represent mathematically on paper to subtract from .000... to equal zero or to come up with an average of the two, that doesn't mean that it doesn't conceptually exist. "Infinity" is just as intangible as whatever that missing value is.
But really in the end, this all just mathematical semantics. By proof, they are equal to each other but otherwise they are not the same number.

It is obvious to me that you do not understand the concept of infinity. Please brush up on it before you continue to teach math beyond an elementary school level. The problem with your logic is that .0 repeating is not an integer, it is an estimation of a number. While .0 repeating and 0 behave identical in any and all algebraic situations, the two numbers differ fundamentally by an infinitely small amount. Therefore, to say that .0 repeating and 0 are the same is not correct. As you continue .0000000... out to infinity, the number becomes infinitely close to 0, however it absolutely never becomes one, so your statement .000 repeating =0 is not correct.

I wrote a short computer program to solve this.
CODE:
Try
    If 0 = 0.0000000000... Then
        Print True
    Else
        Print False
    End If
Catch Exception ex
    Print ex.message
End Try
The result: "Error: Can not convert theoretical values into real world values."
There you have it folks! End of discussion.

If you could show me a mathematical proof that 1 + 1 = 3, that does not mean 1 + 1 = 3, it means there is something wrong with the laws of our math in general.
We know instinctively that 0 does not equal 0.000000...
If you can use math to show differently, then that proves not that 0 = 0.00000... but that there is something wrong with your math, or the laws of our math itself.
Thus, every proof shown in these discussions that tryed to show 0=0.000... is wrong.
0 != 0.000...
The problem here is that usualy only math teachers understand the problem enough to explain it, and unfortunatly they are also the least likly candidates to step out of the box and dare consider the laws of math that they swear by are actualy at fault.

Would a recount have made a difference?

A couple of days ago George Allen conceded the Virginia Senatorial race.




It was the right move. Here's a quote from his speech (emphasis mine):

"A lot of folks have been asking about the recount. Let me tell you about the recount.

I've said the people of Virginia, the owners of the government, have spoken. They've spoken in a closely divided voice. We have two 49s, but one has 49.55 and the other has 49.25, after at least so far in the canvasses. I'm aware this contest is so close that I have the legal right to ask for a recount at the taxpayers' expense. I also recognize that a recount could drag on all the way until Christmas.

It is with deep respect for the people of Virginia and to bind factions together for a positive purpose that I do not wish to cause more rancor by protracted litigation which would, in my judgment, not alter the results."


I would agree that it wouldn't have altered the results. In fact, when I first conceived of this post, I had envisioned it as a "why Allen should concede" post--little did I know how quickly he would do just that. To understand why, we need to review a little statistics theory.

Last Monday, Dalton Conley wrote a piece in the New York Times entitled The Deciding Vote. In it he explains a fundamental of "statistical dead-heat" elections.

The rub in these cases is that we could count and recount, we could examine every ballot four times over and we’d get — you guessed it — four different results. That’s the nature of large numbers — there is inherent measurement error. We’d like to think that there is a “true” answer out there, even if that answer is decided by a single vote. We so desire the certainty of thinking that there is an objective truth in elections and that a fair process will reveal it.

But even in an absolutely clean recount, there is not always a sure answer. Ever count out a large jar of pennies? And then do it again? And then have a friend do it? Do you always converge on a single number? Or do you usually just average the various results you come to? If you are like me, you probably settle on an average. The underlying notion is that each election, like those recounts of the penny jar, is more like a poll of some underlying voting population.

What this means is that the vote count in an election is not "the true" count, but rather a poll with a very large sample size, and can thus be treated as such. He goes on to offer a criterion for determining a winner which, if not met, should trigger a run-off election.

In an era of small town halls and direct democracy it might have made sense to rely on a literalist interpretation of “majority rule.” After all, every vote could really be accounted for. But in situations where millions of votes are cast, and especially where some may be suspect, what we need is a more robust sense of winning. So from the world of statistics, I am here to offer one: To win, candidates must exceed their rivals with more than 99 percent statistical certainty — a typical standard in scientific research. What does this mean in actuality? In terms of a two-candidate race in which each has attained around 50 percent of the vote, a 1 percent margin of error would be represented by 1.29 divided by the square root of the number of votes cast.
If this sounds like gobbledygook to you, let me try to clarify it by throwing some Greek letters at you. I couldn't find any of my old statistics texts, but the Wikipedia article is actually quite good, so I will draw from it. (For some even better statistics primers, check out Zeno and Echidne.) Let's start with some definitions (according to Wiki):

The margin of error expresses the amount of the random variation underlying a survey's results. This can be thought of as a measure of the variation one would see in reported percentages if the same poll were taken multiple times. The margin of error is just a specific 99% confidence interval, which is 2.58 standard errors on either side of the estimate.

Standard error = \sqrt{\frac{p(1-p)}{n}}, where p is the probability (in the case of an election, the vote percentage; for a dead-heat race, p ≈ 0.5) and n is the sample size (total number of voters).


What does this mean? Since we are looking at a ballot count as a poll, we can use the margin of error to be the random variation we would get from multiple recounts. (The word random is important here. None of these formulas hold if the variation is due to malfeasance).

I won't try to explain where the standard error formula comes from, but I'll try to give some perspective. We can break it into two parts: the numerator and the denominator. The numerator p(1-p) has a maximum when p=0.5 (since 0 < p < 1). This means that the further you get from 50%, the smaller the standard error will be. Therefore, the standard error in a blow-out will be smaller than that from a tie. And since n sits in the denominator, the standard error gets smaller as n (# of voters) gets larger. So the more voters you have, the smaller the error you get. One consequence of this is that you reach a point where your standard error is small enough that increasing the sample size gains you very little. (Check out Zeno's excellent post on sample size.)
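To make that concrete, here's a minimal Python sketch (the sample numbers are just illustrative choices of mine) showing both effects:

    # How the standard error sqrt(p*(1-p)/n) moves with p and n
    # (the specific numbers here are just for illustration).
    from math import sqrt

    def std_err(p, n):
        return sqrt(p * (1 - p) / n)

    n = 2_000_000
    for p in (0.5, 0.6, 0.9):        # further from 50% -> smaller error
        print(f"p={p:.1f}, n={n}: SE={std_err(p, n):.6f}")

    for n in (10_000, 1_000_000, 100_000_000):  # more voters -> smaller error
        print(f"p=0.5, n={n}: SE={std_err(0.5, n):.6f}")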

Again, I'll leave it up to the reader to look up how the confidence interval formula is derived--it's a bit beyond the scope of this post. What it means is that since the margin of error is the expected variation from sampling to sampling, we can see it as a multiple of standard errors from the result. And the higher the confidence level, the more standard errors go into the margin of error. Another way of looking at it: if you want to be 99% confident that a recount will fall into a certain interval around your result, that interval will need to be wider than if you only wanted to be 68% confident. According to Wiki (again, I'll let you look up the derivation if you wish):

Plus or minus 1 standard error is a 68 % confidence interval, plus or minus 2 standard errors is approximately a 95 % confidence interval, and a 99 % confidence interval is 2.58 standard errors on either side of the estimate.

Therefore,


Margin of error (99%) = 2.58 × \sqrt{\frac{0.5(1-0.5)}{n}} = \frac{1.29}{\sqrt{n}}

Which is the formula Conley mentioned in his article. Anyway, I hope my condensed explanation at least helps a little to explain what those numbers mean.

Now, on to the Virginia race. The total votes cast: n = 2,338,111. (For simplicity, I'll ignore the Independent candidate Parker and round to p = 0.5, so as to use the above formula.) The margin of error is therefore 0.08%, which comes out to 1972.5 votes. That means we can be 99% sure that a recount of Allen's votes would land within +/- 1972.5 votes of the original count. The actual vote count difference between Allen and Webb was 7231 votes--well outside the margin of error. A 7231-vote gap corresponds to 9.5 standard errors. Allen could've spent the rest of his life recounting the votes and not expected to alter the results. He was absolutely right to concede.
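For anyone who wants to check my arithmetic, here's a minimal Python sketch that redoes it (using the vote totals quoted above):

    # Sanity-checking the Virginia numbers (vote totals as quoted above).
    from math import sqrt

    n = 2_338_111                    # total votes cast, p rounded to 0.5
    margin_frac = 1.29 / sqrt(n)     # Conley's 99% margin-of-error formula
    print(f"margin of error: {margin_frac:.2%} (~{margin_frac * n:.0f} votes)")

    lead = 7231                      # Webb's margin over Allen
    se_votes = 0.5 * sqrt(n)         # one standard error, expressed in votes
    print(f"the lead is {lead / se_votes:.1f} standard errors wide")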

Saturday, November 11, 2006

Lithium Ion battery fire

I found this video today of a laptop lithium ion battery fire. It was done under controlled conditions, so I'm not sure how precisely this represents what could happen to my (or your) laptop. Since I've written about this subject before, I was very interested to watch.


Saturday, November 04, 2006

Richard Dawkins in Philadelphia

On Thursday, Richard Dawkins came to Philadelphia as part of The God Delusion book tour. Since I've been a fan of his writing for many years now, I had to attend. I was able to get off work early, but I still got to the event late. The auditorium was full, and the spillover crowd mobbed around a closed-circuit television showing the lecture live. I didn't exactly have the best seat in the house, but I was able to catch most of it. He essentially read excerpts from his book and threw in a few personal anecdotes. Much of the talk centered on Biblical evidence supporting the now almost-famous line opening Chapter 2 (page 31).

"The God of the OldTestament is arguably the most unpleasant character in all fiction: jealous and proud of it; a petty, unjust, unforgiving control-freak; a vindictive, bloodthirsty ethnic cleanser; a misogynistic, homophobic, racist, infanticidal, genocidal, filicidal, pestilential, megalomaniacal, sadomasochistic, capriciously malevolent bully."

I have to confess that I just bought my copy on Wednesday and haven't had a chance to read it yet. (I'm still about a hundred pages shy of finishing The Ancestor's Tale.) All indications are that it's going to be a very good read.

Later that evening, Dr. Dawkins appeared on The Rational Response Squad show for a 60-minute round-table discussion. I found it quite interesting to see him in a setting other than a standard interview or rehearsed speech. The part I found most interesting was when he brought up how many of his critics say that, for political reasons, he shouldn't make himself so prominent; quotes like "Darwinian natural selection is what led me to become an atheist" (my paraphrase; I don't remember the exact words) hurt the cause. He said it was a strong argument, that maybe they were right, and asked what his fellow panelists thought about it. That, to me, exemplifies good scientific/rational thinking. You must always be willing to listen to smart people and question your own beliefs and rationales. Kudos to Dawkins for being able to do that.



Personal note:
When I found out that Dawkins was coming to town, I started searching for just the right thing to wear. I settled on a DNA double-helix necktie. I was hoping I'd actually get to talk to him, but it soon became apparent that that wouldn't happen. After waiting in the book signing queue for 20 minutes, one of the ushers came around telling everyone that there wouldn't be time to personalize autographs and that the author would only be signing his name. "Please have your book open to the title page." At that point, my only hope was that he would appreciate my tie.

When I got up there, I told him how I enjoyed the talk, as he autographed my book. When he gave me the book back, I slowly backed away from the table. Then he said "I really like the tie."

Now I know how a star-struck teenaged groupie feels when she finally gets to meet the idol whose posters adorn her bedroom walls.
"(sigh)," he fluttered "I'll never wash this tie again."