I thought I would be somewhat skeptical of her views. But after listening to her, I think our differences on the subject came down to how we each defined "free will." After all, depending on your approach, the question "What is free will?" can be just as difficult as, if not more difficult than, "Is there free will?"
So what is free will? You can't say that it's "not being affected by the outside world"; everything that interacts with its environment is affected by it. A reasonable definition, then, would be "not being controlled from the outside." This fits well with the basic idea of free will, though it gets a little dicey when you try to draw the line where "affected" ends and "controlled" begins. Let me give an example to illustrate roughly where I see that line. A marionette is controlled by its puppeteer; it has no free will. But if the puppet is an autonomous robot, one that can rewrite its own programming based on experience, one whose decisions are complex and variable, then I would say that puppet has free will.
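To make the "rewrites its own programming based on experience" idea concrete, here is a minimal Python sketch. It is purely my own toy illustration, not a description of any real robot: the class name, actions, and numbers are all invented for the example.

```python
class AutoPuppet:
    """A toy autonomous agent that updates its own decision rules
    based on feedback -- a minimal stand-in for the 'auto puppet'."""

    def __init__(self, actions):
        # Start with an equal preference for every action.
        self.weights = {a: 1.0 for a in actions}

    def decide(self):
        # Pick the currently highest-weighted action
        # (ties resolve to the earliest-listed action).
        return max(self.weights, key=self.weights.get)

    def learn(self, action, reward):
        # "Rewrite its own programming": experience changes
        # how future decisions will be made.
        self.weights[action] += reward

puppet = AutoPuppet(["wave", "bow", "sit"])
print(puppet.decide())     # -> "wave" (all weights still equal)
puppet.learn("bow", 5.0)   # experience: bowing was rewarded
print(puppet.decide())     # -> "bow" (the program has rewritten itself)
```

Nothing here is mysterious, yet the agent's behavior is shaped by its own history rather than dictated step-by-step from outside, which is the distinction the marionette example is drawing.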
I would also say that the autonomous robot is deterministic and probably doesn't have consciousness--at least not in the way we think of it. This, of course, means that my definition will not be universally accepted; I've heard the concept of free will tied to consciousness and declared incompatible with determinism. I don't agree with either assessment.
I see no conflict between determinism and my definition of free will. A decision is a process, and every process can be broken down into simpler steps. Once you get down to the simplest steps, each must fall into one of three (and only three) categories: determined (cause leads to effect), random (effect not dictated by cause), or magic (undefined, and almost certainly nonexistent). This holds whether you are talking about the base circuitry of wires or neurons, or the "higher software" that does the "thinking." In that sense, we humans are no different from the autonomous robot, so if we can be said to have free will, then so can the robot.
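The decomposition above can be sketched in a few lines of Python. This is a deliberately trivial toy of my own (the functions and threshold are invented for illustration): a "decision" built entirely from determined and random steps, with no third ingredient anywhere in the chain.

```python
import random

def determined_step(stimulus):
    # Determined: cause leads to effect -- the same input
    # always yields the same output.
    return stimulus * 2

def random_step():
    # Random: effect not dictated by cause -- the outcome
    # varies between runs.
    return random.choice([0, 1])

def decide(stimulus):
    # A whole "decision" is just a chain of such simple steps.
    signal = determined_step(stimulus)
    noise = random_step()
    return "act" if signal + noise > 4 else "wait"

print(decide(2))  # "act" or "wait", depending on the random step
```

There is no slot in `decide` where magic could enter; every step is one of the first two kinds, yet the overall behavior can still be complex and variable.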
Another thing I hear thrown about is statements to the effect of "the randomness of quantum mechanics liberates us from the bonds of determinism and allows us to have free will." That is utter nonsense. (Quick note: the "randomness" of quantum mechanics is not haphazard; it follows established wave equations, its experiments are reproducible, and its behavior is predicted by theory.) Besides, if one believes that strict determinism robs you of free will, how in the hell is random behavior any better? Hmm, let me think ... oh yeah, it's not! Furthermore, there's nothing stopping us from adding randomness to our "auto puppet." This puppet--this machine--must have free will just like we do. That leaves only magic, but magic doesn't exist. If your definition of free will requires magic, then free will, as so defined, does not exist.
This brings us to the issue of consciousness. Nobody really knows what it is, so we can't really know whether our auto puppet has some form of it or not. There are behavioral tests that some people use to gauge degrees of consciousness (the mirror test, the Turing test), but these tests leave many of us unsatisfied. We must rely on empathy: we watch a behavior for signs of consciousness, but we can't know if it's really there. For example, the Turing test checks whether an A.I. can fool a human into thinking that he or she is talking to another human. In that case, the consciousness we would perceive in an A.I. that passed the Turing test would be an illusion--or would it? And is our own consciousness just an illusion? (Rene: that's not as outlandish as it may sound.)
Many see consciousness as something ethereal; I think it is most certainly not. I see consciousness as a web of sensations created by the brain. Take this letter written to New Scientist regarding this article.
The article on confabulation repeats a logical fallacy (7 October, p 32). "The idea that we have conscious free will may be an illusion," writes Helen Phillips, because a 1985 experiment "suggested that a signal to move a finger appears in the brain several hundred milliseconds before someone consciously decides to move that finger".
This is silly. The process of making a "conscious decision" to act is obviously not a single event. Factors for and against action must be weighed up, inhibitions must be overcome, environmental constraints must be checked, the muscular signals must be planned so that the action is properly coordinated, and so forth. The fact that somewhere along this complex pathway a signal can be measured indicating movement of a finger is imminent is quite unsurprising. The fallacy lies in inferring that the sensation of "conscious decision" that appears later on is thus illusory or "faked".
We sense all things in a delayed fashion. Our conscious recognition of a flash of light, for example, occurs well after the light actually flashes. Why should the sensation of our own consciousness be any different?
And if we have no free will, then why even bother producing the fake sensation of consciousness after the fact? If we, the conscious entity, could not exert any free will over what our body will do in the future, then our body would presumably conserve energy by simply turning out the lights.
With the exception of his last paragraph, this pretty much describes my view. We are essentially autonomous robots with sensations--including the sensation of our own thoughts.
UPDATE: Between the time I started writing this post and when I finished, Dennis Overbye wrote this article in the New York Times about free will. It is quite interesting; he gets into the philosophies of Daniel Dennett, Alan Turing, and Kurt Gödel. Good stuff!