Kinkazzo Burning
~ reflections & disquisitions
It takes both sunshine and rain to make a rainbow...

WHAT REALLY GOES ON IN THERE: DENNETT EXPLAINS CONSCIOUSNESS

...but first let’s explain Dennett:

From the Philosopher’s Café (April 2003)

DAN DENNETT

By Guy Douglas and Stewart Saunders

Daniel C. Dennett is currently Distinguished Arts and Sciences Professor and Director of the Center for Cognitive Studies at Tufts University in the United States. A notable aspect of Dennett's work is his desire, in many of his books, to make serious philosophy accessible to the general reader. He is one of a regrettably small number of contemporary philosophers who are able to do philosophy in public, and do it well. This is in some ways related to his distinctive use of examples, metaphors and what he calls 'intuition pumps': analogies designed to prime the reader's intuitions in such a way as to make his arguments vivid and plausible. In recent books such as The Intentional Stance, Consciousness Explained and Kinds of Minds, Dennett presents a way to understand the human mind. He seeks to clarify what a mind is, what consciousness is, and what mental states like beliefs, desires and thoughts are.

Dennett is perhaps most famous in philosophical circles for his approach to the problem of intentionality. When philosophers say the mind exhibits intentionality, they are referring to the fact that mental states can be about something. When we think, we tend to think about objects in the world, and this thinking leads us to rational action and effective interaction with the world. Dennett suggests that intentionality is not so much an intrinsic feature of agents; rather, it is a way of looking at agents. Dennett calls this way of seeing agents, as intentional beings that act according to their beliefs and desires, taking the intentional stance.

Dennett asks us to consider the various ways we can look at an object with the goal of predicting and understanding what it is going to do. The most accurate, but least practical, is taking the physical stance. For this we would apply the principles of the physical sciences to the object. A more practical approach, especially if the object is an artifact, is to take the design stance. When we do this we assume that the object will behave as it is designed to behave. For instance, we assume that the alarm clock will go off at the right time because it has been designed to do so by its human creator. Finally, there is the intentional stance: here we assume that the object has a mind, that it has goals or desires, and that it will tend to operate in order to realise its goals (according to its understanding of the world, or what could be called its beliefs).

So is intentionality really there, or is it only a useful fiction, according to Dennett? His answer is that in taking the intentional stance one is perceiving a certain complex pattern exhibited by the agent, and this pattern is as real as any pattern. One should not assume, however, that the nature of this pattern is in any way reflected in the internal constitution of the agent. This is the basis of Dennett's criticism of intentional realists (like Jerry Fodor) who hold that intentionality is supported by internal mechanisms that reflect the structure of beliefs and desires.

In Darwin's Dangerous Idea and Kinds of Minds, Dennett has focused on the idea that the intentionality characteristic of humans and other animals is a result of evolutionary processes. As such, the intentional stance is really a special case of the design stance, except that here the object has been 'designed' by evolutionary processes. In this way Dennett hopes to account for the origin of the 'patterns of intentionality' within a framework that is consonant with natural science. This move is controversial, as many theorists believe that natural selection by itself cannot explain all features of an organism, arguing that features are often accidental by-products of evolutionary processes. Hence the present debate over Dennett's theory concerns whether the appeal to natural selection alone can provide a complete account of the intentionality of minds.

In Consciousness Explained, Dennett aims to dispel the myth that there is a central theatre, literal or metaphorical, inside the head where the 'stream of consciousness' is viewed. While he admits that no theorist actually defends this view, it is his belief that a residual allegiance to this way of thinking about the mind instils confusion in many current approaches to the topic of consciousness.

A more plausible candidate, he argues, is the Multiple Drafts Model, which has several aspects. First, there is no one place where consciousness happens. Our mental states are processed in parallel in the brain, and there is no place the signals must reach in order to become conscious. Instead, all the mental activity in the brain is accomplished through parallel processes of elaboration and interpretation of sensory inputs, so information is under continuous editorial revision as it enters the nervous system. There is no canonical stream of consciousness to refer to in deciding what we are actually conscious of, and when we first become conscious of it.
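For readers who think in code, one way to caricature the model (a toy sketch of my own, not Dennett's formulation) is a pool of drafts under constant parallel revision, where "what was experienced" is fixed only when a probe demands an answer:

```python
import random

# A toy caricature of the Multiple Drafts Model -- my illustration,
# not Dennett's own formulation. Interpretive processes keep revising
# a shared pool of drafts; "what was experienced" is only settled
# when a probe forces the system to give an answer.

drafts = []  # each draft: (time_revised, content, confidence)

def revise(t):
    """One round of parallel editorial revision (simulated)."""
    for content in ("a voice said my name", "just background noise"):
        drafts.append((t, content, random.random()))

def probe(t):
    """Asking 'what did you experience?' fixes an answer after the fact."""
    live = [d for d in drafts if d[0] <= t]
    return max(live, key=lambda d: d[2])[1]

for t in range(5):
    revise(t)

# There is no canonical stream: the answer depends on when you probe.
print(probe(2))
print(probe(4))
```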

But if Dennett wants to argue that there is no central control, how is it that it seems to others as though there is one, and seems subjectively as though I am a singular conscious agent? Dennett has at least two metaphors designed to assist here. First, he has the theory that the idea of the self is the product of a 'centre of narrative gravity'. What he means by this is that the brain processes narratives of content in parallel, and it is largely natural language that serves to present the appearance of a unified stream of consciousness, and a unified 'intender'. Second, he has the idea that consciousness is a species of 'mental fame': "Those contents are conscious that persevere, that monopolize resources long enough to achieve certain typical and symptomatic effects - on memory, on the control of behaviour and so forth." (Philosophy and Phenomenological Research, 53, 1993, p. 929)

A possible weak point in Dennett's account is the claim that the phenomenal aspect of our experience is a complex of judgements and dispositions. Many philosophers see the central question of consciousness as explaining the seemingly ineffable subjective quality of our experience, or qualia. Dennett claims that there are no such things as qualia; the quality of conscious experience is a result of micro-judgements made by various parts of our brain. For Dennett there is no reality to the subjective quality of our experience over and above the fact that there seems to be that subjective quality.

~ ~ ~ ~ ~ ~

...and now 2 reviews of Dennett’s book CONSCIOUSNESS EXPLAINED:

What really goes on in there

By George Johnson (November 1991)
George Johnson is an editor of 'The Week in Review' of The New York Times and the author of "In the Palaces of Memory: How We Build the Worlds Inside Our Heads."


CONSCIOUSNESS EXPLAINED
By Daniel C. Dennett.

Wielding his philosophical razor, William of Ockham declared, in the early 14th century, that in slicing the world into categories, thou shalt not multiply entities needlessly. He might have been pleased when, half a millennium later, James Clerk Maxwell helped tidy things up by writing the equations that show magnetism and electricity as perpendicular shadows cast by light beams, radio waves, X-rays and other forms of what we now call electromagnetic radiation. Einstein did Maxwell one better by equating mass with energy. And today the physicists promise us that once we give them their superconducting supercollider, they will take a giant step toward the day when they can unify light with gravity and the two forces at work inside the nuclei of atoms -- showing how everything, even the geometry of space and time, crystallized from the primordial flash of the big bang.

But there would still be a gaping hole in this grandmother of unification theories: an explanation of the minds that are doing the unifying. Since brains are made from the same atoms as everything else, there must be some way to unify mind and matter. The alternative would be to go against the Ockhamite tradition and, like Descartes, admit mind as a separate substance operating outside the laws of physics.

Daniel C. Dennett, the director of the Center for Cognitive Studies at Tufts University, is one of a handful of philosophers who feel this quest is so important that they have become as conversant in psychology, neuroscience and computer science as they are in philosophy. "Consciousness Explained" is his attempt, as audacious as its title, to come up with a scientific explanation for that feeling, sometimes painful, sometimes exhilarating, of being alive and aware, the object of one's own deliberations.

Ever since Emil Du Bois-Reymond demonstrated in 1843 that electricity and not some supernatural life force travels through the nervous system, scientists have tried to explain mental life biologically. It's been a long, slow haul. An important step was taken in the early 1940's when the neurologist-philosopher Warren McCulloch and the teen-age prodigy Walter Pitts showed how webs of neurons exchanging electrical signals could work like little computers, picking out patterns from the confusion buzzing at our senses. Inspired by this metaphor, neuroscientists have been making the case that memories are laid down when the brain forms new connections, linking up patterns of neurons that stand for things in the outside world.

But who, or what, is reading these neurological archives? The self? The ego? The soul? For want of a theory of consciousness, it is easy to fall back on the image of a little person -- a homunculus, the philosophers call it -- who sits in the cranial control room monitoring a console of gauges and pulling the right strings. But then, of course, we're stuck with explaining the inner workings of this engineer-marionette. Does it too have a little creature inside it? If so, we fall into an infinite regress, with homunculi embedded in homunculi like an image ricocheting between mirrors.

The great success of cognitive science has been to point a way out of this fun house. As Mr Dennett explained in an essay in his 1978 book, "Brainstorms," the reason we get the regress is that at each level we are assuming a single homunculus with powers and abilities equal to those of its host. Suppose instead that there are in the brain a horde of very stupid homunculi, each utterly dependent on the others. Make the homunculi stupid enough and it's easy to imagine that each can be replaced by a machine -- a circuit made of neurons. But from the collective behavior of all these neurological devices, consciousness emerges -- a qualitative leap no more magical than the one that occurs when wetness arises from the jostling of hydrogen and oxygen atoms.

The information processing carried out by the homuncular hordes need not be a particularly orderly affair. In the late 1950's a computer scientist from the Massachusetts Institute of Technology named Oliver Selfridge unveiled a model called Pandemonium, in which homunculi -- he called them demons -- shouted at one another like delegates in a very democratic parliament, until they reached a consensus on what was going on outside the cranial chamber. In a more recent theory, called the Society of Mind, Selfridge's colleagues Marvin Minsky and Seymour Papert call these homunculi agents. The psychologist Robert Ornstein calls them simpletons, perhaps the most appropriate name of all.
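Selfridge's architecture is simple enough to sketch in a few lines. The following is a toy illustration of my own, not his program: each demon shouts with some loudness, and a decision demon simply picks the loudest shout.

```python
# A minimal Pandemonium-style sketch (my own illustration, not
# Selfridge's program): each "demon" inspects the input and shouts
# with some loudness; a decision demon picks the loudest shout.

def edge_demon(image):   return ("edge", image.count("|") / len(image))
def curve_demon(image):  return ("curve", image.count(")") / len(image))
def corner_demon(image): return ("corner", image.count("L") / len(image))

def decide(image):
    shouts = [demon(image) for demon in (edge_demon, curve_demon, corner_demon)]
    label, loudness = max(shouts, key=lambda s: s[1])
    return label

print(decide("||||))L"))  # 'edge' -- the edge demon shouted loudest
```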

Some homunculi might be dedicated to such basic tasks as detecting horizontal and vertical lines, or identifying phonemes. Their reports would be monitored by other homunculi (shape recognizers, word recognizers) that are monitored by still other homunculi. Suppose you are watching a play. Tripped by reports from various line and shape detectors, the homunculus that recognizes bilateral symmetry might fire, and its signals (along with those of other homunculi) would activate the person detector. There is someone on stage.

But before that final flash, other parts of the brain might be entertaining rival hypotheses -- what Mr Dennett calls multiple drafts. Spinning tops and pine trees can also appear bilaterally symmetrical. But the minority committees of homunculi considering these interpretations would be contradicted by reports from various motion detectors (trees don't move, people don't spin) and finally by the sighting of moving columns generally agreed by yet other homunculi to be arms and legs.
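That walkthrough maps naturally onto a few lines of code. This is my own toy sketch, not anything from the book: the symmetry detector leaves several rival drafts alive, and motion evidence contradicts all but one.

```python
# The play-watching walkthrough above, as a toy sketch of my own:
# several rival drafts survive the symmetry detector, and motion
# evidence then contradicts all but one of them.

hypotheses = {"person", "spinning top", "pine tree"}  # all bilaterally symmetric

evidence = {"moves_across_stage": True,   # trees don't move
            "spins_in_place": False}      # people don't spin

if evidence["moves_across_stage"]:
    hypotheses.discard("pine tree")
if not evidence["spins_in_place"]:
    hypotheses.discard("spinning top")

print(hypotheses)  # {'person'} -- the draft that survives the debate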

Considering all this hubbub, maybe it's a blessing that we are not more conscious than we are. Usually it is only the winning interpretations that we become aware of. But occasionally we get to eavesdrop on the behind-the-scenes debate. Sometimes in winter, I glance out the back window of my apartment in Brooklyn and am startled to see an old Indian woman in a shawl, like a figure from an R. C. Gorman painting, standing on the terrace of the building behind mine, huddled against the wind. It takes a second longer before a rival, more convoluted interpretation emerges: the shape is really a tree wrapped in burlap to protect it until spring. Sometimes, driving fast with the window down, you might find your word detectors, fed by your phoneme detectors, misfiring, picking voices out of the wind.

But what exactly is happening when these subliminal judgments shove their way into consciousness? As Mr Dennett explains, if the result of all the homuncular discussion is that a winning interpretation is presented for appreciation by some central self, then we have solved nothing. We're back to the image of an intelligent, fully conscious homunculus sitting in a control room, which Mr Dennett calls the Cartesian Theater.

His way out of this mess is to propose what he calls a Joycean machine, a kind of mental operating system (like the computer programs Windows or MS-DOS) that acts as a controller, filtering the cacophony of inner voices into a silent narrative -- a stream of consciousness. To avoid the problem of infinite regress, he hypothesizes that this master controller is not a fully cognizant marionette but a "virtual machine," created on the fly from temporary coalitions of stupid homunculi. It is because of this mental software, he proposes, that we can not only think but reflect on our own thinking, as we engage in the step-by-step deliberations that occupy us when we are most aware of the plodding of our minds.
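Programmers will recognize the "virtual machine" move: a serial process simulated on top of parallel hardware. Here is a rough analogy of my own, not Mr Dennett's specification, in which concurrent homunculi shout into a queue and a single loop turns the clamor into a one-thing-at-a-time narrative.

```python
import queue
import random
import threading
import time

# The "Joycean machine" pictured as software: parallel homunculi shout
# into a queue, and a single serial loop turns the clamor into a
# one-thing-at-a-time narrative. A rough analogy of my own, not
# anything specified in the book.

voices = queue.Queue()

def homunculus(name, message):
    """A concurrent process that eventually shouts into the stream."""
    time.sleep(random.random() / 100)
    voices.put(f"{name}: {message}")

threads = [threading.Thread(target=homunculus, args=pair) for pair in
           [("hunger", "eat something"),
            ("planner", "finish the chapter"),
            ("memory", "you said that yesterday")]]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The serial narrative: one voice at a time, in whatever order won out.
while not voices.empty():
    print(voices.get())
```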

For someone who is encountering this kind of theory for the first time, that is probably not a very convincing summary. But Mr Dennett's argument is not easily compressible. At a time when so many nonfiction books are just horribly long magazine articles, he makes use of just about every one of his 500 pages. As he readily concedes, it is practically impossible -- for him or anyone else -- to keep from lapsing into a deeply grooved mental habit: thinking that there is some kind of ego inside us, peering out through the ocular peepholes. To break us of these assumptions, he makes his argument cumulatively, using thought experiments and anecdotes to build up his case piece by piece. For 50 pages or so, he attacks his subject from one angle, until we start to get a glimmer of what he means. Then he retreats and attacks from another angle.

Consider, for example, his story of Shakey, a robot invented in the late 1960's by Nils Nilsson and his colleagues at Stanford Research Institute, a scientific think tank in Menlo Park, Calif. Shakey is a box with motorized wheels and a television camera for eyes. Conceived in the dark ages of electronic miniaturization, Shakey had a brain that was too big to keep on board, so the robot used a radio transmitter to communicate with a central computer. Human operators would type commands on a keyboard, like "Push the box off the platform." Shakey would dutifully explore the room until it found the box. Then it would push a ramp up to the platform, roll up on top and shove the box onto the floor. The robot was able to navigate because its software was designed to recognize the signature that boxes, pyramids and other objects left on the electronic retina of its video eye. As an object came into sight, the computer would measure differences in illumination, detecting an edge here, a corner there. Referring to rules about how different objects look from different vantage points, it might decide whether it was seeing, say, the slope of a pyramid or the incline of a ramp.

Mr Nilsson would watch these cogitations on a video monitor, as Shakey confronted a big dark blur, tracing its edges with bold white lines and finally declaring it a box. But (this is the punch line) there was no master homunculus inside Shakey watching a television screen. The monitor was purely for the benefit of the human observers; when it was unplugged, the robot worked just fine. One would look in vain for fleeting images of boxes and pyramids reverberating inside Shakey's circuitry. The robot's brain was just processing signals, the ones and zeros of binary code. It had no need of a Cartesian Theater. But it acted as though it had one.
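The kind of processing Johnson describes, measuring differences in illumination to find an edge, is easy to show in miniature. This toy sketch (mine, not the actual SRI software) marks an edge wherever brightness jumps sharply along a scan line:

```python
# Edge-finding in miniature (a toy sketch, not Shakey's actual code):
# mark an "edge" wherever the measured illumination jumps sharply
# between neighboring points on one scan line.

scan_line = [10, 11, 10, 12, 90, 92, 91, 90, 11, 10]  # dark, bright box, dark

THRESHOLD = 40
edges = [i for i in range(1, len(scan_line))
         if abs(scan_line[i] - scan_line[i - 1]) > THRESHOLD]

print(edges)  # [4, 8] -- the two sides of the box, found with no inner viewer
```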

Now let us retreat and approach the problem from a different perspective, an evolutionary one.

In the beginning, the dividing line between self and other was no more than a membrane of lipids, sugars and proteins separating the inside of a cell from the outside world. But little by little pseudopods and flagella, the unicellular precursors of arms and legs, evolved to help organisms embrace the edible and avoid being eaten. As multicellular creatures evolved, Mr Dennett explains, they developed more complex survival mechanisms: duck when confronted with a looming object (it might be a buzzard or a rock); pay attention to vertical symmetry (it might be another creature looking at you, in which case you could draw on detectors that distinguish between predator, prey and potential mate). Mr Dennett speculates that these survival mechanisms are the precursors of the mental homunculi. Over eons, animals acquired an evolutionary grab bag of these self-perpetuating tricks, which allowed them not only to monitor the environment passively but to explore, hungering for the information that increased their odds of survival as surely as a good piece of meat.

At first many of the neural devices were discrete, Mr Dennett speculates, unconnected to one another. But slowly they began to develop communication lines. Imagine the first primitive people, just dimly conscious, learning to use language to milk their fellow humans for information: "Is there food in that cave or a jaguar?" Then one day, someone might have asked for information when there was no one else around: "Now let me see, where was it that I left that chisel?" And, lo and behold, another part of his brain answered. A loop was closed in which the vocal cords, the vibration of the air and the eardrums were used as a pathway to connect one part of the brain with another. A virtual wire was formed. Eventually this signaling became silent -- the voices were in the head.

To make sense of all the mental racket -- the shouting of the homunculi -- the Joycean machine developed and assumed the task of deciding what to think about next. As Mr Dennett sees it, this is a good part of what consciousness is. Riding on top of the neural machinery -- the hardware of the brain -- is a program that simulates a serial computer, creating a step-by-step narrative from the tumult unfolding in the world and in the head.

The Joycean software is not inborn like, for example, the looming-object detector. It is an accretion of learned behaviors, habits of mind, developed for recruiting teams of homunculi to deal with the long deliberative processes that the brain's wetware alone is not well equipped to handle -- planning a trip to Europe, dividing up a restaurant check, reliving an embarrassing encounter and deciding what you should have said.

If at this point you're still not quite in the swing of Mr Dennett's theory, you can be sure he will keep retreating and attacking and retreating and attacking, circling in on his prey.

At first I was a little disappointed when I realized that what I was reading was not so much a brand-new theory of consciousness as a synthesis and sharpening of ideas that have been around awhile -- Mr Minsky and Mr Papert's Society of Mind model, Julian Jaynes's theory of inner voices described in "The Origin of Consciousness in the Breakdown of the Bicameral Mind." But in illuminating these ideas and relentlessly putting them to the test, Mr Dennett's exposition is nothing short of brilliant, the best example I've seen of a science book aimed at both professionals and general readers.

Scientists who look down on colleagues who write popular accounts can rest assured that Mr Dennett is never less than methodical, thorough, fair in his attributions -- you can almost feel those other philosophers and scientists reading over his shoulder as he types. It's a wonder, then, that he managed to write a book that is also so clear and funny, with introspective flights of fancy worthy of Nicholson Baker. How do you know, Mr Dennett muses, that everyone in the world but you isn't a zombie? Or that you are not just a brain in a vat, hooked up to a simulation you think is life? It has been a long time since I have felt so engaged by a book.

For all its clarity and style, "Consciousness Explained" is not easy reading. Mr Dennett probably should have put his methodology (something called heterophenomenology) in one of his appendices (he has one for scientists and one for philosophers). And parts of his argument will be difficult for people who haven't read some of the popular accounts of artificial intelligence and cognitive science. But this book is so good that it's worth studying up for.

In his best seller, "The Emperor's New Mind," the Oxford mathematician Roger Penrose dismissed in a few pages the possibility that consciousness can be explained by thinking of the brain as a kind of computer. If there is any justice, everyone who bought a copy of Mr Penrose's far more difficult book will buy a copy of Mr Dennett's and marvel at how, in the hands of a master explicator, the richness and power of the computer metaphor of the mind comes shining through.

--George Johnson

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

From JoT


Dennett's Consciousness Explained
by Daniel C. Dennett (1991)

Dennett is a philosopher who takes hard science very seriously, especially cognitive psychology. This book changed my concept of the human being. Dennett reveals the flaws of the "normal" view of human consciousness (that there's a little person in your head watching your body's input on a big screen) and reveals how persistent this metaphor is even among those who ought to know better. If you think you don't believe in the little person sitting in front of the screen, you might want to read this book to be sure that this metaphor doesn't still affect your model of consciousness. It affected mine. Dennett provides a new hypothesis of human consciousness, basically that consciousness is a set of actions in the brain, actions that are not strictly ordered in serial time. What I took away from Consciousness Explained is:

Rapid Access

You're aware of a lot less at any one time than you think you are. The body is set up to gather data quickly when you need it, so quickly that you usually think you already knew it. Instead of the brain having an "internal movie screen" with your visual field displayed on it, it's simply capable of finding out anything in your visual field at a moment's notice, faster than thought.
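Programmers may recognize this as lazy evaluation: nothing is stored in rendered form, it is computed on demand, fast enough to feel as if it had been there all along. An analogy of my own, not anything from the book:

```python
# "Rapid access" as lazy evaluation: no rendered internal screen,
# just fast lookup on demand. My analogy, not the book's.

class VisualField:
    def __init__(self, scene):
        self._scene = scene  # the world itself, not an inner copy

    def query(self, thing):
        # Nothing was precomputed; attending triggers a fresh lookup,
        # fast enough to feel as if you had known it all along.
        return self._scene.get(thing, "not currently in view")

scene = {"mug": "blue", "window": "rain-streaked"}
field = VisualField(scene)
print(field.query("mug"))      # looked up the instant you attend to it
print(field.query("ceiling"))  # and the gaps go unnoticed until probed
```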

Unconscious Subsystems

A conscious system ought to be made up of simpler subsystems that themselves are unconscious (in the same way that a living being is made up of subsystems that, in themselves, are not alive). The concept of a consciousness separate from and in addition to these unconscious subsystems is extraneous, like the concept of a life spark separate from and in addition to the not-really-alive subsystems in a body.

Gappy Consciousness

The "stream of consciousness" that looks to me like a smooth-running, 3-D, feel-o-rama movie is more like a non-linear series of impressions, with the blank spots unnoticed because they're blank.

Timing

The first time I read Consciousness Explained, I couldn't follow the material on the timing of mental representations of time. The second reading, I think I got it, and it's yet another shock to the commonplace theory of self.

Even if your brain registers content A after content B, if it concludes that A happened before B, then you perceive A as happening first. This is a big idea. You don't first experience content B, then experience content A, and then judge that A happened first. Instead, content B takes place, then content A takes place, and content A seems to have happened first. This out-of-sequence timing occurs only for very fleeting events; the brain is fast enough to get slower events in chronological order.

It should be obvious that the order in which your brain registers content and the order in which that content seems to happen in the outside world need not coincide. An apple seems to be red even though there's no red in your brain corresponding to the apple's color. Likewise, content A seems to come first even though it took shape in the brain after content B.
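In code, the distinction is just two different sort keys. A toy illustration of my own:

```python
# The timing point as two different sort keys -- my toy illustration:
# the order in which contents are registered in the brain, versus the
# world-time the brain concludes they occupied.

events = [
    {"content": "B", "registered_at": 1, "judged_world_time": 2},
    {"content": "A", "registered_at": 3, "judged_world_time": 1},  # registered later
]

arrival_order = [e["content"]
                 for e in sorted(events, key=lambda e: e["registered_at"])]
perceived_order = [e["content"]
                   for e in sorted(events, key=lambda e: e["judged_world_time"])]

print(arrival_order)    # ['B', 'A'] -- order of registration in the brain
print(perceived_order)  # ['A', 'B'] -- the order you actually experience
```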

You don't experience moments one after the other like someone sitting in a movie theater watching their life on the screen. Your sense of time is a sense like your others.

Outstanding Issue: Qualia

What I'd really like is for someone to read this book and tell me what "red" is. Dennett demonstrates to his satisfaction that there's no real sense to the question "Why is red red and not green?" but I can't follow him there. Even granting his point that the difference between "red" and "green" is practically impossible to pin down, what about the difference between "red" and "sweet"? Likewise, he points out that one person's experience of the taste of beer is different from another's, and one person's experience of the taste even changes over time. But we experience color as color and not as smell. It might be nonsense to ask whether two people are experiencing the same flavor, but it's not nonsense to ask whether two people are both experiencing color or taste, is it?

My guess is that Dennett's describing consciousness is like Darwin's describing evolution when people talked about "blood" instead of genes: the theory is right in general, but there's a missing concept without which it isn't fully explicable. Dennett's hypothesis seems right, but I can't help feeling that there's a leap he's making that won't be a leap any more, once some new scientific discoveries show us the bridge across the conceptual chasm.

Outstanding Issue: Immortality

One thing I don't get about Consciousness Explained is Dennett's conclusion that his theory of the self allows for meaningful immortality. He describes the self as an abstraction, a "Center of Narrative Gravity." He says that its content is ideas, and so it can be represented in different media and still be basically the same thing, as the story of Romeo and Juliet can be basically the same whether it's a play or a movie. Thus one could, theoretically, replicate one's self on a computer (a superadvanced, futuristic computer, anyway).

I don't see that as meaningful immortality. It wouldn't do me any good to have a computer replica of me enjoying life on the other side of the world. Nor would it do me any good to have a computer replica of me enjoying life in the future. Sure, in either case the computer replica would be "me" as near as anyone could tell, but it wouldn't be me being me. (It's Jehovah's clones all over again.) [Actually, no. If your computer replica had your consciousness, you wouldn't feel the difference. ~K]

What's the difference between me and a robot just like me? you might ask.
None. Physical immortality wouldn't do me any good, either.

Tangent: The Matrix

Dennett's opening thought experiments implicitly trash the idea that columns of green text running up a screen could be the code for a virtual reality, "the matrix." Basically, there's way too much information coming in through the senses to be encoded through such a narrow channel.
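The point can be made roughly quantitative with some loose figures of my own (none of them from the book):

```python
# A back-of-the-envelope version of the bandwidth point, with loose
# figures assumed by me (not taken from the book).

# Scrolling green text: suppose ~1,000 new characters arrive per second.
new_chars_per_sec, bits_per_char = 1_000, 7
text_channel = new_chars_per_sec * bits_per_char  # ~7e3 bits/s

# One optic nerve alone: ~1e6 fibers at roughly 10 bits/s each.
optic_nerve = 1_000_000 * 10                      # ~1e7 bits/s

print(f"text channel: {text_channel:.1e} bits/s")
print(f"optic nerve : {optic_nerve:.1e} bits/s")
print(f"shortfall   : about {optic_nerve // text_channel}x")  # ~1,400x
```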

—JoT
(June 2001, April 2003)

