15 November 2006

my mind on my mind

Not long ago, Q introduced me to the Zuboff-Unger Brain Explosion, a thought-experiment that seems to indicate that there is a sliding scale of consciousness (as opposed to definitive 1-or-0-like states of either consciousness or not-consciousness). Between Mr. W's fantastic 11th-grade philosophy class, the cognitive neurolinguistics lab where I worked for a pittance as a college freshman, and engaging conversations with JH, Zq, and SS (among others), I've by now seen my fair share of interesting thought-experiments trying to tease out something tangible about consciousness. Here are a few of the best:

1. Hofstadter's Anthill, Putnam's Bees: Consider an ant colony. Each of the ants has a specific role: there are food-gatherers and soldiers, breeders and the queen. Ants are themselves unintelligent, instinct-driven creatures that act predictably. (Indeed, we can model the behavior of a food-gathering ant almost perfectly with the following two rules: 1. if you run into something edible and you aren't holding anything, pick it up, and, 2. if you run into something edible and you are holding something, drop it. These very simple rules create ants who wander, find food, bring it back, and pile it up. Ant behavior is malleable in predictable ways, too: spread scent, and an ant will follow it.) We can feel reasonably confident that no ant consciously directs the running of an ant colony in the way that a king or CEO might consciously direct the complex interactions of a community of people. Nonetheless, a full ant colony seems to be an adaptive system that can defend itself, move itself, rebuild when injured, and store resources for the future. We might say that when a child stomps on an anthill, many individual ants die, but the colony itself "heals" as the anthill is rebuilt and new ants take on the functions of the old ones. Roughly speaking, we might even conclude that an individual ant is to the ant colony what an individual skin cell is to a conscious person: a useful but expendable part of the whole, rather than a significant thing in itself. An ant colony may thus be seen as a system rather than as a collection of individuals.
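Those two rules are simple enough to simulate. Here is a minimal sketch in Python (the ring of cells, the ant count, and the step count are arbitrary choices of mine, not anything from the ant literature): ants wandering at random and following only the pick-up/drop rules gradually gather scattered crumbs into fewer, larger piles, with no ant intending any such thing.

```python
import random

# A minimal sketch of the two foraging rules above, on a ring of cells.
# Sizes and step counts are arbitrary, purely for illustration.
random.seed(0)
CELLS, ANTS, STEPS = 100, 10, 200_000

food = [1 if random.random() < 0.3 else 0 for _ in range(CELLS)]   # scattered crumbs
ants = [{"pos": random.randrange(CELLS), "carrying": False} for _ in range(ANTS)]
print(sum(1 for f in food if f > 0), "occupied cells at the start")

for _ in range(STEPS):
    for ant in ants:
        ant["pos"] = (ant["pos"] + random.choice((-1, 1))) % CELLS  # wander
        here = ant["pos"]
        if food[here] > 0 and not ant["carrying"]:    # rule 1: pick it up
            food[here] -= 1
            ant["carrying"] = True
        elif food[here] > 0 and ant["carrying"]:      # rule 2: drop it
            food[here] += 1
            ant["carrying"] = False

print(sum(1 for f in food if f > 0), "occupied cells at the end (fewer, bigger piles)")
```

Crumbs only ever get dropped onto cells that already hold food, so the number of occupied cells can only shrink while the piles on them grow: piling-up behavior that no single ant intends.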

Hilary Putnam (or possibly Barnett summarizing Putnam--I can't remember where I read it) takes this a step further with his swarm-of-bees idea. Imagine a swarm of bees that is organized into the shape of a giant human being. The bees perform the same functions as all of our own parts and systems: one group moves together like the heart, another group relays information internally just as neurons do, etc. Nonetheless, we don't call the swarm conscious. To take Putnam's example again, if we shoot the swarm, we aren't worried about the pain that the swarm will feel, though we might worry about the pain of individual bees (or at least we might worry about the morality of killing a bunch of bees, while we aren't worried at all about the morality of hurting the swarm, which we don't take to be any sort of morally relevant agent). Conclusion drawn from all this: organized systems can look surprisingly like conscious beings--but we nonetheless have the strong intuition that organization is not itself sufficient for consciousness.

2. Searle's Chinese Room: Consider a person who sits in a room filled with books of Chinese characters. Let's call him Fred. Sinophones come to Fred with questions written in Chinese, and they pass them to Fred through a window in the wall. Fred's job is to take the pieces of paper that come through the window, follow a set of English-language instructions about how to use the books behind him, and write down whatever characters those books tell him to. Then he passes each piece of paper back out of the window with his drawings on it.

Fred doesn't speak Chinese, but we can imagine that the Chinese-speaker on the other side of the window might think that he does. After all, if a person asked the sum of two and two in Chinese, and Fred's process led him to write the symbol for four, it sure would appear that he knew what he was being asked. If we gave him really good books, Fred could probably answer really complex questions. If we made Fred a computer, and made the books his database, and made the Sinophones ourselves, it would follow that, given enough speed of looking-up, we couldn't tell the difference between a computer that understood us consciously and a computer that was simply a dumb machine like poor old Fred in the Chinese room. Conclusion: the appearance of conscious thought is not itself sufficient to prove that conscious thought is actually present (and, moreover, we can't ever know whether the thing on the other side of the window is conscious or not). Unlikely skeptical spin: Well, Fred doesn't understand Chinese, but Fred-plus-the-room-and-all-its-books-and-instructions does understand the language. So we can say that the whole room, with Fred as just a predictable cog in it, understands what it is doing in a conscious way.
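For what it's worth, the mechanical character of Fred's job is easy to caricature in code. This is only a toy sketch with a rule book I invented (Searle imagines instruction books rich enough to handle any question): a lookup sends back sensible-looking Chinese, and nothing in the process understands a word of it.

```python
# A toy caricature of Fred's rule-following. The entries below are invented
# examples; slips of paper come in, a lookup sends characters back out, and
# nothing here understands Chinese.
RULE_BOOK = {
    "二加二等於多少？": "四",          # "What is two plus two?" -> "Four."
    "你好嗎？": "我很好，謝謝。",        # "How are you?" -> "I'm fine, thanks."
}

def fred(slip_of_paper: str) -> str:
    """Follow the instructions mechanically and pass the answer back out."""
    return RULE_BOOK.get(slip_of_paper, "請再說一遍。")  # "Please say that again."

print(fred("二加二等於多少？"))  # prints 四, and Fred is none the wiser
```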

3. Block's Chinese Nation: I don't know what's with philosophers of mind and the Chinese, but... imagine a person who doesn't have a brain. Every time a neuron senses something (feels, smells, tastes, etc.), the electrical impulse is conveyed not to a cranial neuron but instead via satellite to a guy in China. He looks up and sees a big sign that shows a symbol on it, and he knows that, when he gets the combination of that symbol and the incoming call, he should send an outgoing call back to a motor neuron. In this way, when I stub my toe, the sensory neurons in my toe relay their normal electrical impulses to the particular people in China to whom they are connected by satellite, and those people press the buttons that collectively send the impulses that cause my motor neurons to jerk my foot back here in New York. If we have 100 billion neurons in the brain, then we'd need 100 billion people for this to work, of course (along with a big sign, which is just to allow for some parallel to "brain states"--so when I am excited and my brain is full of pain-inhibiting adrenaline, I might have a different reaction than when I am in a more subdued state). But pretend there were 100 billion people in China; if we had them instead of neurons, would they collectively make up my brain? Intuition, of course, says no. Conclusion: a functionally brain-like arrangement of conscious people doesn't itself make up another conscious brain. Corollary that some folks have thrown in there: we just defined a mind by saying that it interacts with the body in such a way as to cause physical reactions via neurons. But we can't really say that the Chinese nation constitutes one mind (my mind) while individual Chinese people continue to have their own minds, because then you end up with two different minds, ostensibly working off of two completely separate sets of sensory input, both affecting one Chinese person's physical actions. (Not sure why that's impossible, but so goes the claim.)

4. The Zuboff-Unger Brain Explosion: Consider a disembodied brain sitting in a vat of whatever nutrients disembodied brains need to survive. Now cut it in half. Any neurons that originally had connections to neurons in the other half are fitted with transceivers that send and receive electrical impulses to and from the neurons to which they were connected before (using electromagnetic waves, say, so as to be able to do this at the speed of light). Now the brain still works as usual, but we can move the two halves really far away from each other. Well, okay, so cut each of those in half and do the same thing. And halve those, and so on, until you have the brain's individual neurons sitting in nutrient vats spread out across the globe, all sending and receiving electrical impulses via transceiver. Is this still a brain? Is that extended network conscious as a normal brain is conscious? Zuboff and Unger suggest that it isn't. Conclusion: proximity matters. But this leads to the further conclusion that there are degrees of consciousness. When the brain is all put together, it's a brain. When it's all spread out, it isn't one. When it's only a little bit cut up, or if all the neurons are separated but really really close, then maybe it's mostly a brain. More spread out, less brainy. Less spread out, more brainy. Gee, that's weird.

My reaction to all of these is roughly similar, and it is roughly the following: why can't consciousness be big? It seems to me that self-awareness is an emergent property of sufficiently complex systems that hold a sufficiently large amount of information in them. I mean, the thing that makes Putnam, Searle, and Block skeptical of the consciousness of their constructs (Hofstadter, for his part, is more sympathetic to his anthill) is that there is no identifiable place for knowledge to inhere. If you say that 3 billion bees together are conscious, or that a room and books and a machine-like man together are conscious, we want to say, "yeah, but it's absurd for a room to be conscious" or "lots of bees interacting don't have any more conscious knowledge of what they're doing than lots of bees NOT interacting." But by these arguments, we could as well say, "yeah, but it's absurd for a single neuron to be conscious" and "lots of neurons interacting shouldn't make anything more conscious than individual neurons NOT interacting." Clearly, the first is true but doesn't tell us a darned thing about the consciousness of a lot of neurons, while the second is just false.

The thing is, lots of interacting parts together really are more than the sum of the parts separately. There's an awful lot of information stored in the state of their connections to one another, and that information is worth far more than the information and substance stored in their individual being.

An illustrative example: when talking about artificial neural networks, we consider nodes and weights. A model of natural language might include the nodes "cat," "dog," "a," "ran," "bit," "the," "brother," "my," and "spotted." Pretend that Geraldine repeatedly feeds the model the following sentences: "The dog bit the cat;" "The dog bit my brother;" "The spotted dog ran;" "The cat bit the dog;" and "My brother bit a cat." Every time the system sees that two words appear near each other, it adjusts the weights between those nodes to be a bit stronger relative to the other weights between nodes. At the end of it all, you'd see that the network had learned stronger and weaker weights for the interconnections between relevant nodes. "The" and "dog" would be very closely linked, as would "the," "dog," and "bit." "Cat" and "dog" would also be strongly linked as an effect of how other word linkages would readjust the weights across the system. "My" and "the" would be slightly connected, while "spotted" and "cat" would be quite unrelated. If Geraldine looked at the network and saw just the nodes, she'd know something about what this network "knows" and "doesn't know"--but she wouldn't know that the system understands that "My dog ran" is a better sentence than "Spotted ran my the." (Amusingly, this particular system would be absolutely fine with "My spotted brother ran," however.) Trying to look just at the specific nodes, rather than the relationships between them, dooms the observer. (I should note here that I find all this quite compelling, and I have scientific studies to back up the idea that simple grammatical systems like this can work well--but Chomsky and others still want to locate grammar somewhere, and theirs is a creditable alternative view. They essentially want to say that there also exists a node that encodes the rules of grammar in detail, and with it we no longer need to know the weights, because now we know the rules of word-combination being used. I confess, I think this is naive and stupid. It is awfully like the internet, though: all the information on the internet is contained in individual servers; the connections themselves are not very dynamic but rather simply allow the nodes to share their information. There's not much further information stored in the state of that network.)
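To make the toy model concrete, here is a minimal sketch in Python of the co-occurrence idea (the window size and the averaging score are my own simple choices, and this only captures direct co-occurrence counts, not the relative re-weighting or the indirect cat/dog linkage imagined above):

```python
from collections import defaultdict

# A tiny co-occurrence network over Geraldine's five sentences: weights grow
# whenever two words appear within WINDOW positions of each other.
WINDOW = 2
sentences = [
    "the dog bit the cat",
    "the dog bit my brother",
    "the spotted dog ran",
    "the cat bit the dog",
    "my brother bit a cat",
]

weights = defaultdict(float)   # symmetric weights keyed by unordered word pairs

for sentence in sentences:
    words = sentence.split()
    for i, w1 in enumerate(words):
        for j in range(i + 1, min(i + WINDOW + 1, len(words))):
            if w1 != words[j]:
                weights[frozenset((w1, words[j]))] += 1.0

def strength(w1, w2):
    return weights[frozenset((w1, w2))]

def plausibility(sentence):
    """Average weight over adjacent word pairs: a crude grammaticality score."""
    words = sentence.split()
    pairs = list(zip(words, words[1:]))
    return sum(strength(a, b) for a, b in pairs) / len(pairs)

print(strength("the", "dog"), strength("spotted", "cat"))              # strongly linked vs. never linked
print(plausibility("my dog ran"), plausibility("spotted ran my the"))  # the first scores higher
```

All of the scoring information lives in the weights; the node list on its own ("cat," "dog," "ran," and so on) tells you none of it.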

Back to consciousness. In the example above, we can see how most of the relevant grammatical information is contained not in the nodes, but in the connections between them. Similarly, I suggest, a bunch of ants reacting simply and predictably to the pheromones left by fellow ants contains a lot more than just a bunch of ants. That colony of ants also contains an enormous amount of information about how the ants are related to one another and how the actions of any one of them affect and will affect the actions of the others. This complex, even intractable, set of interconnections is, to me, precisely the first building block of consciousness. Consciousness, I'd argue, isn't located in any part of the brain (and I suspect most of the above philosophers would agree with that statement, though I think doing so would be inconsistent of them); it's located in variable and varying electrical connections between neurons. Enough neurons--indeed, enough ants--and there's a mind-boggling amount of information being stored in those connections (and that, I propose to you, is where and what memory is). Consciousness is just a side effect, an emergent property of any similarly complex system (of which there are, I think, not very many of a size that we can comprehend). If the necessary billions of constantly connected, constantly re-weighted parts are spread out around the universe (a la Zuboff and Unger), I don't see why it should be counterintuitive for that system to be conscious, too. And if the ants and the bees and the guy in the room all seem intuitively unconscious, maybe that's because we aren't thinking about them on the grand scale that is required. Once we start talking about 100 billion ants all interacting together (a truly inconceivable exercise), we'll be looking at something of similar complexity to the brain. Until then, we can all agree that a bunch of ants in an anthill don't form any conscious community.

As for Block, there's no reason that conscious beings can't themselves make up some other consciousness. I just don't get it. Frankly, it seems to me that the way biology works is roughly that less complicated things join together to make more complicated things. Cells work together to make organs. Organs work together to make human bodies. Human bodies work together to make... superbodies, or something. I mean, why is that weird and unusual? Evolution starts with the simple and builds up to the complex.

More than that, though, I just don't understand the whole deal with consciousness at one level precluding the possibility of another, similar consciousness on a much greater scale. Can someone explain it to me?

Finally, I suppose I should point out that this interaction-based model of consciousness that I propose has neither an upper bound on size nor a lower bound on speed. At some point, there just aren't enough interacting parts to allow for the combinatorially vast number of connections that something like 10^11 parts makes possible--there is a minimum degree of complexity required. But, within the constraints of a very large universe, there's no maximum size that a conscious system could be--the elements of that system could be atoms, or neurons, or ants, or people, or even planets. Additionally, we are used to thinking of intelligence on our own time scale (and for good reason, since we live by it). I see no reason why consciousness could not be a much more lumbering thing than our own experience shows it to be, though. Our neurons take only a few milliseconds to fire, and action potentials carry electrical signals across our cells at a speed of anywhere from 10 to 100 meters per second. It seems at least conceivable to me that a system that was 1,000 times bigger and 1,000 times slower would still have as much information and as much consciousness as we do, but we might not recognize it because we simply cannot anthropomorphize things like that.
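A quick back-of-the-envelope calculation makes the time-scale point vivid. The 0.15-meter figure for the width of a brain is my own round number; the conduction speed is taken from the slow end of the range above.

```python
# Rough signal-crossing time: time = distance / conduction speed.
# The 0.15 m "brain width" is an assumed round number; 10 m/s is the slow end
# of the 10-100 m/s range quoted above.
cases = [
    ("human brain", 0.15, 10.0),
    ("1,000x bigger and 1,000x slower", 0.15 * 1000, 10.0 / 1000),
]
for label, size_m, speed_m_per_s in cases:
    print(f"{label}: ~{size_m / speed_m_per_s:g} s for a signal to cross")
# ~0.015 s versus ~15,000 s (a bit over four hours): the same wiring, just lumbering.
```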

Anyway. A long post. Thanks for bearing with me. Let's talk artificial neural nets sometime, shall we?

3 Comments:

At 3:59 AM, Blogger blackcrag said...

"Cells work together to make organs. Organs work together to make human bodies. Human bodies work together to make..." societies. Cultures. Nations.

That's about all I have to add to this post.

However, in contrast to the ants/bees examples, the more human beings you gather together, the less organized (and therefore less conscious) the result looks from the outside.

I agree there is nothing limiting either size or speed of consciousness.

The one thing I come to, looking at your argument, is plant life. We recognise plants as being alive, but not conscious. (I am assuming here that 'conscious' denotes some form or level of intelligence that is enough to conceive of and perform a deliberate act, such as hunting and killing prey, or running from said hunter if you are prey.)

Could not trees be every bit as conscious as a dog, except, as their life spans are so long, we don't recognise anything of theirs as thought?

Somehow I don't believe my own conjecture, which shows there may be another prerequisite to intelligence--mobility. It might be the desire or necessity of moving from A to B, of finding grasslands to graze or territory to hunt in, that jump-starts the need for consciousness.

The other thought the first paragraph of this comment sparks is "God". Not any particular god, as I don't believe any religion is even close to being right about anything. But it is said we have the spark of divinity inside us. What if that is true? Then we, collectively, are a Supreme Being. Every individual alive is but one neuron in a Supreme Being's consciousness. And, as we are a part of that consciousness, we cannot perceive the actions of that consciousness, any more than our cells are aware of our actions in the greater world.

I'm basing that on a quantum mechanics idea: you can either pinpoint where a certain atom is in space and time, or you can see its direction, but you can't do both.

Further thoughts on plants-have-consciousness... perhaps I wasn't thinking big enough. Maybe the planet is conscious on a scale we can't comprehend, and plants are just an organ in the planet's system. The tides and currents are analogous to our circulatory system (we even share the same salinity); maybe the plants are the lungs of the planet. After all, they breathe too.

Sorry for the extra long comment. Other than those two thoughts, I fear I'm not smart enough for this debate. I'm not sure I even understand all that you posted, Skay.

 
At 11:52 AM, Blogger Skay said...

Hey now Crag, you seem pretty smart to me. And I don't think you need to apologize for the extra-long post after that whirlwind novel of a blog entry.

As for your idea about mobility, I think it really might hold water. There's a line of argument in cognitive science that runs, essentially, as follows:

1. All abstract thoughts come from the ability to generalize from concrete examples (so our concept of "one" comes from the ability to pick out what's the same about 1 apple, 1 car, 1 finger, 1 phone call, and 1 pass completion, while our concept of "difficult" comes from generalizing the shared characteristic that we find in learning differential equations, making an important decision without enough information, running a marathon, building a bridge out of toothpicks, and working with somebody we dislike). We build and modify our broad concepts as we are exposed to new examples; I might think that a horse is a red, brown, or white animal with four legs and a mane, but if I see a black horse it's not the case that I don't think it's a horse, but rather that I modify my sense of "horseness" to include the possibility of it being black, as well. The more black horses I see, the more I think that "horse" might be black, too.

This line of thought is called "prototype theory" and modern cognitive scientists tend to buy into it pretty strongly, preferring to think that we figure out our abstract ideas rather than being born with them. But I'd like to put in a plug here: the whole idea really originates with Aristotle. (There's a little sketch of this kind of example-by-example updating after the numbered argument below.)

2. Specific thoughts are definitionally tied to specific instances of something in the world. It doesn't make any sense to think, "that horse is black," if there is no horse over there in the first place.

3. We can only know there's a horse over there (or whatever) if it somehow makes an impression on us, that is, if we somehow sense it.

4. There's really no other kind of conscious thought than that with abstract referents and that with specific real-world referents.

5. Ergo, all thought requires sensory experience of the world.
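For point 1, here's a tiny sketch of what that kind of example-by-example updating can look like. The features and numbers are invented, and real prototype models are of course richer than a running average; the point is only that the concept shifts with each new example.

```python
# A toy prototype learner, with invented features, just to illustrate point 1.
# The "horse" concept is a running average of the examples seen so far; each
# new example is judged against it and then folded into it.
def similarity(example, prototype):
    """1 minus the mean absolute difference across features (1.0 = identical)."""
    diffs = [abs(example[f] - prototype[f]) for f in prototype]
    return 1 - sum(diffs) / len(diffs)

def update(prototype, example, n_seen):
    """Fold one more example into the running average."""
    return {f: (prototype[f] * n_seen + example[f]) / (n_seen + 1) for f in prototype}

# features: is_black, has_four_legs, has_mane
brown_horse = {"is_black": 0.0, "has_four_legs": 1.0, "has_mane": 1.0}
black_horse = {"is_black": 1.0, "has_four_legs": 1.0, "has_mane": 1.0}

prototype, n_seen = dict(brown_horse), 1      # start from one brown horse
print(similarity(black_horse, prototype))     # a black horse looks a bit off at first

for _ in range(5):                            # ...but after five black horses,
    prototype = update(prototype, black_horse, n_seen)
    n_seen += 1

print(similarity(black_horse, prototype))     # "horse" now comfortably includes black
```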

This kind of argument, sometimes called the "body argument," is used to debunk the possibility of computer-based artificial intelligence. The general idea is that, no matter how good or how brain-like our programming, if the computer can't somehow "see," "hear," "touch," or otherwise experience a vast number of things in our world, it can never gain true intelligence, much less consciousness.

This brings us back to plants and mobility. Of course, things that happen in the world do affect plants (just as they affect rocks and paperclips and baseballs), but it is also the case that the world's effect on them is greatly limited when compared with its effect on, say, mobile mammals. We have an awful lot of ways of taking in the world and interacting with it that are closed to plants (and to inanimate objects). And mobility is a great enabler. Besides allowing us to feel different sensations, it brings us to new places, allows us to engage more fully with the world, and lets us direct our actions in it.

I'm not sure I'd go so far as to say that self-directed, human-like mobility is a necessary feature of consciousness, but I would say that it sure helps.

 
At 10:51 AM, Blogger Skay said...

John,

Yes, mind-brain materialists do believe this, and for what I think are a couple of very different reasons.

First, we return to the intuition that consciousness requires some experience of the world. If that's so, it doesn't seem to make any sense that billions of widely-distributed neurons could taste a pickle (for example). I'm not convinced this is much different from the broader claim that consciousness in any form requires something like a body.

Second is the very different argument that there may be no such thing as nonorganic "functionally equivalent components" of the parts of the brain. Essentially, Zuboff-Unger is good in theory, but in practice there can be no way to hook a working transceiver up to a single neuron in such a way as to sufficiently mimic the brain's electrical activity.

 
