What If Dogs Had Human Intelligence?

I’ve recently read a fascinating book called Fifteen Dogs, by André Alexis. In this fanciful, rather sobering tale, two Greek gods make a bet with each other about what dogs would experience if they were given human intelligence.

Although this story doesn’t focus on the issues I’ve addressed in this blog, it does highlight the fact that every mind shapes reality in its own way. The dogs’ new brain power radically alters their world-view, and this is quite disturbing to some of these canines. In fact, one dominant dog named Atticus insists that those in his pack mostly suppress their new intellectual gifts.


Crazyism and Consciousness

This week I attended a talk sponsored by the Center for the Explanation of Consciousness at Stanford University on “Crazyism about Consciousness and Morality,” by Eric Schwitzgebel. Eric is a philosopher at the University of California at Riverside. I’ve appreciated his work for some time, and I quote him in Your Living Mind: The Mystery of Consciousness and Why It Matters to You.

“Crazyism” about consciousness is the claim that to understand consciousness we will need to accept some idea that currently seems bizarre (bonkers, ludicrous, off the wall, ‘round the bend) and that has not yet been proven to be true. We do not yet know which crazy idea about consciousness will solve its deepest mysteries. We may not have even thought of it yet! But until we accept it, we will be totally unable to understand conscious experience.

As Schwitzgebel wrote in Perplexities of Consciousness, “it became evident in the late twentieth century … that all metaphysical accounts of consciousness will have some highly counterintuitive consequences. … Something apparently preposterous, it seems, must be true of consciousness.”*

Eric told us that he likes to open up new possibilities, to expand the range of alternatives. Many philosophers try to do the opposite. They concentrate on eliminating incorrect ideas, so as to zero in on The Truth. I tend to do this myself. I want to keep “cutting to the chase,” pushing to the bottom line, aiming for the bullseye. This attitude is often helpful, but Schwitzgebel’s work helps keep me from being too confident about my own pet theories.

I haven’t space to recap the arguments he marshalled for crazyism, but they were impressive, and I mostly agree with them. In my own work I’ve emphasized the idea that we make crucial mistakes in understanding our own minds, and that these errors make consciousness seem stranger than it really is. More broadly, we need to re-evaluate the relationship between:

What’s so

Our beliefs about what’s so

The words we use to express these beliefs

Many of our beliefs about consciousness are based on introspection. If there’s something kooky about our concept of consciousness, perhaps something has gone awry in our introspection-based judgments. So in what ways does introspection inform us about consciousness, and in what ways does it mislead us? In Your Living Mind I wrote:


For now, it seems likely that we usually do well at detecting, recognizing, and noticing changes in conscious sensory perceptions, including particular qualia. Sometimes we also make helpful comparisons among qualia. But we often make mistakes about other aspects of our experiences. Here are some errors that are particularly common and pernicious:

  1. Confusing our experiences with our judgments about experiences
  2. Thinking introspection reveals the internal structure of experiences
  3. Thinking introspection reveals the essential nature of experiences**


What do you think? By re-assessing introspection can we deliver ourselves from crazyism about consciousness? Your comments are welcome!

Roger Christan Schriner

*Schwitzgebel, Perplexities of Consciousness, p. x.

**Schriner, Your Living Mind, p. 155.

More on Zombies

In my last post I discussed David Chalmers’ idea of philosophical zombies – hypothetical creatures whose brains have precisely the same physical structures as ours and function in the same ways that our brains do, but without consciousness. Several people who read early drafts of my book, Your Living Mind, dismissed zombies as irrelevant. The whole idea is moot, one of them remarked, since it would be impossible for us to know that such a creature is a zombie. (Maybe the person sitting right next to you is one of them!) But Chalmers’ scenario is an example of both the value and the subtlety of thought experiments. If there actually could be such creatures, then conscious experiences are not brain events.

The zombie story asserts that if there could be a creature that is physically identical to you, but not conscious, then consciousness is not a state of your brain. We could dispute this claim by arguing that even though a creature physically identical to you could exist without being conscious, nevertheless consciousness is a state of your brain. But that won’t work. Let’s call your current brain state CBS. If your brain’s being in state CBS is sufficient for your being conscious, then if some other brain is in CBS, it would also have to be conscious. So you could not have a physically identical zombie twin. (What a relief!) On the other hand, if a brain’s being in state CBS is not sufficient for its being conscious, then consciousness is not a brain state. We would need a brain state plus something else to have consciousness – or we would just need the “something else.” So if zombies are truly possible, qualia are not brain states. Since there has been a strong trend toward saying that all real things are, in some sense, physical, that would be a revolutionary finding.

Michael Tye clarifies Chalmers’ idea with an omnipotent-being scenario. “One way to picture what is being claimed here is to imagine God laying out all the microphysical phenomena throughout the universe. Having done so, and having settled all the microphysical properties of those phenomena along with the basic microphysical laws, God did not then have to ask Himself ‘Shall I make lightning flashes or caterpillars or mountains … ?’ No further work was needed on His part.” Why? Because a lightning flash simply is a group of microphysical entities operating according to certain laws. By making all these particles and deciding how they would interact, the Creator would have ensured that lightning flashes, caterpillars, etc. would exist.

But what if consciousness is not physical? In that case zombies are possible. “Even if God had no further work to do in determining whether there would be a tree in place p or a river in place q or a neuron-firing in place r, say, having settled all the microphysical facts, God did have more work to do to guarantee that we were not zombies.”*

Tye is not trying to show that a deity created consciousness. That’s not the point. He’s just noting that this is one way of understanding Chalmers’ scenario. Conceivably, then, there could be an exact physical duplicate of you, right down to the last whirling electron, that does not enjoy a single millisecond of conscious experience.

Chalmers emphasizes that he is not trying to prove that a zombie duplicate of you or me could really exist in this universe – only that this sort of thing is conceivable. But what does “conceivable” mean? Now the fog drifts in. There are several types of conceivability, including a contentious notion called “ideal conceivability.” Professional philosophers have not yet sorted out these intricacies.

In trying to solve the hardest problems of consciousness we seem to be perpetually stuck at square one. Nagel has stated bluntly that “we have at present no conception of what an explanation of the physical nature of a mental phenomenon would be. Without consciousness the mind-body problem would be much less interesting. With consciousness it seems hopeless.”** And William Seager concludes his book, Theories of Consciousness, with this dispirited admission: “It is indecent to have a ragged and unpatchable hole in our picture of the world. Cold comfort to end with the tautology that an unpatchable hole is … unpatchable.”***

To some it seems as if these scholars are worrying about trivialities, as irrelevant as asking how many angels can dance on the head of a pin. But some questions about the nature of reality actually are quite difficult. I have my own ideas about how to understand consciousness, but on some level I must also bow to this great mystery.

Roger Christan Schriner

*Michael Tye (2009) Consciousness Revisited: Materialism without Phenomenal Concepts. (Cambridge, MA: The MIT Press), pp. 25-26.

**Thomas Nagel (1974) “What Is It Like to Be a Bat?” Philosophical Review, October, 1974, Vol. 83, No. 4, p. 436.

***William Seager (1999) Theories of Consciousness: An Introduction and Assessment. (New York: Routledge), p. 252. Ellipses are in the original text.

What Is It Like to Be a Bat? More on Nagel’s Famous Conundrum

In my previous posting I quoted Thomas Nagel’s famous essay, “What Is It Like to Be a Bat?” Nagel suggested that it is “like something” to be conscious. It is like something, for example, to be a bat. It is like something else to be you, reading these words right now. It is like something to be tasting guacamole. It is like something else to feel nauseous (unless it was really bad guacamole).

And so on. By contrast, most of us assume there is nothing it is like to be a light bulb or a toadstool. They are not conscious, so it is like nothing at all to be them. (Some would argue that the toadstool is sentient, and of course panpsychists would argue that even the light bulb is conscious.)

In time “a consensus … emerged that Thomas Nagel’s expression, ‘what it is like to be’ succeeds in capturing well what is at stake” in discussions about consciousness (Varela and Shear, Journal of Consciousness Studies, February/March, 1999, p. 3). But even though it was a stunning intuitive breakthrough, some people doubt that it has any clear meaning. David Rosenthal complains that the term “‘what it’s like’ is not reliable common currency” and quotes William Lycan as saying that this phrase is “positively pernicious and harmful” (Rosenthal, Analysis, July, 2011, p. 434).

Here’s an example of the way this phrase can confuse us. Some say that what-it’s-like includes only sensory experiences, such as seeing the blueness of the sky, hearing a harpsichord, or suffering a migraine headache. Others say that non-sensory mental states such as highly abstract thoughts can be “like something.” As Rocco Gennaro writes, “It does indeed seem right to hold that there is something it is like to think that rabbits have tails, believe that ten plus ten equals twenty, or have a desire for some Indian food” (The Consciousness Paradox, p. 27). It’s not easy to adjudicate this dispute.

Another problem is that this term allows us to fuse the subject of experience and the object of experience without realizing we’re doing that. If that sounds confusing, it probably means you’re paying attention. It is very difficult to think and communicate about this issue, but I’ll give it a go:

People sometimes use “what it’s like” to refer to what it is that we are experiencing – blueness, musical notes, pain, and so on. But others use what-it’s-like language to mean what it’s like to subjectively experience these things – what experiences are like for the creature that has those experiences.

This reflects a duality in the way we think about consciousness. Sensory experiences seem to be both what we experience and the way these experiences seem to us. They are a certain way, and they seem a certain way. There is a subtle but crucial difference between the sense that you are experiencing SOMETHING and the sense that YOU are experiencing something.

I read Nagel himself as emphasizing the second interpretation. He didn’t talk about what a bat’s echolocation patterns are like. He talked about what it was like to BE the bat, sensing via echolocation. But many philosophers who speak of what-it’s-like emphasize the first interpretation – what the mental states that we are experiencing are “really like.”

I’ll mention just one reason this distinction is important. Theoretically, what we are experiencing and what it’s like for us to experience it could come apart. Suppose pain is a brain state that is sometimes conscious and sometimes unconscious, as when a headache drifts in and out of my awareness. And suppose the experiencing self is a (very complex) brain state that detects and responds to sensory inputs. In principle, my brain could malfunction so that when the experiencing self detects a pain state, it operates as if it were enjoying the experience of tasting peanut butter. In that case, what am I really experiencing, pain or peanut-flavor? (For a spirited argument about this point see the articles by Block, Rosenthal, and Weisberg in Analysis, July, 2011.)

So when people talk about what it’s like to have conscious experiences, this implies both the experience that we’re aware of and what it’s like to be aware of this experience. It fuses the object of experience with the subject of experience. Discussing what-it’s-like makes it sound as if we’re dealing with one idea, but it’s actually an amalgamation of two ideas. This causes confusion.

And speaking of confusion, I’d appreciate feedback about which aspects of this blog-post are unclear to readers. I’m trying to refine my presentation of this murky topic.

In Your Living Mind I wrote: “We seem to have needed such a term, but perhaps we will eventually find better ways to express what Nagel was getting at.” I still find myself using what-it’s-like language, but I try to make clear what this phrase means to me. To me, consciousness involves what experiences are like for the experiencing subject. This seems more in keeping with Nagel’s original intent.

Roger Christan Schriner

Change Blindness

Following up on last week’s entry about “normal disabilities,” I’ll say a bit about change blindness, a phenomenon that has become widely known in recent years. It may seem as if we see what’s in front of us in full detail, so that we have an essentially complete picture of the visible world. We now know this is false, thanks to the discovery that people often fail to see changes in a visual scene. Some years ago at a Toward a Science of Consciousness conference in Tucson, I sat in the audience and watched as a picture was flashed repeatedly on a big video screen. We were warned that part of the picture would change from one flash to the next, and then back again. I did not notice the change at first, but when I did the difference was dramatic. A cathedral jumped back and forth from one part of the picture to another. In another case the chimney of a house leaped to the opposite end of the roof. At first, only a few people would notice these shifts, and then others would laugh and gasp as they saw what was happening.

If visual experience just copied what’s in front of us, people would notice these changes immediately, but they don’t. This has surprised many vision researchers.

Here’s another example: “Subjects are shown a video, the title of which is ‘The Color Changing Card Trick.’ They attend closely, expecting to see a color change involving the cards. One of the cards does change color. The subjects are then told that . . . there were four other color changes – one involving the color of the tablecloth upon which the cards were resting, one involving the color of the backdrop, and two involving the shirts of the experimenters. This is shocking news to nearly all subjects. When the video is played again, the color changes are obvious” (Michael Tye, Consciousness Revisited: Materialism without Phenomenal Concepts, p. 170).

This one’s just amazing. Daniel Simons and Daniel Levin “set up a kind of slapstick scenario in which an experimenter would pretend to be lost on the Cornell Campus, and would approach an unsuspecting passer-by to ask for directions. Once the passer-by started to reply, two people carrying a large door would (rudely!) walk right between the enquirer and the passer-by. During the walk through, however, the original enquirer is replaced by a different person. Only 50% of the direction-givers noticed the change. Yet the two experimenters were of different heights, wore different clothes, had very different voices, and so on” (Andy Clark, “Is Seeing All It Seems? Action, Reason and the Grand Illusion,” Journal of Consciousness Studies, December, 2002, p. 185).

In another experiment, “subjects are shown a minute long video of a static scene of a carnival carousel in which a large and exceedingly obvious foreground object (the base of the carousel) very gradually changes from red to purple. In these experiments, subjects are asked to be attentive and to watch for any changes, but for pretty much all naive subjects it is very hard to notice that the colour change is occurring” (William Seager, “Transitivity, Introspection, and Conceptuality,” Journal of Consciousness Studies, November/December, 2013, pp. 47-48).

Some have tried to minimize the significance of these studies, but the surprise and even shock felt by participants shows that we have dramatically overestimated the completeness of our own conscious visual experiences.

Roger Christan Schriner

Change the Brain, Change “Reality”

The brain creates our core sense of reality, and we can learn a lot about that by noticing what happens when it is damaged by aging, accident, or illness. For example, sometimes an injury or a stroke alters neural structures that help constitute our experience of reality, including concepts of front and back, left and right, clockwise and counterclockwise. Then it’s as if one of the stage sets for our personal Truman Show has suddenly collapsed. (See “Your own little Truman Show,” December 8, 2014.)

In rare cases after a stroke, a patient’s visual experiences become a mirror image of normal experiences. As a result, books can be read only if they’re held up to a mirror. Such individuals write the mirror image of their signature and want to drive on the left-hand side of the road! That’s a fairly basic reality shift.

Or consider hemineglect, in which people ignore half of their world as if it isn’t there (usually the left half). When asked to copy a drawing of a flower they only draw the right side. They still realize that every object has two sides, but the brain modules which structure their experience of the world in terms of left and right have been damaged.

Neurologist Oliver Sacks tells of a stroke patient who would only eat the right half of a plateful of food. She could, of course, have turned the plate around after eating half of her meal. Then the neglected left half would have become the effortlessly-noticed right half. But since it was so hard to focus on “leftness,” she would physically move, turning around in a circle to the right. Looking at the plate again she would see the remaining food – or at least the right side of what was “left” over. She would do this several times, consuming one half after another until only crumbs remained.

This patient was highly intelligent and could even joke about the conceptual predicament of hemineglect. “It may look funny, but under the circumstances what else can I do?” When she tries to rotate the plate rather than rotating herself, “it is oddly difficult, it does not come naturally, whereas whizzing round in her chair does, because her looking, her attention, her spontaneous movements and impulses, are all now exclusively and instinctively to the right.” “‘It’s absurd,’ she says. ‘I feel like Zeno’s arrows – I never get there’” (The Man Who Mistook His Wife for a Hat, pp. 77-78).

In mentioning Zeno, she was referring to a Greek philosopher who asked how an arrow could ever arrive at its target. After all, before the arrow lands it has to go halfway, before going halfway it has to go 1/4 the distance, before that 1/8, and so on ad infinitum. Thus the stroke patient consumed half of her food, 1/4 more, and so on. So even though she knew she was succumbing to an illusion, her compelling inner sense of the way things are overwhelmed her intellectual insight.
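Zeno’s arithmetic here is easy to make concrete. Below is a minimal sketch in Python (the function name is my own, purely illustrative): after each round the patient has eaten half of what remained, so the total eaten is 1/2 + 1/4 + 1/8 + …, which creeps toward the whole plate but never reaches it in any finite number of rounds.

```python
# Zeno's dichotomy in miniature: each round consumes half of what remains.
def eaten_after(rounds: int) -> float:
    """Fraction of the plate consumed after `rounds` half-portions:
    1/2 + 1/4 + ... + (1/2)**rounds, which equals 1 - (1/2)**rounds."""
    return sum(0.5 ** k for k in range(1, rounds + 1))

for n in (1, 2, 3, 10):
    # The fraction eaten approaches 1.0 without ever arriving.
    print(n, eaten_after(n))
```

After one round half the plate is gone, after three rounds 7/8 of it, and so on; only crumbs remain, yet strictly speaking she “never gets there.”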

So when the brain changes, reality changes – or seems to.

Roger Christan Schriner

Are You Trapped Within Your Thoughts? Here’s a Way to Savor Your Senses

Today I’ll say more about the difference between having a conscious experience and just thinking about it. When I was working as a psychotherapist I noticed that people seemed to live in their concepts instead of in the flow of their own sensual experiences. When asked, “What are you feeling now?” clients would often say how they had usually felt in similar circumstances, how they thought they should feel, how they imagined that most people would feel, what they thought I wanted them to say they felt, how they had generally been feeling in the past few minutes, or what sort of feeling they thought others would applaud. Few were able to tap into the detailed, second-by-second flow of their own stream of awareness.

A woman might initially say, “I feel sad,” for example, but after carefully focusing on her experience she might discover to her own surprise that she was mainly feeling angry. (Perhaps she was taught by her parents that good girls don’t get mad.) Her beliefs about how things seemed to her experientially were quite different from her actual state of mind. Incidentally, this could be one cause of the placebo effect. When people expect an inert “drug” to ameliorate their symptoms, they often report that it does. This may be partly because they are not competently monitoring their own sensations. They are living more in their concepts than in their own bodies.

I observe – I assume. This technique from Gestalt Therapy helps us “come to our senses.” It can be practiced in many different situations, but at first try using it in situations where you can observe other people without distraction. Sitting in a restaurant or watching TV are two excellent opportunities. Look at someone unobtrusively and notice something you observe about that individual. Then notice what you are assuming, inferring, or speculating about this person. “I observe that he is wearing a tie. I assume he is a rather formal fellow.” “I observe that she is wearing a red blouse; she probably likes that shade of red.” “I observe that he is laughing; I speculate that he has a nice sense of humor.”

Continue alternating between things you observe and things you guess or assume. See if you can catch yourself mistaking an assumption for an observation. For example, “I observe that she is sleepy” is false. You aren’t inside her head, and you can’t be sure sleepiness is what she is experiencing. “I observe her yawning, and I assume she is sleepy” would be correct. But she might be bored. By practicing this technique you can begin to see how much of what we take to be obvious fact is merely conjecture. It is remarkably easy to live within our concepts instead of our actual experiences.

Roger Christan Schriner