Core Mysteries of Consciousness – A New Paper

I’ve just learned that another paper of mine has been accepted for presentation at a major conference, the Eastern Division gathering of the American Philosophical Association (Savannah, Georgia, January 2018). I want to talk about the paper on this blog, but it’s highly technical. My clever, catchy, compelling title is: “Sensory Experiences Are Ontologically Opaque.”

Here’s the abstract, which I’ll follow with some comments in ordinary English:

Abstract: This paper critiques the claim that introspection reveals the ontology of sensory phenomena. If we lack such ontological access, several problems of consciousness become easier to solve. For example, one of the most challenging explanatory gaps between experiential states and brain states disappears if we do not subjectively detect ontologically puzzling phenomena. Similarly, Frank Jackson’s well-known “Mary” scenario depends on the intuition that color experiences are ontologically remarkable. If that intuition is false, Mary’s new experiences are philosophically unproblematic. The paper offers five arguments supporting the claim that introspection fails to disclose the ultimate nature of sensory experiences. It concludes by considering the plausibility of this skeptical stance. [End of abstract.]

Actually it’s easy to offer a simple summary of this paper’s theme. At one time many or most philosophers thought that we directly and infallibly “apprehend” our own conscious experiences. We know them just as they are. In recent decades this idea has lost a lot of support. Even though introspection – paying attention to our own mental processes – may seem simple, it’s actually quite complex and subject to error. The beliefs we form based on introspection arise out of a labyrinth of complex, poorly understood, and mostly-unconscious mental processes. In this paper I am questioning whether introspection-based beliefs about the ultimate, basic, fundamental nature of sensory experiences are well founded. I claim that the answer is no.

You can play with this general idea by going back to my February 1, 2016 post, An Aggravating Mystery Named Mary. After you think about this famous thought experiment, ask yourself whether Mary’s new color experiences show her the ultimate nature of colors – their ontology. That’s what I’ll be grappling with in my paper next January.

Roger Christan Schriner

P.S. I recently mentioned that I’ll be speaking at The Science of Consciousness, in Shanghai, but this event has been moved to San Diego. My talk is slated for June 6.


What If Dogs Had Human Intelligence?

I’ve recently read a fascinating book called Fifteen Dogs, by André Alexis. In this fanciful, rather sobering tale, two Greek gods make a bet with each other about what dogs would experience if they were given human intelligence.

Although this story doesn’t focus on the issues I’ve addressed in this blog, it does highlight the fact that every mind shapes reality in its own way. The dogs’ new brain power radically alters their world-view, and this is quite disturbing to some of these canines. In fact one dominant dog named Atticus insists that those in his pack mostly suppress their new intellectual gifts.

Crazyism and Consciousness

This week I attended a talk sponsored by the Center for the Explanation of Consciousness at Stanford University on “Crazyism about Consciousness and Morality,” by Eric Schwitzgebel. Eric is a philosopher at the University of California at Riverside. I’ve appreciated his work for some time, and I quote him in Your Living Mind: The Mystery of Consciousness and Why It Matters to You.

“Crazyism” about consciousness is the claim that to understand consciousness we will need to accept some idea that currently seems bizarre (bonkers, ludicrous, off the wall, ‘round the bend) and that has not yet been proven to be true. We do not yet know which crazy idea about consciousness will solve its deepest mysteries. We may not have even thought of it yet! But until we accept it, we will be totally unable to understand conscious experience.

As Schwitzgebel wrote in Perplexities of Consciousness, “it became evident in the late twentieth century … that all metaphysical accounts of consciousness will have some highly counterintuitive consequences. … Something apparently preposterous, it seems, must be true of consciousness.”*

Eric told us that he likes to open up new possibilities, to expand the range of alternatives. Many philosophers try to do the opposite. They concentrate on eliminating incorrect ideas, so as to zero in on The Truth. I tend to do this myself. I want to keep “cutting to the chase,” pushing to the bottom line, aiming for the bullseye. This attitude is often helpful, but Schwitzgebel’s work helps keep me from being too confident about my own pet theories.

I haven’t space to recap the arguments he marshalled for crazyism, but they were impressive, and I mostly agree with them. In my own work I’ve emphasized the idea that we make crucial mistakes in understanding our own minds, and that these errors make consciousness seem stranger than it really is. More broadly, we need to re-evaluate the relationship between:

What’s so

Our beliefs about what’s so

The words we use to express these beliefs

Many of our beliefs about consciousness are based on introspection. If there’s something kooky about our concept of consciousness, perhaps something has gone awry in our introspection-based judgments. So in what ways does introspection inform us about consciousness, and in what ways does it mislead us? In Your Living Mind I wrote:


For now, it seems likely that we usually do well at detecting, recognizing, and noticing changes in conscious sensory perceptions, including particular qualia. Sometimes we also make helpful comparisons among qualia. But we often make mistakes about other aspects of our experiences. Here are some errors that are particularly common and pernicious:

  1. Confusing our experiences with our judgments about experiences
  2. Thinking introspection reveals the internal structure of experiences
  3. Thinking introspection reveals the essential nature of experiences**


What do you think? By re-assessing introspection can we deliver ourselves from crazyism about consciousness? Your comments are welcome!

Roger Christan Schriner

*Schwitzgebel, Perplexities of Consciousness, p. x.

**Schriner, Your Living Mind, p. 155.

An Aggravating Mystery Named Mary

For the past few weeks I’ve been posting comments about some of the deepest mysteries of consciousness. I’ve been focusing particularly on “qualia,” the qualities of sensory experiences such as colors, sounds, tastes, and pains. In 1982 Frank Jackson published a paper called “Epiphenomenal Qualia,” following up in 1986 with “What Mary Didn’t Know.” In the past three decades more than a thousand scholarly papers and several books have responded to these articles. Jackson’s two little essays seem to have hit a very big nerve.

Jackson eventually decided that his argument was flawed, but many believe he was right the first time and should never have recanted. So here is Jackson’s conundrum, as I understand it:

Imagine that we can peer into the distant future, hundreds of millions of years from now. Science has advanced so far that many fields of study are essentially complete. And biotechnology has expanded our memory and intelligence so that a single individual can understand everything there is to know about some complicated subject. One of these people is Mary, a neuroscientist who knows all that can ever be known about color experiences by studying their physical aspects. Mary has soaked up everything about the physical aspects of color perception that books, teachers, and information technology can possibly tell anyone – but Mary has never seen a color. She grew up in a black-and-white room, was prevented from looking at her own skin, and so on. Then one day she is released from her colorless home, free to see the whole range of hues for the very first time.

Let’s say that the first colorful thing Mary sees is a garden full of dazzling red roses. And here is the crucial question: When she sees a red rose for the first time, does Mary gain new knowledge? Jackson claimed that she does, and he cooked up the Mary scenario because at that time he was a dualist. Dualists believe that mind and matter are two very different sorts of stuff, and Mary helped Jackson argue that mind is not matter. He claimed that after her release Mary gains new knowledge over and above the complete physical knowledge she already possessed. She learns what colors are as we experience them.

If all things are physical, including our visual experiences, and Mary already knew everything about the physical aspects of color perception, then she would not have learned anything new when she walked into that garden. But if she did learn something new when she actually experienced color, then our experiences of color are not physical. They are not made of matter, and do not occur within the brain. This also implies that qualia in general are not physical.

“Physicalism” (sometimes called materialism) claims that everything that exists is made of physical matter, and so any facts about things that exist are facts about physical things. But Jackson’s argument implies that knowledge of physical facts is not complete knowledge, because after her release Mary learns new facts over and above the complete physical knowledge she already possessed. Therefore, physicalism is false.
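For readers who like to see the bones of an argument, here is one common way of laying out Jackson’s reasoning as a simple deduction. This schematic is my own reconstruction, not Jackson’s exact wording:

```latex
\begin{align*}
&\text{P1: Before her release, Mary knows every physical fact about color vision.}\\
&\text{P2: On first seeing red, Mary learns a new fact about color vision.}\\
&\text{P3: If physicalism is true, every fact about color vision is a physical fact.}\\
&\text{C1: The fact Mary learns is not a physical fact.} && \text{(from P1, P2)}\\
&\text{C2: Physicalism is false.} && \text{(from P3, C1)}
\end{align*}
```

Critics of the argument typically attack P2 (she gains no new fact, only a new ability or a new way of grasping an old fact) or the step to C1.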

So what do you think? Was Jackson’s argument correct? If not, what’s wrong with it?

Perhaps more importantly, do you see why this thought experiment is so challenging? Why has it stimulated so much discussion? When I have led workshops for the public on consciousness, many participants have a hard time understanding that it’s the qualities of conscious experience that are difficult to explain physically. Until one sees the depth of this problem, the mystery of consciousness may seem soluble, even trivial. Soluble it may be. Trivial it’s not.

Roger Christan Schriner

More on Zombies

In my last post I discussed David Chalmers’s idea of philosophical zombies – hypothetical creatures whose brains have precisely the same physical structures as ours and function in the same ways that our brains do, but without consciousness. Several people who read early drafts of my book, Your Living Mind, dismissed zombies as irrelevant. The whole idea is moot, one of them remarked, since it would be impossible for us to know that such a creature is a zombie. (Maybe the person sitting right next to you is one of them!) But Chalmers’ scenario is an example of both the value and the subtlety of thought experiments. If there actually could be such creatures, then conscious experiences are not brain events.

The zombie story asserts that if there could be a creature that is physically identical to you, but not conscious, then consciousness is not a state of your brain. We could dispute this claim by arguing that even though a creature physically identical to you could exist without being conscious, nevertheless consciousness is a state of your brain. But that won’t work. Let’s call your current brain state CBS. If your brain’s being in state CBS is sufficient for your being conscious, then if some other brain is in CBS, it would also have to be conscious. So you could not have a physically identical zombie twin. (What a relief!) On the other hand, if a brain’s being in state CBS is not sufficient for its being conscious, then consciousness is not a brain state. We would need a brain state plus something else to have consciousness – or we would just need the “something else.” So if zombies are truly possible, qualia are not brain states. Since there has been a strong trend toward saying that all real things are, in some sense, physical, that would be a revolutionary finding.
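The dilemma in the paragraph above can be compressed into two horns. This is my own informal schematization, using the box of modal logic to mean “necessarily”:

```latex
\begin{align*}
&\text{Let } C = \text{``a brain is in state CBS''} \quad\text{and}\quad Q = \text{``its subject is conscious''}.\\
&\text{Horn 1: } \Box(C \rightarrow Q)
  \;\Rightarrow\; \text{a zombie twin in state CBS is impossible}.\\
&\text{Horn 2: } \neg\,\Box(C \rightarrow Q)
  \;\Rightarrow\; \text{CBS alone does not suffice for consciousness},\\
&\qquad\text{so consciousness is not simply the brain state CBS}.
\end{align*}
```

Either way, the friend of zombies cannot both grant that zombies are possible and hold that consciousness just is a brain state.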

Michael Tye clarifies Chalmers’ idea with an omnipotent-being scenario. “One way to picture what is being claimed here is to imagine God laying out all the microphysical phenomena throughout the universe. Having done so, and having settled all the microphysical properties of those phenomena along with the basic microphysical laws, God did not then have to ask Himself ‘Shall I make lightning flashes or caterpillars or mountains … ?’ No further work was needed on His part.” Why? Because a lightning flash simply is a group of microphysical entities operating according to certain laws. By making all these particles and deciding how they would interact, the Creator would have ensured that lightning flashes, caterpillars, etc. would exist.

But what if consciousness is not physical? In that case zombies are possible. “Even if God had no further work to do in determining whether there would be a tree in place p or a river in place q or a neuron-firing in place r, say, having settled all the microphysical facts, God did have more work to do to guarantee that we were not zombies.”*

Tye is not trying to show that a deity created consciousness. That’s not the point. He’s just noting that this is one way of understanding Chalmers’ scenario. Conceivably, then, there could be an exact physical duplicate of you, right down to the last whirling electron, that does not enjoy a single millisecond of conscious experience.

Chalmers emphasizes that he is not trying to prove that a zombie duplicate of you or me could really exist in this universe – only that this sort of thing is conceivable. But what does “conceivable” mean? Now the fog drifts in. There are several types of conceivability, including a contentious notion called “ideal conceivability.” Philosophical professionals have not yet sorted out these intricacies.

In trying to solve the hardest problems of consciousness we seem to be perpetually stuck at square one. Nagel has stated bluntly that “we have at present no conception of what an explanation of the physical nature of a mental phenomenon would be. Without consciousness the mind-body problem would be much less interesting. With consciousness it seems hopeless.”** And William Seager concludes his book, Theories of Consciousness, with this dispirited admission: “It is indecent to have a ragged and unpatchable hole in our picture of the world. Cold comfort to end with the tautology that an unpatchable hole is … unpatchable.”***

To some it seems as if these scholars are worrying about trivialities, as irrelevant as asking how many angels can dance on the head of a pin. But some questions about the nature of reality actually are quite difficult. I have my own ideas about how to understand consciousness, but on some level I must also bow to this great mystery.

Roger Christan Schriner

*Michael Tye (2009) Consciousness Revisited: Materialism without Phenomenal Concepts. (Cambridge, MA: The MIT Press), pp. 25-26.

**Thomas Nagel (1974) “What Is It Like to Be a Bat?” Philosophical Review, October, 1974, Vol. 83, No. 4, p. 436.

***William Seager (1999) Theories of Consciousness: an Introduction and Assessment. (New York: Routledge), p. 252. Ellipses are in the original text.

The Philosophical Zombie

Can old bedraggled zombies reflect logically on their condition and calmly resign themselves to their fate? Perhaps, but that’s not what this post is about. In the study of consciousness, philosophical zombies were made famous by a thought experiment from Australian philosopher David Chalmers. His discussion helps underscore the mysterious nature of qualia (the qualities of sensory experiences).

Chalmers proposed the zombie idea to highlight the Hard Problem of consciousness, the problem of understanding how conscious experiences result from (or are identical to) brain activities. A philosophical zombie is a hypothetical creature whose brain has precisely the same physical structures as ours and operates in the same ways that our brains do, but without consciousness.

Here’s an important point that is often overlooked: This creature would be conscious in the sense that psychology studies, possessing all the structures, abilities, and functions of consciousness. “He will be awake, able to report the contents of his internal states, able to focus attention in various places, and so on.”* Furthermore a psychologist studying you and your zombie twin would discern no difference in behavior. But even though it would be conscious in a certain sense, it would lack conscious experiences. It would be utterly devoid of qualia, and it would never be in any state that is “like something.”

Thus, as Philip Goff notes, when it screams it is not in pain. “Its smiles are not accompanied by a feeling of pleasure. Its negotiation of its environment does not involve a visual/auditory experience of that environment.”**

Although zombies would have thoughts, these thoughts would not involve conscious perceptions or sensations. A zombie that is screaming might think, “I’m in pain!” but it would have no pain qualia, no conscious sensations of pain. This is an example of the important difference between aspects of consciousness that do and do not seem “present.” The philosophically puzzling states are the ones that seem thus-there-now, and zombies don’t have them.

I’ll allow a few days for comments about these hypothetical organisms, and then journey further into zombieland.

Roger Christan Schriner

*David Chalmers (1996) The Conscious Mind. (Oxford: Oxford University Press), p. 95. Technical note: Chalmers was suggesting that there is an ontological gap between conscious experiences and brain states, not just the sort of epistemic gap that Joseph Levine has discussed. In other words, qualia and brain states don’t just seem different; they really are quite different. In this way Chalmers was following in the footsteps of Saul Kripke, whereas Levine was trying to avoid Kripke’s ontological conclusions.

**Philip Goff, “The Zombie Threat to a Science of Mind,” Philosophy Now, May/June, 2013: Goff provides an engaging and detailed explanation of the zombie problem, graced with charming color illustrations of non-philosophical zombies.

The Dreaded “Hard Problem”

I’ve been posting thoughts about “qualia,” the qualities of sensory experience. Qualia figure prominently in one of the most baffling enigmas ever discussed, and the history of this issue is wonderfully described by Oliver Burkeman. I’ll quote some of his essay, but I urge you to read the whole thing:

“One spring morning in Tucson, Arizona, in 1994, an unknown philosopher named David Chalmers got up to give a talk on consciousness…. the young Australian academic was about to [discuss] a central mystery of human life – perhaps the central mystery of human life – and revealing how embarrassingly far they were from solving it.

“The scholars gathered at the University of Arizona … knew they were doing something edgy: in many quarters, consciousness was still taboo, too weird and new agey to take seriously, and some of the scientists in the audience were risking their reputations by attending. Yet the first two talks that day, before Chalmers’s, hadn’t proved thrilling. ‘Quite honestly, they were totally unintelligible and boring – I had no idea what anyone was talking about,’ recalled Stuart Hameroff, the Arizona professor responsible for the event. … ‘But then the third talk, right before the coffee break – that was Dave.’ With his long, straggly hair and fondness for all-body denim, the 27-year-old Chalmers looked like he’d got lost en route to a Metallica concert. … ‘But then he speaks. And that’s when everyone wakes up.’

“The brain, Chalmers began by pointing out, poses all sorts of problems to keep scientists busy. How do we learn, store memories, or perceive things? How do you know to jerk your hand away from scalding water, or hear your name spoken across the room at a noisy party? But these were all ‘easy problems’, … given enough time and money, experts would figure them out. There was only one truly hard problem of consciousness, … why on earth should all those complicated brain processes feel like anything from the inside? Why aren’t we just brilliant robots, capable of retaining information, of responding to noises and smells and hot saucepans, but dark inside, lacking an inner life? …’

“What jolted Chalmers’s audience from their torpor was how he had framed the question. ‘At the coffee break, … everyone was like: “Oh! The Hard Problem! The Hard Problem! That’s why we’re here!”’

Here’s one way of considering this issue. Suppose in the distant future neuroscience has discovered precisely which brain structures and processes are correlated with specific conscious experiences. Scientists can even read people’s minds: Experimental subject C79 reports that she is recalling a teenage love affair. But a brain-scanning machine has already printed out a report, just before C79 speaks: “subject is remembering a high school sweetheart.” Isn’t it clear that we now understand the neural basis of consciousness? Aren’t the neural structures and activities that the scanner detected simply identical to the memory-experience that C79 reported?

Not necessarily. We need to know why this configuration of neural structures and activities constitutes consciousness. “Even if every behavioral and cognitive function related to consciousness were explained,” writes Chalmers, “there would still remain a further mystery: Why is the performance of these functions accompanied by conscious experience? It is this additional question that makes the hard problem hard.”*

Next: the menace of philosophical zombies.

Roger Christan Schriner

*Cited by Uriah Kriegel, Subjective Consciousness: A Self-Representational Theory, p. 271, emphasis added.