“Illusionism” – Is Consciousness Real? My Upcoming Talk in Shanghai

I’ve just been notified that my proposal for a presentation on consciousness has been accepted by organizers of The Science of Consciousness, Shanghai, China, June 5-10. As I begin to draft my paper, I’ll share some passages on this site. Here’s the abstract of my paper, Dueling Skepticisms: Strong Fallibilism Versus Illusionism:

Are conscious experiences real or illusory? In particular, are sensations and perceptions such as pains and visual phenomena actual or fictional? Daniel Dennett and other eliminativists argue that these “qualitative” sensory experiences simply do not exist. Dennett’s eliminative materialism, along with several related approaches, has now been re-christened illusionism. A recent issue of Journal of Consciousness Studies was entirely devoted to this topic, featuring a lead article by Keith Frankish.

Frankish distinguishes strong illusionism, weak illusionism, radical realism, and conservative realism. I will support a version of realism that is radically skeptical and ontologically conservative – strong fallibilist realism. Although fallibilist realism maintains that qualitative sensory experiences are introspectively accessible, it also contends that we make important errors in thinking about such phenomena. Some of these errors may generate seemingly insoluble conundrums, such as the hard problem of consciousness and various explanatory gaps.

In advocating fallibilism I will show how this approach can close two particularly challenging explanatory gaps: (1) explaining how qualitative differences among our experiences could be constituted by differences among neural states and (2) explaining how neural states could constitute any sort of sensory experience whatsoever. In dealing with the second gap, I will consider some intriguing possibilities that involve the conscious interpretation of language. I will specifically consider the conscious cognitive states within an English speaker and a Mandarin speaker when they hear, respectively, the English sentence, “Welcome to Shanghai” and the similar Mandarin greeting, “Huānyíng guānglín Shànghai.” Surprisingly, reflecting upon language-interpretation sheds light on some of the deepest puzzles about the nature of consciousness.

Roger Christan Schriner

To subscribe to this blog, click the “Follow” link on this page. For my main web site, visit http://www.schrinerbooksandblogs.com/


More on Zombies

In my last post I discussed David Chalmers’ idea of philosophical zombies – hypothetical creatures whose brains have precisely the same physical structures as ours and function in the same ways that our brains do, but without consciousness. Several people who read early drafts of my book, Your Living Mind, dismissed zombies as irrelevant. The whole idea is moot, one of them remarked, since it would be impossible for us to know that such a creature is a zombie. (Maybe the person sitting right next to you is one of them!) But Chalmers’ scenario is an example of both the value and the subtlety of thought experiments. If there actually could be such creatures, then conscious experiences are not brain events.

The zombie story asserts that if there could be a creature that is physically identical to you, but not conscious, then consciousness is not a state of your brain. We could dispute this claim by arguing that even though a creature physically identical to you could exist without being conscious, nevertheless consciousness is a state of your brain. But that won’t work. Let’s call your current brain state CBS. If your brain’s being in state CBS is sufficient for your being conscious, then if some other brain is in CBS, it would also have to be conscious. So you could not have a physically identical zombie twin. (What a relief!) On the other hand, if a brain’s being in state CBS is not sufficient for its being conscious, then consciousness is not a brain state. We would need a brain state plus something else to have consciousness – or we would just need the “something else.” So if zombies are truly possible, qualia are not brain states. Since there has been a strong trend toward saying that all real things are, in some sense, physical, that would be a revolutionary finding.

Michael Tye clarifies Chalmers’ idea with an omnipotent-being scenario. “One way to picture what is being claimed here is to imagine God laying out all the microphysical phenomena throughout the universe. Having done so, and having settled all the microphysical properties of those phenomena along with the basic microphysical laws, God did not then have to ask Himself ‘Shall I make lightning flashes or caterpillars or mountains … ?’ No further work was needed on His part.” Why? Because a lightning flash simply is a group of microphysical entities operating according to certain laws. By making all these particles and deciding how they would interact, the Creator would have ensured that lightning flashes, caterpillars, etc. would exist.

But what if consciousness is not physical? In that case zombies are possible. “Even if God had no further work to do in determining whether there would be a tree in place p or a river in place q or a neuron-firing in place r, say, having settled all the microphysical facts, God did have more work to do to guarantee that we were not zombies.”*

Tye is not trying to show that a deity created consciousness. That’s not the point. He’s just noting that this is one way of understanding Chalmers’ scenario. Conceivably, then, there could be an exact physical duplicate of you, right down to the last whirling electron, that does not enjoy a single millisecond of conscious experience.

Chalmers emphasizes that he is not trying to prove that a zombie duplicate of you or me could really exist in this universe – only that this sort of thing is conceivable. But what does “conceivable” mean? Now the fog drifts in. There are several types of conceivability, including a contentious notion called “ideal conceivability.” Professional philosophers have not yet sorted out these intricacies.

In trying to solve the hardest problems of consciousness we seem to be perpetually stuck at square one. Nagel has stated bluntly that “we have at present no conception of what an explanation of the physical nature of a mental phenomenon would be. Without consciousness the mind-body problem would be much less interesting. With consciousness it seems hopeless.”** And William Seager concludes his book, Theories of Consciousness, with this dispirited admission: “It is indecent to have a ragged and unpatchable hole in our picture of the world. Cold comfort to end with the tautology that an unpatchable hole is … unpatchable.”***

To some it seems as if these scholars are worrying about trivialities, as irrelevant as asking how many angels can dance on the head of a pin. But some questions about the nature of reality actually are quite difficult. I have my own ideas about how to understand consciousness, but on some level I must also bow to this great mystery.

Roger Christan Schriner

*Michael Tye (2009) Consciousness Revisited: Materialism without Phenomenal Concepts. (Cambridge, MA: The MIT Press), pp. 25-26.

**Thomas Nagel (1974) “What Is It Like to Be a Bat?” Philosophical Review, October 1974, Vol. 83, No. 4, p. 436.

***William Seager (1999) Theories of Consciousness: an Introduction and Assessment. (New York: Routledge), p. 252. Ellipses are in the original text.

The Philosophical Zombie

Can old bedraggled zombies reflect logically on their condition and calmly resign themselves to their fate? Perhaps, but that’s not what this post is about. In the study of consciousness, philosophical zombies were famously described in a thought experiment by Australian philosopher David Chalmers. His discussion helps underscore the mysterious nature of qualia (the qualities of sensory experiences).

Chalmers proposed the zombie idea to highlight the Hard Problem of consciousness, the problem of understanding how conscious experiences result from (or are identical to) brain activities. A philosophical zombie is a hypothetical creature whose brain has precisely the same physical structures as ours and operates in the same ways that our brains do, but without consciousness.

Here’s an important point that is often overlooked: this creature would be conscious in the sense that psychology studies consciousness, with all of its structures, abilities, and functions. “He will be awake, able to report the contents of his internal states, able to focus attention in various places, and so on.”* Furthermore, a psychologist studying you and your zombie twin would discern no difference in behavior. But even though it would be conscious in a certain sense, it would lack conscious experiences. It would be utterly devoid of qualia, and it would never be in any state that is “like something.”

Thus, as Philip Goff notes, when it screams it is not in pain. “Its smiles are not accompanied by a feeling of pleasure. Its negotiation of its environment does not involve a visual/auditory experience of that environment.”**

Although zombies would have thoughts, these thoughts would not involve conscious perceptions or sensations. A zombie that is screaming might think, “I’m in pain!” but it would have no pain qualia, no conscious sensations of pain. This is an example of the important difference between aspects of consciousness that do and do not seem “present.” The philosophically puzzling states are the ones that seem thus-there-now, and zombies don’t have them.

I’ll allow a few days for comments about these hypothetical organisms, and then journey further into zombieland.

Roger Christan Schriner

*David Chalmers (1996) The Conscious Mind. (Oxford: Oxford University Press), p. 95. Technical note: Chalmers was suggesting that there is an ontological gap between conscious experiences and brain states, not just the sort of epistemic gap that Joseph Levine has discussed. In other words, qualia and brain states don’t just seem different; they really are quite different. In this way Chalmers was following in the footsteps of Saul Kripke, whereas Levine was trying to avoid Kripke’s ontological conclusions.

**Philip Goff, “The Zombie Threat to a Science of Mind,” Philosophy Now, May/June, 2013: http://conscienceandconsciousness.com/2013/06/14/the-zombie-threat-to-a-science-of-mind. Goff provides an engaging and detailed explanation of the zombie problem, graced with charming color illustrations of non-philosophical zombies.

The Dreaded “Hard Problem”

I’ve been posting thoughts about “qualia,” the qualities of sensory experience. Qualia figure prominently in one of the most baffling enigmas ever discussed, and the history of this issue is wonderfully described by Oliver Burkeman. I’ll quote some of his essay, but I urge you to read the whole thing:

http://www.theguardian.com/science/2015/jan/21/-sp-why-cant-worlds-greatest-minds-solve-mystery-consciousness

“One spring morning in Tucson, Arizona, in 1994, an unknown philosopher named David Chalmers got up to give a talk on consciousness…. the young Australian academic was about to [discuss] a central mystery of human life – perhaps the central mystery of human life – and revealing how embarrassingly far they were from solving it.

“The scholars gathered at the University of Arizona … knew they were doing something edgy: in many quarters, consciousness was still taboo, too weird and new agey to take seriously, and some of the scientists in the audience were risking their reputations by attending. Yet the first two talks that day, before Chalmers’s, hadn’t proved thrilling. ‘Quite honestly, they were totally unintelligible and boring – I had no idea what anyone was talking about,’ recalled Stuart Hameroff, the Arizona professor responsible for the event. … ‘But then the third talk, right before the coffee break – that was Dave.’ With his long, straggly hair and fondness for all-body denim, the 27-year-old Chalmers looked like he’d got lost en route to a Metallica concert. … ‘But then he speaks. And that’s when everyone wakes up.’

“The brain, Chalmers began by pointing out, poses all sorts of problems to keep scientists busy. How do we learn, store memories, or perceive things? How do you know to jerk your hand away from scalding water, or hear your name spoken across the room at a noisy party? But these were all ‘easy problems’, … given enough time and money, experts would figure them out. There was only one truly hard problem of consciousness, … why on earth should all those complicated brain processes feel like anything from the inside? Why aren’t we just brilliant robots, capable of retaining information, of responding to noises and smells and hot saucepans, but dark inside, lacking an inner life? …’

“What jolted Chalmers’s audience from their torpor was how he had framed the question. ‘At the coffee break, … everyone was like: “Oh! The Hard Problem! The Hard Problem! That’s why we’re here!”’

Here’s one way of considering this issue. Suppose in the distant future neuroscience has discovered precisely which brain structures and processes are correlated with specific conscious experiences. Scientists can even read people’s minds: Experimental subject C79 reports that she is recalling a teenage love affair. But a brain-scanning machine has already printed out a report, just before C79 speaks: “subject is remembering a high school sweetheart.” Isn’t it clear that we now understand the neural basis of consciousness? Aren’t the neural structures and activities that the scanner detected simply identical to the memory-experience that C79 reported?

Not necessarily. We need to know why this configuration of neural structures and activities constitutes consciousness. “Even if every behavioral and cognitive function related to consciousness were explained,” writes Chalmers, “there would still remain a further mystery: Why is the performance of these functions accompanied by conscious experience? It is this additional question that makes the hard problem hard.”*

Next: the menace of philosophical zombies.

Roger Christan Schriner

*Cited by Uriah Kriegel (2009) Subjective Consciousness: A Self-Representational Theory. (Oxford: Oxford University Press), p. 271, emphasis added.

Colored numbers, tasteable shapes

I’ve been posting thoughts about qualia, the qualities of sensuous experience. One way to reflect upon qualia is by considering synesthesia, a remarkable syndrome in which a perception that typically occurs through one sensory system (such as hearing) can also be represented in another (such as sight). For instance, some people both hear and “see” sounds. They experience the same auditory inputs with two different types of qualia. “Some synesthetes hear what they see, others see what they hear. One of them felt tastes with his hands. The taste of mint, for instance, felt to his hands as smooth, cool columns of glass. Every taste had its systematically associated feel, and he found this quite useful as an aid to creative cooking.”*

Synesthetes sometimes see strange colors that they only perceive in association with numbers. How I wish I could see those atypical colors!

Let’s play with the concept of synesthesia by using a thought experiment. Thought experiments are imaginary and often bizarre scenarios that are intended to shed light on philosophical problems. Sometimes these scenarios invoke the concept of God as a metaphorical way of erasing practical difficulties which are irrelevant to the basic idea behind the experiment.

Suppose an all-powerful being altered our bodies so that we started detecting pain as tastes. Instead of feeling a stabbing sensation, a person who stepped on a tack might notice a terribly bitter taste in the bottom of her foot. This taste would represent the damage done by the tack. If something like this is possible, then perhaps when we notice the distinction between tactile and taste sensations, we are noticing something which goes beyond detecting the features of bodily states – something about the mental states that represent these body-states. This would support the internalist claim that we experience states of our own minds rather than just states of the outside world.

To all my readers, may your holiday season be memorable and fulfilling.

Roger Christan Schriner

* Davies, T. N. et al. (2002) “Visual Worlds: Construction or Reconstruction?” Journal of Consciousness Studies, Vol. 9, No. 5-6, p. 75.

Opening a Window into Philosophy of Mind

No doubt there are still cocktail-party conversations about Descartes, Nietzsche, and Sartre, but I wonder how many Bordeaux-sipping intellectuals discuss Dretske, Nagel, and Kripke. The relationship between academic philosophy and the general public is nearly non-existent. Professors mostly speak to each other, in a technical language full of confusing terms with multiple definitions – “qualia,” “intentionality,” “representationalism,” “epiphenomenalism,” and so on. A few, such as Daniel Dennett and Nicholas Humphrey, have written for a wider audience, but most seem comfortable remaining within their own ivory towers.

I have been a member of the American Philosophical Association for nearly 25 years, reading books and professional journals and regularly attending conferences and colloquia. So I have spent years as the proverbial fly on the wall, listening to professorial interchanges within these lofty retreats. I am impressed with the need for competent philosophical analysis, and one of my life goals is to open a window into contemporary philosophy of mind for interested non-philosophers.

But I have sincerely wondered whether this is possible. When I tell people about my book, Your Living Mind: The Mystery of Consciousness and Why It Matters to You, I cannot sum it up in a sound bite. In the book itself, it takes the Introduction and the first five chapters just to explain the key problems.

Last Sunday, however, I had a very encouraging experience. I presented Part One of a workshop called Your Mysterious Mind: New Insights into Baffling Enigmas at the Unitarian Universalist Church of Palo Alto. The program concludes with Part Two on February 15. About 35 people showed up, an excellent turnout for an early Sunday afternoon program, and participants seemed interested and engaged.

It was especially heartening to see that some attendees had an intuitive feel for the problem of consciousness and its possible solutions. One person (“K”) dealt with Frank Jackson’s Mary-scenario by proposing what academicians call the ability hypothesis – after seeing colors for the first time, Mary acquires new abilities but does not acquire new facts. “M” suggested that sensory experiences are memories, perhaps implying that they involve cognitive responses to recent (not current) perceptual inputs. And “E,” who has a strong science background, wondered whether some consciousness-conundrums are merely pseudo-problems. I could imagine Daniel Dennett cheering her on: “Right! There isn’t any special Problem of Consciousness. There just seems to be.”

I’m under no illusions that conveying contemporary philosophy of mind will be easy, but I am now more hopeful that my project will make a positive difference.

Roger Christan Schriner

The Complexity Trap

It’s hard to prove that the conscious mind is located in the brain. One problem is that both mind and brain are incredibly complicated, and it’s hard to map one sort of complexity onto the other. Popular media sometimes imply that science has accomplished this feat, but this isn’t so.

Suppose we ask subjects to visualize a square, then a circle, then a square again, while their brains are scanned for signs of neural activity. And suppose this experiment enables us to print out colorful pictures showing that brain regions 1-2-3 are especially active while subjects visualize squares, and regions 4-5-6 are especially active while they’re imagining circles. Does this show that the experience of fantasizing squareness is located in 1-2-3 and fantasizing circularity is located in 4-5-6?

Not at all. It’s a start, but barely that, and I’ll just mention two of the many difficulties.

1. How much of a lit-up region is the experience of the item, and how much of it is a motley assortment of non-experiential accompaniments? Visualizing a square may call up all sorts of associations with square items and with the word “square” – square meal, square deal, square mile, and “you’re so square.” Perhaps activity in linguistic regions involves verbal associations only, and is never part of the mental image itself. Perhaps. But we don’t know for sure.

2. It’s also hard to know which aspects of a brain’s activity are conscious processes and which are the unconscious accompaniments. A great deal of the brain’s visual processing, for example, never reaches the level of awareness.

Someday we may be able to detect precisely which neural activities constitute, say, a visual experience of seeing a single cherry blossom, but this will certainly not be easy. Compare the task of identifying precisely which electromagnetic waves in the signal from a TV satellite constitute an image of the seams of a football being passed during the last five seconds of the 2015 Super Bowl. We assume that this part of the video signal is a physical event, and our inability to precisely specify it does not make us philosophically puzzled. But the difficulty of knowing just which brain activities constitute a particular experience may make us wonder whether this experience could be in the brain. Complexity confuses us, so beware of the complexity trap.

I recall a lecture in which the speaker announced that he was going to display his model of the neural correlates of consciousness, or NCC. The NCC is whatever cluster of neural activities correlates with conscious experiences, and finding such a correlation would be a big step toward showing that experiences are constituted by neural processes. He then showed us a diagram with about 50 arrows going in all sorts of directions.

He was joking, of course, because we have no idea how to sketch the NCC. We need to remind ourselves that the brain is much more complex than we can comprehend, and that we are in this convoluted mish-mash.

Roger Christan Schriner