What’s in it for me? Unravel the mystery of consciousness.

Imagine you had the opportunity to replace your brain with an identical, but mechanical, one. The replica would work exactly like your brain – you’d still behave, think, and dream the same way you do now. Oh, and you’d also be immortal. Would you take the deal?

Probably not. Each of us has a distinct feeling of what it means to be ourselves: we’re self-aware. Somehow, this feeling of consciousness is connected to our flesh-and-blood bodies. Could you really be sure that the machine brain would feel like “you”?

These blinks delve into neuroscience, psychology, and philosophy to explore what it means to be you – and present a new theory of consciousness that will make you question everything you thought you knew.

During this ride, you’ll learn

  • how scientists measure consciousness;
  • why all your experiences are hallucinations; and
  • why pigs ended up in court in the Middle Ages.

We can’t solve the hard problem of consciousness, but we can dissolve it into real problems.

What does it feel like to be a bat?

In 1974, philosopher Thomas Nagel published a famous essay asking just this question. Nagel wasn’t really interested in bats – his interest was in the nature of consciousness. He argued that for every conscious organism – a bat, for example – being alive feels like something. This may seem obvious now, but it wasn’t back then. Many scientists at the time were still confusing consciousness with intelligence, language, or other human-like characteristics.

Today, most agree with Nagel. Consciousness is its own, unique thing; and all living beings share it to some degree. What’s more, it has a certain subjective “flavor” to it. But there’s one big question that remains: Why does it even exist?

This question is known as the “hard problem” of consciousness.

Here’s the gist of this blink: We can’t solve the hard problem of consciousness, but we can dissolve it into real problems.

Many scientists believe that the hard problem of consciousness is virtually impossible to solve. Think about it: even if they knew all the biological mechanisms that give rise to your thoughts, behaviors, and emotions, they still couldn’t explain why all of that is accompanied by this strange, ever-present feeling of “being you.”

After all, you could easily imagine a “zombie” version of yourself that walks, talks, and acts exactly like you – but that has no inner life.

Consciousness seems like some kind of secret special sauce the universe added to all living beings.

But that doesn’t mean we can’t find out what the ingredients are. Science already has many tools and theories to explain different conscious experiences – we just need to put it all together.

Sound lofty? Well, science has done it before. Before the twentieth century, the biological property of life was just as mysterious as consciousness. The philosophy of vitalism proposed that there was some special, supernatural energy present in all living beings. But as biology progressed, and scientists began studying extreme cases like single-celled organisms and viruses, they started to understand that “being alive” wasn’t some mysterious all-or-nothing property. It was more like a graded collection of different biological processes.

So, while we might not be able to solve the hard problem of consciousness outright, we might be able to dissolve it – by studying different aspects of consciousness. The author calls those the “real problems” of consciousness. For example, we could study how activity in the visual cortex gives rise to the experience of seeing something dark red versus seeing something light red.

The more we can explain about how physical patterns in the brain correspond to conscious experiences, the less mysterious consciousness will become.

There are different levels of consciousness, and scientists are close to measuring them accurately.

Consciousness is a complex thing. Like being alive, being conscious is probably not a singular property, but many biological processes taken together. That means that if we really want to understand consciousness, we’ll have to look at it from a lot of different angles.

In these blinks, we’ll start with the different levels of consciousness, then we’ll look at the contents of consciousness, and finally, we’ll investigate our self-consciousness.

Let’s start with the conscious level. Most of us would agree that there are different degrees of consciousness. For instance, we may have the intuition that a dog is somehow more conscious than a fruit fly, or that certain psychedelic drugs produce higher levels of consciousness. Our brains also seem to be conscious to different degrees when we’re awake, asleep, or even in a coma.

But can consciousness really be measured by degrees? Many scientists think so – in fact, they’re already doing it.

The key message is: There are different levels of consciousness, and scientists are close to measuring them accurately.

The most common tool for measuring consciousness is the “bispectral index” monitor. It’s used by doctors during surgery, and it combines several measures of brain activity into a single number that helps guide the anesthetist. In theory, that’s a great idea. But in practice, the bispectral index number is sometimes inconsistent with obvious signs of consciousness – some patients have opened their eyes during surgery, for example, or later remembered what the doctors talked about while they were under anesthesia.

But there are promising new ways to measure consciousness.

Italian neuroscientist Marcello Massimini has developed something called the “perturbational complexity index,” or PCI. Massimini and his team magnetically stimulate the brain in one region, and then monitor the signal as it spreads across the brain to other regions. Then they use a compression algorithm to estimate how complex the resulting pattern of activity is. In unconscious states, like when a person is under anesthesia, the signal dies out quickly, and the PCI is low. But in more conscious states, it echoes longer and more widely across the brain, and the PCI is high. The PCI value has turned out to be much more accurate than the bispectral index number. For example, it gives quite similar values for REM sleep, with its rich dreams, as for being awake.
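
To make the idea of “compressing the signal” a little more concrete, here’s a minimal, hypothetical sketch in Python. It binarizes a simulated brain response and uses zlib compression as a rough stand-in for the kind of complexity measure behind the PCI; the real index involves EEG recordings, source modeling, and careful statistics, none of which this toy version attempts.

```python
import zlib

import numpy as np


def toy_complexity_index(response: np.ndarray) -> float:
    """Rough, illustrative stand-in for a PCI-style complexity measure.

    Binarizes a (channels x time) response to a simulated brain
    perturbation, then uses zlib compression as a proxy for its
    complexity: rich, varied responses compress poorly (high value),
    while stereotyped or quickly dying responses compress well (low value).
    """
    binary = (response > response.mean()).astype(np.uint8)  # 1 where activity is above average
    raw = binary.tobytes()
    compressed = zlib.compress(raw, level=9)
    return len(compressed) / len(raw)  # higher = less compressible = more complex


rng = np.random.default_rng(seed=0)
decay = np.exp(-np.linspace(0, 5, 200))

# "Unconscious-like": the perturbation dies out quickly and identically everywhere.
dying = np.tile(decay, (64, 1))

# "Conscious-like": the echo lingers with varied, channel-specific structure.
echoing = rng.normal(size=(64, 200)) * np.exp(-np.linspace(0, 1, 200))

print(toy_complexity_index(dying))    # low value
print(toy_complexity_index(echoing))  # high value
```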

New types of consciousness meters like the PCI promise to help doctors give more reliable diagnoses. They could help identify people with “locked-in” syndrome, for instance, who can’t move their bodies at all anymore, but are still completely conscious.

IIT makes the compelling case that consciousness is simply integrated information.

We’ve made a pretty good case that there are different levels of consciousness, and that those levels can even be measured. But we still don’t know what consciousness really is.

Modern science offers a few different theories. One of the most compelling is called Integrated Information Theory, or IIT. It grew out of work by the neuroscientists Giulio Tononi and Gerald Edelman, who proposed that the fundamental characteristic of conscious experience is that it’s both informative and integrated.

Each conscious experience is informative because it’s specific and new – it’s different from any other experience you’ve had before. If you were to see a red bird at this moment, it would still be different from any other time you’ve seen a red bird.

But conscious experiences are also integrated, because we experience them as one, unified thing. We don’t experience the color of a bird independently from its shape, for instance.

Here’s the idea: IIT makes the compelling case that consciousness is simply integrated information.

The main claim of IIT is that consciousness is integrated information, which means that our brain is an information integration machine: it combines a maximum of information with a maximum of order, or integration.

IIT also offers a new way of measuring consciousness. It uses a measure called phi to assess the extent to which a system integrates information – meaning the extent to which it is conscious. Phi measures how much information a system generates as a whole, compared to the information generated by its individual parts. You might think that a system can only ever generate as much information as the sum of its individual components. But for most complex systems, that’s not the case. In those systems, each individual part is in a complex relationship with the other parts.

Just think of a flock of birds. It's made up of individual birds, but the flock itself seems to have a life of its own. Similarly, a computer can create very complex output from a few simple equations. The more extra information a whole system generates, the higher phi will be – and, according to IIT, the more conscious the system is.

Naturally, systems that score low on information and systems that score low on integration will score poorly with phi, too. The human brain, with its billions of interconnected neurons, should score extremely high.

There’s just one catch: measuring phi in practice is virtually impossible. In mathematics, information is generated when uncertainty is reduced. Rolling a die generates more information than flipping a coin, for example, because a die roll rules out more alternative possibilities than a coin flip does. So in order to measure the information our brain generates, we would have to know all the possible ways it could behave. But at present, we can only record what the brain does – not all the things it could possibly do.
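
As a back-of-the-envelope illustration of “information as reduced uncertainty” (standard textbook arithmetic, not the author’s numbers): an observation that narrows N equally likely possibilities down to one yields log2(N) bits.

```python
import math

# Bits of information gained by observing one outcome of a fair process:
# the base-2 log of how many equally likely possibilities get narrowed down to one.
coin_bits = math.log2(2)  # 1.0 bit   - a coin flip rules out just one other outcome
die_bits = math.log2(6)   # ~2.58 bits - a die roll rules out five other outcomes

print(coin_bits, die_bits)
```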

So the most important measure of IIT is still scientifically untestable. But as a piece of philosophy, IIT makes for a compelling theory of consciousness.

The contents of our consciousness are simply our brains’ controlled hallucinations.

What if we told you that you’re hallucinating all the time?

This theory is actually not as crazy as it sounds. In this blink, we'll turn to a second aspect of consciousness: content. And we’ll discover that what we consciously experience as real is just our brain’s best guess at reality.

Whenever we are conscious, we are conscious of something: sights, sounds, smells, or our own thoughts – often all of them at once. And usually, we have a distinct feeling that what we are conscious of corresponds directly to what's out there in the world.

But that's not really the case. Our senses are not like windows that allow our brain an unfiltered view of the world. Far from it! Our brain is actually blind, deaf, and unfeeling. It just does its best to make sense of the nerve signals coming in from our sensory organs.

Here’s the key idea: The contents of our consciousness are simply our brains’ controlled hallucinations.

Scientists have long understood that our senses are not perfect instruments for processing reality. But for a long time, they were at least sure of the direction in which the process worked: our senses pick up signals from the outside world, pass them on to the brain, and then the brain tries to make sense of them. But what if it is really the other way around?

In the nineteenth century, the German scientist Hermann von Helmholtz came up with a radical new idea. He suggested that perception is a process of unconscious inference: drawing on past experience, our brain constantly forms hypotheses about what’s out there in the world, and then uses sensory signals to correct those hypotheses. Conscious perception doesn’t work from the outside in, but from the inside out.

This is what makes the contents of our consciousness like hallucinations; our brain is essentially making them up. Luckily, these are usually controlled hallucinations. It’s true that our brain constantly makes predictions about the world – but it uses sensory signals to correct those predictions. In real hallucinations, our perceptual expectations are so strong that they become untethered from external reality.
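
One way to picture this “controlled hallucination” loop is as a toy sketch (an illustration, not the author’s model): the brain holds a prediction, compares it with the incoming signal, and nudges the prediction by a fraction of the error. The single “brightness” variable and the numbers here are invented purely for illustration.

```python
def update_prediction(prediction: float, sensory_signal: float,
                      trust_in_senses: float = 0.3) -> float:
    """One step of a toy prediction-correction loop.

    The brain's guess is nudged toward the sensory signal by a fraction
    of the prediction error; how big a fraction depends on how much the
    incoming signal is trusted.
    """
    prediction_error = sensory_signal - prediction
    return prediction + trust_in_senses * prediction_error


# Toy example: the brain expects a dim room (0.2), but the light is actually on (0.9).
guess = 0.2
for _ in range(10):
    guess = update_prediction(guess, sensory_signal=0.9)

print(round(guess, 2))  # the guess has converged toward what the senses report (~0.88)
```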

In many ways, what we call reality is just the perceptual hallucinations we all agree on. So, is the whole world just a dream? No. There’s a physical world out there, and its physical objects have primary qualities like texture, motion, and the occupation of space. But their secondary qualities – like color, taste, or smell – all depend on our perceptions of them.

Our brains are Bayesian prediction machines, using prior beliefs to make best guesses about the world.

So the world we perceive is just a controlled hallucination of our brain, based on best guesses. But how does our brain make those guesses?

Let's look at a bit of mathematical reasoning to explain this.

In the eighteenth century, the Reverend Thomas Bayes developed a method of probabilistic reasoning now often described as “inference to the best explanation.” Bayesian reasoning is all about using probabilities to find the best explanation for an observation.

Imagine you wake up one morning, look outside your window, and discover that your lawn is wet. Did it rain last night, or did you forget to turn off the sprinklers? The probability for each explanation will be different, based on your prior beliefs and the likelihoods of certain individual scenarios.

As a good Bayesian, you might first ask yourself: Do I live in Las Vegas or Thailand? Am I a forgetful person? Did the weather report mention a storm? Following Bayes, you would pick the explanation that has the highest total probability for your particular situation.
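
Here’s how that weighing-up looks as a minimal Bayesian calculation in Python. The priors and likelihoods are invented numbers for the sake of the example – they’re not from the book, and a full treatment would consider more than two possible causes.

```python
# Invented priors: how plausible each cause is before looking at the lawn.
prior = {"rain": 0.3, "sprinkler_left_on": 0.1}

# Invented likelihoods: how probable a wet lawn is, given each cause.
likelihood_wet_lawn = {"rain": 0.95, "sprinkler_left_on": 0.90}

# Bayes' rule: posterior is proportional to likelihood * prior,
# then normalized so the candidate explanations sum to 1.
unnormalized = {cause: likelihood_wet_lawn[cause] * prior[cause] for cause in prior}
total = sum(unnormalized.values())
posterior = {cause: round(value / total, 2) for cause, value in unnormalized.items()}

best_explanation = max(posterior, key=posterior.get)
print(posterior)         # {'rain': 0.76, 'sprinkler_left_on': 0.24}
print(best_explanation)  # 'rain' - the explanation with the highest overall probability
```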

The key message here is: Our brains are Bayesian prediction machines, using prior beliefs to make best guesses about the world.

Bayesian reasoning is used everywhere in science, from medicine to military strategy. Its strength is that it allows us to assess complex situations, and take the reliability of information into account.

In the last blink, we learned that our brains are prediction machines – constantly guessing at what’s out there in the world. And they use Bayesian reasoning to make their predictions. They draw on prior beliefs about the world to make best guesses about it, and then update those predictions through incoming sensory signals. Prior beliefs can be as concrete as “my dog is small, brown, and furry” or as abstract as “light comes from above.”

The idea that our beliefs shape our perceptions goes back many decades. In the mid-twentieth century, art historian Ernst Gombrich popularized the term “beholder’s share.” Gombrich was interested in the part that the viewer played in interpreting artworks. He believed that there was no innocent eye – that all perception was colored by preexisting beliefs and concepts.

Today, Gombrich’s philosophy has been corroborated by science. The researcher Yair Pinto, for instance, showed that we perceive things more quickly when we expect them. In his experiment, rapidly changing geometrical patterns were flashed to one of the participant’s eyes, while a picture of a house or a face slowly faded in to the other eye. When people had been told to look for a house, they were quicker to detect the house than the face, and vice versa – demonstrating that expectation influences the speed of perception.

Your sense of self is the product of many controlled hallucinations working together.

Now we turn to the third and final component of consciousness: our self-perception. Just as Thomas Nagel suggested in his essay, being you feels like something. So does that mean that there's an actual self, some immaterial soul, hidden deep inside of you? Well, not really.

Your sense of self is just another controlled hallucination of your brain. And it’s actually not a single sensation – it’s made up of many different aspects.

The first is embodied selfhood. In some important way, we feel that our body belongs to us. Then there’s perspectival selfhood: we perceive the world from a particular point of view. Next, there’s volitional selfhood – or free will: we believe that our actions are under our control. Then there’s the narrative self, our sense of personal identity, built from our unique history and experiences. And lastly, there’s the social self, the part of us that is aware of how others might be perceiving us.

Here’s the main idea: Your sense of self is the product of many controlled hallucinations working together.

Of course, you don’t perceive these different aspects as distinct from each other. They’re all mingled together in an overall sense of being you. But that unified sense of self doesn’t mean that there’s an actual, immaterial soul. There are many examples of the different types of selfhood coming apart, becoming dysfunctional, or overpowering each other.

In “alien hand syndrome,” for instance, people’s embodied and volitional selfhood is compromised: they feel as if the actions of their own hands aren’t caused by them. During anesthesia, epileptic seizures, or under the influence of drugs, many people have out-of-body experiences that challenge their perspectival sense of selfhood. New experiments with virtual reality show how easy it is to feel ownership over another body: at the BeAnotherLab in Barcelona, for example, you can temporarily “swap bodies” with a random fellow visitor, and soon feel as if you’re inhabiting their body. Finally, patients whose brain hemispheres have been surgically divided to relieve epilepsy sometimes develop two distinct personalities.

These extreme experiences show that our sense of self is not as stable as we perceive it to be under normal circumstances. Rather, it’s a mix of self-related beliefs, values, memories, and the brain’s perceptual best guesses that can easily come apart. The experience of being you is just another one of our brain’s clever hallucinations.

Of course, our brains don’t just hallucinate for fun. As it turns out, having a unified sense of self is super useful for our survival, too.

Consciousness is a natural function of our animal bodies.

In the seventeenth century, the French philosopher René Descartes argued that all living creatures are “beast-machines.” Humans, he believed, are the only beast-machines also equipped with a divine, immaterial soul.

Descartes was at least partially right: the human body does work a bit like a machine. But we don’t need an immaterial soul to explain why we’re conscious. Our consciousness is an intrinsic part of our living, breathing body-machine. We experience the world with, through, and because of our living bodies – not in spite of them.

Consciousness is a product of our biological evolution, just like the rest of our body. Of course, evolution didn’t design us so we could investigate ourselves – it designed us to stay alive and procreate. Consciousness, including self-consciousness, is just another useful tool in our survival kit.

Here’s the key message: Consciousness is a natural function of our animal bodies.

In order to understand why consciousness is useful for survival, we need to take a detour to a long-forgotten field of science – cybernetics. In the 1970s, cybernetics was a trending research subject exploring how animals and machines control their bodies and communicate.

Cybernetics enthusiasts William Ross Ashby and Roger Conant, for instance, pioneered the idea of control-oriented perception. Animals need to regulate their essential bodily functions, like body temperature and oxygen levels, which must stay within strict limits. Control-oriented perception helps us control present and future bodily states by guiding our perception in a certain direction: reaching for food when we’re hungry, for instance, or running away from a predator when we’re in danger.

Indeed, all animals are constantly fighting the second law of thermodynamics: the tendency of any system to disintegrate over time, toward higher entropy or chaos. Being alive means maintaining a state of low entropy. This is why all animals create perceptual models of their environment that allow them to make predictions, take action, and minimize entropy.

This is also true for humans: our controlled hallucinations are also controlling hallucinations.

Take self-consciousness as an example. Our sense of a volitional self is useful because it makes us feel that, in any given situation, we could have acted differently than we did. Given all the physical and biological factors that influenced us at that moment, this probably isn’t true: we likely couldn’t have acted differently. But the feeling of volition is useful for learning: it means that next time, we actually will be able to do things differently.

Our experience of volition helps us navigate the world, and learn from our previous actions. On this reading, our sense of free will might be just another controlled and controlling hallucination – albeit a very advanced one.

But what about the other creatures out there?

All living beings are conscious to some degree, and it’s likely that consciousness depends on biological processes.

A few centuries ago, it wasn’t all that uncommon for unruly animals to end up in court. Pigs were executed for murdering children, eating holy biscuits, or abetting a crime – by grunting in approval.

Today, the idea of dragging a pig to court seems ridiculous. But so does Descartes’ idea that animals are just “beast machines” with no internal world.

So, how conscious are animals, really?

Sadly, there’s no perfect way to test animal consciousness. For a while, scientists used the “mirror test”: they marked an animal with a red dot in a spot that it usually couldn’t see, and then had it look in a mirror. If the animal reacted to the spot, that meant it recognized its reflection as itself. Humans from the age of three onwards easily pass this test. But with the exception of some great apes, a few dolphins, and a single elephant, every other mammal has failed the test. Even dogs and monkeys flunk.

So, what’s the key message? All living beings are conscious to some degree, and it’s likely that consciousness depends on biological processes.

In the mirror test, self-recognition is meant to serve as proof of consciousness. But there could be many reasons why animals fail to pass – and even if they don’t have self-awareness, they’re probably still conscious.

Most people have the intuition that mammals – and even birds and fish – have some degree of consciousness. Indeed, all of them show brain activity similar to ours across sleeping and waking states. But we can’t just go by how similar they are to humans.

There’s increasing evidence that animal consciousness can be very different from our own. The most famous example is the octopus. Octopuses are extremely clever, but their nervous system is very different from ours. It’s almost like their brains are spread throughout their entire bodies, and their limbs have a consciousness of their own. Still, no octopus has ever passed a mirror test.

But what about non-biological creatures? Will our computers ever be conscious? Sci-fi writers, futurists, and fans of AI, or Artificial Intelligence, think it’s possible – and that it will, in fact, happen any minute.

But if consciousness is the sum of the human brain’s controlled hallucinations about itself, our bodies, and our environments, it’s not entirely clear how a machine could ever replicate this dynamic. Our consciousness is deeply entrenched in our existence as living, breathing, biological beings. Every cell of your body contributes to the feeling of being you.

Final summary

The key message in these blinks:

Consciousness is a complex phenomenon, but it’s not the unknowable, divine spark that some philosophers make it out to be. Rather, it’s a natural function of our living, breathing bodies. It’s the sum of our brains’ “controlled hallucinations” about the world, our bodies, and ourselves. As science continues to study all the different aspects of conscious experience, we will slowly unravel the mystery of consciousness.
