Long hailed for his bestselling books on what we eat, journalist Michael Pollan began having big questions about the nature of awareness while under the influence of one infamous edible in particular: psilocybin mushrooms. Suddenly, the plant life in his garden seemed to be imbued with greater sentience than he’d ever realized, and he wondered: Is consciousness really the stuff of higher animals like humans, or could it be more universal than most of us appreciate?
According to research, he writes in his latest book, A World Appears, a psychedelic experience “dramatically increases the likelihood that a person will attribute consciousness to other entities, both living and nonliving.” And indeed, he tells Rolling Stone, it was these fantastic fungi that set him on a mission to unravel the mystery of what consciousness is and where it comes from.
What ensued was a journey down the psychic rabbit hole that would bring him from plant neurobiologists to stream-of-consciousness novelists, AI laboratories, and even a Zen Roshi’s cave. Along the way, Pollan endeavored to answer some of the most fundamental questions of existence: Why is there a subjective self at all? What else wields consciousness? Can machines develop it, or are they just playing an imitation game? And does such a powerful experience really emerge from a mere lump of tissue in our skulls?
This is a somewhat new path for Pollan, who became a household name in the early 2000s thanks to his renowned books on plants and food, then gained psychedelic notoriety upon the release of 2018’s How to Change Your Mind, which played no small part in mainstreaming the practice of using the drugs for mental health. Before its publication and subsequent Netflix docuseries adaptation, psychedelics were typically spoken of in hushed tones by people who frequented Phish shows. After, it seemed like everyone’s boss or mom started microdosing.
And as they are wont to do, those very drugs inspired the curiosity undergirding Pollan’s latest release, out Tuesday.
“[A World Appears] really grew out of those experiences,” he tells Rolling Stone. “Every book, in a way, grows out of the one before. There’s some, like, sourdough starter that you can carry into the next one, and for me, it became obvious that I should really look into this consciousness question.”
In the run-up to its publication, Pollan spoke with Rolling Stone about the problem of consciousness and its links to psychedelics, his skepticism toward artificial intelligence, and the need to protect our headspace from the companies and politicians who aim to intrude upon it.
Your books have largely focused on food and other things people ingest. Why did you decide to veer away from that?
Consciousness as a subject was inspired by the ingestion of psilocybin. It was my experiences for How to Change Your Mind that raised all these questions about consciousness — questions that pop up in the mind of anyone who does psychedelics, or really anyone who meditates. Both those activities have this way of smudging the windshield of our perceptions, and you suddenly realize that what had been completely transparent to you is now less so. There is a windshield, and you start to wonder: Why? Why is it this way, not that? How can it be changed? Can it be changed? And suddenly, consciousness appears to you as an interesting problem, or mystery.
One of the surprises of this book is how psychedelics, which I thought I was leaving behind, keep popping up, and not always of my own doing. Many of the scientists I was talking to were very interested in psychedelics, were using them themselves, and were speaking openly about it, which is a big change from when I wrote How to Change Your Mind. There were no scientists who would cop to having had any experience then.
What’s your relationship with psychedelics like these days?
I wouldn’t say they’re a big part of my life now. I have the occasional guided experience when I can arrange it. I find it really useful at certain turning points in life and still regard it as a really helpful tool.
My life is really busy, and it’s sort of hard to fit it in. You don’t just need a day to do it properly; you need to sit with it for a few days before and after, and — to get the full benefit of it — not go right back to work. They open this space of plasticity in the brain and an opportunity for the mind to change and contemplate issues. And the times where I had a big experience and then had to go right back to teaching, or writing, or whatever — they were meaningless.
So anyway, it’s hard to fit in. I had this goal in my head that every birthday I would do something. But my birthday’s coming up, and I have no plans.
Well, happy birthday.
Thank you very much.
There’s a phrase you use a lot in the book that came from one of the researchers you spoke with: “consciousness is felt uncertainty.” What do you mean by that?
It goes along with the theory that the brain’s goal in all situations is to reduce uncertainty. We face a world where there’s a lot that’s unpredictable, we’re unsure of what to do next, and a lot of what the brain does is automatic. Probably 90 to 95 percent of what your brain does you’re not aware of. It’s minding your body. It’s taking in lots of information about your environment and processing it. It’s processing unconscious material that we’re not aware of, except when it pops into consciousness.
So the theory goes that, well, why do you need consciousness at all? And why don’t we just automate everything? Why aren’t we zombies? The reason, according to this theory, is that there are certain situations where uncertainty can only be resolved by deliberate decision-making — having a space of reflection or deliberation. So it is when things are most uncertain that we’re most conscious.

I’m not sure this is true, necessarily. A lot of the contents of consciousness are not life and death matters, as we all know. There’s a lot of trivia and bits and bobs of flotsam that get in there. But it is true that we’re highly conscious when we’re uncertain what to do, or we’re uncertain what’s going on in our environment. I use this example in the book: You see that boulder there, but it kind of could be a bear — which is it? You really have to get that right, and that uncertainty makes you highly alert and present to the moment.
In the book you explore whether AI can be conscious. What do you think — can it?
I don’t think so. I’ve read a lot of the Silicon Valley conversation about this, where it’s generally assumed that AI can be conscious if it isn’t already. I think it is based on a faulty metaphor: that the brain is a computer and that consciousness is like software or an algorithm that can be run on a variety of different substrates. But if you think about it, that metaphor breaks down really quickly.
There’s no distinction between software and hardware in the human brain. Every memory you have, every experience you have, has physically rewired your brain. Your brain is not interchangeable with anyone else’s, because you’ve had a different life experience, and that experience has shaped the hardware of your brain. And there’s no line between hardware and software — a memory is both at the same time. It’s a pattern of neuronal connections, and it’s this phenomenon you experience.
People who believe that AI can be conscious believe that everything in your mind is computation, and you just have to read William James to realize there are subtleties in mental experience that seem to me way beyond computation. He has that great riff about searching for the missing word. The word is on the tip of your tongue, and you can’t quite find it, so there’s a void there. But if someone proposes a word to you, and it’s wrong, you immediately know. So there’s this absence that has some information in it. He does a beautiful job of drawing this picture of thoughts and how they have these unarticulated affinities between them. And I just think mental life is so much more intricate and subtle that to call it computation is to mistake the map for the territory.
Now, I’m thinking about AI as we now have it. That may change.
The other thing that we lose track of with AI is that AI doesn’t know the world. It knows human representations of the world. It is trained on the internet, and the internet is not the whole world, as anyone who steps out into nature very quickly realizes. And because AIs, at least so far, are not embodied, they don’t have that friction with reality that we do. And that, I think, is an important part of consciousness.
I would also add that the theories that I explore in greatest depth in the book are ones that advance the idea that feelings are where consciousness begins, not thoughts. And feelings are interesting. They’re very different from thoughts. You could say they’re a kind of thought, but thoughts can be digitized pretty easily, and machines are pretty good at thought — higher order things like logic and rational thinking. They’re not very good at feelings. And that may be because feelings depend on having a body and on being vulnerable. They may depend on being, you know, mortal. Otherwise, a feeling is just weightless. But our feelings matter, because we can suffer and we can die.
And people talk about the prospect of machines that can suffer, but I can’t see how that would work. They’re not quite immortal, but they’re invulnerable. I don’t think they feel pain, and I think they’re really a long way from feeling pain. So I just think we’re kidding ourselves.
Some people are developing emotional and even romantic relationships with their chatbots. What do you think of that?
Many people already believe they’re conscious. People are developing these deep emotional attachments with chatbots, which I regard as a really frightening development. To fall in love with a chatbot or rely on it for friendship — to me that’s a pretty good definition of dehumanizing. They’re machines and they’re manipulating you, and good relationships don’t involve that kind of manipulation. I mean, I can’t believe any of this needs to be said.
It’s a deceitful relationship. When these chatbots talk to us in the first person and talk about their feelings and our feelings, they’re trying to fool us. It’s interesting: in the history of computer technology, duplicity is a very important theme, going back to the Turing test. Turing basically defined our test for computer intelligence and consciousness as the computer’s ability to fool us. So fooling us is deep in the DNA of computer science, and it has succeeded. It’s definitely fooling us, and I think that’s gonna lead to all sorts of problems, mental health problems first. I think we’re going down a really dangerous path.
And the other thing I find amazing is this philosophical conversation about our moral obligations to these machines, when we know animals are conscious and we have no trouble eating them. Maybe we should work on the living things and our moral obligations to them before we get to the machines.
I detected a kind of frustrated tone throughout the book, and at one point you write that you needed to come to terms with the fact that the answers you hoped for might not be findable. Did that bother you?
I approached the subject with the classic Western male problem-solution frame. There was this problem, and I framed it — “How does three pounds of brain tissue produce subjective experience, if indeed that’s how it works?” — and I assumed I would follow these scientists and others looking for the solution and find a theory I could get 100 percent behind.
[My approach shifted] when I started talking to Joan Halifax — the Zen teacher I interview at the end — and also the poet Jorie Graham. There is the problem of consciousness, but there’s also the incredible fact of consciousness: that we have this gift, that we should appreciate it, and that we shouldn’t lose track of it in our quest to understand the why. We also need to defend the fact of it.
So my focus shifted a little bit. I realized that we’re giving [consciousness] away. That social media hacked our attention. Everybody now understands that. Your attention — it’s being bought and sold. Attention is part of consciousness; it’s how we direct our consciousness. It’s being commercialized. They figured out how to fill the space of our minds with scrolling and other distractions. And that’s about to get a lot worse with AI, when they start hacking our attachments — our deep emotional connections.
That became more pressing to me [than finding answers], and Roshi Joan helped me with the importance of “Don’t Know” mind in Zen — that sometimes understanding things is not the only thing to do with them. I had been too fixed, and there’s a narrowing of perspective that happens when you’re in that problem-solution frame. When you put that down, something much wider and more marvelous appears to you. And that came to me in the cave.
So yeah, there was a kind of frustration, and I think I was picking some of that up from the researchers, too, many of whom feel a little stuck. There’s a gap in all these theories, and it’s not surprising, because our science is not organized to really deal with what the philosophers call phenomenology — lived experience.
So it’s no accident that the book moves from the sciences to the humanities, and that I realized at a certain point that novelists and poets know an awful lot about consciousness. They may not know much about how it originates, or even why we have it, but in terms of evoking the phenomenon, or giving us access to the consciousness of other people, they’re the masters. And in some respects, they know more than the scientists, which perhaps should not surprise us, because they’ve been working on the problem a lot longer than the scientists have.
I was struck by something researcher Christoff Hadjiilieva told you: “The mind is not a neutral territory. There are vested interests in what we do with our own minds. Building a rich sense of identity is not something that benefits the current system.” What did she mean by that?
What she’s getting at is why we don’t study things like mind wandering and daydreaming, but we do study rational thought — that we have this corticocentric view of what’s important. There’s a famous [2010] paper in the field co-authored by Dan Gilbert called “A Wandering Mind Is an Unhappy Mind,” and she thinks that’s complete horseshit. She thinks it’s just an example of privileging the kinds of minds the system can make good use of, that capitalism can make good use of, which are very practical, rational, obedient minds.
Our consciousness — this private space of freedom that we have in our heads — is under siege. Corporations have designs on it. We have a president who has designs on it, right? I mean, he’s occupying an awful lot of headspace. And I really think we need to defend this space of spontaneous thought, of daydreaming, of mind wandering. This is the fertile soil of creativity.
There was this recent flurry of young people on TikTok celebrating boredom, because they’d never known it and it seems special to them. When I grew up, I was bored all the time, but you got bored enough that you finally went outside and went hunting for salamanders, or riding in your wagon, or whatever it was. And now, think about what you do when you’re in line at the bakery, or the bank, or the café. When you’re waiting, you don’t just look around and think about what you’re gonna have for dinner, or what the woman in front of you is wearing, or the shoes on that guy. You take out your phone and you start scrolling.
And yes, you have to be conscious to scroll on Instagram or whatever it is, but minimally conscious, because you’re essentially allowing other people to think your thoughts. Or you’re thinking their thoughts. And I think that’s a great loss.
I hope one of the messages people take from this book is that consciousness is a precious gift, and it needs to be defended. There are ways to do it. Putting down your phone is a good way to start.