The TED AI Show
Could AI really achieve consciousness? w/ neuroscientist Anil Seth
December 10, 2024
Please note the following transcript may not exactly match the final audio, as minor edits or adjustments could be made during production.
[00:00:00] Bilawal Sidhu: Hey, Bilawal here. Before we start the show, I have a quick favor to ask. If you're enjoying the TED AI Show, please take a moment to rate and leave a comment in your podcast app. Which episodes have you loved, and what topics do you want to hear more of? Your feedback helps us shape the show to satisfy your curiosity, bring in amazing guests, and give you the best experience possible.
In the rush to develop increasingly sophisticated artificial intelligence, a big question keeps floating around. You know this question: how long will it take before some massive breakthrough, some kind of singularity, emerges, and suddenly AI becomes self-aware? Before AI becomes conscious? But we're getting way ahead of ourselves.
Lately, reports from AI researchers suggest that AI models are not improving at the same rate as before, and are hitting the limits of so-called scaling laws, at least as far as pre-training is concerned. There are also worries that we're running out of useful data, that these systems require better quality and greater amounts of data to continue growing at this exponential pace.
The road to a machine that can think for itself is long, and it's starting to sound like it may be even longer than we think—for now. Clever interfaces like ChatGPT's Advanced Voice Mode, the one I experimented with in an earlier episode this season, help give some illusion of a human at the other end of this conversation with an AI. I was surprised by how much it actually delighted me, and even kind of tricked me, at least a tiny little bit, into feeling like ChatGPT was really listening like a friend would.
The thing is though, it's a slippery slope. We're building technology that is so good at emulating humans that we start ascribing human attributes to it. We start wondering, does this thing actually care? Is it actually conscious? And if not now, will it be at some point in the future? And by the way, what even is consciousness anyway?
The answer is trickier than you might think. To unpack it, I spoke with someone who's been tackling this question from the inside out from the perspective of the one thing we know is conscious: the human brain.
[00:02:20] Anil Seth: One of my mentors, the philosopher Daniel Dennett, we sadly lost earlier this year. He said, “We should treat AI as tools rather than colleagues, and always remember the difference.”
[00:02:33] Bilawal Sidhu: That's Anil Seth. He's a professor of cognitive and computational neuroscience at the University of Sussex. He studies human consciousness and wrote a great book about it. It's called Being You: A New Science of Consciousness, and that quote from his mentor? It's something that sticks with him.
[00:02:49] Anil Seth: It sticks with me because I think we have this tendency to always project too much of ourselves into the technologies we built.
I think this has been something humans have done over history, and it's always got us into trouble, because we tend to misunderstand, then, the capabilities of the machines we build, and also we tend to diminish ourselves in the process. And I think the recent explosion of interest in AI is a very prominent example of how we've fallen prey to this problem at this moment.
[00:03:22] Bilawal Sidhu: So this is why Anil's on the show today. He's come to share why he thinks it's imperative we see AI as a tool, not as a friend, and why that difference matters to not only the future of this technology, but also the future of human consciousness. I'm Bilawal Sidhu, and this is the TED AI Show, where we figure out how to live and thrive in a world where AI is changing everything.
So Anil, I've been thinking about how not long after we invented digital computers, we started referring to our human brains as computers. Obviously there is a lot more to it than that, but what is helpful and not helpful about describing our brains as computers?
[00:04:13] Anil Seth: It's clearly very helpful. I mean, there's my title, my academic title is Professor of Computational Neuroscience, so I'd be rather hypocritical to say that it was not a useful way of thinking to some extent.
And there's a very lively debate, uh, mainly in philosophy rather than neuroscience or in tech, about whether brains actually do computation as, as well as other things. And in fact, the metaphor of the brain as a computer has clearly been very, very helpful. If you, if you just look inside a brain, you find all these neurons and chemicals and all kinds of complex stuff, and computation gives the language to think about what brains are doing that, that means you don't have to worry so much about all that.
And of course at the beginning of AI there was this idea that intelligence might be a matter of computation. Alan Turing famously asked the questions about whether, whether machines can think, and universal Turing machines were, were specified theoretically, which can do any computation, and the idea that, well, that might be what the brain is doing becomes very appealing.
Also, at the birth of AI, Walter Pitts and Warren McCulloch realized that neural networks, these simple abstractions of artificial neurons, uh, that are connected to each other, that underpin a lot of the, the modern AI we have now, can actually serve as universal Turing machines. So we have this, this temptation, this idea to think, yeah, the brain is a network of neurons, networks of neurons can be universal Turing machines, and these are very powerful things, so maybe the brain is a computer. But I think we're also seeing the limits of that metaphor and all the ways in which actually brains might perform computations, but they may also do other things. And fundamentally, you know, we, we always get into trouble too when we confuse the metaphor for the thing itself.
[00:06:07] Bilawal Sidhu: I love that and I, I, I think a big chunk of that is also we talk so much about sort of these supercomputing clusters and just how fast technology is moving, and we're almost, you know, losing some appreciation for the intelligence that's inside our craniums. And to put it very plainly, how much more complex is the brain today compared to even the most advanced AI systems?
[00:06:30] Anil Seth: I mean, it's, it's a totally different thing. I, I think we, we really do the brain a great disservice if we think of it purely in terms of sort of number of, of neurons. But even then, there are 86 billion neurons in the human brain, a thousand times more connections. It's incredibly complicated even at that level.
Also, the, the brain is, is so intricate. Like, the connectivity in one area might be slightly different from the connectivity in another area. There are also neurotransmitters washing around. The brain changes every time a neuron fires. Synaptic connectivities change a little bit. It's not a stable, um, architecture.
And then there are all the glial cells and all the supporting gubbins that, that we often don't even think about, but are turning out to actually be significantly involved in, in the brain's function. There was, there was a recent paper in Science, I think, that had this gargantuan, impressive effort to unpack in as much detail as possible one cubic millimeter of brain tissue in the human cortex. In this one cubic millimeter, you've got 150 million connections, nearly 60,000 cells, and, and to store all that data on a standard computer was just an enormous amount to characterize. And even this is just a, you know, it's not everything. Right?
[00:07:52] Bilawal Sidhu: Yeah.
[00:07:52] Anil Seth: This is just a very detailed model. The brain is very complex. Very complex.
[00:07:56] Bilawal Sidhu: That is quite amazing. What's also interesting about the sheer complexity of the brain is, uh, the brain doesn't sit in a vat, right? At least not usually. And of course the brain works in concert with the rest of the body. Does that aspect of being embodied give humans any advantages over AI systems?
[00:08:12] Anil Seth: I think it depends what you want the system to do. You're absolutely right, brains didn't evolve in isolation. They evolved in response to certain selection pressures. What were those selection pressures? They weren't sort of, write computer programs or write poetry or, or, you know, solve complex problems. Fundamentally, brains are in the business of keeping the body alive, and later on, moving the body around.
So control of action, things like that. And those imperatives are fundamental, to me, to understanding what kinds of things brains are: they are part of the body. They're not this kind of meat computer that moves the body around from one place to another. Chemicals in the body affect what's happening in the brain. The brain communicates with the gut. Um, even with the microbiome, we're seeing all these kinds of effects that transpire within the body. And then of course, the body is embedded in a world, and there's always this feedback from the world. And understanding these nested loops of how the brain is embedded within a body and the body is embedded within a world, I think that's a, a very different kind of thing than the abstract, disembodied ideal of computation that drives a lot of our current AI, and of course is also represented in a lot of science fiction, too. Things like, um, HAL from 2001, which, okay, there's a body, the spaceship, but it's a kind of disembodied intelligence in many ways.
[00:09:46] Bilawal Sidhu: Then how important is it that we're embodied to have consciousness and intelligence? And we'll get to the definitions in a bit, because I'm curious what happens when you embody an AI. I'm, of course, thinking of all the humanoid robot demos that we've seen lately, where it seems to be this crude representation of kind of what we do.
Like we've got sensor systems that perceive the world and we build a map of it, and then we can figure out how to take action in it.
[00:10:09] Anil Seth: This is a very, this is a fascinating question. I mean, so far, you know, the, the AI systems that we have, the ones we tend to hear about mainly anyway, language models and, and generative AI, they, they tend to have been trained and then deployed in a very disembodied way.
But this is, this is changing. Robotics is improving too. It's lagging behind a little bit as it always does, but it is improving and there are fascinating questions about what difference that makes. One possibility that strikes me as plausible is that embodying an AI system so that you train it in terms of physical interactions, don't just drop a pre-trained model into a robot, but everything is trained in an embodied way, might give us grounds to say that AI systems actually understand what they say, if it's a language model, for instance, or understand what, what they do, because these abstract symbols, words that we use in language, and there's a good argument that ultimately their meaning is grounded in physical interactions with the world.
But does this mean that AI systems not only are intelligent and possibly understand, but also have conscious experiences? That's a separate question, and I think there's many other things that might be, um, necessary, uh, for us to think seriously about the possibility of AI being conscious.
[00:11:32] Bilawal Sidhu: I think that brings me to the logical next question, which is, what is the difference between intelligence and consciousness?
Uh, perhaps let's start with intelligence.
[00:11:42] Anil Seth: Both intelligence and consciousness are tricky to define, but most definitions immediately point to the fact that they're different. And if we think about a broad definition of intelligence, it's something like doing the right thing at the right time. Um, a slightly more sophisticated definition might be the ability to solve problems flexibly. And whether it's solving a Rubik's cube, or a complex scientific problem, or navigating a social situation adeptly, I mean, these are all aspects of doing the right thing at the right time. And importantly, intelligence is something you can define in terms of function, in terms of what a system does, what its behavior ultimately is.
So there's no deep philosophical challenge for machines to become intelligent in some way. I mean, there may be obstacles that prevent machines from becoming intelligent in this sort of general AI way, which is the way that that humans are intelligent. But intelligence fundamentally is a property of systems.
Now, consciousness is different. Consciousness again, is very hard to define in a way that that's gonna get everyone signed up to. But fundamentally, consciousness is not about doing things. It's about experience. It's the difference between being awake and aware, and the profound loss of consciousness in, in general anesthesia.
And when you open your eyes, you know, your brain is not merely responding to signals that come into the retina. Now there's an experience of color and shape and shade, uh, that characterizes what's going on. A world appears, and a self within it. And Thomas Nagel, I think, has the, the nicest philosophical definition, which is that for a conscious organism, there is something it is like to be that organism. It feels like something to be me; it feels like something to be you. Now you can finesse these distinctions as much as you want, these definitions, but, but I think it's already clear. They are different things.
They, they come together in us humans. You know, we, we know we're conscious and we think we're intelligent, so we tend to put the two together. But just because they come together in us doesn't mean they necessarily go together in general.
[00:13:55] Bilawal Sidhu: As you describe consciousness in this sort of subjective experience, a term that keeps getting thrown around in AI circles now is like qualia, right?
This notion of subjective conscious experiences and figuring out if large language models can actually have this. Um, certainly they're good at like making it seem like they do, especially the jailbroken models. But it also takes me back to something else that you've talked about, which is our perception of reality is sort of this controlled hallucination that we don't fully, like, perceive reality in this completely objective sense. I don't know if that's the best characterization, but I'm, I'm trying to connect the dots there where it seems to be like even our experience of reality is kind of hard to grok and fully explain. And so I wonder, doesn't that point to us not being able to create a very, you know, clear definition to measure that in a synthetic system.
[00:14:44] Anil Seth: Yeah, I, I think you can go even further, actually. I think there's very little consensus on, well, there's no consensus on what would be the necessary and sufficient conditions for something to have subjective experience, to have qualia in this sense. When you and I open our eyes, we have a visual experience.
It's the redness of red, the greenness of green. This is the kind of thing that philosophers call qualia. And there's a lot of argument about whether this is actually a meaningful concept or it's a, it's just something that we think is profound and it actually is a, is just a wrong way of looking at the problem.
But for me, there is a there there, you know. When we open our eyes, there is visual experience, however we label it, as qualia or, or something else. But for a camera on my iPhone, well. No. Um, we, we don't think there's any experiencing going on. Um, so what is the difference that makes a difference, and could it be that some kind of AI that's a glorified version of my camera on the phone would instantiate the sufficient conditions so that it not only responded to visual signals, but had subjective experience too? I think that's the challenge we need to face, because as you say, AI systems, especially things like language models, can be very persuasive, uh, about having conscious experience, and again, especially the ones where you, you ask them to, to whisper and, and get around the guardrails in one way or another, um, they can really seduce our biases. And, uh, so we can't just rely on what a language model says. You know, if a language model says, yes, of course I have a conscious visual experience, that's not great evidence for whether it's there or not. And so we need to think, I think, a little more deeply about what it would take to ascribe conscious experience to a system that we create out of a completely different material.
[00:16:45] Bilawal Sidhu: Material is an interesting point. You're bringing up sort of the substrate from which intelligence and perhaps consciousness can emerge. Because what you are saying is that, you know, it seems clear that we could have a superhuman-intelligence-level AI system that isn't necessarily conscious. But I do wonder, when people make arguments like, hey, well, if we just keep throwing more data and compute at this thing, and it keeps getting more and more intelligent, consciousness will be this emergent property of this system, it almost has this like techno-religious kind of fervor to it.
Why do you think consciousness might be uniquely biological? Why does the nature of the substrate matter?
[00:17:21] Anil Seth: I don't know that it matters, but I think it's a possibility worth taking seriously. Now, in a sense, the opposite claim is, is equally odd. You know, why should consciousness be a property of a completely different kind of material?
[00:17:36] Bilawal Sidhu: Yeah.
[00:17:37] Anil Seth: You know, why would computation be sufficient for consciousness? You know, after all, for many things, the, the material matters. You know, if we're talking about a rainstorm, you know, you need actual water for anything to get wet. If you have a computer simulation of a weather system, it doesn't get wet or windy inside that computer simulation.
It's only, uh, ever a simulation. And the way you set it up is also very informative, because there has been this, this implicit assumption, at least in some quarters, that indeed if you just throw more compute at it and AI gets smarter, in, in ways which can be very, very impressive and very, very sometimes unexpected too, that, at some point, consciousness just arrives, comes along for the ride, and the inner lights come on, and you have something that is also experiencing as well as, um, being smart. And I think that's a reflection more of our psychological biases than it is grounds for having credence in, in synthetic consciousness.
Because why should consciousness just happen at a particular level of intelligence? I mean, you can, you could make an argument that some forms of intelligence might require consciousness. Um, and those may be the kinds of intelligence that we humans have, but that's a, it's a bit of a strange argument because there are plenty of other species out there that don't have human-like intelligence that are very likely conscious.
And there may be more ways to achieve intelligence than through what evolution settled on for, for human beings, which is having brains that are also capable of consciousness. So the question for me, the fundamental question, is: is computation sufficient for consciousness? If we try to design in the functional architecture of the brain as it is and run it in a computer, would that be enough for consciousness, or do we need something much more brain-like at the level of being made of carbon, of having neurons, of having neurotransmitters washing around, of being really grounded in our living flesh-and-blood matter? Well, there's just not a knockdown argument for or against either of these positions, but there are, to me, good reasons to think that computation is likely not enough, and there are at least some good reasons to think that the stuff we are made of really does matter.
[00:20:08] Bilawal Sidhu: Given all of this, you do believe that it's unlikely that AI will ever achieve consciousness. Why is that?
[00:20:14] Anil Seth: I think it's unlikely, but I have to say it's not impossible. And the first reason it's not impossible is that I may very well be wrong, and um, if I'm wrong and computation is sufficient for consciousness, well then it's gonna be a lot easier than, than I think.
But even if I'm right about that, then as AI is evolving and as our technology evolves, we also have these technologies that, that are becoming more brain-like. In, in various ways. We have these whole areas of neuromorphic engineering or neuromorphic computing.
[00:20:48] Bilawal Sidhu: Mm.
[00:20:49] Anil Seth: Um, where we're building systems which are just sticking closer to the properties of real brains. And on the other side, we also have things like cerebral organoids, which are made out of brain cells. They're little mini-brain-type things grown in a dish. They're derived from human stem cells, and they differentiate into neurons which clump together and show organized patterns of activity. Now, they don't do anything very interesting yet.
So it is the opposite situation to a language model. You know, a language model really seduces our psychological biases 'cause it speaks to us. But a, a clump of neurons in a dish just doesn't, because it doesn't do anything yet. You know, for me, the possibility of artificial consciousness there is much higher.
Because we're made out of the same material. To the specific question, why should that matter? Why does the matter matter? It comes back to this idea about what kinds of things brains are and the fact that they're deeply embodied and embedded systems. So brains fundamentally, in my view, evolve to control and regulate the body, to keep the body alive.
And fundamentally, this imperative goes right down, you know, even into individual cells, individual cells, uh, continually regenerating their own conditions for survival. They don't just take an input and transform it into an output. And, and in doing this, you know, I think there's a, a pretty much a direct line from the metabolic processes that are fundamentally dependent on particular kinds of matter, flows of energy, transformations of carbon into energy, things like that, all the way up to these high level descriptions of the brain making a perceptual inference, or as we said earlier, a controlled hallucination, a best guess about the way the world is. So if there is this, this through line from things that are alive and, and why we call them alive, all the way up to the neural circuitry that seems to be involved in, in visual perception or conscious experience generally, then I think there's, there's some reason to think that consciousness is a property of, of living systems.
[00:23:02] Bilawal Sidhu: As you were answering that, in my head, I have this visualization. Maybe the future of this conscious AI system that we finally create isn't going to be a bunch of Jensen's NVIDIA GPUs in some data center, but perhaps this like giga brain that we build out of the very things that our brain is made out of.
That's, uh, one hell of a visual, I gotta say.
[00:23:21] Anil Seth: Yeah. I, I, I think that's, and that, that's a possible future, right? Because we're already on that track with, with neuro technologies, um, and, and hybrid technologies as well. I mean, people can plug organoids, you know, into rack servers. People are beginning to do this already, to sort of leverage, you know, the dynamical repertoire that, that these things have.
Um, and nobody knows how biological a system needs to be in order to move the needle on the possibility for consciousness happening. It may be not at all, or it may be a great deal indeed.
[00:24:08] Bilawal Sidhu: So I have to ask the question, can artificial neural networks then also teach us something about biological neural networks? And, and the reason I ask this, I was reading the Anthropic CEO's rather extended blog post. And he brought up this example where basically, like, a computational mechanism was discovered by AI interpretability researchers in these AI systems that was rediscovered in the brains of mice.
And I was just asking that question: wait a second, so, like, an artificial system, like a very simplified simulation, is still telling us something about the organic in a more complex representation? What's your thoughts on that, and do you think this trend will continue?
[00:24:47] Anil Seth: Oh, absolutely. I, I think this is, for me, the, the line of research; it's certainly the line that I'm following.
The use of, well, computers in general, and AI in particular, they're incredible tools. They're incredible general-purpose tools for understanding things. And you know, even in my own research, this is what we do. I mean, we'll build computational models of what we think is going on in the brain, and we'll see what these models are capable of doing, and we'll also see what predictions they might make about real brains that we might then go and test in, in experiments.
[00:25:22] Bilawal Sidhu: I have to imagine the advances in technology, both on the sensing and the computation side, are making a huge difference, and I'd love to hear some examples.
[00:25:29] Anil Seth: There are examples at many different levels. So for instance, there are, there are algorithms involved in generative AI that might really map onto things that brains do. So at one level it's about discovering what the functional architecture of the brain is through developing these, these new kinds of algorithms.
But then there are other levels too. There's the level at which we might use AI systems as tools for modeling or understanding some higher-level aspects of the brain. So for instance, we use some generative AI methods to simulate different kinds of perceptual hallucination. So the visual hallucinations that people have in, in different conditions, like in psychosis or in Parkinson's disease or, or after psychedelics. And this goes back to some early algorithms by, by Google, in their Deep Dream, when they turned bowls of pasta into these weird images with dog heads sprouting everywhere. But, you know, we can, we can use those in a, in a more serious way to get a handle on what's happening in the brain when people experience hallucinations.
And then right at the other end, and I admit this is something that, for me anyway, is still, uh, uncharted territory and something I'm really interested to explore, is when we actually leverage the tool set that AI is, is delivering now. You know, the, the language models, the virtual agents. And I was reading a paper just the other day about a whole virtual lab that was discovering new compounds to, to bind to the, the COVID, um, virus particles. And, you know, this virtual lab was, was basically doing everything from searching the literature, to generating hypotheses, to critiquing experimental designs and proposing new experimental designs and so on. So I think there's a lot of utility in AI for accelerating the process of, of scientific discovery.
[00:27:25] Bilawal Sidhu: I think AlphaFold is such a great example of that, right? Like, what took a PhD student, you know, the entirety of their PhD to figure out, a couple of molecules, we've now mapped out a huge, huge opportunity space and kind of just put it out there.
[00:27:40] Anil Seth: I, I mean, that, that is such a beautiful example because also it just exemplifies the way in which, um, I think it's productive for us to relate to these kinds of systems because AlphaFold intuitively seems like a tool, right?
We, we treat it, we use it as a tool, or rather the, the biologists do, to just rapidly accelerate the hypotheses they can make at the level of protein binding and so on. Um, we never think of AlphaFold as another conscious scientist. It's not, it doesn't, it doesn't seduce our intuitions in the same way that, that language models do.
Um, so I don't think there's anything quite comparable to, to AlphaFold, you know, in the neuroscience domain. And I'm trying to think what, what the equivalent problem would be. You know, one, one thing might be, and this is, this is very speculative, maybe somebody's working on this already. You know, one of the big unknowns in the brain is, is really how it's wired up.
There was, uh, you know, another recent paper looking at the full wiring diagram of the brain of the fruit fly, and, and this is an incredible resource already. It was computationally incredibly difficult to, to put this together from the little bits of data that you might get in individual experiments. So there could well be a role for, for AI in helping amass large amounts of data to give us a more comprehensive picture of what kind of thing a brain is.
Um, and there may be, you know, many other creative ideas out there too. Uh, but all of them, I think, would treat the AI in, I think, the most productive way: as a kind of tool.
[00:29:20] Bilawal Sidhu: I agree. There's a, there's a lot of, you know, inclination to, I call it the co-pilot versus captain question. A lot of people are like, yeah, this is like my personalized Jarvis and I'm gonna be like Tony Stark in the lab and just like, you know, doing what I need to do and it just like preempts my needs.
And it's cool that it's not constrained sort of by wall clock time, right? That you can just throw more compute at it and they can move faster. But fundamentally to me it feels like humans are still doing the orchestration. Um, what do you think are the risks of going the other route where we start feeling like these systems should be the captain and let's build the grand AGI system and ask it what to do, and then let's do it blindly.
[00:29:57] Anil Seth: Yeah, I, I think, I mean, there's, there's, of course, a huge amount of uncertainty. Maybe it's not a, not a terrible idea in some ways, but it does strike me as something that is certainly not guaranteed to turn out very well, and human intuition still seems very important in interpreting the, the suggestions that might come from, from AI, or just what AI will deliver in whatever context. Having a human in the loop still seems to be very, very important. But there are some larger risks here, that to the extent we do this, then I think we are moving back towards imbuing artificial intelligence with properties that it may not in fact have. You know, things like, oh, it really does understand what it's doing. Or it may indeed be conscious of, of what it's doing as well. I think if we misattribute qualities like this to AI, that can be pretty dangerous, because we may fail to predict what it will do. Um, another concept from Daniel Dennett is something called the intentional stance, and it's a beautiful idea about how we interpret the behavior of other people: I attribute beliefs and knowledge and, and goals to, to you, or to whoever I'm interacting with, and that helps me predict what they're going to do.
Now if we, if we do this with AI systems, and this is what language models in particular encourage us to do, then we may get it right some of the time, but we may get it wrong some of the time too, if the systems don't actually have these beliefs, desires, goals, and so on. And that can be, that can be quite problematic.
[00:31:39] Bilawal Sidhu: There's the other side to all of this too, where, you know, technology's also advancing to a degree where, um, we can kind of coarsely figure out what's going on in people's minds. And so earlier in the season we had Nita Farahany on, and, and she touched on the concept of cognitive liberty. And we were basically nerding out over how we're basically putting all these, like, neural biometric recorders on ourselves. And yes, right now they can coarsely read our brains, and what was even trippier to learn about is manipulating our dreams with targeted dream incubation. What keeps you up at night when you think about sort of the ethical considerations from AI kind of making our minds more of an open book than they have been in the past?
[00:32:20] Anil Seth: One of the things that, that I think about, and I was recently writing a, a paper with a philosopher, Anna Gordon, about brain computer interfaces as well, is, is really why is the skull this, this boundary that we think of as particularly significant here? I mean, we've already given our data privacy away in so many ways.
[00:32:37] Bilawal Sidhu: I mean, that's, it's true.
[00:32:38] Anil Seth: Not a good thing, right? But, but in many ways, at least for people who've been around for a while, the cat's already out of the bag. But the idea of getting inside the skull does seem to be significant, partly because there's no other boundary that's left. And while we're very used to the idea of the importance of things like preserving, um, freedom of speech, there isn't really the same degree of attention paid to something like freedom of thought, right? So we, we are just not used to what kinds of guardrails and moral guidelines we might need in this case. And then there are also, I think, some more subtle worries, certainly in this space of, of brain computer interfaces. Because let's imagine a situation where brain computer interfaces are widely used. A lot of brain data is extracted. It's used to train models, which are then used to, um, underpin the utility of brain computer interfaces, so that they can predict what someone wants to say or do, you know, on the basis of brain activity. Now, there are some extraordinarily powerful and compelling use cases for this kind of thing in medicine, in treatment for people with
um, brain damage or paralysis or blindness. But if we generalize that to enhancement of everybody and we try to think, okay, these things are not just to solve specific clinical problems, but they become part of our society more deeply, then there's a potential that there's a kind of enforced homogeneity.
[00:34:08] Bilawal Sidhu: Yeah.
[00:34:08] Anil Seth: You know, we might have to learn to think a particular way in order to get the, the brain computer interface to work, and that may be a completely unintended consequence, but it strikes me as a, as a worrisome consequence as well. There may also be, you know, just kinds of social inequity that start to happen too about, okay, people with access to these systems can do more or, or will be allowed to train them so they can think in their own distinctive way and not have to think in, you know, in the way that, that the mass market BCIs require.
So I think there's a lot of, there's a lot of sort of feedback cycles that can start to unfold, uh, in this case. But fundamentally, it's that there's really nothing more to privacy once you go inside the skull. And then there's the stimulation thing as well. You know, once brain computer interfaces can be bidirectional, and if they're bidirectional and you start implanting thoughts, goals, intentions, then we're definitely in a very ethically troubling situation.
[00:35:10] Bilawal Sidhu: That last bit to me is, is the stuff that keeps me up. It's like giving a bunch of companies rewrite access to your mind, right? And to your brain. And, and in a sense,
the, the point you brought up about sort of, you know, homogeneity, sort of like a lack of intellectual diversity. We're already kind of seeing that, where people are using LLMs and it's all kind of like the same milquetoast prose, and, you know, people are almost losing the ability to, uh, write and think, and yeah, I think there's something kind of disconcerting about that.
[00:35:42] Anil Seth: Yeah, I mean, there, there might be a more optimistic view of this too, that the, the sort of milquetoast homogeneity of large language model output may cause us to really value human contributions more. You know, just as in, in other situations where there's a kind of, there's a value attached to the handmade, the bespoke, and, and we may end up living in a situation where we just view these two kinds of language quite differently.
And you know, just as someone who grows up in a, in a bilingual household and will naturally learn to speak two different languages, future generations might, might become accustomed to, okay, well that's, that's kind of large language model language, and this is, this is human language. And they just innately feel very different even though they're using the same words.
[00:36:27] Bilawal Sidhu: Oh yeah. Kind of like just learning code switching and, you know, knowing in different contexts how exactly to behave. I think that's a, that's a valid point. Perhaps I have a slightly more jaded take on this, 'cause I'm like, yeah, people are gonna want the Whole Foods experience, but a vast majority of people are like, gimme the free, ad-funded Mountain Dew straight to the vein.
And I deeply, deeply worry about that. So let's leave the lab for a second here. What are the kinds of AI tools that you yourself are using, um, outside of the context of the lab?
[00:36:56] Anil Seth: I'm a pretty light user of, of AI tools, at least the ones that I know about, because of course, one of the things is, you know, AI is hidden beneath the surface of many of the things we use.
You know, every time I use Google Maps, there's, you know, there's machine learning or AI happening there. I, I do use language models, increasingly sort of as verbal sparring partners rather than as sources of text that, that I will then edit or use directly. And, you know, kind of as glorified search engines in, in that sense.
And yeah, I find them more and more useful, but I still don't trust them. I think it's a case of, of using them to help us, to help humans think more clearly rather than to outsource the business of thinking itself.
[00:37:39] Bilawal Sidhu: So have you ever, whilst interacting with all these large language models, felt yourself forming a connection with these systems?
Or are you able to keep that separation and distance? Like, do you ever almost forget it's a tool and treat it kind of more like a colleague? Does it ever feel like that?
[00:37:55] Anil Seth: It does. And you know, this is another of the things that, that keeps me up at night. Back to that, that question. Um, because there's something so seductive about the way we respond to language, that even if at one level we can be very, very skeptical that there's anything other than just sort of statistical machination happening, the feeling that there's a mind that understands and might be conscious is extremely powerful. And I'm thinking, you know, one way of thinking about this is that there are plenty of cases where knowledge does not change experience. So, for example, lots of visual illusions. Um, there's a famous visual illusion called the Müller-Lyer illusion, which is a, a visual illusion where two lines, um, look different lengths because of the way the arrows point at the end.
But if you measure them, they're exactly the same length. And the thing is, even if you know this, even if you understand what's happening in the visual system that gives rise to this illusion, they will always look, you know, the way they do.
[00:39:01] Bilawal Sidhu: There's no like firmware, there's no firmware fix for our brains to fix that.
[00:39:05] Anil Seth: That's right. And so the worry for me is that there will be similarly cognitively impenetrable illusions of artificial consciousness.
[00:39:14] Bilawal Sidhu: Mm-hmm.
[00:39:15] Anil Seth: That if we're dealing with sufficiently fluent language models, especially if they get, you know, animated in, in deepfakes or even embodied in humanoid robots, that, you know, we won't be able to update our own wetware, uh, sufficiently in order to not feel that they are conscious. We will just be compelled to have those kinds of feelings. And that is a very problematic state to land in too, because if we are unable to avoid attributing, let's say, conscious states to, to a system, then again, we're gonna be in the business of attributing to it qualities it doesn't have, and mispredicting what it's gonna do, and leaving ourselves more open to coercion, um, and more vulnerable to manipulation. Because if we think a system really understands us and cares about us, but it doesn't, it's actually just trying to sell us Oreos or something, then that, that, that's a problem.
And I think that the most pernicious problem here is something that goes right back to Immanuel Kant and, and probably before, which is the problem of brutalizing our own minds. Because here, if we are interacting with an artificial system that we can't help but feel is conscious, we have two options, broadly.
We can either be nice to it anyway and care about it and bring it within our circle of moral concern. And that's okay. But it means that, you know, we will waste some of our moral capital on things that don't need it and potentially care less about other things 'cause we humans have this ingroup outgroup dynamic.
If you're in, you're in, and if you're out, you're, you are out. So we might either do that, or we learn to not care about these things and sort of treat them in the same way that we might treat a toaster or a radio. Um, and that can be very bad for us psychologically, because if we treat things badly but we still feel they're conscious,
that's the point that Kant made. That's what brutalizes our minds. It's why we don't poke the eyes out of teddy bears or pull the limbs off, off dolls. You know, the, the science fiction film and then series Westworld dealt with this beautifully. You know, how dangerous it is for us, uh, to take this, this perspective.
So this keeps me up at night, because there's no good option here. We need to think very carefully, not only about the possibility of designing actually conscious machines, which, even if it is unlikely, if it happened, would be very ethically problematic, because of course, if something actually is conscious, it's a moral subject and we would need to be very careful about how we treat it.
But even building systems that give the strong appearance of being conscious is also problematic for different reasons, and this scenario is, is basically already with us or will be very soon unless we, you know, we think very carefully about how we design these systems and design against giving that impression in some way.
[00:42:11] Bilawal Sidhu: I think you very beautifully paint this picture of why it's problematic on both ends, right? Like treating it like the Rick and Morty bit, hey, this robot wakes up: What is your purpose? Your purpose is to put butter on my toast. That is your purpose. Just get, get back to please putting butter on my toast. And it has this existential crisis.
And I think on the other end, the, the Westworld example is, is very valid too, where you have things that are indistinguishable from humans and we go act out all these sort of lower urges, or whatever the right way to put that is. And we suddenly start bringing that sort of behavior to interactions with actual humans.
But the real question I come at is where you end it, which is from a user experience standpoint, right? A lot of people think that it is important to have these systems be as human-like as possible and meet the users sort of where they are. Do you wanna talk about why we need to be more nuanced? And, and do you have any ideas for what that sort of, um, what would be a better way to build these systems?
Because it, it seems like either extreme kind of sucks.
[00:43:11] Anil Seth: I think this, this is super interesting, and, and in fact, I think talking to you just now is, is helping, you know, give focus to this: it is a serious design challenge, and I'm not sure it's one that's being well addressed so far. Um, because of course, yes, there, there is a good reason to build systems with which we can interact very fluently.
It can also be very empowering. If we can, if we can have a machine generate code by, by talking to it about what we want a program to do, that, that's hugely empowering for, for many people, as long as it does the thing that it's supposed to do, and not, not something else. But is there a way of having the benefits of that, designing systems so that we can preserve the kind of fluent interaction that natural language gives us, but in a way that still at least pushes back to some extent on the psychological biases that then lead us to make all these further attributions of consciousness, of understanding, of caring, of emotion, and, and all of these things? I don't know what the solution is, but I think it's a, it's, it's a really important problem. And one, the simple solution would be, okay, these things just have to watermark themselves to say, you know, I am not conscious. I don't have feelings. And of course, language models do that, un, until you, you know, play around with them, press them a little bit. Um, but that, that may not be enough. I mean, there, there may have to be other ways where we design interfaces which, through practice or through education, or through some other manipulation, push back on those attributions.
And this is really a question as much for psychology as it is for technology. What kinds of things, uh, preserve fluid interaction but do not play to our, you know, psychological biases about what properties these systems actually have? I, I, I would love to see progress focused on that problem, because that would show the line we need to walk.
[00:45:12] Bilawal Sidhu: And you're right that there aren't any solutions. Do you think we can build those antibodies?
[00:45:18] Anil Seth: I think we have to try. I mean, that also brings up another point, which is again, very contentious in the tech sphere, which is what should we do about regulation? You know, what kinds of systems should people just put out there?
And you know, what I've come back to in that conversation is, is always the fact that in other domains of, of invention and, and technology, we're very cautious. I mean, we don't put a new plane in the sky without being fairly sure it's not gonna fall out of it. We don't release a new drug on the market unless we can be very sure it's not gonna have unintended side effects or, or consequences.
And you know, there does seem to be an increasing recognition that, you know, AI technologies are in the same ballpark. It doesn't mean that, you know, we don't wanna stifle innovation of course, but, but we can help shape and guide innovation. I think there's, there's again, a sweet spot to be, to be found there.
And then on the other side, there's education. One of the challenges there, of course, is that things are moving so fast, um, that it's very hard to, to keep up, but it's important to try. One thing that strikes me here is that the very term artificial intelligence is part of the problem. It brings with it so much baggage, that there's some kind of magic, and it's like, you know, a science fiction mind, whether it's, you know, Jarvis, or whether it's Skynet, or HAL from 2001, or whatever.
Yeah, whatever your favorite conscious, intelligent robot is, right? That's what we think of, and artificial intelligence has this, this brand quality, which I think is a little bit unhelpful. It may have been incredibly successful in raising large amounts of venture capital, but it's, you know, it's not a particularly helpful description of what the systems themselves are doing.
Of course, most people working in this, at least they used to anyway, talk about machine learning rather than artificial intelligence. And then at another level of description, you can just say, well, these things are, are basically just applied statistics. And I think when you start describing something as applied statistics,
you know, even that is educationally valuable, because it highlights how much we load onto these systems by the words we use. One other very simple example here, which I think, you know, the horse has already bolted here too, but it's always annoyed me how people describe language models as hallucinating.
[00:47:43] Bilawal Sidhu: Mm.
[00:47:43] Anil Seth: When they make stuff up.
[00:47:44] Bilawal Sidhu: Yeah. It's giving them too much credit.
[00:47:46] Anil Seth: It's giving them way too much credit and it's, it's doing something more specific than that. It's cultivating the implicit idea that indeed language models experience things because that's what a hallucination is. A hallucination, if you apply it to a human being, it means well, they're, they're having a perceptual experience of something that that's not there.
And the fact that that linguistic term caught on so quickly is, I think, itself telling, because it just reveals, like, okay, there are some implicit assumptions, I mean, about what people think these things are doing. But it's also a positive feedback. It's unhelpful because it leads us to, again, project qualities in. And if we're gonna use a word, I, I, I wish they'd used confabulation.
'cause in human psychology, confabulation is what people do when they make stuff up without realizing that they're making stuff up. And that, to me, is an awful lot closer to what language models, uh, do. But I, yeah, I don't think it's gonna catch on now, but we should be careful about the language we use to describe these systems, for exactly this reason.
[00:48:48] Bilawal Sidhu: What advice would you have for people that are listening to this, uh, so that they can take advantage of the tools at their disposal today and not get sucked into, into perhaps the, I don't know, the pseudoscience and the, you know, fake spirituality that kind of comes as a package deal with AI today?
[00:49:07] Anil Seth: Part of it is exactly recognizing these often implicit motivations that, that drive all these, these associations that, that lead us to think of these things as being more than they are, or different than they are. And, you know, in the extreme it gets, it gets pretty religious, right? I mean, there's, there's the idea of the singularity, which is sort of the techno-optimist moment of rapture, and, you know, uh, the possibility of uploading to the cloud and, and living forever with the promise of immortality.
I mean, the story is, is very, it's textbook religion, right? Um, so that in itself, I think, is useful to, to bear in mind, that there's a, there's a larger cultural story behind this. It's not simply an objective description of where the technology is. And then cashing that out further, it is, for me anyway, it's just a matter of, of continually reminding myself of the differences between us and the technologies that we build, to resist the temptation to anthropomorphize, you know, to project human-like qualities into things, to retain a slightly critical attitude to what's going on behind the interface, that it is not an alternative person.
Um, this can be easier said than done, because as we were discussing before, one of the things that keeps me up at night are these cognitively impenetrable illusions of intelligence, consciousness, and so on that AI systems can, can bring to bear. But yeah, I think it's just, at its most basic, reminding ourselves that if we think an AI system is a reflection of us, that it's something in our image, what we're probably doing is overestimating its capabilities and underestimating our own capabilities.
[00:50:58] Bilawal Sidhu: I love that. That's punchy. And that brings me to the last question, which is, given our discussion thus far, it makes me very curious.
What is your idea of the sort of ultimate, final form of AI, if you will, that appeals to you as a neuroscientist? Like what excites you the most about the potential for this future, you know, where, um, you know, AI can serve human intelligence and consciousness?
[00:51:23] Anil Seth: This is a very, very good question. I mean, I, I think my vision, my optimistic vision about this, is not some sort of single superintelligence, like Deep Thought in The Hitchhiker's Guide to the Galaxy or whatever it might be, that's, that's, that's a single, superintelligent entity. Maybe AI in the future is gonna be a bit more like electricity or water. You know, it's, it's a basic,
[00:51:48] Bilawal Sidhu: Utility.
[00:51:48] Anil Seth: Property of the, of the, it's a utility.
It's a utility, and it's used in many, many different ways, in many, many different contexts to do many, many different things. And it's, in this world, we, we face the challenge of, of recognizing that there are some things which we once thought were distinctively or uniquely human, which aren't, and so there will be a social disruption to that.
This happens, of course, in, you know, in all technological revolutions. But the flip side of that is the space that's opened up for massive innovation, creativity, the ability to solve all sorts of problems. So I think it's not a single thing, it's, it's many things. One last thought on this. I've heard the idea many times that the distinctive thing about AI is that it could be humanity's last invention, because AI systems can design, develop, improve themselves.
[00:52:42] Bilawal Sidhu: Oh, they'll invent everything else.
[00:52:43] Anil Seth: Or, or we, we lose the ability to have dominion over what they may end up being. And that's something that I'm still, you know, I'm still a little bit unsure how to think about. Whether that's a, that's a real difference, or whether it's something that we still need to be careful to, to manage.
But my, yeah, my optimistic view of AI is as some kind of utility that's drawn on in many, many ways.
[00:53:07] Bilawal Sidhu: Yeah. That permeates everything, versus the, the singular, all-encompassing AI in the sky, which again starts sounding very religious. Um, Anil, thank you so much for joining us. Thanks for the conversation.
I really enjoyed it.
Wow. What a conversation. Anil Seth reminds us that the story of AI isn't just a tale of machines gaining power. It's a mirror reflecting our own biases, aspirations, and fears. We project so much of ourselves onto these tools. We anthropomorphize the algorithms. We give meaning to their outputs as if they share the complexity of human experience.
But as Anil said, we overestimate their capabilities and underestimate our own. And that's something worth meditating on. For all the dazzling feats AI can pull off, it's still us, humans, who design, direct, and decide what these systems become. And it's our responsibility to tread carefully, not just for the sake of innovation, but for the future of what makes us human.
There's another fascinating question here too. If a truly conscious AI system does emerge one day, will it even look like the systems we've built so far? The unique, messy biology of the human brain, its neurons, synapses, and glial cells, doesn't just power intelligence. It creates the rich, subjective experience we call consciousness.
Silicon and software might never be enough. Consciousness may demand a substrate that mirrors what we're made of, a construct that's alive, pulsing with the same kind of vitality that flows through us. And this is wild to imagine: a future where conscious AI doesn't emerge from humming server racks or massive data centers, but from living organic systems, giga brains we create from the very building blocks of life itself.
In trying to recreate the sparks of consciousness, we'd be stepping closer to understanding what makes it so mysterious and so uniquely human. For now, though, that's all in the realm of speculation. What's not speculative though is this, the choices we make today, how we design, interact with and regulate these systems will shape not just the future of AI, but the future of our own humanity.
And as much as we might marvel at the power of these tools, it's our responsibility to stay grounded, to remind ourselves that these systems are reflections of us, not replacements for us. It's truly an incredible moment to be alive, and a terrifying one too. And as we grapple with the unknowns ahead of us, perhaps the best question we can keep asking is: what kind of future do we want to create, not just for AI, but for ourselves?
The TED AI Show is a part of the TED Audio Collective and is produced by TED with Cosmic Standard. Our producers are Dominic Gerard and Alex Higgins. Our editor is Banban Cheng. Our showrunner is Ivana Tucker, and our engineer is Aja Pilar Simpson. Our technical director is Jacob Winik, and our executive producer is Eliza Smith.
Our researcher and fact checker is Jennifer Kim. And I'm your host, Bilawal Sidhu. See y'all in the next one.