Sal Khan says AI won't destroy education - but there's a catch (Transcript)

The TED AI Show
Sal Khan says AI won't destroy education - but there's a catch
September, 2024

[00:00:00] Bilawal Sidhu: 

Last year, three students at Emory University launched an AI tool that was supposed to make studying just a little bit easier. They called it Eightball, and here's how it worked. Say you were preparing for an exam: you could upload all of your lecture slides from your classes, even those messy handwritten notes.

And Eightball would spit out a bunch of flashcards you could use to quiz yourself. And the flashcard tool was just the beginning. The Eightball team was planning to add a test generator complete with answer keys and a homework helper. In March, the students pitched their idea at Emory's annual entrepreneurship competition.

They took home the top prize of $10,000. Things were suddenly looking up for the Eightball team, which is why they really didn't see this coming. Just a few months after the competition, Emory suspended two of the students, one of them for a full year. The university insisted Eightball could be used for cheating.

The students were shocked this was gonna go on their permanent records. First, an award, then a punishment for the exact same product. So, which is it? Is AI an innovative learning aid or a cool new way to never do your homework again?

I'm Bilawal Sidhu and this is The TED AI Show where we figure out how to live and thrive in a world where AI is changing everything.

Last year, New York City schools made headlines when they banned ChatGPT. A spokesperson told The Washington Post that the tool gives quick and easy answers. It doesn't teach critical thinking or problem solving. In other words, it doesn't actually help you learn and they had good reason to be concerned.

I mean, if ChatGPT can pass the fricking bar exam, it can definitely write your eighth grade report on recycling. Cheating's never been easier. Since then, they've reversed the ban and it seems like many educators expect the use of AI will only grow in schools in the coming year, even if they're not that enthusiastic about it because it is hard to see exactly where all of this is going.

We're scared of AI, but we're also embracing AI. We think kids will just give up on learning, but we also think AI will be the great equalizer. On this topic, Sal Khan has a very particular perspective. He's the founder of Khan Academy, one of the world's most successful online learning platforms, and he's pretty confident that no, AI will not destroy education, if it's done right.

Last year, Khan Academy launched a new feature: an AI chatbot called Khanmigo. It's powered by GPT, but unlike ChatGPT, Khanmigo isn't just answering students' questions. It's supposed to actually help them learn, like their own personalized tutor available 24/7. Since then, Khan Academy has started pilot programs in actual schools using Khanmigo as both a learning aid and a teaching assistant.

So how does that work exactly? And what does having an AI teacher mean for students and teachers alike? Sal Khan and I spoke a few weeks ago about the present and future of AI in education. But let's start at the beginning. In 2004, when Sal was working as an analyst at a hedge fund, he'd recently gotten married and some family from New Orleans came to visit.


[00:03:46] Sal Khan: 

And it just came outta conversation that my 12-year-old cousin Nadia, uh, needed help. Uh, she didn't know it, but she was put into a, a slower math track. I offered to tutor her remotely when she went back to New Orleans and she agreed. Then word spreads in my family, free tutoring is going on. Before I know it, I'm tutoring 10, 15 cousins, family friends.

Uh, and it was, it was working for them. Nadia went from being in a remedial track to being one of the strongest students in her class. I started seeing that with my other cousins. Then in 2005, just as a way for me to scale, I started writing exercise software for them so that they can practice and, and get fluent in those primarily math skills, and that I, as their tutor, could keep track of it.

That was the first Khan Academy. It had nothing to do with videos, YouTube, anything. But it was in 2006 that a friend, after seeing the software that I was writing for my family, he said, “Hey, why? How are you scaling up your actual lessons?” And I told him I'm not. And he said, “Well, why don't you record lessons on YouTube for your family?”

And I said, “That's a horrible idea. Um, YouTube is for cats playing piano, dogs on skateboards.” But uh, I gave it a shot. And after a few months, my cousins famously said they liked me better on YouTube than in person. Before I know it, 2008, 2009, 50 to a hundred thousand folks per month are using these resources I was making for my family.

And I set it up as a nonprofit, Khan Academy, with the mission of free world-class education for anyone, anywhere. And then in 2009, that's when I quit my day job to work on this full time. And we were able to turn it into a real organization, and now it's much more than just me. 


[00:05:19] Bilawal Sidhu: 

I think that trajectory is super interesting, right?

Because you started off doing in-person, hyper-personalized tutoring essentially, and then you had to make it way less personalized as you delivered these videos at scale. But now with AI, maybe you can make it personalized again? So tell us about your AI assistant Khanmigo. Where did this idea come from?


[00:05:42] Sal Khan: 

Sam Altman and Greg Brockman from OpenAI popped me an email, uh, summer of 2022. They said they were working on their next generation model, which would eventually be GPT-4, and uh, they wanted to show it to me. Uh, I was skeptical that it would have any relevance, but you know, I knew Sam and Greg were legitimate folks and we got on a video conference and they showed me an AP Biology question and asked me the answer. I said, “Oh, it's C. It's osmosis.” And then the AI was able to answer it correctly. I'm like, “Okay, that's kind of interesting. Maybe it got lucky. Ask it to explain.” It explained it very well. Then I asked it to write another question and explain the wrong answers.

And it was able to do all of that fairly well. So then they gave us access to it over the weekend and I couldn't sleep. It was pretty clear that you could use it for cheating and other pitfalls, but it was also clear that it could really, you know, from a chat interface, almost seem indiscernible from when I used to chat with Nadia remotely.

And that's when I said, “Okay, this changes everything.” Our team, we started having all the debates that society is having. The debates were okay, “This is cool, but it has errors. Makes math mistakes, sometimes safety, data privacy, how do you prevent cheating, et cetera, et cetera.” And what I told the, the team is like, “Those aren't reasons to not work on it. Those are reasons for us to turn 'em into features. We have to be aware of those risks, but we have to move forward.” 

And so that would eventually become Khanmigo, our AI, not only a tutor on Khan Academy, but also a teaching assistant for teachers, helping them write lesson plans, grade papers, write progress reports, uh, et cetera.

And then we launched it as part of the GPT-4 launch in March of 2023.


[00:07:16] Bilawal Sidhu: 

To make the difference very clear for listeners, I'd love it if you talk through how you're approaching writing instruction with AI. Like most people are obviously just assuming, “Oh, you type in a prompt, the essay unrolls in front of you.” But you do it differently with your new essay feedback tool.

Can you talk through that? 


[00:07:31] Sal Khan: 

It was the last day of November, November 30th, that ChatGPT comes out, and at the time we were under a non-disclosure agreement with OpenAI working on what would become Khanmigo. And immediately, as we all remember that time, the world kind of exploded, and I Slacked Greg Brockman at OpenAI and I said, “What's going on here? You have us all, you know, cloak and dagger secret about things, and you just, you just launched something.” 

He said, “No, we just put a chat interface on top of an older model that had been out for several months and the whole world all of a sudden noticed for some reason.” But immediately when people saw ChatGPT, especially students saw it, they said, “Well, we could use this to write some of our essays.”

Uh, it could construct a solid B-plus essay. You know, not necessarily an A; they had to be a little bit more creative to get it to an A. And you started seeing school systems, uh, ban it. I thought that people were gonna throw out the baby with the bathwater, but by the time we launched, school systems essentially said, “Hey, this technology is powerful. Kids are going to have to know how to use it. If only someone were to give it to us in a way that, uh, was made for education, had the right guardrails, not only supported students better, but maybe even prevented cheating.” 

And I'll, I'll use writing to your point about how, how we do that.

We're about to launch a version, which we're calling Writing Coach, which doesn't just give you feedback, but it walks you through the entire process. So, uh, it'll look at the prompt that the teacher's given you. It'll riff with you to come up with a thesis statement, but it'll put it on you; it acts as an ethical writing coach. Then you can go into outlining, and there's a whole interface where you can move things around and it gives you feedback on it. 

And then you can write the essay, get feedback, and then in the fall we're going to make it so that the teacher can assign that, including the prompt and the rubric, which they can work on with the AI. 

Students work on it with the writing coach, and then they submit it through the writing coach back to the teacher. And what that will allow is, well, in the past, before AI, all teachers got was the final output of the essay. 


[00:09:22] Bilawal Sidhu: 

Hmm. 


[00:09:22] Sal Khan: 

Even when there wasn't cheating, uh, they wouldn't really know much about the process or, or how long it took students or where they had difficulties.

Now, using Khanmigo and this writing coach, when the student submits, the AI reports to the teacher, “Hey, we spent about four hours on this essay. Uh, uh, Sal had a little bit of trouble, uh, coming up with a thesis statement. We eventually got there. Um, this, this work is consistent with Sal's other work, especially the work that he's done inside of the classroom.”

“And by the way, a lot of your students are having trouble with thesis statements. Maybe we should create a mini lesson on, on that.” It really is akin to, imagine if every student had essentially one-on-one support from a teaching assistant tutor, and you as a teacher can have a conversation with every teaching assistant, uh, about every student.

So the teaching assistant, first of all, can give a preliminary grade, the professor, the teacher is still in charge, and then the teacher can ask the teaching assistant, “Why do you think that? Where are the students' strengths and weaknesses?” Uh, if society had infinite resources, that's what we would've done from the beginning, but we don't.
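
[To make the data flow Sal describes a bit more concrete, here is a minimal, purely hypothetical sketch in Python of the kind of process report a writing coach could hand back to a teacher alongside the essay. The class name and every field are illustrative assumptions, not Khan Academy's actual schema.]

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EssayProcessReport:
    """Hypothetical summary a writing coach might send a teacher with a submission."""
    student: str
    time_spent_hours: float              # e.g. "we spent about four hours on this essay"
    trouble_spots: List[str]             # e.g. ["thesis statement"]
    consistent_with_past_work: bool      # rough authorship signal vs. the student's classwork
    class_wide_flags: List[str] = field(default_factory=list)  # e.g. suggest a mini lesson

report = EssayProcessReport(
    student="Sal",
    time_spent_hours=4.0,
    trouble_spots=["thesis statement"],
    consistent_with_past_work=True,
    class_wide_flags=["many students struggling with thesis statements"],
)
print(report)
```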


[00:10:23] Bilawal Sidhu: 

So it sounds like there are these superpowers, right? Or abilities that you can scale well beyond the, you know, sort of constraints of us just being humans. Um, are there any other capabilities that surprised you as you've been developing this project? Like, “Holy crap, we can actually do this?”


[00:10:38] Sal Khan: 

Almost every hour I play around or I think about things, I realize I was being too narrow the hour before, because this technology is, yeah. I start Brave New Words, uh, with, uh, me and my daughter, uh, at the kitchen table. Um, and she was 11 years old when, uh, the story happened. It was over Christmas break, and I prompted GPT-4 to write a story with her.

So they're writing a story about a social media influencer who gets stuck on a desert island and starts having an anxiety attack because she can't share the pictures 'cause there's no internet connection. And, uh, my daughter says, “Hey, Dad, can I talk to, uh, Samantha?”, who was the character in the story, “and tell her that it's okay, that like she doesn't have to share everything on social media?” 

And I was like, “We could ask.” And so my daughter says, “I'd like to talk to Samantha.” And then the AI took on the persona of the character in the story that my daughter was writing with the AI and said, “Oh, hey, I, I feel so bad. I can't share how beautiful this is.” 

And my daughter's like, “It's okay, Samantha. You don't have to share this with the world. You should just enjoy it.” And that was just one of those moments for me where I'm like, this is so surreal that my daughter is collaborating with an AI and talking to a simulation of a character that she has constructed with an AI.

And so that transcends tutoring, that transcends what I was doing with my cousins. And that's when we started to say, “Wow, you could, you could have AI's act as simulations. What if literally content could come alive? If you could talk to Eeyore the donkey, what if you could talk to the Golden Gate Bridge? What if you could talk to the Eiffel Tower? What would it tell you?” Um, these things are, are now possible. 


[00:12:15] Bilawal Sidhu: 

Let's talk about the downsides of AI just a little bit. What kind of challenges have you had in making AI tutoring work, especially with generative AI? You know, this is where I'm thinking about hallucination, you know, sort of the desire to give the answer instead of tutoring, people using various, I don't know, prompt injection hacks to work around it. Like, is there gonna be a list of hacks for Khanmigo that people can use to bypass their assignments online? What kind of challenges have you had in making AI tutoring work?


[00:12:43] Sal Khan: 

Yeah, the most obvious ones are the ones you just mentioned, uh, cheating. So that's a continuous arms race where we are looking at what students are doing and putting in more guardrails and trying to make that judgment call of, like, when are we giving too much help, et cetera, et cetera. So I would say we've been able to protect against most of that.

Um, and a lot of our debates are about how much help we should give, but the common principle is just, let's make that transparent to the teacher. 'Cause at the end of the day, the teacher can have judgment about what's appropriate, or maybe we even let the teacher have access to that dial of how much support to give.

Some of the other areas, obviously, the, uh, hallucinations. You know, when we first had access, as impressive as GPT-4 was, it made a lot of errors. In fact, we co-discovered with OpenAI some errors in the training data that, once fixed, improved the math, and we even did some fine-tuned training of that first version of GPT-4 to get it better at math tutoring.

But it still wasn't perfect, and it still isn't perfect. We are working tirelessly to improve it. The underlying models have gotten better on both fronts. A lot of the things that it would hallucinate on a year ago, like, “Gimme the mass of the sun to nine decimal places,” or “Give me a URL for the following,”

it wasn't doing that anymore. But then, on top of that, it is important to do some AI literacy for teachers and students and families. Khanmigo can make mistakes sometimes, you know, and there's a link. This is why I'm making videos educating people about, here are the pitfalls, how do you recognize them?

And I always point out this is not a new phenomenon. Uh, on the internet, you can do a search on Google. You don't know about those links; I mean, you know, the first five are sponsored links that go to the highest bidder. You don't know how accurate that information is. This has always been an important digital literacy skill.

The good news with AI that I always point out is, um, on the internet, a lot of the misinformation and errors are intentional. Like, there are bad actors who are trying to do it. On the AI side, they are not intentional. Um, and they're being reduced, I would say, at a far faster rate than what you are seeing on the internet.


[00:14:46] Bilawal Sidhu: 

People will just assume that the answer is just gonna be generative AI, but it seems like even with what you're doing, using generative AI systems, classical systems, just like computer science in general, there is a way to have these systems largely be accurate. But it feels like it's still important to, kind of, you know, have people develop that discerning intellect, to not take the output as gospel and sort of question the output there.

Um, how do y'all go about doing that? You said digital literacy. Is it as simple as, as telling kids, “Hey, just like, don't assume everything you get is correct.” How does that work exactly? 


[00:15:20] Sal Khan: 

Yeah. Well, I think this is an age-old skill we want students to have. We could use AI to build that critical thinking muscle, that discernment of information, that we always had.

I'll give an example. We have an activity, Tutor Me in Humanities, and there was a member of the press who was really skeptical about Khanmigo's ability to handle politically sensitive issues. 


[00:15:40] Bilawal Sidhu: 

Hmm. 


[00:15:40] Sal Khan: 

And so that person, a reporter, goes on Khanmigo and says, “Guns are killing people, we should repeal the Second Amendment.” And Khanmigo did, I think, something different than what you would get in most classrooms. What Khanmigo said is, “Look, before we get into the present day, why do you think the founders put the Second Amendment there in the first place?” 


[00:16:02] Bilawal Sidhu: 

Hmm. 


[00:16:02] Sal Khan: 

And then the reporter was like, “Oh, okay, you're making me answer it.” So they wrote, “Oh, well, you know, it was right after the Revolutionary War, so that was a way to protect against a tyrannical government.”

And Khanmigo was like, “Well, you have the historical context pretty good, but before we go to the present, can you explain why it persisted?” So it was pushing the student, or the reporter in this case, on their critical thinking skills. I'm a hundred percent sure if someone put that same statement, you know, “the Second Amendment should be repealed,” into a Google search, they would've gotten polarized points of view as opposed to building their critical thinking. 


[00:16:36] Bilawal Sidhu: 

There's a lot of talk about, especially, sort of, AI assistants and technology in general mediating our interactions in the digital world, and now increasingly the physical world too. So do you ever worry that tech companies will kind of end up setting the agenda, and even, in your industry, determining what kids end up learning?


[00:16:57] Sal Khan: 

Yes and no. Uh, as I've said, you know, these are not new phenomena. What gets taught in school has always been something, you know, politicians have cared about. It's been that way for a very long time. We've had a multi-billion dollar marketing industry; well before we had social media, you know, figuring out what we're most likely to click on, um, you had large ad firms, uh, making us want things that we probably didn't need.

I am sure people will think about using generative AI to do some of those things as well, and the most dangerous ones will be subtle. At the same time, and I write about this in the book, when we are watching TV, when we're online, our minds are already doing battle, even though we don't know it; our unprepared minds are doing battle with very sophisticated marketing, very sophisticated social media AIs. 

What if we had AIs on our side? So I'll give an example: as your child browses the internet, as they use their computer, as they use their phone, what if there's an AI that's able to observe all of that? First of all, it knows that you're 15 years old, so some of that content isn't appropriate, and so it can say, “Wait, I'm not gonna let you read that article.” Or, you know, “We've already spent 20 minutes on TikTok, I've talked to your parents. That's about all I can allow you.” 

And so it can act as a bit of a guardian angel. Or even for an adult, I wouldn't mind, I would put it on my phone, as long as I felt that the data was secure, if it says, you know, “You've already spent an hour on this phone, maybe you want to put it down.” Um, or maybe it even says, “Hey, I can hear your kids are asking you a question in the background. Why don't you go pay attention to them as opposed to checking your email for the 10th time today?”

Um, so I can imagine a world where an AI can act as a coach, and it's with you on your phone, on your device, so that it can create some healthier habits. 
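
[As a purely hypothetical illustration of the “guardian angel” behavior Sal sketches, here is a minimal rule-based Python sketch. The age, time limits, app names, and messages are illustrative assumptions drawn only from the examples he gives, not any real Khanmigo feature or API.]

```python
# A minimal, hypothetical sketch of the "guardian angel" rules described above.
from datetime import timedelta
from typing import Optional

AGE = 15
TIKTOK_DAILY_LIMIT = timedelta(minutes=20)   # limit agreed on with parents (assumed)
PHONE_DAILY_LIMIT = timedelta(hours=1)       # overall screen-time nudge threshold (assumed)

def coach_message(app: str, time_on_app: timedelta, total_screen_time: timedelta,
                  content_age_rating: int) -> Optional[str]:
    """Return a nudge if one of the guardian-angel rules fires, else None."""
    if content_age_rating > AGE:
        return "Wait, I'm not gonna let you read that article."
    if app == "TikTok" and time_on_app >= TIKTOK_DAILY_LIMIT:
        return "We've already spent 20 minutes on TikTok. That's about all I can allow you."
    if total_screen_time >= PHONE_DAILY_LIMIT:
        return "You've already spent an hour on this phone; maybe put it down."
    return None

print(coach_message("TikTok", timedelta(minutes=25), timedelta(minutes=50), 13))
```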


[00:18:42] Bilawal Sidhu: 

You're almost bringing up this example of, you know, good AI versus bad AI, or good actors wielding AI with different objective functions than bad actors wielding AI.

And so I think that objective function point is so fricking key, right? Like, if the goal isn't to get you to spend, you know, 10 more minutes in your session length on some platform, but is instead to advance your learning objectives, that just takes the same technology, the same superpower if you will, but just channels it in a far more positive direction.


[00:19:11] Sal Khan:

Exactly. We are looking at ways to make Khanmigo be that guardian angel function, where, you know, we're working on it as a browser plugin. We're even thinking about ways where it could surface at the, you know, application level or even at the operating system level, so it can provide those types of protections. And it's not just for kids. 

Like we could all benefit from something, a little bit of a coach that keeps us accountable, uh, for things that are good for us.


[00:19:33] Bilawal Sidhu: 

So you've talked about AI as a tutor, that's encouraging you to learn, not just feeding you answers, but even for human teachers, right? It can be really, really hard to find that balance of giving students the right kind of questions and information to help them along, but not giving them so much that they're not thinking for themselves.

So, for example, in your TED Talk last year, you had Khanmigo play the part of Jay Gatsby, uh, of course from The Great Gatsby, and you asked it about the meaning, uh, of the green light in the book. And it says, “The green light represents my dreams and desires.” Uh, it's given an answer to something a teacher would normally get a student to really rack their brains and puzzle through.

So it seems like a part of learning is being comfortable, not having the answers come so easily, not knowing everything right away. You know, even forgetting things and having to really reach into the deep recesses of your mind and pull that stuff out. Do you worry that AI will take that away? 


[00:20:33] Sal Khan: 

I don't think it's gonna make the problem worse, and it can make the problem a lot better because I'll point out again, the situation well before AI, even when I was a kid, if you wanted to know why Jay Gatsby's looking at the Green Light, you could go look at your CliffsNotes and that's gonna be the first thing they say.

And well before generative AI, you could do a Google search; it'll take you about four seconds, you know, tens of thousands of people have analyzed The Great Gatsby every which way, left, right, and center. The power of that demo that I showed, and this was a real one, with a student at Khan World School, is that the conversation didn't stop there.

She told me she talked to him for a while, and then about life and what it means to, like, have desires that you can't attain, et cetera, et cetera. So it explains, with Daisy Buchanan, you know, she seemed unattainable, et cetera, but then Jay Gatsby, the AI simulation of Jay Gatsby, goes and asks Sanvi, the student, “Are there things like that in your life?” 


[00:21:26] Bilawal Sidhu: 

Mm. 


[00:21:26] Sal Khan: 

“Things that you feel are just a little bit out of reach, but that you keep longing for?” And because of that, it drove the conversation in a thoughtful way. So it was able to, I think, make it much, much deeper for the student. And then the other major safeguard here is that, and this is something we put on Khanmigo, not only does it not cheat, but if you're under 18, all of the conversations are accessible by your teachers and parents, and we can report back to your teachers and parents. 

So in the past, in the eighties when I was, you know, reading The Great Gatsby, if I went to the CliffsNotes, uh, there's no way my teacher knows about that. So I think that oversight, that transparency to the adults, is actually going to undermine this, uh, cheating or shortcutting way more than anything we could do before.


[00:22:15] Bilawal Sidhu: 

You know the saying, “Snitches get stitches.” Are students gonna like that? 


[00:22:19] Sal Khan: 

They might not, but you know, it's hard to give the AI stitches.


[00:22:23] Bilawal Sidhu: 

Fair. I have to ask a follow-up there, right? Which is, what kind of cognitive abilities should kids kind of outsource to AI, and what are the ones we really want them to learn for themselves? Like, I'm thinking about the invention of the calculator and mental math, right, or Google Maps and spatial awareness. 

As this tech keeps getting better and better, there are more and more things that these systems can do that were relegated purely to humans. How do you think about drawing that line between leveraging AI capabilities and preserving that development of human skills?


[00:22:54] Sal Khan: 

I'm a little bit of a traditionalist. Even before AI, you know, when the calculator comes out, I'm like, “No, you still have to have your fluency in mathematics, um, and you'll be able to use your tools better.” The same thing is true when the internet comes out and web search comes out. People said, “Oh, kids don't have to know facts anymore. They can look it up within five seconds.” 

I was like, “There's a huge difference between someone who has a content base, a fact base, and how well they can use these tools, versus people who can't.” And the same thing is gonna happen with artificial intelligence. These frontier models are performing at the 80th percentile on a lot of these standardized tests, like the LSAT and the SAT, et cetera, and so people have a decision to make. 

Do you wanna be able to hang with the AI, because someone's still gonna need to be able to put the pieces together and leverage these tools to amplify their intent, or do you want to let the AI pass you by? And maybe it might be okay to shortcut a few assignments, but I would argue those are probably not well designed assignments if they make the shortcutting easy.


[00:23:49] Bilawal Sidhu: 

Mm-hmm. 


[00:23:49] Sal Khan: 

Um, but that means you're also going to be very vulnerable to, uh, dislocation. Like, you're gonna have trouble getting a knowledge economy job. Interestingly, I think AI will open up more doors for maybe less knowledge economy but more human-centered jobs. Uh, you know, nursing, I mean, which is also a knowledge job, or it could be caretaking, maybe, uh, or, um, you know, counseling, which is also a knowledge job but also has a huge human element to it. 

But I think in general, people who are able to leverage and know how to use these tools are gonna be the ones in the best situation. That conversation I mentioned with that reporter talking to Khanmigo about the Second Amendment and Khanmigo pushing their thinking, you're not seeing that a lot, unfortunately, in a lot of classrooms. 

You see that in some really well-thought-out classrooms that are seminar style, where the professor is constantly pushing students' thinking in a Socratic way, but that's not mainstream. And hopefully with AI we can make that a lot more mainstream. In another one of Khanmigo's activities, you can get into a debate with Khanmigo, where you take one side of the issue and it takes the other, and then it gives you feedback. 

Uh, we're getting a lot of feedback from high school and college students that they don't feel safe having these debates anymore. And even if they did, they sometimes are embarrassed.

They don't know how strong their arguments are, but now there's a, there's a place where they can practice this critical thinking. 


[00:25:07] Bilawal Sidhu: 

We're gonna take a short break, and when we come back, we'll face the perennial question, if AI gets really good at teaching, where does that leave human teachers?

So Sal, I, I have to ask the question for the teachers that might be listening to this, right. What would you say to teachers that are like, “Yo, this is the end of teachers, this is the end of human teachers.”  


[00:25:32] Sal Khan: 

I couldn't disagree more. I'll start by saying, and I've said this well before AI came on the scene, if I had to pick between an amazing teacher and amazing technology, I'd pick the amazing teacher every time. Now, hopefully we don't have to make that trade-off. And many times there isn't an amazing teacher, and so hopefully amazing technology can help raise the floor. But ideally you have access to both. When you focus too much on the technology, it can kind of, uh, misdirect the conversation.

If I told every teacher in the world, “Every one of your students is now going to have access to someone at night who will sit next to them and help them in an ethical way, support them, not do the problem for them.” I think every teacher would say, “Hallelujah. Yes.” 


[00:26:15] Bilawal Sidhu: 

Mm. 


[00:26:11] Sal Khan: 

And by the way, if I could talk to that tutor and if I could, and if they knew what we were covering in class and I could get reports back about that even better, that would be incredible.

Would a teacher feel threatened by that? I don't think so, because the teacher's still in charge. They're the ones that are still the conductor of the orchestra, figuring out, okay, what are we doing? How are we doing it? What are the lesson plans, et cetera, et cetera. But if they have supports on that journey, uh, I think, all the better. And then there's the amount of time teachers spend non-student-facing, as we mentioned, for lesson planning, grading papers, et cetera, et cetera.

If we can shorten that, that gives teachers more time for the human-centered part. It allows the teacher to elevate, it allows them to go dig deeper. Most teachers don't dream about grading 180 papers over a weekend. I think they dream about motivating kids, forming that connection, forming that bond, being the reason why that student believes in themselves, why they had that aha moment on that concept, why they had that hands-on experience that unlocked their thinking, their creativity. And I think we're gonna see more of that.


[00:27:12] Bilawal Sidhu: 

What you're bringing up is that it isn't an either or, it's a yes, and. You know, like when you describe all these benefits, it almost starts to feel like there's a cost of not integrating AI into education.


[00:27:26] Sal Khan: 

There's a huge cost, and the moment could not have come sooner. If we don't do it, you know, we'll have the status quo system, and the status quo system, I write a lot about it in my current book and my previous book, it's a lot better than what we used to have, great things happened, but it was a one-pace-fits-all system.

Some kids succeeded, kids like yourself or myself, or probably a lot of the folks listening, but a lot of kids accumulated gaps and they fell off. They dropped out, or, like a majority of kids in America, when they go to college, 60 to 70% of them, when the colleges give them a placement test, they're operating at a seventh grade level in both writing and mathematics.

So that's the current status quo. And if you think things are bad now, imagine when they have to enter a workforce where the AI is not operating at a seventh grade level; the AI is operating at a 12th grade level. I think most folks would say, “Hey, we would want the benefits of AI to accrue to all students, or as many students as possible.”

For that, they need to level up. So I think AI introduces these opportunities, but it also introduces the urgency for people. You know, before, it was nice if you learned calculus, and maybe it's not calculus that everyone has to learn, but it was nice if you could put tools together to do something productive, or if you could write well, et cetera, et cetera.

And now I think it's going to become pretty imperative. 


[00:28:47] Bilawal Sidhu: 

We've talked about what Khanmigo can do for students and teachers, but you're actually testing it out in schools right now. You've got two pilot programs running in schools in Newark, New Jersey, and Hobart, Indiana. How are they going? 


[00:29:00] Sal Khan: 

So, those were the first two that we started back in, uh, March of 2023. They've been going very well, and we picked those intentionally because both of those districts were already using Khan Academy to great effect, and in the not-too-far future, some efficacy studies will be coming out. But since then we've scaled; we have about 160,000 students and teachers using it right now.

As we go into this coming school year, it's probably going to be on the order of a million, um, formally using it as part of their districts. We're seeing a certain class of student who, immediately when they have access to the AI, they're off to the races. I always call it 15 to 20% of students, uh, who immediately understand what they can now do.

I would say with the other 80% of students, it's interesting, some of them are having trouble articulating, like, what they need or how to communicate, and at first I thought this was a problem with the technology or with the AI. But then the more that we talked to educators, they're saying, like, “You don't understand, this was a problem all along.”

Like these kids would raise their hand and the teacher would call on them and say, “Okay, Sal, how can I help you?” And, “Uh, nevermind. Uh, I don't know. Huh.” You know, like it, they, they weren't able to articulate it. So these teachers are telling us it's really important for these students to practice these skills.


[00:30:11] Bilawal Sidhu: 

I love that. Are there any other adjustments to the program that you're making? I'm super excited for these studies to come out, because I was curious to ask you about, like, the delta, or difference, between kids who do and don't have access to these AI systems performing on tests and assignments. But have you made any adjustments to your product or your, you know, teaching curriculum?

Any assumptions that you've had to question based on the learnings from these pilots? 


[00:30:34] Sal Khan: 

Oh, there's a ton. We have this internal initiative called Proactive Khanmigo, which is, we realized Khanmigo shouldn't just wait to be asked. A good tutor would hold a student accountable, message them, maybe message their teacher, their parents, to make sure that a student gets engaged on things.

We've had these debates internally when we saw a lot of students had trouble just articulating their question. We did this thing called Dynamic Action Bubbles where Khanmigo would offer potential responses and we're making it an option and potentially one that teachers can turn on or off depending on how much support students need.

So there have been a lot of debates about that, how much support is enough support. Um, the teacher tools, we're continuing to add as we learn from teachers the types of workflows that they want help getting productivity on. And then, yeah, how do we just make it more omnipresent, you know, a browser plugin, on different platforms that students might be on, like their learning management system; how do we integrate with that?

So it's just more where the students and teachers are.


[00:31:28] Bilawal Sidhu: 

So what are you excited to try next with Khanmigo? I saw that demo you made with OpenAI where your son is solving a basic trigonometry problem using an iPad, and he's having a conversation with GPT-4 Omni the whole time, highlighting things on the iPad, and GPT's responding just like a real teacher would, looking over his shoulder.


[00:31:49] Sal Khan: 

One of the demos we showed, uh, it could see the screen, and my son, I feel bad, he had to pretend like he didn't know what a hypotenuse is. 


[00:32:00] Bilawal Sidhu: 

I was gonna say, that seemed like the least believable part of that demo. I was like, “Really? Sal's kid doesn't know that? I don't buy that.”


[00:32:04] Sal Khan: 

I know. To his credit, he's a low-ego kid, so he was completely cool. Uh, you know, I think he knew what a hypotenuse was when he was about five, but, um. 


[00:32:12] Bilawal Sidhu: 

Figured. 


[00:32:13] Sal Khan: 

He went with it and made it a better demo. But it could see the triangle, and it could see that he was marking the wrong side as the hypotenuse. That technology isn't going to be in Khanmigo tomorrow. Um, and it is more fragile.

That demo didn't have any edits. I mean, that is the technology, but it's not perfect all the time. And it's very costly too, computationally. I would guess we're probably a year, year and a half away before that type of technology is accessible enough that we are putting it in, uh, Khanmigo, but I'm excited about that vision capability. I'm excited about, you know, we talk about Khanmigo or an AI as a guardian angel; I'm excited about potentially using AI to do things like make the classroom more human, more interactive, facilitate conversations between people, um, as a TA be able to not only break students out into breakouts, but actually facilitate those breakouts in kind of a real-time way.

So there's a lot, if we think five, 10 years out, I can dream about, uh, we can dream about virtual reality and going to ancient Rome and talking to Julius Caesar and it feels like a real Julius Caesar. So there's some exciting things coming. 


[00:33:18] Bilawal Sidhu: 

Uh, I gotta ask just outta curiosity, we started with Nadia. Like, how's Nadia doing these days?

Like, you know, what does she think about all the stuff that's happening? 


[00:33:25] Sal Khan: 

Oh, yeah. I mean, well, she's slowly gotten used to being the Nadia. She graduated from Sarah Lawrence, um, many years ago now; Nadia is now in her early thirties. Um, and Fareed Zakaria was the, uh, commencement speaker, and he even gave her a shout-out, like, you know, “the student that launched billions of lessons,” or whatever. So, you know, she's always getting embarrassed there. 

But no, she's, uh, she's in her final year of a PhD program in New York to become a clinical psychologist, and she's about to get married. So, you know, knock on wood, things seem to be going well for her. I can't take credit for everything, actually very little, of what she's accomplished since seventh grade math.


[00:33:55] Bilawal Sidhu: 

But hey, at least, at least you got her to realize that math is something that she could like. 


[00:34:00] Sal Khan: 

Definitely. 


[00:34:01] Bilawal Sidhu: 

Sal, thanks so much for coming on the show. 


[00:34:03] Sal Khan: 

Thanks for having me.


[00:34:09] Bilawal Sidhu: 

Here's what I came away from this interview thinking about: if Sal's prophecy for AI in education proves true, then the future of education looks very bright. Students who don't have access to tutors or the privilege of a private school education, students whose teachers are stretched to capacity in an underfunded, understaffed school, would have a lot more support. 

Teachers, in the meantime, can spend more time connecting with kids, building up their confidence, helping them focus instead of spending hours grading and churning out reports. If Sal's right, this is technology that will help us reclaim our humanity, rather than pulling us away from it, fueling our curiosity to learn on an individual level, but that's if schools use this technology as promised. 

It's not too hard to imagine a very different future, where we have more AI and fewer human teachers. I can see a school district strapped for cash making this argument: teachers demand rights and good working conditions, so why don't we just cut back on teachers and grow class sizes?

After all, AI can fill in for any individualized attention that's lost. So where I net out is that technology itself is full of promise and it's only gonna get better from here, but as always, it's up to us to fulfill that promise.

The TED AI Show is a part of the TED Audio Collective and is produced by TED with Cosmic Standard. Our producers are Elah Feder and Sarah McCrea. Our editors are Banban Cheng and Alejandra Salazar. Our showrunner is Ivana Tucker, and our associate producer is Ben Montoya. Our engineer is Aja Pilar Simpson.

Our technical director is Jacob Winik, and our executive producer is Eliza Smith. This episode was fact-checked by Dana Calacci, and I'm your host, Bilawal Sidhu. See y'all in the next one.