I'm actually so excited about this fireside chat, because a lot of people have been thinking about superintelligence and AI, and I could not be happier to speak to Yuval Noah Harari and Max Tegmark. To both of you: we're going to have a good conversation over the next 30 minutes to try to understand superintelligence. Now, superintelligence means different things to different people. Even the big ones that are trying to build it have different timelines and different definitions. So, Yuval, what does superintelligence mean to you?

That it can make a million dollars on its own. That it's an agent that you release into the financial system, for instance, and it can do everything, including open and manage its own bank account, and it can make a million dollars. Then it's superintelligence, and then you can have millions of those taking over the financial system.

I wish we'd hear Ken react to that, because he's made a couple of millions, and maybe he's worried about it. Max, what does it mean to you?

Well, let's build up to it like a layer cake. First, what's artificial intelligence? It's just non-biological intelligence. And what's intelligence? For those of us who are researching and building artificial intelligence, we simply define intelligence as the ability to accomplish goals. The more difficult the goals, the more intelligent; the more diverse the goals, the broader the intelligence. Superintelligence was originally defined in the book called Superintelligence as an artificial intelligence which is just vastly better than humans at any cognitive process. Practically, it would mean it can do every job much better than us by definition, and in fact, it would pretty quickly figure out how to improve itself and be able to be smarter than all of humanity combined.

Elon Musk thinks we could have AGI this year. If you speak to Demis Hassabis, he thinks it's like five, ten years away. Who's right?

I don't know, but it's a very, very short time, however you look at it. If it is coming, then humanity is completely unprepared for it. I hope it takes longer, because it means we have longer to prepare, but this is a very, very short time frame anyway.

Just to put it into context, is this bigger than and different from the Industrial Revolution?

It's bigger than anything. It's not a tool, it's an agent. Every previous invention in human history was a tool. Whether it's the printing press, whether it's the atom bomb, whether it's an airplane, it's a tool in our hands. We decide what to do with it. Here, we are creating an agent that
"Almost every professor and other AI researcher I know predicted six years ago that we were decades away from passing the Turing test, from getting something as good as Chachibiti 4, and they were all wrong because it happened already."
can make decisions by itself. It doesn't wait for us to decide. It can invent ideas by itself. It's introducing a different species, a non-organic species, to planet Earth, which is presumably, by the claims of these people, more intelligent than us. You look at the history of humanity and the history of biology, and it usually doesn't end well for the less intelligent species when the more intelligent species comes along.

We're laughing, but I would be worried. Should we not worry?

You should be worried. Dark humor is a good way to cope. The actual definition we talked about, of course, means that it's a new species. If you have robots that are both vastly smarter than us and also every bit as agile as us, they can do everything we can do. They can make robot factories and make new robots. They can reproduce. In other words, they check all the boxes on the species definition. This is very accepted stuff in Silicon Valley. You can go read Sam Altman's blog post called The Merge, which he wrote 11 years ago, where he says Homo sapiens is going to be the first species to build its successor species. We can debate about whether we want to do that or not, but that's what it means.

As for your question, when will it happen? This is where there is a really genuine controversy among experts. I think it's easier not to lose sight of the forest for all the trees by just zooming out a little bit. Alan Turing, the godfather of AI, said already in 1951 that if you build basically a new species that's smarter than us, by default, it's going to take control. Just walk down to the nearest zoo and ask yourself: is it the humans in the cages, or some dumber species? It's the default outcome, as Yuval said, that the smarter species controls, because intelligence gives power. Now, Alan Turing also said in 1952, don't worry, it's far away, far away, but I'm going to give you a test so you know when it's close. It's called the Turing test. It's about mastering language and knowledge at the level of a human. Since then, AI was mostly overhyped decade after decade, over-promising, under-delivering, until about four years ago, when it switched and became under-hyped. Almost every professor and other AI researcher I know predicted six years ago that we were decades away from passing the Turing test, from getting something as good as ChatGPT-4, and they were all wrong because it happened already. Since then, it's continued going faster than most of my technical colleagues thought.
It went pretty quickly in the past four years from high school level, university level, PhD level, to professor level and beyond in some areas, and just last year, for the first time, AI won a gold medal in the International Math Olympiad, which is the intellectual version of being Usain Bolt in the Olympics. There's no sign whatsoever of the progress slowing down, so I think we can say quite confidently that if you buy into the basic premise that the brain is a biological computer, of course we can build a better one. Some people think it'll happen next year, two years from now, some think five years, some think ten years, but most serious technical people I know have stopped talking about decades from now.

Well, I think the amazing thing is, if you think about it, where were you on the day that AI passed the Turing test? You know, for decades, people were talking, the Turing test, the Turing test. Where were you on the day it happened? Nobody remembers, because we just swished through it, and people stopped talking about the Turing test. Nobody talks about the Turing test anymore; it just happened. And the thing is that to change the world, to change history, you don't need superintelligence. Very dumb intelligence is still enough to change history. Yeah, you know, you look at, say, social media, which is controlled by very, very primitive AIs, and how social media changed society, politics, psychology over the last ten years or so. So we don't really need, I mean, superintelligence is a chimera, they keep moving the goalposts. Even quite primitive AI is sufficient to change history and society. I don't know when we reach that point, but I'm sure that in the next decade or so, we will have to deal with a new wave of immigration. You know, immigration is one of the biggest political issues right now. AI immigration: we will have hundreds of millions of AI immigrants coming from mainly two countries, China and the US. And you know, it's strange, the US telling countries, close your borders to human immigrants, but open them wide to our AI immigrants. And I'm not against immigration. We will have, you know, AI doctors in the healthcare system and AI teachers in the education
"Some people think it'll happen next year, two years from now, some think five years, some think ten years, but most serious technical people I know have stopped talking about decades from now."
system, but they will bring problems. And the big question is, how does society, human society, adapt to a giant wave of immigration from a different species?

Can you talk, first of all, about how you see it changing? So how is it changing our economy when we have superintelligence? How is it changing the fabric of society? People understand, but it's actually very difficult to see what it means.

What does it mean? I'll give maybe just two examples. The first is finance. Once AIs can act as financial agents and start investing money by themselves, making money by themselves, what happens if AIs come up with new financial investment strategies, which are like move 37 of the famous AlphaGo game, completely new financial strategies, and, at the next stage, new financial devices? The financial history of humanity is humans inventing new financial devices: money, checks, stocks, bonds, ETFs. If you remember the 2007-8 financial crisis, it started with the invention of new financial devices, the CDOs, that people thought for a few years were wonderful until they brought the market down. What happens if AI financial agents invent new financial devices which are mathematically so complex that no human understands the financial system anymore, and therefore cannot regulate it anymore? But we don't want to regulate it, because it makes trillions. And then there is a crash, and not a single human on the planet understands what is happening, because the financial system has developed to a stage where only AIs understand what is happening there.

A different example: what happens when you raise kids who, from day zero, interact with AIs more than they interact with other humans? If you ask the child, or if you watch the child to see what the child's main interactions are and how the child's psychology develops, things like attachment and friendship, the main interaction is with AI. What are the implications for human psychology and society? We have no idea. We will know in 20 years. This is the biggest psychological and social experiment in history, and we are conducting it, and nobody has any idea what the consequences will be. Just to go back to the theme of immigration, a lot of the people who oppose immigration, if they hear that their son or daughter is dating an immigrant boyfriend, they get nervous.
What will happen when their son or daughter starts dating an AI boyfriend? More nervous.

Max, and this is to the point: you believe that actually the only way to make this a success is to try and align superintelligence with the human goal. But in Davos you also see, what is the human goal?

I don't actually believe that. Let me just clarify a little bit. This great question you just asked, what happens in the world with the jobs and so on when you build superintelligence? It's important to remember we do not have superintelligence now, so it's a huge mistake to reason from how AI is having some small effects on the job market now, where you can do re-skilling. By definition, superintelligence can do all the jobs much better and cheaper than we can. So by definition, we are economically obsolete when superintelligence comes. So forget about jobs; we cannot get paid for anything after that. Maybe society can find a way of still giving some money to people if humans stay in control, but right now the famous control problem, which many of the smartest minds have worked on for decades, how do you control a smarter species, is unsolved. Many believe it's impossible, just like it's impossible for chimpanzees to control us. So most likely, if we build superintelligence, it's the end of the era where humans are in charge of Earth. Elon Musk was saying that on a stage just a month ago, you know, it's going to be machines in charge, not us. But this is not inevitable. It's a huge mistake, I think, that many people talk about what's going to happen as if we humans now are just some sort of passive bystanders, you know, eating popcorn, waiting for AI to take over the world. So if we're building this stuff, and many of the most influential people steering this development are here in Davos, right?

So you can build it differently?

Yes, we totally can.

And what about this, this human interaction?

We totally can, yeah. So, for example, raise your hand here in the audience if you're excited about AI that cures cancer and helps us. That's a lot of hands. Take them down. Who is excited about AI that simply makes us economically obsolete and replaces us? Okay, one guy, but nobody else wants it, right? So the vast majority of people want humans to remain in charge. So I think it's quite likely we will not actually race to build superintelligence, because almost nobody wants it, not just ordinary people. But do you think the Chinese government and the US
"What will happen first is some company maybe gets more powerful than the US government and sort of starts more or less becoming the government, and then they lose control over the machines, and then it's a very sad ending for all the humans."
government want to have something built that's just going to overthrow them? Of course not.

But so if you can control it, right, if you can align it with what you want, then you think that, you know, that's a possibility?

Those are two totally different things, actually. I'm so glad you asked. Control means, you know, you have power over it; you can shut it down if you want. Alignment without control means that we lose control over it: AI is the boss of Earth, but for some reason it decides to be nice to us. This is what most of the companies are pushing for right now. And I think if it were more widely understood by politicians that this actually implies the overthrow of the US government, you know, they wouldn't be so cool with it.

Yuval, I mean, do you have to control the concentration of power of this?

I think it's not a coincidence that we see the rise of AI and the return of imperialism at the same time. And I think certainly in the US, the new imperial vision of the world is based on the assumption that we are winning the AI race. AI will give us control of everything: the economy, the military, culture. So we don't need allies. We don't need anybody. We can just take control of the world ourselves. And it's all premised on the assumption that we are winning the AI race, and it will give us everything.

Yeah, except this is a bit like a Greek tragedy, in that what a lot of the politicians haven't understood yet, which is pretty obvious to many of the scientists, is that we have no way of controlling right now something that smart. What will happen first is some company maybe gets more powerful than the US government and sort of starts more or less becoming the government, and then they lose control over the machines, and then it's a very sad ending for all the humans. There are actually two separate races going on here, which we must not conflate. One is a race to dominance, where superpowers are trying to get dominance by building more powerful tools, economic tools, military tools, that they can still control. Then there's a second race to see who can be the first to build superintelligence, which is going to overthrow them. So if someone really wants control, what they should build is the tools, and have very strict regulations to make sure nobody messes up and builds the new species that replaces us.

Can democracy survive this?

Hopefully, yes.
Democracy is ideally suited to survive this, because we are going to make mistakes with AI, with the way we develop it, with the way we deploy it, and we need a self-correcting mechanism. We need a mechanism that says, OK, we made a mistake, let's go back and try something else. In history, the best mechanism we know of this type is democracy. The whole idea of democracy is you elect somebody, you try a set of policies, and after four or five years you say, we made a mistake, let's try something else. In many ways, AI becomes much more dangerous in a dictatorial setting, because in a democracy, for AI to take control it has to manipulate the system, and it's very difficult to manipulate a democratic system. In a dictatorship, in an autocratic regime, you just need to learn how to manipulate a single person, who is usually very paranoid and very narcissistic, which is why it's very easy to manipulate them, at least for a superintelligence.

Can I add some optimism here? Because you're looking a little bit concerned.

No, not at all. I mean, we're having a great time. Cocktails are coming.

I completely agree that we don't need to build superintelligence. We don't need to go down that road. And hopefully the politicians, especially powerful politicians, will realize that the last thing they want is to build something that will take power away from them. And hopefully when they realize that this is serious, they will not go down that path.

But is it too late? And actually, I wasn't worried. I was thinking about, when is it too late to design something that is not omnipotent? If you give it all the power, then you become obsolete.

I think it goes more the other way around. Once society gives the right incentives to those who build the tech, they'll figure out a way of doing these things, so you can have your cancer cure and all the great tools, but not the out-of-control Skynet. And the reason for optimism is that I think neither the US nor China is ultimately going to let their own companies build the out-of-control superintelligence Skynet. For China, Yuval just explained why. Clearly, Xi Jinping and the CCP don't want to lose control, and they have all the means they need to be able to stop Chinese companies from building superintelligence that can be uncontrolled, right? In America, similarly, the national security establishment will start to see this as a national security threat, very bipartisan. But even before that, we're seeing an amazing, crazy bipartisan coalition emerging in recent months in America. I call it the
"So that means companies will innovate to build the cancer cures and make fantastic productivity gains and all the stuff you want, and the other stuff will not happen for the foreseeable future, and I think that's great."
Bernie to Bannon coalition, the B2B coalition. You hear these two people saying exactly the same stuff about AI. They say things like, you know, it's so crazy. It's illegal for a creepy 60-year-old man to manipulate a young teenager by pretending to be their girlfriend and persuade them to commit suicide. It's illegal for a drug company to sell medicines that haven't been tested in a clinical trial to these kids. Why on earth should it be legal for an AI company to sell an AI girlfriend chatbot, which has now caused many teenage suicides? They're saying, basically, we need to treat AI companies the same way we treat pharma companies and restaurants and everyone else: first you meet the safety standards, and then you can sell. I actually think we're going to start seeing these incentives, where AI companies have to meet the safety standards. No one will have a clue how to make superintelligence pass any kind of safety standards, right? So that means companies will innovate to build the cancer cures and make fantastic productivity gains and all the stuff you want, and the other stuff will not happen for the foreseeable future, and I think that's great.

I was going to ask, who should be in charge of... I don't know if it's an ethics committee or something to see... You need to do it before, right?

But this is a solved problem. We know how to do clinical trials. The company is in charge of convincing a bunch of government-appointed experts that this medicine here is not going to cause massive birth defects like thalidomide once did. The restaurant is in charge of cleaning up the kitchen and making sure it's not full of rats and persuading the health inspector that this is okay. This is a solved problem. We don't need to reinvent the wheel of how to put safety standards on an industry, because we've done it for every American industry except AI at this point.

How do you see humans changing with AI anyway? I know there's a debate on whether what makes humans want to be the superior species is intelligence, or whether it's also a sense of belonging and purpose that keeps society together.

AI will challenge our deepest identity. I'm not talking about superintelligence. I'm talking about, again, this wave of AI immigrants that we will encounter more and more everywhere, and they will challenge us in many of the things we thought define our humanity. When a robot or a car runs faster than us, we are okay with it, because we never defined ourselves by our ability to run faster than everybody else.
We always knew that cheetahs can run faster than us, so if cars and robots can do that, that's fine. But we defined ourselves by things like the ability to think: I think, therefore I am. By our ability to create: we are the most creative species on the planet. What happens to human identity when there is something on the planet which is maybe not a scary superintelligence, but is still able to think better than us, is still much more creative than us in many fields? We already saw it in narrow fields, like in chess or in Go, that AI thinks better, is more creative. This will happen in more and more fields. What happens when it comes to religion? I have friends, I'm a meditator, I'm part of a meditating community, and when they have an issue with their meditation, they no longer go to a meditation master. They go to an LLM. They go to an AI to get advice. And this is likely to happen in Christianity, in Islam, in Hinduism. What happens to religion when AIs replace the religious authorities? Especially in religions which are based on texts, on scriptures. No Jewish rabbi is able to remember all the Jewish texts ever written. AI can easily do that. So if you have a question about Judaism and you go to the AI and not to the rabbi, what does it mean for human religion and for religious identity? What happens if AIs create a new religion? You know, it shouldn't sound so far-fetched, because almost every religion in history claimed that it was created by a non-human intelligence. Until now, this was all fiction. But now it can be true. You can actually have a religion created by a non-human intelligence. What does it mean for human identity?

I mean, a lot of the big tech companies will say that we'll have more time as humans to spend quality time, emotional time, with people.

Maybe. That's one way of looking at it. But then again, if you grew up interacting with AIs, and much of your psychological makeup comes from interactions with AIs, how will it impact the way we interact with other humans? You know, with other humans, part of the issue is that sometimes they have feelings that annoy us. We are angry at them. They are angry at us. AI, assuming that it has no feelings of its own, at least, you know, it can feed our narcissism. It can be this thing that is always focused on you. Like you
"What happens to human identity when there is something on the planet which is maybe not as scary super intelligence, but is still able to think better than us, is still much more creative than us in many fields."
come back home from work and your husband or your wife doesn't really pay attention to you, and they are grumpy because of something that happened to them at work. AI will never do that. It will always focus on you. Horrible dogs. And getting used to having a relationship with something like that and then trying to build a relationship with a human being, that can become much more difficult than in any previous time in history.

So when you speak, both of you, when you speak to heads of state or chief executives of big companies who say, look, I'm worried about the picture that you're painting here, what do I do about it, how should I look at it, what do you tell them?

I tell them, if they're in government: start treating AI companies like you treat any other companies in your country. Put safety standards on them, and then they'll innovate to meet them. And when I talk to people in the companies who are lobbying against all regulations, I ask them: you have all these voluntary commitments, you promised that your company is going to hold itself to these standards, so why don't you go lobby your politicians to make your own promises binding law on all your competitors also? I encourage you to ask those who are here in Davos as well why they aren't doing that yet. But it's important to remember, it's not too late for us to steer towards a really inspiring future with technology. Why shouldn't we cure cancer? Why shouldn't we solve all these other challenges that human intelligence has been stumped by? We can do it. But that is not the path we're on right now. Right now, we're on this race to replace. And I'm not just talking about ultimately trying to replace all jobs. We are starting to see, those of us who are parents, how there's a race to replace human relationships by having people instead put little rectangles between the humans. And the attention economy is shifting more towards an attachment economy, where some people, some children, have been so attached to and manipulated by AIs that they kill themselves. That is the wrong direction. We need to really change direction. And the solution, again, is, I don't want to sound like a broken record, but we know how to do this. We've decided to regulate every other industry with safety standards. That's why we can trust our medicines now. We can trust our cars.
We can trust our food in the restaurants to not give us salmonella. We just have to do this with AI as well. It's tough, because there's a lot of lobbying money going against it. But there was massive lobbying against seatbelts as well. We can do this.

Yuval, what would you tell them?

Have an international agreement banning legal personhood for AIs. I think the most dangerous move at the present moment is AIs gaining legal personhood or functional personhood. AIs are not persons. They don't have bodies. They don't have minds. But in the legal and political system, we grant legal personhood, for instance, to corporations. Corporations can own bank accounts. They can sue you in court. They can donate to politicians. They are considered legal persons. In India, certain gods are considered legal persons. Now, until today, this was a legal fiction, because in the end, when Google decides to buy a corporation, when Google decides to donate money to a presidential campaign, it's a human being making the decision. We say it's Google, but if you look, it's actually a human being there, an executive, a shareholder, a trustee, who decided. Same with the Indian gods: it's not really Shiva who is suing you in court, it's the human trustees. AIs can actually manage a bank account. They can actually manage a corporation. If you allow legal personhood to AIs, you can have corporations without humans that might become the most successful corporations in the world, lobbying politicians, suing you in court, and there is no human behind them. It doesn't mean stopping any kind of research. You can continue all the research you want. But until we are sure about this thing, no legal personhood for AIs, which also means that, for instance, AIs cannot operate by themselves on social media. This is also legal personhood.

Yeah, I just want to say, this is so wise. I mean, granting robot rights and making superintelligence would be the dumbest thing we've ever done in human history, and probably the last.

Okay, okay. Yuval, I mean, people here, there are a lot of political shocks, there is geoeconomics, people are inundated by news, good news, bad news. And you're one of the greatest minds of our century, and it really actually caught my attention that you go on a silent retreat. Not many people think about their brains and how they think. They just do it. They have to show up every day in the office early. So how do you think about it, and how do you actually really take time to think and reset?
"And if I don't understand how this is happening in my mind, how can I have the kind of hubris to say what AI can and cannot do and what will be the future relationship between AIs and humans if you don't understand the human mind?"
I mean, you know, this is our most important tool. Whether I'm a public intellectual, or a politician, or a manager of a company, my mind is my most important tool. And I need a healthy and balanced mind to deal with the world, especially the world of 2026. It's crazy to invest in everything else and not in my mind. And the thing about investing in the mind is that nobody can do it for you. I can send my suit to the dry cleaner, so somebody cleans my suit for money. I cannot send my mind to be cleaned by somebody else. It is the one thing I need to do for myself. Also, you know, in the field of AI, we constantly talk about AI versus human intelligence, AI versus the human mind. What do we know about the human mind? If you don't understand the human mind, you cannot understand AI. You know, I hear people say, oh, AI is just glorified auto-complete, it just predicts the next word in a sentence, that's nothing. And then I sit for meditation and observe what's happening inside my mind. And I'm a verbal person; I think in words. And I see words popping up in my mind and forming sentences. And one of the practices of meditation is to try to observe the next word that pops up in your mind. Where did it come from? Why this word and not another word? You know, just this sentence I was saying, I didn't know how it would end when I began to say it. Like, why did I say how it will end and not how it will conclude, how it will terminate, how it will develop? Where did the word end come from? When I started the sentence, I did not know that it would end with the word end. Something in my mind kind of predicted, okay, the next word will be end. And if I don't understand how this is happening in my mind, how can I have the kind of hubris to say what AI can and cannot do, and what will be the future relationship between AIs and humans, if I don't understand the human mind?

And that's why it feels like we have free will, because we can't predict what we're going to decide until we've actually finished the thought process and made the decision.

Yeah. Gentlemen, that was so interesting. Thank you so much for a wonderful conversation. Please give a big round of applause, everyone, for Yuval Harari. Thank you, that was really great. Thank you. Thank you so much.