“What we are creating now, is a monster whose influence is going to change history, provided there is any history left, yet it would be impossible not to see it through, not only for military reasons, but it would also be unethical from the point of view of the scientists not to do what they know is feasible, no matter what terrible consequences it may have.”
The above is a quote by John von Neumann, speaking about the atomic bomb. Today we are building a different kind of monster - one that, unlike the atomic bomb, can be used for creation as well as destruction. A genie that promises to fulfil all of humanity's desires, if only we are willing to pay the costs. A demon which, once summoned, will go on to drastically change the world we live in - not just existing as a passive threat, like a nuclear warhead, but actively reshaping reality. And just like nuclear weapons, once we build it, there is no going back.
I'm talking, of course, about AGI - Artificial General Intelligence: an AI that can perform any task a human can do, at the level of at least an average human. We are not there yet - ChatGPT has many limitations and in some ways is no smarter than a first grader. Still, we are promised by OpenAI and others that true AGI is coming. In fact, it's just around the corner, they say; we will see it in the next decade, if not sooner. I'll set aside any arguments against such claims and ask the next question - what happens if what they say is true?
Before we embark on the futile journey of trying to guess the future, I need to say a quick word about recursive self-improvement. There is the idea that once AI reaches human-level intelligence, one of the skills it will have will be to build better AI. As a result, it will enter a self-improvement loop, where the current AI builds the next, more intelligent version of itself, which in turn builds the next one, and so on, until we end up with an intelligence so far surpassing our own that we are like ants compared to it. If we ever get to the point of such recursively self-improving AI, I think we are definitely screwed and should start looking for some spiritual salvation instead, while our mechanical heirs shape the universe to their liking. This essay is not about that scenario. It is about the challenges we will face if we get the more boring version of AGI - one as intelligent as a human, but which for some reason can't make itself any more intelligent.
So, if AGI arrives in 10 years, what do our lives look like 10 years after that? Speaking at Davos, Sam Altman said that once AGI is here "It will change the world much less than we all think and it will change jobs much less than we all think".1 I don't see how such a statement makes any sense. Human intelligence is the most important resource we have. We have used it to create everything from stone tools to skyscrapers, from cars to vaccines, the steam engine, the printing press and that greatest of inventions - the microchip. Thanks to that last one, we have found a way, for the first time in history, to create more intelligence without making more babies. But at what price?
AI is not just getting smarter, it is getting cheaper. Yes, every new frontier model takes an order of magnitude more compute to train, but actually running it is relatively cheap. Prices for running LLMs have been plummeting for the last two years to the point where it costs less than $10 to run a bunch of models and make them create a video game from scratch, all the while pretending to be employees in a video game company.2 It's not crazy to expect that when AGI arrives, it will be dirt cheap, and affordable for pretty much everyone.
The first and most obvious problem that arises from this is employment. If AGI can do any intellectual task at a human level, a lot of people will lose their jobs in a very short period of time. If your job consists of doing things on a computer, with no physical element to it, you will be the first to go, washed away by the wave of cheap intelligence that will flood the world. In previous instances, when people's jobs were taken over by machines, it took decades, but eventually new, more complex jobs were created. This time it will be different. If, by definition, AGI is as smart as you, there will be no other job for you to take. Even if new jobs get invented, and I'm sure they will, AGI will be able to do these as well. So give up that sweet IT position you have, learn to do some plumbing and hope that the future contains no robots. And no, you can't just "run a one-man startup staffed with AGIs" because this can be done just as easily by that same AGI.
While we are busy fighting with the job terminators, AGI will quietly revolutionize another field - biotech. DeepMind's AlphaFold changed the world for a lot of pharma companies by giving them a database with the shapes of all known proteins. It made drug discovery faster and allowed a lot of the work to be done in the virtual realm saving a ton of money and time. What happens when AGI can do that, and also automate all the drug discovery, all the while being able to keep up with the entire scientific literature in real-time? What new miracle drugs is it going to create?
On the flip side of this coin, we have a nightmare scenario - bioterrorism. The people who have the capability to create the drugs we use also have the capability to create novel biological threats, in the form of unseen pathogens. AGI comes with the promise to make this easier, and also more accessible. Currently, if you have your own death cult, it's quite hard to find an expert willing to help you engineer the next plague, due to the simple fact that most experts in that field, like most other people, are not willing to recreationally engage in suicidal activities. Therefore, you are stuck with some easy-to-indoctrinate grad student, who is prone to making many mistakes.3 AGI will change that. For a few hundred dollars you might be able to access an intelligence at the level of most domain experts, helping you build your lab, and then use that lab to build your virus. Of course, it's still going to cost tens, possibly hundreds of thousands of dollars to do, but one major barrier will be gone.
But won't the good AIs defend us?
Even if bad actors are able to get their hands on some AGI-level entities, we would have the same, if not better, technology to defend ourselves. Right? I don't think so. Let's look at the example above and assume that some terrorist group manages to engineer a novel virus and release it on the population. There is no reason to believe that such a virus will be less transmissible than COVID or less deadly. In fact, it could easily be both more transmissible and more deadly. How long would it take our own benevolent AIs to come up with a vaccine or some drug that keeps us safe? After all, they are not magic, and neither are they gods that can will a vaccine into existence. And even after a vaccine is created, how long would it take to actually distribute it? A lot of the damage done by COVID came out of our failure to organize. We were lucky that it was a relatively mild virus, compared to something like the Black Death. Had it been 5 or 10 times deadlier, the delays and organizational failures we saw might have had disastrous consequences, way beyond an economic crisis.
This raises a wider question, one about the balance between offence and defence. It's a problem that has always been part of humanity's calculus. Huge stone castles (defence) were a great winning move, up until the introduction of cannons (offence) which rendered them irrelevant. A nuclear warhead is an amazingly deadly offensive weapon but it does little to defend you against other rockets. Of course, the fact that we are armed to the teeth with nukes means we can retaliate against any possible attacks and therefore act as a deterrent. But this only works if you know your enemy. It doesn't work if your enemy is a small group hiding somewhere in a mountain. If they get their hands on a nuclear warhead they can detonate it in some major US city with little fear of retaliation. The offence/defence balance in that scenario is clearly skewed towards offence - it is much easier to attack with a nuke than it is to defend.
A future widely accessible AGI will introduce that same problem. It will enable small groups to deal massive amounts of damage, for a fraction of the cost that a big country would have to pay in order to defend itself. One area where we clearly see this phenomenon is cyber security.
In the last decade, we have seen more and more hacks targeting the infrastructure of whole countries. Attacks like WannaCry,4 which froze the UK's health system, and NotPetya,5 which crippled critical infrastructure across Ukraine, have shown just how easy it is to bring huge, vital systems to a halt. Not only that, but unlike a bioweapon, developing a hack is a purely intellectual endeavour - one that an AGI would be perfectly suited for. What does a future look like in which every terrorist organization has the capacity of Russia or China to build and deploy cyberweapons? Will we be able to defend ourselves when attacks can come from a million directions at once, when we are not even able to tell who the attackers are? How certain is it that the offence/defence balance will skew towards defence in the cyberspace of the future? What happens if it doesn't?
Part of the promise of the nation-state is that it will defend its people. If it can't fulfil this promise, the state loses almost all of its power. Why pay taxes and follow the laws if the resulting organization is not able to protect you from outside attacks? This question might become a reality for millions of people in the next 15 years. Giving up our freedoms for safety is no longer a worthy bargain if safety can't be guaranteed. Trust in politicians and institutions in the US is already eroding at alarming rates.6 At some point this might lead to protests and unrest at a scale we have never seen before.
This problem will not be unique to the West. Although in countries like Russia, the government rules with an iron hand, having little concern about the trust of the people, its power still depends on its ability to protect the citizens. The same goes for China and the CCP. Cheap, powerful AGI in the hands of everyone might pose a threat to the very idea of a nation-state. Given that we haven't figured out a better way to organize societies, it is hard to overstate the implications of this.
Western democracies face yet another challenge resulting from AGI - disinformation. We are already seeing the beginning of this: deepfakes of Trump and Biden telling dad jokes are getting better and better,7 and it's a matter of years, if not months, until every single person around the world has access to the technology needed to create a perfectly undetectable deepfake. With trust already at a record low, it is easy to see how such a technology will sow further discord in societies which already feel fractured and tribalized. It is not clear that Western democracy is well enough equipped to weather the coming disinformation storm.
The scenarios outlined above are only a few examples of what can go wrong. If you think about them hard enough, you can find holes in these arguments and convince yourself that a future with AGI is actually safe. However, there are endless other fields that AGI will disrupt - remember that the G in AGI stands for General. Anything a human can do, AGI can do as well. Not only that, but AGI would be able to ingest vastly more information than a human on a daily basis, staying up to date with the most minute events around the world and making far more informed decisions in any area. In a way, a future general intelligence will be more general than a human even if it's not smarter, just because of the sheer amount of information it can ingest and store.
Even if we manage to create AGIs that are smart but not autonomous, a million problems will arise when everyone gets their hands on them - in the same way that if you give every single person a nuke, you are bound to end up with a nuclear war. Therefore, once we answer the question of how to build AGI, the most important question for humanity becomes how to contain it: how to make sure its power is harnessed for the benefit of humanity and not for its destruction. Simply giving it to everyone predictably leads to chaos, even if the chaos unfolds in unpredictable ways.
One thing that became strikingly clear to me when ChatGPT first dropped is that no one has a clear idea of how to deal with any form of intelligent AI. Leaving aside the people who claim (and they might yet turn out to be right) that ChatGPT is not a precursor to AGI but a dead end, even those who believe that AGI is coming couldn't come up with a good, realistic strategy. Calls to ban the technology outright don't sound plausible given the fractured political climate. On the other hand, ideas to open-source everything forever don't hold up to any form of scrutiny, usually defaulting to some specific good outcome and pretending that nothing bad ever comes of technology.
It turned out that we have no strategy for dealing with AGI. We were, therefore, lucky that ChatGPT turned out to be as limited as it was. Had OpenAI achieved AGI on their first try, we would be in a very different boat indeed. But they didn't, and this fact gives us one very valuable resource - time. If we believe we are on the way to AGI, we need to come up with strategies to channel this new force into productive outcomes. This means both giving people access to the new technology and sharing the fruits that such an invention will bring to the world. The Luddite riots8 of the past will seem like child's play compared to what might happen if half the population finds themselves unemployed overnight. The big corporations building the AIs of tomorrow might not want to share their profits, but they might not have a choice. Being the CEO of Google is a much better position if the society around you is not falling apart. At the same time, bad actors need to be prevented from wreaking havoc at scale. Sharing AGI with everyone sounds good on paper, but allowing every wannabe terrorist to freely experiment and launch sophisticated cyberattacks against hospitals might not be such a great idea.
Whatever the strategies look like, from UBI redistributing the profits of the AI labs to some form of restricted access, they will take time to implement through laws and regulations. Society will need time to adjust. The march of technology might inevitably bring us to AGI - it's possible that we can't stop it - but there are things that can be done to slow it down. Right now, essentially the whole industry runs on chips produced by TSMC, a Taiwanese company, which in turn depends on machines produced by ASML, a company in the Netherlands. Such a narrow supply chain creates natural choke points where pressure can be applied to slow down progress. Of course, any slowdown is pointless if the time earned this way is not used to build a better system that can handle the AIs of tomorrow.
The government can only regulate what it can see. Before taking out the banhammer and striking at the AI labs, a sensible first step is to create regulations which allow for audits and reporting. If we don't know the capabilities of the systems that are being built we have little chance to adapt to them quickly enough, even if the political will to do so existed. Therefore it's imperative to be able to peek inside the AI labs and see what they are working on.
Still, there is only so much that governments can do. Regulation is a blunt tool, and laws and acts too often get sidetracked or killed for reasons that have nothing to do with the problem. True safety can only be achieved by the labs that are building these super-intelligent behemoths. A culture of safety inside these labs is essential, something akin to that of the nuclear power industry. Operators of power plants work closely together and share best practices when it comes to safety, because a single accident at any one plant can destroy public trust and spell the death of the whole industry. When it comes to AGI, a single mistake might be much more costly. A stolen cutting-edge AGI model could easily be fine-tuned out of any safety guardrails it might have and then used for any purpose imaginable.
Even if both of these problems are solved - the government issues sane regulations and the AI labs implement the best safety mechanisms - there is still one huge problem: rogue states. Contrary to what many of my online friends believe, the world doesn't seem to be run by a secret cabal of rich, evil people. It is instead a splintered place where separate nations are all vying for power. Getting to AGI first might offer such amazing benefits that any safety precautions are forgone. At the same time, when talking about Chinese AI labs, people just say "China is building AI", as if China were some single, cohesive entity. This goes to show just how badly the West and the East are integrated, and how little dialogue is happening between the two hemispheres. Things are even worse when you factor in rogue states like North Korea. Yet this dialogue is exactly what needs to happen if we are to have any chance of navigating a world in which AGI exists. Building AGI safely in the US means little if some rogue state goes full cowboy and builds a dangerous system anyway. Without international coordination, it's just a matter of time before disaster strikes.
When thinking about this whole problem, I always come to the same conclusion. Endless containment is nearly impossible. If we get AGI, at some point in the future someone will manage to steal a model and set it free. Even if they don't, there is nothing stopping malicious actors or even rogue states from building their own models and using them to initiate large-scale conflicts. The incentives are too high for this not to happen. The endpoint of the thought experiment is a world where everyone wields the power of destruction, able to deal massive amounts of damage to others.
In the end, it comes down to the people. In a society where every man is for himself, there is a big drive to get ahead, whatever the cost. So far this has been a great strategy for generating progress, because "whatever the cost" hasn't been something catastrophic. On top of that, people were forced to depend on others for everything from food to medicine, to electricity and safety. AGI brings with it the promise to change that forever. Having access to cheap intelligence will empower billions in ways we have never seen before. People will become less dependent on each other, while at the same time their capacity for violence will increase. This new balance of power will require a new culture and new social contracts which we don't yet have. It doesn't take a genius to see that if we all act like crazy, power-hungry monkeys, we all lose when AGI arrives.
This essay was inspired by Mustafa Suleyman’s book “The Coming Wave”, which discusses how we can prepare for the arrival of truly intelligent machines, what we can do at the individual, company, state and international levels to make sure we make it through the transition and come out on top. I strongly recommend that book to anyone interested in the implications of a world filled with AIs.
1. Sam Altman speaking to Bloomberg at Davos.
2. Here is a great video explainer of the paper. Here is the paper.
3. Aum Shinrikyo - the Japanese Death Cult.