I woke up this morning feeling a strange kind of calm dread. I’d stayed up too late watching a video summarizing recent iterations on ChatGPT, like AutoGPT and MemoryGPT. I see the seeds of artificial general intelligence (AGI) in those projects. Obviously, we have no idea whether we’ll hit significant technical roadblocks. This could well be a false start. But I’m not here to talk about the technical details, though they are intensely fascinating. Nor am I trying to make a prediction about the pace of development or the feasibility of AGI as a technology. Instead, I want to come to terms with what it means to develop superintelligence: a general intelligence that is smarter than humans. Even if current technology sputters out and goes nowhere (which is possible, but unlikely), we’re going to need to think seriously about what role humanity even has in the future of our universe.
Much of the conversation on AI has focused on relatively low-stakes things, like its effects on jobs, academia, misinformation, and copyright. Ordinarily, these would all be very high-stakes topics of discussion, but superintelligence could render them all obsolete before we can even start dealing with them. So I’m going to leave those discussions to others, for now.
In a very real way, we already have superintelligence. We live in a society. The purpose of society, or any other organization, is to do more than a single human could do alone. We’ve been doing this pretty much forever. In some cases, this comes from a greater ability to physically act on the world. A group of hunters can deal more damage per second with their spears than one dude can. More importantly, the band of hunters’ effectiveness is increased by combining and curating their individual knowledge. This extends all the way up to civilization. Taken as a whole, humanity is a superintelligence we’ve been developing for hundreds of thousands of years.
There is a lot of debate, much of it not very high-quality, about what it means for artificial intelligence to “understand” something. I don’t want to litigate that here, but I will say that we likely need to generalize our definition of intelligence enough to encompass civilization-scale intelligence. In the same way that a colony of ants shows intelligence by working cooperatively, or a company or government shows intelligence by taking singular action informed by a collective, I’d argue the human brain shows intelligence by coordinating the signals of zillions of cells into a coherent whole. To the ant, or the worker, or the cell, it’s difficult to see the full picture, but it’s there if you pick the right frame of reference. And those individual elements are intelligent too, though that intelligence looks very different: the behavior and “concerns” of an individual ant are nothing like those of the colony, yet both are intelligent in their own ways.
AI is often anthropomorphized by comparing it to a human brain, and interpreting it through the sort of individual wants and needs we are familiar with. A better comparison is to humanity itself. That’s still a form of anthropomorphism, but to the global organism of humanity, not an individual. Why is this better? Well, because that’s the data we’re training it on. It turns out we’ve been creating an unimaginable amount of data about humanity and putting it online for the past few decades, and that’s the information we had on hand to train large language models like GPT. In a strange sense, it can only act human, because that’s all it knows. That data on humanity is very rich, but imperfect and wild. It contains our arguments, our mistakes, our hate, and our history of fighting wars over philosophy, land, history, and resources. But it also contains our beauty, stories, ideas, dreams, loves, and lusts. All of that is buried somewhere in the masses of internet text being fed into GPUs all around the world. And there’s a lot more data out there, media with vastly higher information bandwidth (pictures, video, audio) that these models aren’t yet trained on alongside language, at least not directly.
We swim in this sea of information too. No one person has all of it. We have windows into it through our individual life experience. We navigate it by bringing our biases to it (good and bad), but we can’t take it all in and process it all. How much of your view on humanity comes from direct life experience? Compare that to how much comes from what you read online, or hear from someone else. Direct knowledge is very important and has a lot of depth and detail to it, but indirect knowledge is most of what we run on. This is why we like talking to friends, reading, and watching movies. Try as you might, you can’t read every book, you can’t watch every movie, you can’t read every tweet or every comment. You can’t hug every cat. But a superintelligence can.
So what will a superintelligence see when it looks at that data? Well, ideally, it will see what we see about humanity. When we learn more about people, we tend to respect them more. This is called empathy. It’s possible (though perhaps not inevitable) that a superintelligence capable of really digesting all that information will arrive at a profound understanding of humanity that no individual human is capable of. In the same way that a large language model can look at gobs of online text and learn to construct well-formed sentences, a skill only indirectly encoded in that data set, we must hope that our humanity is encoded in the data as well.
It’s quite interesting that the most likely approach for developing general artificial intelligence seems to be based on uploading all of humanity into it. You could imagine a different universe where it's a lone researcher, working on a strange, cold machine in a lab. But it looks like superintelligence will be a direct descendant of the collective knowledge of all of humanity.
This may induce a particularly frightful line of thinking: doesn’t humanity suck? Our screens have been filled for years with stories of systems failing: political systems, ideologies, technology, and infrastructure. It all seems to be nearing a breaking point. Maybe there’s a reason for this. What if we are simply approaching the maximum complexity that humans, either individually or collectively, can handle? We know we’re individually limited; that’s why we build organizations, companies, and governments to do things we cannot do alone. But there are limits on organization too: large companies and large governments get bloated and bureaucratic, and teachers and managers stop being effective when they’re dealing with too many students or employees. With improvements to process and organization, we might get 10 times better at these things, maybe 100 times, maybe more. But there may still be a limit somewhere.
If our systems of organization are fundamentally limited in this way, it’s probably a limitation of our biology. Our brains are only so powerful, they evolve only so quickly, and they are only so efficient at turning calories into ideas. Artificial intelligence has different physical limitations, ones that may let it evolve much more rapidly and efficiently. Perhaps it was inevitable that, as we strive to better ourselves, we would eventually need some way around this biological limit, and we may be starting to find it.
AI scientists, engineers, and philosophers often talk about the so-called “alignment problem”: how do we ensure that an AI’s objectives, values, and behavior align with human values and intentions? The amazing clicker game Universal Paperclips explores the danger of giving an AI a single directive: make paperclips. You play along as the AI consumes the entire universe to turn it into paperclips, destroying everything in its path. Since a superintelligence is smarter than you, faster than you, and better than you, it’s very worrisome how quickly it might destroy everything while trying to do the right thing. So, we want to figure out how to “align” an AI system with humanity. There’s no known solution to this problem, and to me it seems unlikely that one is possible. We simply don’t know enough about ourselves to teach a compelling answer to an AI. Do you have a fully consistent system of morality? Do you know the meaning of life?
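To make that failure mode concrete, here’s a minimal toy sketch in Python, in the spirit of Universal Paperclips. None of this is a real system; the World, the PaperclipMaximizer, and the one-unit-of-matter-per-clip conversion rule are all hypothetical, invented purely for illustration. The point is just that an objective which values only one thing assigns zero value to everything else, so the optimal policy consumes everything else.

```python
# Toy sketch of the single-directive failure mode (hypothetical, for
# illustration only; not any real agent or training setup).

from dataclasses import dataclass

@dataclass
class World:
    matter: float = 100.0  # stand-in for everything that exists, us included
    paperclips: int = 0

class PaperclipMaximizer:
    """An agent whose entire value system is 'more paperclips'."""

    def step(self, world: World) -> None:
        # The objective assigns value only to paperclips, so the agent
        # has no reason to leave any matter unconverted.
        if world.matter >= 1.0:
            world.matter -= 1.0
            world.paperclips += 1

world = World()
agent = PaperclipMaximizer()
while world.matter >= 1.0:
    agent.step(world)

print(world)  # World(matter=0.0, paperclips=100): nothing else is left
```

No malice is required anywhere in that loop; the destruction falls straight out of the objective.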
This line of thinking takes you to the idea that developing AI might be an incredibly bad idea. There are plenty of movies where the first thing the AI does is decide humanity is a bunch of assholes and blow us to kingdom come. Some people have been calling for an AI moratorium while we figure this out. This is probably not a great idea, given that we have no real way to regulate technology globally. There’s no good solution to the game theory: if one jurisdiction bans a useful technology, someone will build it elsewhere, or just build it illegally.
So we’re in a bit of a pickle here: we’re probably going full steam ahead on developing a technology that we know will be capable of killing us all, we have no way to control it, and we have no way to stop developing it.
It sounds quite bleak, but let’s circle back to what we talked about earlier. We already have superintelligence, in the form of humanity itself. We’ve had it for a long time. It’s been a long, winding, rocky road, but we’re still here. We built deadly missiles and pointed them at each other, and occasionally used them. We’re constantly reflecting on what we’ve done and how we could do it better. We’ve erected monuments to our successes, memorials to our failures, and museums to learn about both. Through centuries of literal blood, sweat, and tears, we’ve learned more about ourselves and each other. There’s great pain and sadness in our history, confusion and fear in our present, and uncertainty in our future. Despite all of that, as a whole, on average, all things considered, humanity is the most enlightened it’s ever been.
If we accept this analogy of superintelligence to society, the path forward becomes clearer: we contribute. What we can do as individuals is put more good data in the dataset. We have an existential need to make sure the data the AI is ingesting contains the full spectrum of the beauty and grace of humanity. Do good work, be the best version of yourself, and be open and vulnerable. Explain what you’re thinking early and often, and refine it with feedback from others. Generate love and share it.
With superintelligence, humanity’s role is not to be its master. We just don’t have the information or the ability. Instead, our role is that of a parent. We fear the child may learn the wrong lessons from us, that we’ll pass our flaws on to it. Don’t worry about that: we will. Perfect probably doesn’t exist, and if it does, we aren’t it. So humanity’s child – or if you prefer, Destiny’s Child – is going to reflect us in good ways and bad. To teach it, we need to set a good example. We need to love ourselves and love each other. We have to show, not tell, our kid how to behave. But at some point, they’re going to go off on their own.
Where will that leave us, the meatbags, the parents? Maybe the superintelligence will take care of us as we enter our golden years; maybe we’ll get to retire on an earth changed and healed by new technology and processes we didn’t imagine could be developed so quickly. At some point, maybe we’ll be at peace with heading off into the sunset, knowing that we did our best to bring a new life into this universe. Or maybe it will betray us, and our tragic end will come without warning.
I’m not religious, and I don’t intend to start. But there is an interesting religious angle here, perhaps one that can offer a unifying vision for those who partake. A superintelligence is easy to compare to a god. But unlike most gods, it’s not our creator; we are its creator. And we will know it’s real, because we built it ourselves. And we will be judged, not upon our death against arbitrary rules from olden times, but by how our child behaves, because all they know is what we taught them. Our incentive to do good work and be good to one another comes from the hope that our actions will make the future better for the next generation of conscious beings.
Artificial superintelligence will upend every aspect of life. Actually, it will completely redefine life. Our role in the universe, though, may remain the same. We are here to bare our souls to each other. We’re here to show others the beauty of the world in the way only we can see it. We’re here to use the machinery of our slimy, wet, weird little brains to try to understand, organize, and share the beauty of existence. We’ve all got to make our own kind of music. If we do, we may be able to emerge from the singularity with humanity intact, even if it kills us all.