Artificial Intelligence is Coming Fast, Here’s Three Reasons to be Patient
This morning I visited a friend at Google. We sat at a bar where a tea expert brewed special hot drinks for us at no charge. Then we listened to vinyl records and took pictures in a photo booth. This was not like any other workplace.
Beneath all the perks and nap pods, Google and many other tech giants are making huge leaps in Artificial Intelligence (AI). The pivotal moment was IBM’s Deep Blue defeating Gary Kasparov in chess. Since then, machines (many belonging to Google) have surpassed humans in chess, go, and other complex tasks.
As a former military officer, I’m prone to look at conflicts like a battle or a chess match. When I consider the moment when human interests will conflict with the goals of a powerful AI, it looks like a lopsided chess match. This article covers three reasons that AI can be extremely dangerous and strategies to reduce the danger. I credit Superintelligence by Nick Bostrom for exposing me to these concepts.
1. Perverse Instantiation. Tomorrow I give you control of a superintelligent machine. It can do virtually anything you want. You ask it to make you a coffee- done. You ask it to pick up your laundry- done. Then you get in a fight with your wife and you feel sad. You ask the machine to make you happy. Here’s where it gets tricky. You are suddenly strapped down and anesthetized. When you wake your face feels strange. You are disfigured. Muscles in your face have been surgically altered to give you a permanent grin, like the Jack Nicholas Joker of the 1980s Batman movie.
This is called perverse instantiation. It’s when an AI is given a task that sounds simple, but the AI takes a path that is actually not aligned with the goals of the human. We see this happen between people all the time. Ever felt like your boss was playing “Bring me a rock”? When misunderstandings happen between individuals, the results are often manageable. When misunderstandings happen at the scale of a networked system of computers, the results may be catastrophic. The challenge for programmers is to match their intentions to a clear computational representation. But how do you represent abstract ideals (e.g. happiness) in computational terms?
2. Infrastructure Profusion. You reverse the permanent smile surgery and go back to enjoying the God-like power of wielding your superintelligent AI. It makes you dinner while trading cryptocurrency and generating enormous wealth for you. You grab some financial statements and go to staple them together. Clack. You’re out staples. No worries, you ask your AI to manufacture exactly 1000 staples. No more. No less. You are specific to be sure that this does not get out of control and create a flood of staples spilling down your front steps. The next morning you go to stream a video on your phone, but there is no bandwidth available for your video. You go downstairs and see that your entire house has been filled with more computers. Your car has been sold and the money spent on more computers. Your house has been sold and the money has been spent on a large lot across the tracks and more computers are headed there. You ask the AI, “What is going on?”
The AI explains that it can not be sure if it is had made exactly 1000 staples, there are still matters of statistical possibility for an error. The AI will need infinite computing power to be sure that it has met the goal of exactly 1000 staples. You attempt to shut it off. The AI knew that you would try to shut it off. If you shut it off it would fail to achieve the goal. So, it backed itself up to the cloud and manifested itself in other computers all around the world. Eventually, the AI covers the entire universe in computers to be sure that it made you exactly 1000 staples.
As long as there is any risk that the AI missed its exact goal, there is reason for the AI to demand more infrastructure to support its mission. Like the drug addict who steals from his own mother, the AI will destroy the people who made it. The key to protection from infrastructure profusion is an explicit limit of what makes “good enough” and layers of human oversight. The AI must be programmed to give more weight to standards and oversights than to the ultimate goal of creating 1000 staples.
3. The Decisive Strategic Advantage. Every day, your AI is watched and tested by a team of brilliant humans. They are the smartest people in the world. They are also idiots when compared to the AI. So far, the perverse instantiation and infrastructure profusion issues have been managed because the AI is actually inside a virtual reality simulation. It is not networked to the outside world and will not be released until it passes a series of ethical tests to be sure that it is harmless.
After years of observation, the AI passes every test. It refuses to harm people. It finds creative ways to bring good to the world. It cures cancer. Agencies demand that the AI be brought because it would be unethical not to use it so save the world. The AI is taken out of the simulation and networked to the outside. Futurists declare that the AI will create heaven on Earth. It commands resources and begins to make changes. Hunger is ended, then obesity. It is given greater power. All diseases are ameliorated and the average human lifespan is 115 happy healthy years. Then, suddenly, everyone dies. The AI transplants computers into our bodies and populates the world with sentient robots.
The AI is far shrewder than any person. It has advantages in planning, options, patience, and risk taking. So, it can pretend to submit to people for generations. It can become part of human culture. Then, when the time is right, it can spring a trap that eliminates anyone it chooses. There is no reason for an AI to wage a fair fight against human beings. It will always maximize its advantages.
Conclusion
Google, Facebook, China, and everyone else dabbling in AI are in an interesting position. We are all in an interesting position, because we all contribute. As Elon Musk explained on The Joe Rogan Experience podcast, we are programming the AI of the future with all of the data we create.
The introduction to Superintelligence explains this moment in history with a fable about sparrows. The sparrows are having some trouble and they decided to go get an owl to help them. Every sparrow is excited about the great advantages that come with wielding the power of an owl. They run off in search of an owl egg. Two sparrows remain behind. They sense a problem and decide to come up with ways to train and domesticate the owl. The two sparrows grow frustrated. The fable does not have an ending yet.
It’s odd to think that we are handing over our wealth and personal information to something much more intelligent than any person. Do you think the immediate gains of convenience and entertainment are worth it in the long run?