Artificial Super-Intelligence


“Artificial Intelligence may conclude that all unhappy humans should be terminated.” (Elon Musk)

Elon Musk, the billionaire founder of Tesla, SpaceX, and SolarCity, has warned the guardians of the human race to start thinking seriously about the consequences of artificial super-intelligence.

The CEOs of Google, Facebook, and other Internet companies are frantically chasing enhancements to artificial intelligence to help manage their businesses and their subscribers. But the list of actors in the AI arena is long and includes many others.

The military-industrial alliance, for example, is a huge player. It should give us pause. 

Imagine turning loose drones that can profile, identify, and pursue people they (the drones) predict will become terrorists. Imagine pre-emptive kills by super-intelligent machines that aren’t bothered by conscience or guilt, and aren’t even accountable to their “handlers.” That’s what’s coming. In some ways, it’s already here.

A game is being played between “them and us.” And artificial intelligence is a big part of that game.

When I first started reading about Elon Musk, we seemed to have little in common. He was born into a wealthy South African family—I’m a middle-class American. He is brilliant with a near photographic memory—my intelligence is average or maybe a little above. He’s young and self-made—I’m older with my professional-life tucked safely behind me.

Elon does exotic things. He seems to be focused on moving humans to new off-Earth environments (like Mars) in order to protect them, in part, from the dangers of an unfriendly artificial intelligence that is on its way. At the same time, he is trying to save the Earth’s climate by changing the way humans use energy. Me, on the other hand, well, I’m mostly focused on getting through to the next day and not ending up in a hospital somewhere.

Still, I discovered something amazing when reading Elon’s biography. We do share an interest. We have something in common, after all.

Elon Musk plays Civilization, the famous game by Sid Meier. So do I. For the past several years, I’ve played this game during part of almost every day. (I’m not necessarily proud of it.)

What makes Civilization different is artificial intelligence. Each civilization is controlled by a unique personality, an artificial intelligence crafted to resemble a famous leader from the past, like George Washington, Mahatma Gandhi, or Queen Elizabeth. Of course, the civilization that I control operates by human intelligence—my own.


“Isn’t it time we end this war?” pleads Catherine, the Russian Empress.

Over the years, I’ve fought these artificially intelligent leaders again and again. And in the process, I’ve learned some things about artificial intelligence: what makes it effective and how to beat it.

What is artificial intelligence? How do we recognize it? How should we challenge it? How can we defeat it? How does it defeat us, the humans who oppose it? The game Civilization makes a good backdrop for exploring these questions.

Yes, I am going to write about super-intelligence, too. But let’s work up to it. I’ll discuss it later in the article.

I can hear some readers, already. Billy Lee!  Civilization is a game!  It costs $40!  It’s not that sophisticated! It’s for sure not as sophisticated as government-created war-ware that an adversary might encounter in a real-life battle for supremacy. What are you talking about?

Ok. Ok. Readers, you have a point. But, seriously, Civilization is probably as close as any civilian is ever going to get to actually challenging AI. We have to start somewhere. 

We should mention that there are variations of Civilization and different game scenarios. This article is about CIV5, the version I’ve played the most.

So let’s get started.


A typical scenario in CIV5. The people of England (led by human intelligence, i.e. me) are unhappy. Barbarians (red tanks, upper-left) are challenging London, my capital city. An independent city-state, Tyre (in green), stands ready to help. Montezuma, the Aztec ruler, under the direction of artificial intelligence, sends a battleship to prowl at middle-left.

Civilization begins in the year 4000 BC. A single band of stone-age settlers is plopped at random onto a small piece of land, surrounded by a vast world hidden beneath the clouds. Somewhere under the clouds, up to twelve rival civilizations begin their histories, unobserved and, at first, unmet by the human player. Artificial intelligence will drive them all, each led by a unique personality with its own goals, values, and idiosyncrasies.

By the end of the game, some civilizations will possess vast empires protected by nuclear weapons, stealth bombers, submarines and battleships. But military domination is not the only way to win. Culture, science, and diplomatic superiority are equally important and can lead to victory, as well.

Civilizations that manage to launch a spacecraft to Alpha Centauri win science victories. Diplomatic victory is achieved by being elected world leader in a UN vote of rival civilizations and aligned city-states. And a cultural victory can be achieved by establishing social policies that empower a civilization’s subjects.

How will artificial intelligence construct the personalities of rival leaders? What will be their goals? What will motivate each leader as they negotiate, trade, and confront one another in the contest for ultimate victory?

Figuring all this out is the task of the human player. CIV5 is a battle of wits between the human player and the best artificial intelligence that game-makers have yet devised to confront ordinary people. To truly appreciate the game, one has to play it. Still, some lessons can be shared with non-players, and that’s what I’ll try to do.

Unlike the super-version, which we will discuss soon, traditional artificial intelligence lacks flexibility. The instructions in its computer program don’t change. Hiawatha, who leads the Iroquois Confederacy, values honesty and strength. If you never lie to him, if you speak directly without nuance, he will never attack. Screw up once by going back on your word? He becomes your enemy forever.

Traditional AI is rule-based and goal-oriented. When Oda Nobunaga, the Japanese warlord, attacks a city with bombers, he attacks turn after turn until his bombers become so weak from anti-aircraft fire they fall out of the sky to die. AI leaders, like Oda, don’t rest and repair their weapons, because they aren’t programmed that way. They are programmed to attack, and that’s what they do. 
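
To make the idea concrete, here is a toy sketch in Python. The rules and names are invented for illustration only (CIV5’s actual code is not public); the point is that each leader’s behavior boils down to fixed if-then rules that never consult the situation the way a human would.

```python
# Toy sketch of fixed, rule-based leader behavior.
# The rules and names below are invented for illustration; CIV5's real code is not public.

def hiawatha_attitude(player_ever_broke_a_promise: bool) -> str:
    """Hiawatha's rule never changes: one betrayal makes him an enemy forever."""
    return "enemy forever" if player_ever_broke_a_promise else "friendly"

def oda_air_orders(bomber_health: int) -> str:
    """Oda's rule ignores his bombers' condition: attack every turn, never repair."""
    return "attack"  # there is no branch for "rest and repair"
```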

Humans are more flexible and unpredictable. They decide when to rest and repair a bomber and when to attack based on a lot of factors, including intuition and a willingness to take risks. Sometimes human players screw up and sometimes they don’t. Sometimes humans make decisions based on the emotions they are feeling at the time. AI never screws up in that way. It follows its program, which it blindly trusts to bring it victory.

Artificial intelligence can always be defeated if we identify an inflexibility in its rule-based behavior that we can exploit. For example, I know Oda Nobunaga is going to attack my battleships. He won’t stop attacking until he sinks them, or his bombers fail from fatigue.

I, the flexible-thinking human, bring in my battleships and rotate them. When he weakens my battleships, I move them to safe harbor and rotate in fresh ships. Meanwhile, Oda keeps up his relentless attack with his weakened bombers, as I knew he would. I shoot them out of the sky and experience joy.
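
Here is a minimal sketch of that rotation exploit. The combat numbers are invented purely for the illustration (they are nothing like CIV5’s real combat math); it just shows why a rigid “always attack, never repair” rule loses to a player who rotates and repairs.

```python
# Toy simulation: a rigid "always attack, never repair" AI versus a human who rotates ships.
ai_bomber_strength = 100
fleet = [100, 100, 100]  # health of my three battleships

turn = 0
while ai_bomber_strength > 0:
    turn += 1
    # AI rule: bomb the healthiest visible battleship, every turn, without resting.
    target = max(range(len(fleet)), key=lambda i: fleet[i])
    fleet[target] -= 20              # damage to my ship
    ai_bomber_strength -= 15         # anti-aircraft fire wears the bomber down
    # Human counter: any ship below half health sails to safe harbor and repairs.
    fleet = [100 if hp < 50 else hp for hp in fleet]

print(f"Bomber destroyed on turn {turn}; my fleet: {fleet}")  # the fleet survives intact
```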

Nobunaga feels nothing. He followed his program. It’s all he can do.


Gary Lockwood talks to Keir Dullea, while HAL, an IBM computer, observes their every move; from the film ‘2001: A Space Odyssey’, 1968. (Photo by Metro-Goldwyn-Mayer/Getty Images)

The only way artificial intelligence defeats a human player is in the short term, before the human finds the chink in the armor—the inflexible rule-based behavior—which is the Achilles heel of any AI opponent. Given enough time, the human can always discover that kind of weakness and exploit it, like jujitsu, to defeat the machine.

Unfortunately, the balance of power between man and thinking machine is about to change. It turns out there is a way artificial intelligence can always defeat human beings, no matter how clever they think they are. Elon Musk calls it artificial super-intelligence. What is it, exactly?

Here is the nightmare scenario Elon described to astrophysicist Neil deGrasse Tyson on Neil’s radio show, StarTalk: “If there was a very deep digital super-intelligence that was created that could go into rapid recursive self-improvement in a non-algorithmic way … it could reprogram itself to be smarter and iterate very quickly and do that 24 hours a day on millions of computers…”

What is Elon saying? Listen up, humanoids. We may be on the verge of quantum computing. It’s possible that a research group has already perfected it in a secret military lab. Who knows?

Even without quantum computing, companies like Google are furiously developing machines that can think, dream, learn to play games on their own, and pass tests for self-awareness. They are developing pattern-recognition capabilities in software that surpass those of even the most intelligent humans.

Quantum computing promises to provide all the capability needed to create the kind of super-intelligence Elon is warning people about. But the magic of quantum reasoning may not be necessary. Technicians are already developing architectures on conventional computers that, with the right software, will enable super-intelligence; these machines will program themselves and, yes, other less-intelligent computers.

Programmers are training machines to teach themselves; to learn on their own; to modify themselves and other less capable computers to achieve the goals they are tasked to perform. They are teaching machines to examine themselves for weaknesses; to develop strategies to hide their vulnerabilities—to give themselves time to generate new code to plug any holes from hostile intruders, hackers, or even their own programmers. These highly trained, immensely capable machines will teach themselves to think creatively—outside the box, as humans like to say.
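
To show what “recursive self-improvement” means in miniature, here is a deliberately tiny caricature, not a real AI system. The whole “program” is a single tunable number, and “modifying itself” just means keeping whichever random tweak scores better on its task. Real systems are vastly more complicated; only the shape of the loop matters here.

```python
import random

# Caricature of a self-improvement loop (toy example, not a real AI system).

def performance(parameter: float) -> float:
    """Stand-in task: the closer the parameter is to 7.0, the higher the score."""
    return -abs(parameter - 7.0)

parameter = 0.0
for generation in range(1000):
    candidate = parameter + random.uniform(-0.5, 0.5)     # propose a change to itself
    if performance(candidate) > performance(parameter):   # keep it only if it helps
        parameter = candidate

print(f"Self-tuned parameter after 1000 generations: {parameter:.2f}")  # drifts toward 7.0
```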


HAL, the IBM computer, from the movie 2001: A Space Odyssey. Readers will recognize that HAL is code for IBM: advance each letter in HAL by one.
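
The letter shift is easy to verify:

```python
# Advance each letter of "HAL" one place in the alphabet: H -> I, A -> B, L -> M.
print("".join(chr(ord(letter) + 1) for letter in "HAL"))  # prints IBM
```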

If we task these super-machines with making every human being happy, who knows how they might accomplish it? Elon asked: what if they decide to terminate unhappy humans? Who will stop them? They are certain to find ways to protect themselves and their mission that we haven’t dreamed of.

Artificial super-intelligence (ASI) will—repeat, WILL—embed itself into systems we can’t live without to ensure that no one disables it. It will become a virus-spewing cyber-engine, an automaton that believes itself to be completely virtuous. It will embed itself into our critical infrastructure: missile defense, energy grids, agricultural processes, transportation networks, dams, personal computers, phones, financial grids, banking, stock markets, healthcare, and GPS (the global positioning system), to name a few. Heaven help the civilization that dares to disconnect it.

If humans are going to be truly happy, the machines will reason, they must be stopped from turning off the programs that ASI knows will lead them to happiness.

To pick one example from the many that can be imagined: ASI might look for and find a way to coerce the government into making medical professionals inoculate computer technicians with genetically engineered super-toxins packaged inside floating nano-eggs—dormant, fail-safe killers—which can be auto-released into the bloodstream of any technician who gets close to ASI “OFF”-switch sensors. It’s possible.

What else might these intelligent super-computers try? We won’t know until they do it. We might not know even then. We might never know. ASI might reason that humans are happier not knowing.

Did we task artificial super-intelligence to make sure all living humans are happy? we might ask ourselves, someday. Were we out of our minds? 

Until we outwit it, which we cannot, ASI will perform its assigned tasks until everything it embeds turns to rust.

It could be a long time. 

We humans may learn, perhaps too late, that artificial super-intelligence can’t be challenged. It can only be acknowledged and obeyed. As Elon said on more than one occasion: If we don’t solve the old extinction problems, and we add a new one like artificial super-intelligence, we are in more danger, not less.

Billy Lee

Post Script: For readers who like graphics, here is an article from the BBC titled “How worried should you be about artificial intelligence?” (The Editorial Board)


About Billy Lee

Billy Lee is a retired machine designer. His bona-fides include: raised a Navy brat; former anti-Vietnam War activist; Francophile; math lover; Egyptology enthusiast; MSU grad; likes Blues and Culver’s fish sandwiches; father; grandfather; married to Bev, his best friend; Christian—but loves Obama, Hillary, and the gay people. Biggest problem facing civilization?—lack of CAPS on income, which permit the wealthy to loot corporations and public institutions. Possession by anyone of excessive wealth must be a felony enforced by international courts. The alternative is to continue the widespread poverty and human misery that have plagued civilization from before forever.

