Call AI something else. Please.
by Lev Tsitrin
I wonder why so much is written about AI. We are surrounded by machines, yet I do not think any of them gets nearly as much attention as AI does. And I wonder whether this imbalance in attention is due not so much to AI's actual power as to the name we chose to give it.
After all, the word “intelligence” signifies power. Those who are smart come out on top. Those who are uncannily smart come out on the very top. Hence, the very term “AI” conveys the fear of being taken over — by something we do not understand and may not be able to control (because we are only used to dealing with humans, but AI is “artificial” — a scary Frankenstein monster endowed with frightful power and the alien logic of a machine). Yet change the label to better reflect what it actually is — call it an “imitator of reason,” for instance — and maybe we would all start breathing a bit easier.
The New York Times’ op-ed titled “To See One of A.I.’s Greatest Dangers, Look to the Military” is as good an example as any of the hyperventilation over AI that is in the air: “What makes an arms race in artificial intelligence so frightening is that it shrinks the role of human judgment. … On paper, military and political leaders remain in control. They are ‘in the loop,’ as computer scientists like to say. But how should those looped-in leaders react if an A.I. system announces that an attack by the other side could be moments away and recommends a pre-emptive attack? Dare they ignore the output of the inscrutable black box that they spent hundreds of billions of dollars developing? If they push the button just because the A.I. tells them to, they are in the loop in name only. If they ignore it on a hunch, the consequences could be just as bad.”
Uh oh! The apocalypse is coming! But what if that same warning came not from the mysteriously powerful AI, but from IR — a mere “imitator of reason” machine? Would that change the dilemma for the human decision-makers? Would they still be “in the loop in name only”?
Let’s see. Per Wikipedia, “On 26 September 1983, three weeks after the Soviet military had shot down Korean Air Lines Flight 007, Petrov was the duty officer at the command center for the Oko nuclear early-warning system when the system reported that a missile had been launched from the United States, followed by up to five more. Petrov judged the reports to be a false alarm. His subsequent decision to disobey orders, against Soviet military protocol, is credited with having prevented an erroneous retaliatory nuclear attack on the United States and its NATO allies that could have resulted in a large-scale nuclear war which could have wiped out half of the population of the countries involved. An investigation later confirmed that the Soviet satellite warning system had indeed malfunctioned. Because of his decision not to launch a retaliatory nuclear strike amid this incident, Petrov is often credited as having ‘saved the world.’”
Now, the “Oko” system was not called “AI” — it was a machine named with the archaic Russian word for “eye.” But what is the difference? AI is a machine; “Oko” is a machine — a machine that, as the investigation showed, was fooled “by a rare alignment of sunlight on high-altitude clouds above North Dakota and the Molniya [which is Russian for “lightning”] orbits of the satellites, an error later corrected by cross-referencing a geostationary satellite.”
What saved civilization from destruction on that day? Per the same article, Petrov reasoned that he “had been told a US strike would be all-out, so five missiles seemed an illogical start; that the launch detection system was new and, in his view, not yet wholly trustworthy; that the message passed through 30 layers of verification too quickly; and that ground radar failed to pick up corroborating evidence, even after minutes of delay.” And yet part of it was pure luck: “his civilian training helped him make the right decision. He said that his colleagues were all professional soldiers with purely military training and, following instructions, would have reported a missile launch if they had been on his shift.” Even that would not have helped if there had been more clouds, creating the impression of more launches — enough to correspond to an “all-out” strike. The story could have been very different — and cataclysmic indeed.
So what is the difference between what happened on 26 September 1983 and the frightful scenario described by the New York Times’ op-ed columnist? Nothing — except the labeling: back then, the drama centered on a machine called “Oko”; in the future the New York Times imagines, it centers on a machine to which we chose to give the sinister name “AI.”
Yet it is only a word, and nothing else. A machine is still a machine. Machines may malfunction and humans may make errors, irrespective of what the machines are called. “AI” or “Oko” — what’s the difference? After all, it is not because of the machines that we are at each other’s throats — it is because of what we humans are. Machines kill — but it is not they who decide to kill. We do.
This is not to take machine malfunction lightly. Machines should be built soundly and programmed well so they do not unintentionally hurt anyone. After all, the situation that faced humanity on that day forty years ago was due to an imperfectly designed machine, and a later adjustment corrected its flaw.
Bottom line — any machine being only a machine, I again wonder whether we make such a fuss over AI only because of the name we chose to give that particular machine. Let’s rename it something else — “imitator of reason,” IR, being a name as good as any — and stop scaring ourselves. Yes, it is important to prevent machine malfunction — but we need to worry more about the nonsense that infects the minds of humans than about the machines they try to use to advance their nefarious plans.