
View From the Precipice: Archive
January 2025

AI and Anger Management

by Roman Sympos
Before tackling the subject of Artificial Intelligence and whether it has any anger to manage, I’d like to begin with a Christmas story.
During the holidays our house overflowed with family: grandparents, kids, kids-in-law, and grand-kids. It was crowded, chaotic, and merry, filled with laughter and memories. And spills. And wrapping paper.
My six-year-old grandson had been given a Lego “Passenger Airplane”—all 913 pieces of it. “Some Assembly Required” would be an understatement. More like “Nothing But.” The box said, “Ages 7+,” but Emmet is precocious, like all grandchildren, so speed-scanning the instruction booklet was nothing to him. Meanwhile, Grampa lagged a step or two behind at every turn of the page. (I now suspect Emmet had done this kind of thing before. And let’s not forget, he has smaller fingers.)
Emmet may be ahead of his cohort, and me, in Legolect, but he’s every bit a six-year-old emotionally. With the advantage of almost seven decades of handling, and mishandling, frustration, Grampa felt pretty smug dispensing sage advice on how to deal with missing pieces (“let’s look for it”) or ambiguous diagrams (“let’s try this”).
He was good for something, after all.
Until we reached the last piece—the swing-out door. It wouldn’t stay in place and kept coming loose. As soon as we got it operational, an engine fell off. In mid-flight! When we turned the plane over to re-attach it, the tail wings came off and had to be put back on.
Apparently, Flight 913 was designed for show, not play.
Emmet was inconsolable, but he handled his frustration and disappointment better than I expected. Some whining, some sulking, some tears. Eventually, he settled for a tarmac tableau: flight crew, flight attendants, and ground crew stationed like courtiers around their monarch. My six-year-old self would have sent that misbegotten spawn of Boeing on a real flight across the room. No survivors.
I like to think that, over the years, I’ve become more mature than your standard six-year-old. I don’t hurl things across the room when they don’t do what I want, although, to be fair, most of the inanimate things that make me angry or disappointed these days aren’t hurlable, or even material. Nonetheless, even at age 75, I share with six-year-old Emmet a tendency to attribute agency to them.
This tendency is clearly evident in how he and I talk to or about them.
Here’s Emmet holding forth on his Lego airliner (weeping elided):
“It won’t stay on!”
“It won’t go!”
“It keeps coming loose!”
Here’s me, when for the fourth time in a row Prime Video crashes in midflight (curses elided):
“It won’t stay on!”
“It won’t go!”
“The closed captions keep disappearing!”
Perhaps this is just a fluke of the English language. All active verbs imply agency, don’t they?
No, not all. How about “fall”? Or “hear”? Or “feel,” when used to describe an emotion or bodily state? There are exceptions.
But as any dog owner knows, “stay” isn’t one of them, nor is “come.” Nor, for that matter, is “keep,” at least when used with a direct object. You can’t “keep” anything, or keep “at” anything, without willingly doing so.
And there’s certainly no point in cursing inanimate things, is there? Yet we do it all the time.
The preponderance of verbs in English that imply agency leads me to think that my close relationship with Prime Video (I talk to it often) and Emmet’s with Lego’s Passenger Airplane (Kit # 60367) are rooted in a belief deeply embedded in the human genome and coeval with the appearance of language among hominids: the belief that all things, not just human beings, animals, and plants, are animated. Because this belief is utterly irreconcilable with our modern assumption that the universe is filled with inanimate things lacking agency and governed solely by laws reducible to formulae, algorithms, and probabilities, we relegate it to the domain of the irrational. It’s a primitive reflex, a childish retrogression unworthy of a mature, rational, self-aware adult.
Before the birth of the modern sciences and our present faith in the impersonal “laws of nature,” however, we lived in a world of things animated by spirits, just as we ourselves were animated, presumably, by our souls. The more powerful of these spirits—the gods and goddesses, the archangels—got to boss around the less powerful—the nymphs, the naiads, the satyrs, an occasional genius loci or “spirit of the place,” as well as trouble-makers like Satan. Highest in the hierarchy (of intelligent beings, anyway) were the most powerful of the powerful, like Jove or Jupiter or Yahweh or the Christian God. They were in charge of wide-screen events like thunderbolts, earthquakes, storms at sea, and plagues. The lowest of the low were, well, mortals like us.
This animated universe made sense. The only moving objects we had any first-hand knowledge of were ourselves, and we knew, from self-experience, that autonomous movement was always accompanied by desire and intention. What else, then, could best explain the movements of things that were not us?
Emmet’s perverse airliner, like my perverse Prime Video, still inhabits a world like this, a universe cross-hatched with intentions and controlled by invisible agents, and when we talk to or about inanimate things as though they were intentional and self-aware, we take a step into that magical world.
Many of my readers already live there. They believe in a Supreme Being for whom even the fall of a sparrow doesn’t go unnoticed, although they may draw the line at the fall of a Lego jet engine. For some, even such a trivial event may qualify as a Divine Teaching Moment, especially if their world includes devils and angels.
As for my fellow non-believers, they must be shaking their heads at the spectacle of a grown man of seventy-five behaving like a six-year-old. But be honest with me, and yourselves, fellow infidels. How often have you felt thankful for something—some windfall, some near-miss, or just the tenor and direction of your life in general—without stopping to consider you have no one to thank? A common ritual across our great land on Thanksgiving Day is to join hands around the table and take turns saying what we’re grateful for. Even atheists do it.
“Grateful”? To whom?
“Lucky Day,” “Wheel of Fortune Day”—these are not among the national holidays.
Or, to take another common example, how often have you uttered the words, “I guess it was meant to be,” when what you really had in mind was, “What a coincidence!”
“Meant?” By whom?
Even the most eminent scientists succumb to this primitive reflex. Try as he might, Charles Darwin could not expunge the phrase “natural selection” from On the Origin of Species, and he continued to use it throughout his career, even while strenuously arguing that no actual “selecting” was going on.
Last time I checked, “selecting” was still an intentional and purposive act, which evolution decidedly is not, unless you adhere to a belief in Intelligent Design (see “Fall of a Sparrow,” above). Evolution as Darwin described it consists of an inconceivably long and random series of accidents eventuating in distinct groups of organisms that just happen to share certain features. It is an emergent, not intentional, process.
Now, if it’s this easy for hard-nosed empiricists like me (let alone Charles Darwin trying to explain evolution) to behave like six-year-olds when an LG television willfully disregards our commands, what chance does the human race have against AI?
I ask this question because for the last seventy-five years some of our best scientific minds have been under the misapprehension that computers can think, in much the same way that Darwin thought (although he knew better) that Nature could “select.” In fact, this new year marks the 75th anniversary of the publication of Alan Turing’s ground-breaking essay, “Computing Machinery and Intelligence” (https://courses.cs.umbc.edu/471/papers/turing.pdf), in which the author proposes, in his first sentence, “to consider the question, ‘Can machines think?’” The answer he arrives at is “Not yet, but eventually,” and we’ll know they can when some piece of computing machinery passes “The Turing Test.”
I haven’t time to go into the details here, but Turing’s test, aka “the imitation game,” boils down to this: we’ll know a machine can think like a human being when an “average interrogator”—a human one—can no longer tell the difference between the two.
Well, we’re already there, aren’t we? We can no longer tell the difference between AI-generated images and genuine photographs, between an AI psychotherapist and an MD who went to Johns Hopkins, or between an AI chess player and some preteen in Murmansk. We live in an Alice in Wonderland world where thinking and magical thinking are often indistinguishable, and where the agency of inanimate objects, far from being dismissed as an atavistic superstition, has become part of our understanding of what the non-human world is capable of. The only difference between Emmet’s Lego airplane and my Prime Video on the one hand and the photograph, the AI shrink, and the digital chess player on the other is that Emmet and I know we’re acting out. With the photo, the shrink, and the chess player, we can’t tell if we are.
How does any of this prove that machines can think?
If it walks like a duck and talks like a duck . . . .
Maybe it’s a duck puppet.
You can see already how Turing has stacked the deck he’s dealt us, can’t you? Let’s begin with “average interrogator.” Of what age? An average one-year-old will easily mistake a duck puppet for an animate being. How is this any different from an average English professor mistaking an essay written by ChatGPT for one written by one of their students? The infant and the English professor have both been fooled.
In his wonderful essay, “Maelzel’s Chess-Player,” published in 1836 at the height of public interest in automata of all sorts—dancing dolls, mind-reading magicians, ducks that pooped as well as quacked—Edgar Allan Poe pulled back the curtain on the early nineteenth-century equivalent of ChatGPT, or rather, IBM’s Deep Blue: an automaton that could play chess!
Maelzel’s Chess Player consisted of a mechanical figure dressed as a Turk and seated at a rectangular box with a chess board on top. As the Turk moved pieces around while playing a game with a human opponent, spectators could hear the whir of machinery inside the box, which, before the demonstration began, had been exposed to view on all sides to show there was no human being hiding inside. Poe explained that the undeviating routine by which Maelzel displayed this machinery was precisely choreographed to allow the human chess player inside to move about in such a manner as to remain hidden from view from any exposed side. “It is quite certain that the operations of the Automaton are regulated by mind, and by nothing else,” wrote Poe [his italics].
He meant, by the way, a human mind.
In Turing's "imitation game," there's always a "Maelzel," someone who's set up (or perhaps even designed) the "computational machinery" being "interrogated" and knows it's a machine. If not, how could the "game" provide any valid "test" results? No one would know if the "average interrogator" was right or wrong.
In short, the purpose of Turing's "game," as of Maelzel's, is to deceive a human being into mistaking a machine for another human being. So how is Maelzel’s Chess Player any different from ChatGPT, Deep Blue, or Prime Video’s online “digital assistant”? As long as they succeed in getting a real, thinking human being to mistake them for another real, thinking human being, they are all the same. In 1836, Maelzel interposed a mechanical arm between a man inside a rectangular box and a Turkish puppet playing chess. In 1997, the designers of Deep Blue interposed "computational machinery" consisting of two circuit boards with 16 specially designed chess chips capable of processing 11.38 billion floating-point operations per second between themselves and Garry Kasparov. Had IBM not staged and publicized this version of the "imitation game" to toot their horn, Kasparov might have been fooled into thinking he was playing a grandmaster like himself.
Or maybe a precocious preteen from Murmansk.
In short, Turing has given an epistemological answer (how we can know) to what is, fundamentally, an ontological question: “What is thinking?” Or perhaps more specifically, “What is a thinking human being?”
You can see his sleight of hand unfolding right in front of you as he proceeds with his opening paragraph. Affirming the importance of beginning with “definitions of the meaning of the terms ‘machine’ and ‘think,’” he instead sets up a straw man by showing how “dangerous” and “absurd” it would be to ask ordinary people how they use these words, as in a “Gallup poll.”
“Dangerous” and “absurd” indeed! And no serious philosopher would propose doing so. Martin Heidegger, who spent most of his life thinking about thinking and made what many consider the most important contributions on the subject in the history of philosophy, got along very well without asking a single citizen of Freiburg how they’d define “machine” or “think.”
But instead of thinking about thinking, or machinery, Turing changes the subject. “I shall replace the question by another,” he blithely announces, “which is closely related to it and is expressed in relatively unambiguous words.” This is where he introduces “the imitation game.”
“Closely” may count in darts, but not in philosophy or, for that matter, computer science. And in what universe can a game in which one person tries to deceive another by getting them to mistake a “computing machine” for another human being possibly tell us anything about whether that machine can in fact “think”?
As you can tell, I’m making an effort to manage my . . . well, not anger. Let’s say, my impatience.
We’ll return to this subject in a later “View,” but until then, I leave my readers with these questions to ponder.
Does AI ever have to “manage” its anger? Does it have any anger to manage?
Does AI ever get bored, or impatient, or frustrated?
Philip K. Dick famously asked, “Do androids dream of electric sheep?” We know they often “sleep.” Is that when they dream? If not then, when?
Can AI pretend? If so, does it know when it’s pretending?
Does AI ever have to overcome fear? If so, how does it succeed?
We know the latest versions of AI can learn. Is it ever curious? You know, not because someone asks it to find something out. I mean, just for fun?
I look forward to reading your answers on the “Comments” page.