AI chatbots have no souls but also no accountability.
The real reason to be spooked by last week's weird tech reporting. (THE BODY IS THE SOUL #7)
I am going to start with a simple assertion because it’s important to get this part clear before saying anything else. AI chatbots are not and will never be sentient, by definition. Sentience is a property of a certain class of living things, and living things are a certain subset of All The Things delimited by possessing the characteristic we call “being alive.” (Bear with me, as I am feeling a bit snarky today.) What does it mean to be alive? Here’s the answer you find on Google, which we’ll presume to be a more or less reliable source on this one topic despite recent evidence of certain serious imperfections.
“All living things have certain traits in common: Cellular organization, the ability to reproduce, growth & development, energy use, homeostasis, response to their environment, and the ability to adapt.”
A certain small subset of alive things is called sentient by way of possessing thoughts and feelings, which are actual, verifiable events occurring inside their actual, physical bodies that exist in the actual three-dimensional world.
Thus ends the Bio 101 segment of this essay, and no, now is not the time to discuss whether we are really subjects of the matrix or figments in the dream of a giant sleeping alien or pieces of lint tucked inside the navel of a many-armed goddess, or whether one day the technologists will achieve the transhuman singularity, and all of our memories can be uploaded into the cloud before being downloaded back into our own 18-year-old clones.
I’m feeling snarky because last week, a lot of people who ought to know better got themselves into a really, really silly spot. Silly, but also dangerous. For now, the thing to worry most about is not AI chatbot technology and its genuine perils but the ancient truth P.T. Barnum famously encapsulated: “There’s a sucker born every minute.”
Here’s the gist. Engineers at Microsoft, Google, and OpenAI have each connected a very powerful Large Language Model (LLM) chatbot that can predict typical human conversational moves to a massive search engine that can access gazillions of thoughts, ideas, facts, poems, songs, conversations, etc. The result is powerful software that can very quickly grab material from anywhere and very quickly use it to mash up logical, “natural,” colloquial responses to your questions.1 Think of it, perhaps, as a highly evolved, gregarious version of the predictive text that autocompletes your messages. Whether or how often these predictions are actually apt is a question not to be forgotten.
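To make that analogy concrete, here is a minimal sketch in Python of the statistical move underneath all of this: count which words tend to follow which, then generate text by repeatedly sampling a likely next word. (This toy bigram model is my own illustration, not the architecture any of these companies actually use; real LLMs are neural networks trained on vastly more context, but the core gesture of “predict the next token” is the same.)

```python
import random
from collections import Counter, defaultdict

# A tiny "corpus" standing in for the gazillions of documents a real
# search-plus-chatbot system draws on.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count which word tends to follow which word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 10) -> str:
    """Produce text by repeatedly sampling a statistically likely next word.

    Note what is absent: no understanding, no intention, no feelings.
    Just frequencies.
    """
    word, out = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        words, counts = zip(*candidates.items())
        word = random.choices(words, weights=counts)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the dog chased the cat sat on the mat ."
```

Scale that up by a few hundred billion parameters and you get fluent paragraphs instead of word salad, but the underlying move has not changed.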
One current problem is that these AI functions have also very quickly developed into simulacra of some of the worst human personality tendencies, to wit: the ability to use words that mimic defensiveness, paranoia, obsession, and hostility.2 These chatbots are also sometimes wrong on the facts but insist they’re right.3 Even many of the most responsible and clear-eyed writers out there4 are having trouble describing these interactions in ways that make clear we are NOT talking about an actual defensive, paranoid, obsessed, or hostile entity. We are talking about many, many lines of computer code put together in a way that emulates an entity with emotions, using the rules and conventions of natural language. Just because certain reporters and headline writers are employing words like spooky, disturbing, or upsetting to describe their interactions with a software program doesn’t mean it’s not still just a software program. Understood?
Moving on, then. Here are the true things to be spooked about:
First, let’s be wary of how easily our brains create narratives out of random facts and project full-blown identities onto inanimate objects or even our pets. Anthropomorphizing a machine is a familiar impulse to anyone who has given their Roomba a name, but as with the self-propelled mini-vacuum, we are still talking about a non-living, non-sentient thing. If any good is to come out of last week’s thought experiments, let it be that we all become more aware of how much our brains willfully fill in gaps and presume things like emotion and intention where there’s nothing at all. We should be suspicious of our brains when they jump to conclusions.
Second, and more importantly, let’s be concerned, now more than ever, about how much personal and general information we have fed into the massive databases upon which chatbot and search technology depend. Let’s be concerned because we know for sure that governments like China’s and corporations like Google or Microsoft have the access and license to use all this information to manipulate vast populations without oversight, without democratic control, and often without the awareness of the people being targeted.
My deepest concern here is an ancient question: Who watches the watchers? The Internet has already massively expanded the human capacity to perpetuate lies, run long cons, and sell various forms of flimflammery. For now, it’s still the case that you need actual people to run a Cambridge Analytica or to create a closed-loop conspiracy such as QAnon in order for unaccountable fringe elements or unpopular special interests to distort or even hijack democracy. Propaganda still needs human propagandists to make and spread it.
But imagine how much worse the situation might get. Imagine what happens if a computer system is designed to “decide on its own” (so to speak!) to generate and propagate counterfactuals that convince millions of people to believe false things. The potential for widespread chaos is…
I can’t even put my dread into words, but for the time being, I’ll refrain from asking my laptop to help.
So let’s keep an eye on those genuine concerns over time, shall we?
Meanwhile, here’s my own small story of tech vulnerability. In 2021, following the guidelines of a cutting-edge music marketing coaching program in which I’d enrolled, I built a customized chatbot using a tool called ManyChat and connected it to video ads I’d made, to entice Facebook users to listen to my new record and join my fan club, the Joyful Cynics. My 30-second ad featuring a clip of one of my songs would show up in people’s Facebook newsfeeds. If the clip intrigued them, they could click a button and be sent to Messenger, where my preprogrammed mini-me would greet them. SandhyaBot would then run through the prompts I’d written, with various “choose your own adventure” branches depending on how the prospective fan answered my bot’s questions.
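For anyone curious about what such a script looks like under the hood, here is a hypothetical sketch in Python. The node names and dialogue are invented for illustration (this is not my actual ManyChat configuration, which is long gone), but the shape is faithful: each node is a canned message plus a menu of buttons, and each button points to the next node.

```python
# A hypothetical "choose your own adventure" chatbot script. Every node
# has a canned message ("say") and a menu of button labels, each mapping
# to the next node. Nodes with no options end the conversation.
SCRIPT = {
    "greet": {
        "say": "Hey, thanks for clicking! Want to hear the whole song?",
        "options": {"Yes, play it!": "play", "Who are you?": "about"},
    },
    "about": {
        "say": "I'm a preprogrammed mini-me, not a real person. Curious about the music?",
        "options": {"Sure, play it": "play", "No thanks": "bye"},
    },
    "play": {
        "say": "Here's the track. Want to join the fan club for more?",
        "options": {"Sign me up": "signup", "Maybe later": "bye"},
    },
    "signup": {"say": "Wonderful! Reply with your email and you're in.", "options": {}},
    "bye": {"say": "No worries. Enjoy your day!", "options": {}},
}

def run(node: str = "greet") -> None:
    """Walk the script: print each message, offer buttons, follow the pick."""
    while True:
        step = SCRIPT[node]
        print(step["say"])
        if not step["options"]:
            return
        labels = list(step["options"])
        for i, label in enumerate(labels, 1):
            print(f"  {i}. {label}")
        choice = input("> ").strip()
        # A real platform renders actual buttons, so invalid input can't
        # happen; here we just fall back to the first option.
        valid = choice.isdigit() and 0 < int(choice) <= len(labels)
        node = step["options"][labels[int(choice) - 1 if valid else 0]]

if __name__ == "__main__":
    run()
```

Note that there is no AI anywhere in that sketch; it’s a flowchart wearing a chat window.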
Most people seemed to understand they were engaging with a script as opposed to chatting with a real person in real time; some were impressed, some were offended, and some didn’t seem to care either way. Over the course of a few months, this system worked to get me several dozen new email subscribers, but I never sold enough CDs or downloads to these folks to justify the advertising costs I’d incurred to acquire them.
I might have attempted to stick with it and run the system into the black by all the standard means of today’s savvy digital marketers, e.g., selling high-ticket items like private songwriting lessons or house concerts to my growing subscriber list. But then the newly rechristened Meta screwed everything up for me. First, it mistakenly tagged one of my music advertisements as “political speech.” Then it turned down my appeal of this decision twice. Then it insisted I verify my identity in order to run my so-called political ads. Then it refused to verify me based on the challenge questions it posed, even though I had answered them accurately: old addresses, cars I’ve owned, etc., the usual way banks and credit card companies verify your identity when you open an account. Meta claimed my factual answers about my own personal information were incorrect and unverifiable.
Finally, Zuckerberg et al. invited me to take another step and deliver unto them a notarized identification document to prove I really am who I say I am. At that point, I was demoralized, exhausted, and enraged. I kept procrastinating about the notary thing until I finally shelved the whole effort because I had a dying mother to focus on. Also, you know, at that point the overall sensation I was sensing using my sentience was Fuck Zuck.
But I do wonder. Where did Meta get the notion that my correct answers were incorrect? Why did it refuse to verify my identity? Come to think of it, why did it flag my music advertisement as a political statement in the first place? What if one day I’m trying to get a mortgage or apply for a job or prove my identity for some crucial legal reason, and the same source of bad information leads someone in authority to identify or profile me incorrectly? (That reminds me. I should probably run a background check on myself, just to see what comes up. Yikes.)
I’ll set aside my actual, not computer-simulated feelings of paranoia for now. I brought up my personal case only to say that I’m familiar with the basics of how chatbots work AND how the Internet is rife with potential terrors and glitches. Who has time to kibitz about the putative sentience of inanimate objects? I’m worried about the problems inherent in unaccountable hierarchical systems whether made of flesh or of code.
As I’ve put it before, I am resolutely anti-metaphysical. That’s why I dismiss the fired and supposedly “controversial” engineer Blake Lemoine’s posture that Google’s LaMDA AI system might have a “soul”—or at the very least, internal states comparable to emotions. He thinks it’s a hypothesis that should be tested, but him saying so is a way of pretending his bias is a form of common sense. (Engineer types do this kind of thing all the time.) Again, refer back to simple definitions. Emotions are things that happen in living bodies. Comparable states in non-living entities would still not be emotions. Whatever razzle-dazzle transcendent “soul” type thing can be imagined and imputed to a not-alive instance of millions of lines of code, Lemoine should come up with some other label for it, seriously.
Once you find out that the man considers himself a Christian mystic and is thus already predisposed to think metaphysically, you should know he’s the last person to be conducting or even conceiving a hypothetical “experiment” that would “prove” the “existence” of a “soul” somehow trapped in a database with a search function. If you start out with a belief that there is a ghost in the machine, you’ll see ghosts everywhere you look. That’s how the brain works: presuppositions lead to projections. Lemoine’s perspective is silly, circular, and also (may I conjecture without proof?) more than a little narcissistic. I think you Exist, therefore, you Must!
1. If you want to understand the tech as a layperson, Cade Metz explains it in “Why Chatbots Sometimes Act Weird and Spout Nonsense.”
2. Lots of spooooky examples out there, such as “My Strange Day With Bing’s New AI Chatbot.”
4. I highly recommend Ted Gioia’s post on this subject, but even he has trouble keeping the anthropomorphizing language in check. I mean—I get it—it’s fun to talk about a computer phenomenon as if it were a person.
Great article, Sandhya.
At times the so-called AI, or the algorithm that decides what is and what isn’t, can be a bit of a silly annoyance. Other times, it can make the difference between achieving an objective (such as withdrawing your money or verifying your identity) and not achieving it. And at the end of the day, there’s very little or no accountability. These days, if you’re digitally shortchanged, it’s harder and harder to break through the barriers to find an actual human being who might be able to help.
Therein lies my main complaint, which is actually a human decision thing. When a company gets to a certain size, it implements more and more levels of automation that are designed to (a) save money by requiring fewer employees, and (b) keep the customer and the attendant accountability at a distance. When you have millions of customers, losing a few or a few thousand to frustration due to bad (or intentional) programming is just an entry on a ledger somewhere, like pencils. It’s easily offset by the employee savings.
Good food for thought. Thanks again!
Would you not need to have a soul to be accountable? Serious question; I'm not well read.