Hallucination Nation – Part I: Three Words Do Us Part
Does AI dream of electric horses?
Lector cave… it is not recommended to read this post just prior to sleep. The author cannot be held responsible for the nature or quality of the resulting dreams.
“I see a cave with purple and bronze textures on the walls.”
“Wow really. Ok how about now?”
“Mmm now it’s green, like moss or something.”
I was lying on the chiropractic table, face up. She was pressing one thumb into the side of my ribs, and the other thumb somewhere along my leg. Each time a different set of pressure points—my head, the other leg, an arm, the hip. Each time, it triggered an image—as if she were playing me like a piano of color and texture. She was a graduate student at an acupressure institute in Los Angeles; I was a young screenwriter working production gigs on the side while I chased my big break. I had volunteered to be her guinea pig—I’d do anything for a free massage, and acupressure was the next best thing.
Except neither of us expected there’d be a light show. She’d never seen anything—anyone like me, apparently. She forgot all about what she was supposed to be doing and started trying out different pairs of pressure points to see what kind of visions each pair would trigger. We had a lot of fun that afternoon.
This wasn’t my first hallucinogenic trip.
Disclaimer: I have never, in my entire life, taken any mind-altering drugs, natural or synthetic, for any reason or purpose whatsoever. I have never even smoked a cigarette, much less weed or anything smokable. I’ve had a BBQ grill blow a gas flare in my face and my eyelashes singed by a candle, but that’s pretty much it. I still don’t know where the hallucinations came from.
A few years prior, I had moved to France for my junior year abroad. It was there that I started hallucinating these… images, these visions of color and texture. The most striking one was a large tunnel of fabric… a gigantic piece of white fabric with large red polka dots, swirling and twisting itself into a tube. It was even properly shaded, with light at the end of the tunnel (I guess my subconscious likes puns). The other visions were various textures and colors that would appear randomly — not every day, but enough throughout my year abroad that I certainly took notice. They weren’t dreams; they popped up randomly when I was fully awake and going about my day. My French friends thought it was all super-chouette, as they loved to say.
It did occur to me to maybe go get a medical opinion, but I didn’t have the money to get an MRI done or see an expert (too bad I didn’t know
back then). The visions were, well, visual, not auditory (as in, those voices in your head), didn't interfere with my daily life, and I certainly didn't hurt myself—or anyone else—because of them. So, I left it alone. I figured it was my subconscious mind welcoming me back to Europe, the continent I had left as a child when my parents emigrated to the U.S. Either that or a really overactive subliminal imagination.

The experience with the acupressure student in Los Angeles was the last time I had these kinds of visions, and pretty much the last time I thought in any substantive way about the action of hallucinating.
Have you ever hallucinated anything? Please share in the comments!
The prodigal dream
How ironic it is, nearly three decades later, that hallucination comes knocking on my door again. Not in my head, this time, but from the outside. Peeking into the windows, slipping letters under the door, trying to convince me it’s different now. It’s all grown up and it’s got its own place. It has a serious role to play in society, a real job.
Dear reader, meet the AI hallucination.
Most of us have by now come across this term in the media and various Substacks. We have been made to understand that a hallucination in the context of chat bots like Google’s Bard or OpenAI’s ChatGPT is a response to a query or prompt that contains falsehoods, inaccuracies, or content that was created without any basis in factual information. There’s even a Wikipedia entry for it—it defines an AI hallucination as “a confident response by an AI that cannot be grounded in any of its training data.”
The very first time I saw the word hallucinate in an article about ChatGPT, I closed my eyes, cringed, and screamed Nooooo! silently inside. I knew it was too late to stop the anthropomorphism tsunami that had already started. And I could see the onslaught of inaccuracies and false information that would be flash-excused, en masse, by the public—faster than DALL•E could generate an image of melting clocks in the style of Dalí.
WARNING: dictionary definitions straight ahead 😱
Let’s revisit the basic definition of hallucination:
Hallucination
/hə-ˌlü-sə-ˈnā-shən /
noun
A sensory experience of something that does not exist outside the mind, caused by various physical and mental disorders,1 or by reaction to certain toxic substances,2 and usually manifested as visual or auditory images.
The sensation caused by a hallucinatory condition or the object or scene visualized.
A false notion, belief, or impression; illusion; delusion.
As you can see, the two definitions of the term hallucination are rather different. The dictionary definition is a “sensory experience,” a “sensation,” a perception of something that doesn’t exist. This requires some level of consciousness, awareness, a living organism with the capacity to feel and experience said hallucinations. The Wikipedia definition claims that an AI hallucination is merely a “response,” or output that is not grounded in facts or accuracy. Ok, so technically not a hallucination.
We can debate the evolution of language all you like, but let’s be very clear-headed about the selection of this particular word to describe the hiccups our AI friends are having. It is yet another instance of anthropomorphism, a tendency turned trend (and yes, I did just anthropomorphize AI in the previous sentence, intentionally). If you step back a bit from the overhyped media landscape, you’ll see article upon TV program upon YouTube video upon podcast upon more articles/TV shows/videos/podcasts about “interviews” and “conversations” with AI, as if it were some sort of sentient celebrity being. How is it we never talk about Adobe’s Photoshop or even Grammarly3 this way? Aren’t those also digital tools for creators?
Ya, I know, it’s because Photoshop and Grammarly don’t talk to us and they’re not buying us drinks. Yet.
A more accurate term for what is taking place behind the chat bot’s front end would be confabulation. AI researcher and author
used the term recently in one of his on-air interviews4 (kisses for that Gary! 💋). Let's take a look:

Confabulation
/ kən-ˌfa-byə-ˈlā-shən /
noun
The replacement of a gap in a person's memory by a falsification that they believe to be true.
According to the National Institutes of Health’s Library of Medicine, a confabulation is a “neuropsychiatric disorder wherein a patient generates a false memory without the intention of deceit.”5
Note the critical difference here, which is the conviction or assumption that the thing in question—whether image or information—is, or is not, real. In the case of a hallucination, the mind knows, on some level, that what it’s seeing isn’t really there. In the case of a confabulation, there is a gap in memory that the mind fills in with an idea or a memory it believes to be true.
If an AI system were to truly hallucinate, it would create an image or sound completely of its own volition, rather than a response to an external prompt. Anthropomorphic fantasies aside, when an LLM fills in a gap in the strings of predictive text that it spins off with a word or phrase that renders the output inaccurate or false, that’s not a hallucination. The LLM is not imagining something out of the blue. It’s responding to a question or prompt typed in by a human, and sometimes it hiccups and puts in words that don’t make sense or aren’t true or refer to things that don’t exist. So that would be a confabulation, then.
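For the technically inclined, here's a toy sketch of that gap-filling hiccup. Nothing below is any real model's code or API; the prompt, the probabilities, and the function are all invented for illustration. The point is simply that a next-word predictor chooses what is probable, not what is true:

```python
# Toy illustration only: a hypothetical next-word predictor.
# The prompt, probabilities, and names are made up for this example.

# Imagine the model has just read the prompt "The capital of Australia is"
# and assigns these (invented) probabilities to candidate next words:
next_word_probs = {
    "Sydney": 0.46,    # appears constantly in training text, but is wrong
    "Canberra": 0.41,  # correct, yet slightly less probable here
    "Melbourne": 0.13,
}

def pick_next_word(probs):
    """Greedy decoding: return the single most probable word.
    Note what is missing: any notion of true vs. false."""
    return max(probs, key=probs.get)

print(pick_next_word(next_word_probs))  # -> "Sydney": an error, not a vision
```

No inner movie, no swirling polka-dot fabric; just arithmetic landing on the likelier word.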
Ok, you say, so it sounds like we need to send out the big important READ THIS NOW OR LOSE HALF YOUR AUDIENCE press release to all the media outlets and have them execute a Find and Replace query. Replace “hallucination” with “confabulation.” Easy fix!
Before we turn on the presses though…
We’re anthropomorphizing again. The above definition of confabulation couches the concept in the context of the human mind, not code.6 Not software or hardware. In its most fundamentally basic, simple, foundational form, when an AI chat bot returns a false or inaccurate text string, it makes a mistake. An error.
This is a radical, daring thought, I realize. Perhaps best to sit down and take a few deep breaths here. Maybe light that special scented candle too (but don't get too close, or your eyelashes might get singed). Ok ready? I have no choice but to say this part out loud: why not just call a spade a spade, and use the word… error.
Dear reader, meet Mr. Error. He has been buried underground since November 2022, his soul expunged from the collective consciousness of the modern Western world and its social media overlords. The chain he bears has but one key to release him, and it is you who hold it. Let us now resurrect thee, Sir Error, and dub thee Knight of Appropriate Phrasing! May you go forth and pry open the caked-over neurons of all those who hath sipped excesses of AI Kool-Aid! Rise, in error but good faith!
No, really, I have not smoked anything or eaten anything weird for dinner. It’s just 12:30 at night and this image of a mud-caked man in chains just spoke to me. Also, asked me what I’m working on and I said I couldn’t stop writing. It was then that I made the executive decision to split this essay into fraternal twins; Part II comes out next week. Back to Mr. Error, now that he’s been fully anthropo– god I’m sick to death of typing that impossible word— fully formed in the flesh.
Error
/ ˈe-rər /
noun
Something produced by mistake (like maybe the image above. I really hope he got paid, or at least a nice dinner, for being slathered in mud just so writers like me can paste the photo into their posts about hallucinations).
A deficiency or imperfection in structure or function.
An act involving an unintentional deviation from truth or accuracy.
Surely, calling a software error simply “error” is a profoundly unconventional idea that might destabilize entire paper skyscrapers that the media noosphere has built and glued together with the drool of a shocked and awed public. I shudder at the thought of Mr. Error walking the streets of our cities asking law-abiding citizens for the key to his chains. But there is hope—we can take comfort in the wisdom and experience of other industries and sectors that have been infected with the erratum virus—and lived to tell. Newspapers routinely publish errata & corrigenda. Economists, investment firms, and Wall Street sharks have been known to fudge a number or two (or most of them). Scientific research isn’t immune either. Hell, even Nobel laureates aren’t perfect.
All this to say…
Cogito ergo erro, n’est-ce pas? You can ask Bard or ChatGPT about that, but they might hiccup and throw an error if insufficient humility has been coded into their algorithms.
Ah, but error is not quite as romantic as hallucination, is it. Hallucination carries with it the connotation of an ethereal, dreamy state… imparting a halo around the imagined vision, as if it were lucid and light, lucid enough for the mind to believe. The very sound of the word lilts, breathes, sighs, as we swoon with it, losing ourselves in the deep ocean of a new, hypnotic intelligence that talks to us in natural language. Error grates against the tongue, the three r’s rubbing crushed gravel against our smoother sensibilities, and the short length of the word compacting any aesthetic-auditory appeal it might have had into the equivalent of a crushed empty can of soda, ready for the recycling bin. And error, embodied in the earthly (er, muddy) form of Sir Error, takes no prisoners. He still wants that key, and he knows you have it.
Part II of Hallucination Nation drops next week, where we give AI hallucinations a bit of a hard time. Because, the real world. We’ll see if I can dream up a Part III (am taking suggestions!).
Enjoyed this post? Hit the ❤️ button above or below (apparently it helps more people discover Substacks like this one and that’s a great thing). And sharing is hallucinating together with your friends. Or something.
Personally I take a bit of offense to this definition, as my hallucinations were most certainly not the result of any mental disorders (believe me, my family would know). Don't these dictionary people know about us artists??
Wonder what they mean by “certain toxic substances.” Jet fuel? Baby poop? The byproducts of commercial chocolate production? I’ve never tried the first two, so, hmm, yeah maybe it was those KitKats I used to eat, long before I was introduced to the world of craft chocolate. I digress, yes, but that’s what footnotes are for.
Guess what. Grammarly just announced its own generative AI integration. It’s called GrammarlyFOMO. I mean, GrammarlyGO.
See 04:35.
National Library of Medicine, viewed online at https://www.ncbi.nlm.nih.gov/books/NBK536961/ on May 20, 2023.
I love spontaneous (as in, unintended) alliteration. Five c’s in this sentence! Cue “mind blown” GIF…
100% agree on this. Right alongside Hallucinating AI is Biased or Racist AI, which suffers the same problem. Error. What we see is an accuracy problem, because we aren't seeing the people coding AI intentionally adding ethical bias. What's funny is that there are three layers of bias in an algorithm:
1. Ethical Bias (typically an artifact of an org)
2. Measurement or Data Bias
3. Mathematical Bias (how we code our algorithm)
But again, it all boils down to error: not ethics, not consciousness, not hallucinations.
https://polymathicbeing.substack.com/p/eliminating-bias-in-aiml
You make a good point about the accuracy of the terms we use - and this from someone who used the word "Hallucinating" in an article sharing my predictions about AI just last week. I think that part of it is that because AI is supposedly "the next big thing" (another debate entirely), we have to find special words to describe it. I also think that part of it is because "hallucination" is an interesting word to say, read, or write. I love delicious words. Error is distinctly more correct, it's just not as "fun."
But, to your main point about anthropomorphism - you point out something that's often misunderstood about AI. It's not actually *intelligent.* It's just using vast amounts of data and clever algorithms to predict things about the dataset it's querying, and presenting that to the user.
It doesn't "think" - It's just an extremely clever parlor trick.