In Part I of Hallucination Nation, we explored the meaning and relevance of three innocuous little words now causing so much havoc in the world of AI. We also vicariously watched me have a few all-natural, literal hallucinations.
Let us now continue with the discussion. Here in Part II, we turn to these mysterious AI hallucinations (read: errors or confabulations) and take them for a spin in the real world. If you haven’t read Part I yet, I strongly encourage you to do so—you must meet Mr. Error! He’s a sexy one.
By the way, if you find the literary reference in the title of this post too obvious for words, email it to me and you’ve got yourself a free 3-month paid subscription to The Muse. While inspiration lasts. Not subscribed yet? Whatever are you waiting for? Here below!
A warming AI world
Somehow, in the blink of the Internet’s eye, we have gone from expecting a high level of journalistic integrity from our news media, to slurping daily doses of propaganda and misinformation, and now to giggling over cute algorithmic gaps in data integrity.
Oh look, Charlie! How eminently adorable. Our little AIs are hallucinating! Let’s sing them a lullaby... for when they awaken, surely they will reformulate their dreams into more accurate, fact-based, real responses. Or maybe we’ll just grow to love them with all of their curious and indelible flaws.
When a chatbot delivers inaccurate information or references to things that do not exist (scientific papers, news articles, the names of authors, etc.), what should our response be?
a.) Close the browser window and refuse to use the bot again until it can be relied upon.
b.) Accept it as TRUTH because we either don’t know any better or it does not occur to us to double-check the information.
c.) Laugh it off as a “hallucination” and post screenshots of the exchange on social media.
d.) Write an article about it, screenshots included, calling for the public to remember to fact-check the replies they receive (and for the AI developers to maybe retract their bots).
I’ve seen instances of each one. Personally, after numerous sessions with both ChatGPT and Google’s Bard, I closed the windows. There’s no doubt these technologies have tremendous potential, but you have to be clear on what you’re using them for and how. As far as search and research go, they’re not there yet—and it’s not my job to teach them to get there. I have yet to hear a convincing argument for becoming a glorified—and unpaid—bot babysitter or tutor. If I cannot place my trust in an algorithm when it comes to research or obtaining any sort of accurate, reliable information, it makes precisely 0.00 sense to continue using it.

I’m not suggesting that everything online should be absolutely accurate or truthful—the dear Lord knows how much chaff has been strewn around the Internet. But if you create a thing (product, service, tool, etc.) intended to be used by humans for specific purposes, and you release that thing into the world, and you encourage people to use the thing, and the thing then fails to work well, is that not a valid reason for people to return the thing and get their money back? Let’s give this some real-world perspective.
Would you continue to buy lattes from Starbucks if they randomly started serving brown dish water instead of real coffee? Some days you’d get the coffee, but other days you’d get dish water. You would have to check each and every cup to make sure the “coffee” was real. Appetizing thought, isn’t it?
Would you continue to drive a car whose front left wheel inexplicably fell off a few times each week? And sometimes it would turn into a set of stairs, which of course would be just weird. You would have to carry a tool kit with you, or at the very least a really good emergency roadside service because you’d never know when you’d need it. (And maybe a wizard’s handbook to turn the stairs back into a tire.)
Would you continue to hike up a mountain that randomly dropped out from under your feet, or suddenly raised itself up a couple hundred feet, knocking you off the trail? Too bad if you’re close to the edge…!
Would you continue to work with a colleague, date someone, or hire a babysitter—or engage in any sort of activity with another human who was unreliable, spewed falsehoods and inaccuracies at random intervals, but who was almost always very polite about it?
Oh hang on, you say. I get the Starbucks analogy but you can’t compare a mountain to software. Mountains are made by Nature, and software is made by humans. Mountains don’t just disappear. They’re always there. They’re, you know, reliable.
Except when a cataclysmic earthquake cracks one of them open, or it turns out that mountain is a volcano that’s just woken up after a few million years of slumber, or there was no mountain there because you were having a lucid dream.
We can debate what is real, and what is really reliable, for a good solid week or so. Point is, a functioning society requires a certain foundation of fact, reliability, and accuracy. This is not a rant against AI. It is a call for AI developers to slow down and not break things. And not take credit for others’ work.
It’s this slow yet lightning-fast progression through evolving stages of misplaced trust, false assurances, corporate gaslighting, FOMO, a dash of pure unadulterated tech bro ego, and evaporating critical thought on a mass scale that has brought us to this point. We are the proverbial frogs in the pot, the fish in the ocean who don’t realize how wet—or hot—the water is (yes, even the oceans are warming now). I doubt we would have accepted these so-called hallucinations quite so easily if we had not been taken for a walk (read: ride) through the Purgatory of Fake News these past several years.
Hallucinated Histories
Anything worth debating always needs its proper context and history. The history of human hallucination is one hell of a spiderweb—and I’m not referring to psychedelics. I’m referring to history, the way it has been written, spoken, recorded, and remade. Even today we’re squabbling over who gets to tell whose story—enough there for an entire book, but to give just one example: it seems that the history of the trade of human beings in the US is uncomfortable enough for some that they endeavor to deny it and ban books about it. Those are true stories that some try to suppress. A different matter is false stories that some strive to promote.
We humans have an extraordinary faculty for imagination, and imagining things. Whether those imagined things are right or wrong, good or bad, true or false, is beside the point. The point is, we are capable of imagining them, and expressing them. Our imagination is our gift. What we do with that imagination is our responsibility.
Is not fiction imagined? Is not fiction, well, fiction? We do not consider fiction bad, immoral, or wrong; we instinctively understand that the purpose of fiction is to allow us to dream, to live other lives, to imagine places and people we’ve never seen or met. Fiction does not pretend to be real (usually). Nonfiction, be it an essay like this, books, news, or documentaries, also carries a purpose, and that is to communicate facts, realities, observations of our world. It is when nonfiction uses the sex appeal of fiction, puts on the makeup of artifice, and dresses up in an imagined story to fulfill an ulterior motive or agenda that the trust between writer and reader, between creator and consumer, starts to break down. It can be innocent, but it can also be malevolent, and all manner of colors and textures in between.
All world religions have stories they share with their followers and believers. It is not for me to place value judgments on those stories, but let’s just say some are more fanciful than others. Governments, political groups, companies, and numerous other constructed groups all have their stories as well. Some of these stories have started wars (the Crusades), some have caused horrific mass murder (the Holocaust), and some have only recently been de-veiled (Christopher Columbus’ “discovery” of the Americas).
Then there is what we used to call yellow journalism. This article runs down a brief history of fake news—quite a read! How ironic, too, that Joseph Pulitzer, the famed newspaper publisher who lent his name to one of the most esteemed honors in journalism, engaged in rumor mongering as part of his ongoing rivalry with William Randolph Hearst. In other news, did you know that the original founders of YouTube apparently had no issue with videos that had been uploaded without the permission of their copyright owners? That inflated their stats, of course; Google bought the company, and the rest is history—more crucially, the rest of their competitors, who played by the rules, faded away from that history. The alternate history that could have been, had they all played on the same ethical playing field. More recently, another promising and well-intentioned startup felt the sharp teeth of a predatory bigger fish.
Examples like these are like raindrops in a storm—hundreds of thousands if not millions. Fake news, false stories, tales taller than the Eiffel Tower abound in a dizzying array of form and channel, while honesty and fact are choked by a Sargasso Sea of deception. Guess what “content” made it into the training data sets. Is it any wonder that the hiccups and errors we now see the chatbots spit out are equally if not more inaccurate, biased, and nonsensical? GIGO,1 as they say.
What is most dangerous isn’t the stuff generative AI comes up with. It’s the fact that we can no longer rely on our Fourth Estate, and perhaps we never fully could. Perhaps it’s to be expected that we would accept AI-generated confabulations in our lives after a constant diet of fake news, and of so-called fake news that was actually real, confusing the issue even further. If someone can manage to dissolve your mind, how do you put it back together?
Flowers for AI
Like the rodent character in the famous short story that inspired the headline for this essay, AI’s purported intelligence, crafted mechanically, algorithmically, artificially, is destined to decline as we feed it with ongoing cycles of our own laziness, willingness to surrender creative agency, and lack of critical thought. Our own desires for templatized entertainment, poems about electric vehicles in the style of Shakespeare, and plasticized movie scripts with synthetic AI voices. Our own willingness to let an algorithm plan our dinner menus, parties, weddings. Our own gullible fear of being left behind in the rush toward some prefabricated utopia, not realizing it’s the corporate power players who have bamboozled us into rushing headfirst for that cliff, giving freely and without limitation of our time, our energy, and our mental faculties to take their chatbots and AI interfaces to the next level, trusting their promises of more strategic positions and higher salaries. But at some convenient point they will pivot, performatively wring their hands, and issue PR-peppered apologies: oh, actually, we no longer need even the higher-skilled white-collar workers, we’re good, thanks!
It doesn’t have to be this way. We can choose to harness the powerful algorithms of AI to solve our most pressing problems. Let AI fold proteins as well as laundry. Let AI model climate predictions and good behavior in online comments. Let AI outwit the fraudsters, the spammers, the plagiarists, the criminal masterminds—and put them out of business forever (or at least make them buy their own merch).
But leave the artistry, the imagination, the inspiration, the creativity to us. With or without digital tools. AI doesn’t care what it does; it’s a piece of software. But we do. We care whether we will be able to earn a living doing what we love, and we care whether we’re able to choose how we create.
Given the global outcry from the creative community against the excesses of AI, it certainly won’t be an easy battle. It might be that the data-hungry, profit-motivated, power- and status-obsessed players have finally met their match: the now-truly-starving artist. Just as that old saying about the cornered mouse proves to be stunningly true, so the starving creator can grow some gnarly teeth pretty quickly.
Author and activist Naomi Klein has written a well-thought-out opinion piece about AI hallucinations in The Guardian. Ms. Klein’s headline reads “AI machines aren’t ‘hallucinating’. But their makers are.” She outlines four “hallucinations” that the AI power players seem to be having, and cautions us against falling for them. But they’re not hallucinating, Naomi. They’re all too clear-eyed.
One last thing. The Flowers for AI aren’t real. They only bloom backward into noise.
If you’ve gotten this far, you are either pumped up on a double espresso or you accidentally hit the “page end” key. Either way, I hope this has been a worthwhile ride! Part III of Hallucination Nation is in the works (can’t give a precise date because I haven’t integrated my writing calendar with ChatGPT, and have no intention to). In the meantime though, share your thoughts below and share The Muse with friends!
1. GIGO: Garbage In, Garbage Out. I know, just one footnote in this essay. It’s a lonely little footnote, but considering it’s GIGO, I hope it rots in a landfill.