Discover more from The Muse
The "why" of my guest post on Michael Spencer's AI Supremacy
Earlier this summer, Michael Spencer of the well-regarded newsletter AI Supremacy invited me to write a guest post. I had always found him to be thoughtful and supportive of the writer community here on Substack, and I gladly accepted the invitation. We iterated on various angles on a few different topics, all of them worthy, valid, and intriguing. Ultimately, I chose the one I feel is fundamentally responsible for the tears and cracks in our social fabric: what I call the datafication of the human being.
The post is titled The Great Disconnect and it talks about the fragmentation, atomization, and loneliness that algorithms, be they of the social media or the generative kind, bring to bear on us living breathing human beings. I’ve reposted it here below for easy reference, or you can read it on AI Supremacy—and while you’re there, do check out other posts by Michael or his other guest writers. What I would like to do here is give a little more context for the reasons I wrote this piece and how it almost gutted me.
In 1988, Noam Chomsky and Edward Herman wrote a book titled Manufacturing Consent. This was shortly after the Digital Age ignited, but Chomsky’s and Herman’s work was prescient then and it is prescient now. If you haven’t heard of or read the book, please allow me to highly recommend it to you. It will give you a thought-provoking foundation for the kinds of conversations taking place today about the power and influence of social media and generative AI. The book’s Wikipedia page is illuminating—notably the part that talks about the apparently intentional demise of the publishing house that dared to publish the two men’s previous work, Counter-Revolutionary Violence: Bloodbaths in Fact & Propaganda. Gosh, I cannot imagine what its parent company, Warner Communications Inc., was worried about.
Now that the masters of content have our consent, the next step is to atomize us, break us into little monetizable pieces that can be sliced and diced and tossed into shareholder salads. Our children are being born into a world that lives and dies by emojis, data points, and algorithms. We are being conditioned to train our own professional replacements; to filter our emotions, thoughts, and reactions into a set of neat, quantifiable, analyzable data sets, which then feed more content formulas back to us, and rinse and repeat; and in the process disassociate from our very minds, psyches, and bodies. We are being gutted into shells of our human selves. As David so eloquently said in the comments on my guest post,
We are one of countless social species because this basic instinct allowed us to survive through various versions over millions of years. A social instinct has allowed us to find food, protect ourselves and procreate more effectively. We are wired for this at the DNA level - to communicate through affective resonance and pheromones, and to be physically touched. Lives in misalignment with basic genetic wiring become ill … and in this case empty/depressed, like Harlow’s monkeys.1
It took me three days to write The Great Disconnect, but the thinking and feeling that went into it took the better part of a decade. The moment I hit the first key, it was as if the fire department arrived with their pentagonal wrenches. It all came gushing out and tripped the breakers on my neurons. I had to stop multiple times—not because I didn’t know what to write (on the contrary, there was too much, and I had to cut it down to 25% of what I originally outlined), not because it’s violently complex (it is and it isn’t), but because the deeper I slid into the rabbit hole, the more clearly I began to see what was taking place at the molecular level. Out of respect for you, dear reader, I had to keep pulling myself out of the rabbit hole to get my bearings on the horizon, to breathe fresh air, and dive back in with a full tank of oxygen.
But this rabbit hole of technological isolation isn’t a simple straightforward tunnel that connects from the human world above to some deep dark tech cave down below. It’s a network of threads and pathways, of data points and algorithms, of pixels and patterns. It mimics our expressions, words, and actions; it mirrors our most extreme tendencies and ambitions; it amplifies our fears and anxieties. Like the vast tapestries of mycelia that weave our soils into the cradles of ecosystems, the algorithms that underpin the levers of modern society now run too deep and too wide for us to simply cut them at the roots.
Yet unlike the natural world, this algorithmic network has no life and no consciousness… no relationship to itself and no capacity to love, fear, or hate; it neither dreams nor wakes. It serves only those who know how to thread its circuits. In the end, it abdicates all responsibility because we are the ones who’ve built it, and we are the ones who are feeding it. It would behoove us to consider carefully where our willingness to feed this beast comes from, for we do so at our own ultimate risk.
A special announcement
Lastly, I’d like to shed some light on my MIA-ness since the last article I published a month ago, which was a fellow writer’s thoughtful guest post here on The Muse. Aside from all of the usual daily responsibilities a working mama needs to fulfill, without fail and often without recognition, the past few weeks have been spent preparing a brand new and absolutely delicious Substack, the twin sister to The Muse. We go live with an initial soft launch here for the Substack community tomorrow, Friday, September 22. Watch this space!—and if you’re not subscribed, do subscribe, if only to meet The Muse’s twin sister 😍.
And now, without further ado, here is The Great Disconnect. Settle in with a nice big cup of something piping hot and maybe a pastry… it’s a long read, and you’ll need the sugar to counteract this bitter melon.
“The Great Disconnect,” as originally published on AI Supremacy
You know when the Surgeon General writes an OpEd about loneliness in The New York Times, things have taken on the color of ants, as they say in Latin America.
It’s not just the Surgeon General’s OpEd. Headlines all over the media ecosystem have been blaring about the mental health bomb of loneliness and isolation for several months now. It first blew up during COVID, when the world was shoved nose-first into a sea of N95 masks, everyone went online for everything, and cities turned into ghost towns. We disconnected in-person and reconnected online.
Much of that seems like a distant nightmare now. But not only has the COVID tide not fully receded, there’s a tsunami forming on the horizon that might make that quarantine feel like a kindergarten field trip. The robots are coming, the headlines now warn. AI is going to take not only your job, but your last few shreds of sanity and whatever connections and relationships you had with other humans, too.
During COVID, the skyrocketing ubiquity of social media algorithms in our lives turned into an overdose of epic proportions. Where social media dug a trench, generative AI is going nuclear. It has the potential to displace entire professional sectors; gut the creative class; re-thread our communal relationships with doctors, educators, and law enforcement; dissolve the nuances of romance; and turn our sense of what it means to function in human society inside out.
But before we all run screaming for the hills, let’s unpack the thing that simmers at the core of AI’s power to disengage society: the Great Disconnect.
Typing in prompts instead of working with your creative team to design a videogame, sexting with a bot instead of a hot human, or scrolling through digital galleries of beach sunsets instead of throwing your camp gear in the car seems like the perfect way to dilute life into a plate of cold broth. If you’re concerned ChatGPT and its bot friends might uproot your life and career at least a little, you wouldn’t be wrong. But you wouldn’t win debate class, either. The visible spectrum isn’t just black & white, there are more than a few ways to peel an apple (I feel you cat people!), and the enigma of superposition isn’t limited to quantum physics.
What isolates and depresses you, might empower and delight someone less able or fortunate. What fragments and disconnects in one scenario, might unify and bond in another. Perhaps gen AI’s true legacy will be to make us all realize just how disconnected we already are, how much worse it can get, and spur us to turn the loneliness epidemic around. But that’s only if we’re willing to a.) see it; b.) accept our responsibility to address it; and c.) actually address it.
Great, but why should I care?
Fair question. I don’t expect this essay to resonate with everyone. But if you’re a startup founder, CEO, or manager in the process of integrating AI into your operations, you might want to know how to keep your people motivated, performing at the top of their game, and feeling respected. If you’re a techie, wouldn’t you want to know how not to be rendered redundant, and why people are not doing happy hour anymore? And for the creators out there, as much outrage, frustration, and anxiety as you might be feeling about generative AI, it is more strategic and useful to educate yourself about AI: how it works, where it fails, and how to ensure creators continue to thrive. Finally, if you’re any kind of human at all, unless you’re completely unplugged—in which case you’re probably not reading this post anyway—the worst thing you can do is dunk your head in the sands of denial. In the case of this formidable technology that’s poised to upend global society, ignorance is anything but bliss.
OK, but give me some context please
First off, let’s clarify the difference between loneliness and solitude. Solitude is the sense of contentment or even joy when being alone, present with one’s self. Those of us who write, paint, compose music, or engage in other activities that do not require communal participation, for example, often prefer solitude to do our work. I’ve never been able to join a writers’ group because I need to write alone. Loneliness, on the other hand, is a sense of isolation that persists whether or not there are other people around you. It’s that sense that something deep within is missing or out of tune, and perhaps even the lack of a fulfilling inner life.
Personal isolation and loneliness have been a focus of concern for the mental health sector for some time, certainly prior to the release of generative AI. There’s even a documentary film that bears the same title as this essay (I discovered it after deciding on the title). The Great Irony of the Great Disconnect is that the very technologies designed to connect more of us across the world, such as email, smartphones, and social media, did connect us, but they also isolated and alienated us, each new technology driving us further and further apart. How can this be? Three critical shifts took place.
First, the foundation of contact stopped being about communication, storytelling, and exchange, the way human societies have connected for millennia. It turned into likes, followers, heart icons, GIFs, and other digital compliments and feel-good indicators. Slowly, insidiously, our thoughts, opinions, and musings, and more crucially, our mistakes and unintended impulses began to be published for the entire world to see, and judge, and respond to, and, AND… reshaped as data. We’ll come back to this point, because it’s foundational to the discussion about generative AI.
Second, the opinions of everyone reading our posts, listening to our songs and voices, and viewing our art and photos and videos, were given public life. Never before were creators—here I use the term “creators” in the most generic sense of the word—given such wide and deep access to the reactions and sentiments of the public, whether that public was their intended audience or not.
Imagine what the Instagram feed of Cleopatra would have looked like! Assuming every inhabitant of Egypt had a connected device, the most she would have had is 2-4 million, a mere 0.95% of Beyoncé’s IG. Then again, Beyoncé doesn’t have her name carved in two-thousand-year-old sculptures.
Third, we began to connect at lightning speed with people we didn’t know in real life. Those new connections were no longer based on in-person, multi-year relationships, but digital representations of people’s lives, carefully—and not so carefully—curated. We went from getting to know the people we talk to over time and through experience, to making flash judgments on the basis of a single post or comment. As we’ll see, these three shifts have made a perfect storm of disconnection, alienation, and loneliness that generative AI will make quick work of if we’re not paying attention.
Datafication of the fully expressive human
The first shift in the way we communicate and connect, as discussed above, has to do with the mutation of the value and meaning of our expression. Digital communication tools, whether AI-driven or not, distill the words, images, and sound we express online into data. This data takes on various forms known as attention metrics, such as:
Likes and other one-click approvals
Icons for somewhat more nuanced reactions, such as “insightful” or “funny”
Shares & reposts
Open rates and click-through rates
Length of time spent on a webpage
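The reduction described above can be made concrete with a small sketch. This is purely illustrative: the field names, the `engagement_score` function, and its weights are all invented for this post, not taken from any real analytics platform.

```python
from dataclasses import dataclass, field

# A hypothetical sketch of datafication: the full range of human response
# to a post, reduced to the handful of fields an analytics pipeline keeps.
@dataclass
class AttentionMetrics:
    likes: int = 0
    reactions: dict = field(default_factory=dict)  # e.g. {"insightful": 3}
    shares: int = 0
    click_through_rate: float = 0.0                # clicks / opens
    seconds_on_page: float = 0.0

def engagement_score(m: AttentionMetrics) -> float:
    """Collapse everything a reader felt into one number (weights invented)."""
    return (m.likes
            + 2 * m.shares
            + sum(m.reactions.values())
            + 10 * m.click_through_rate
            + m.seconds_on_page / 60)
```

Whatever a reader actually thought or felt, the pipeline sees only the score that comes out the other end.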
Of course, the full range of human emotions and expression does not fit into neat little categories, and that’s always a problem when you’re running analytics and trying to tie user activity to some kind of benchmark. It’s a particularly important challenge if you’re using those benchmarks to drive traffic to your online store, social platform, or search engine, or otherwise keep your investors well fed and happy.
The biggest shift on the individual level is that we are no longer viewed as full human beings and bodies. We are “users.” “Consumers.” “Subscribers.” “Followers.” “Customers.” “Use cases.” Why not just say it out loud: we are Data Points. Stats. Numbers. In short, we have been datafied. Ironically, this datafication extends to everyone working in the tech sector as well, from investor to startup employee. They’re human too—and they post, comment, and browse just like the rest of the online world (albeit maybe a little less). This is the biggest step in the disconnect on an individual level: being human is an experience that is at once physically embodied, emotionally felt, and, for most people, dare I say, spiritually traveled. When you consider yourself, or others, to serve the purpose of a data point, you’ve disconnected the humanity from the person.
Looking at it from the POV of the market, you need a way to quantify, analyze, and forecast the behavior, preferences, and opinions of all those annoyingly diverse and unique humans if you’re going to launch a (massively) profitable product or service. Well, you're in luck, because those annoyingly diverse and unique humans are also deeply tribal and constantly strive to fit in with their peers and communities, and so they’ve jumped right into the data pool, cold water be damned. We now routinely ask our readers to “please like, share, and comment below!” because we want data on them just as much as the algorithms do. The algorithms have trained us well.
Too well. The real reason we ask readers to like, share, and comment is not the data. It’s how the data makes us feel, and how easy the data makes it to get an instant result. It is a profoundly human thing to seek approval from our loved ones, friends, and now, increasingly, the rest of the world. It is an equally profound human thing to not want to have to do the work—the work of reading nuanced comments and opinions from fifty people, vs. a quick glance at that nice fat figure “42” next to the “like” icon atop your blog post. The number itself, of course, is relative—relative to how long you’ve been posting, relative to your overall email list size, relative to how many of your hard-earned dollars you’ve sunk into promoting that blog post. But it’s a concise, easy-to-grasp indicator of likability. What you miss, of course, is the real reason why any given person has “liked” your work. Some did it because they know and like you, and haven’t bothered to read the post. Others because they scanned the post and decided, in the spur of the moment, to like it. And some do it because they genuinely appreciated what you had to say. Now what about those readers who also read and appreciated your post, and maybe even told numerous friends about it, but chose not to ring that little bell? You’ll never know about them, just like Gabriel García Márquez never knew what the vast majority of his readers really thought of his work.
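The relativity of that “42” is easy to make concrete with a toy calculation (the audience sizes here are invented for illustration):

```python
def like_rate(likes: int, audience: int) -> float:
    """Likes per audience member: the same raw count means very different
    things depending on how many people could have seen the post."""
    return likes / audience

# The same nice fat "42" next to the like icon:
small_list = like_rate(42, 500)     # on a 500-subscriber list
large_list = like_rate(42, 50_000)  # on a 50,000-subscriber list
```

On the small list, 42 likes means nearly one reader in ten rang the bell; on the large one, fewer than one in a thousand did. The icon shows the same number either way.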
And so the algorithms have effectively succeeded in transmuting our natural proclivity for peer and community approval into the data points they can work with. The invention of the “like” icon was a brilliant, if brutally world-altering, idea. Too many of us now create “content” with the intention of “engaging eyeballs.” (Whatever happened to the rest of our bodies? The disconnect thus becomes digital dismemberment, too.)
How long—and deep—can we play the numbers game?
But why do we chase the highest numbers, instead of being content with whatever followers we’ve got? Because we can. Because the algorithms connect all of us in fractions of seconds. And because superlatives always impress: the fastest car in the Indy 500, the richest person in the world, the biggest predators on land and in the seas. The fastest growing app on the Internet (hello ChatGPT—no wait. Hello Threads, good-bye ChatGPT). So yes, we do numbers because assessing quantity is truckloads easier than trying to gauge quality and craft, and because we’re hard wired to seek and value resources, be they food, shelter, money, or yes, even those digital thumbs-up. In the latter case, digital status is a type of resource—it feeds not just our ego but expands the potential to earn revenue. (How some of us humans plan for that revenue is another matter.)
We can appreciate big numbers but we can’t process them properly beyond a certain size. Close your eyes right now and imagine a hundred-dollar bill. Now imagine a suitcase full of them. An entire warehouse of suitcases. How about a trillion hundred-dollar bills? Not so easy now, is it. Beyoncé can’t physically interact with every one of her 317 million Instagram followers either. I don’t pretend to sit in her head, but I imagine she doesn’t have the time to read all of their messages. 317 million does not represent the singer’s friends and peers; it’s a number that represents her success, her status on the world stage. Yet for all intents and purposes, Beyoncé is disconnected from her adoring tribe until and unless she pulls a few of them up on stage with her and puts her arm around them as they swoon. But what about the fans, from where they sit? How connected to their queen does any one of them really feel? Clearly, many do, as evidenced by their fierce loyalty and hordes of bee emojis. But is adoration true connection? And are the algorithms good for Beyoncé’s own mental health? Numerous sources would say not so much.
AI systems, on the other hand, can crunch unimaginable volumes of data, and they can do it 24x7. All they need is lots of processing power and a nice big data center with a steady supply of energy and water, the environment be damned. They are also completely unaffected by any of those pesky human emotional, psychological, or physiological consequences of excessive screen time or the nature of the content their own code serves up.
When you, a human, spend even just an hour or two scrolling through the output of AI-driven social media feeds—TikTok videos, LinkedIn posts, Twitter (X) threads—how much mental or emotional energy do you have left for you, your work, and the people in your inner circles? How do you feel after subconsciously comparing yourself and your life—personal, professional, financial—to what the screen dangles in front of you?
And if you’re not impressed with the big numbers on Beyoncé’s Instagram, try the 500 million tweets posted daily on the site formerly known as Twitter, and the herculean task assigned to its recommendation algorithm to distill them down to “a handful of top Tweets… [on your] For You timeline.” It involves a neural network of roughly 48 million parameters “continuously trained on Tweet interactions.” Such work is not for the human brain. Here’s a fascinating rundown of Twitter’s distillery—er, recommendation process if you’d like to slide down that rabbit hole.
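The shape of that distillation can be sketched in a few lines. To be clear, this is a toy stand-in, not Twitter’s actual code: the function names and stages are invented, and the scorer below is a random placeholder for the real ~48-million-parameter ranking network.

```python
import random

def predicted_engagement(tweet: str, user_history: list) -> float:
    """Stand-in for the neural ranker trained on Tweet interactions.
    A random score substitutes for the real model's prediction."""
    return random.random()

def recommend(candidates: list, user_history: list, k: int = 5) -> list:
    """Distill millions of candidate tweets into 'a handful of top Tweets'."""
    # Stage 1: candidate sourcing happened upstream; `candidates` arrived
    # from in-network and out-of-network sources.
    # Stage 2: rank every candidate by predicted engagement.
    ranked = sorted(candidates,
                    key=lambda t: predicted_engagement(t, user_history),
                    reverse=True)
    # Stage 3: filtering heuristics (muted accounts, etc.) would run here.
    # Stage 4: serve only the handful that survives the distillery.
    return ranked[:k]
```

Every stage exists to maximize predicted reaction, not to reflect what your actual friends posted.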
Whether and how to connect, that is the question
Many argue that connecting online is still connecting—and in many respects that is true. I have personally made numerous connections online that have turned into friendships, including here on Substack. In some ways, the quirks of AI algorithms seem similar to the quirks of everyday life—a chance meeting in the hallway, on the train, or in line at a coffee shop seems just as accidental as stumbling across a thoughtful comment you decide to chime in on and end up with a new colleague or friend. And just like in real life, the experience and impact of those connections depend on the intentions and character of both parties. If you’re looking for trouble, you can find it online as well as in the real world. Granted, you can’t hide behind an avatar in the real world. At least not yet.
The dynamic of connecting with total strangers in an online world, itself disconnected from the physical environment, and interacting with them through text, images, and symbols, has inverted our natural, innate forms of communication. One could say we’ve evolved. A more accurate assessment, however, is that our own technology has outpaced us. We certainly know how to use it—and misuse it—but we remain biologically wired for physical contact and in-person interaction. You don’t need a stack of research studies to prove this (but here’s one, and another, if you like)—just notice how you feel in your body the next time you meet a good friend for dinner, vs. chatting with that same friend via text.
The problem we have is not a simple question of quantity vs quality, or even AI vs human. The problem we have is one of nuance, empathy, and context. Everything we do online is getting reduced to ones and zeros, to algorithms following instructions designed to maximize the indicators of popularity and profit. Influencers are voicing their frustration about their content being served less to their own fans and more to random people based on algorithmic assessments of what content is likely to spark the strongest reactions from which user accounts. When those reactions are less than pleasant, that can impact the influencer’s mental health. In fact, we’re seeing rising numbers of celebrities speaking out about their mental health struggles fomented by social media, and that is helping to bring the topic to the fore of public awareness.
Civil debate and thoughtful conversations do still exist online, but they tend to be drowned out in an increasingly virulent online environment that might get worse before it gets better. The feelings of loneliness, isolation, and the ever-present FOMO (Fear Of Missing Out) that social media has wrought upon us are being taken to a whole new level by AI-powered algorithms and the automated firehose of content and targeted recommendations they’ve turned on. It’s one thing to feel pangs of envy about the Instagram influencer with the crazy-perfect dream house, but it’s another to stare down the black hole of your entire professional career in customer service going up in flames because it is likely to be made redundant by a much more cost-efficient chatbot. And it is quite another to have law enforcement break down your door and arrest you while pregnant and getting your kids ready for school because their facial recognition system glitched.
Social media platforms do not interact with you; they are platforms, digital canvases upon which you, along with other humans, paint your words and emojis and little dancing pig memes. AI-powered chatbots are different—they talk back. They effectively step out from behind the platform and onto the canvas, wielding their own talking brushes, as it were. Up until recently, any time we received a digital communication, we could safely assume a human initiated and produced that communication. Now that chatbots have blipped onto the digital scene, that safety has begun to slip away. We’re aware bots write conversational threads, and that’s primarily because of a.) their still-stilted communication style and b.) the context of their delivery: a ChatGPT window, a chatbot on a company website, a Siri on your phone. Would you be able to tell, assuming the chatbot’s natural language functionality is advanced enough, whether any given Discord, LinkedIn, or Facebook post or comment was written by a human or a bot? Maybe you can now; just give it a few months.
If certain interests have their way with generative AI, there will soon come a time when you won’t be able to tell whether any text or voice on any platform has been produced by a human or a bot, unless you have an established relationship with the human.
The nuclear option
Since AI algorithms in general have been running beneath social media for a while, you might think this type of AI isn’t much different from other digital communication tools in terms of its impact on our isolation factor. But there is one simple but fundamental distinction, and that is the virulent, adaptive, and weaponizable nature of generative AI that other AI algorithms do not share. Specifically:
The ability to respond to your question or prompt in natural language (read: the ability to respond, period)
The ability to produce new content in response to said question or prompt, in multiple modalities
Training on billions of words, images, sounds, strings of code, and other content
The ability to generate synthetic data
The ability to analyze complex or voluminous data and find hidden patterns and trends
The ability to automate and accelerate a broad range of tasks and processes
This kind of power and speed should not be taken lightly; in fact, it has been argued that generative AI systems like ChatGPT should not (yet) have been made widely available to the public. A little too late for that now.
Content, the meat and bones of the online ecosystem, used to require human energy and effort. Teams of copywriters, SEO analysts, marketers, social media consultants, and product managers would spend weeks building marketing and media campaigns, launching them, and gathering the results. Now we’re seeing teams of 4 accomplish the work of 3x that number with generative AI tools and workflows. This is an extraordinary improvement in efficiency of time, cost, and scale.
So what do we do now
The AI cat’s fully out of the bag. ChatGPT & co aren’t the first AI interfaces to have impacted our lives, but this is the first time an AI system has captured the imagination and dread of the public in such profound and powerful ways. The Internet has been trawled for content gems without anyone’s permission, and literal tons of collateral damage float in the wake of the gen AI ship. Myriad questions and quandaries abound, from the professional to the very personal:
Do you hire a designer or use Midjourney for your blog post images?
Can you finally let go that staff writer whose work you’ve always disliked, and use Jasper instead?
Will you get sued if you generate a sound-alike song for your video?
Can you trust code generated by AI?
How do we define (and enforce) copyright now?
How can you tell an AI-generated dating profile from a real one?
Should I trust the voicemails I receive are really from my friends and family?
Is it ok if I want ChatGPT to be my BFF?
True to form, it’s the headlines that scream end of days that draw those clicks and shares, but the real issues are the ones with the quieter headlines. We need to worry about human misuse of AI systems much more than any AI-spawned apocalypse or conquest. Perhaps the most sobering reality that we will need to come to grips with is not what AI can do against us, but how humans can use AI to our mutual detriment, whether intentionally or unwittingly. Never has technology saddled humanity with such existential burdens. To borrow from the poet Robert Frost, the only way out is through the AI fire.
A few tips for the road:
Don’t outsource your creative or critical thinking agency to an AI system. Continue to rely on your experience, insight, and wisdom.
Always put your people first. If you want to leverage AI in your company, identify the workflows where it can best support, rather than replace, your staff.
Give them agency: involve your team in determining the best ways to integrate AI tools into your operations.
Don’t be seduced by the temptation of fast, cheap image generators. There is no replacement for skilled human artists.
Likewise, don’t be seduced by what reads and sounds like a sentient entity. There is no mind, heart, or soul behind your chatbot screen.
Never, ever forget there’s always a non-zero chance your AI dreams of electric horses.
Oh, and one more thing.
Thank you for reading. If you’d like to leave a comment, I’d be thrilled to respond.
David is referring to the harrowing but profound experiments conducted by American psychologist Harry Harlow in the 1950s and 1960s on baby rhesus monkeys to study the influence of maternal separation, dependency needs, and social isolation on their development and well-being. See https://www.simplypsychology.org/harlow-monkey.html.