Edge case
/ˈedʒ ˌkeɪs/
noun
A problem or situation, especially in computer programming, that only happens at the highest or lowest end of a range of possible values or in extreme situations.1
Edge case (autonomous cars)
noun
In sectors like AI for autonomous cars, which operate in the physical realm, edge cases are all the things that can go wrong in the real world that you generally do not expect.2 Things like children, squirrels, and soccer balls entering the road at the most inopportune times or from the most inconvenient angles.3
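To make that first, classic definition concrete before the cars take over the discussion, here is a minimal, purely illustrative Python sketch (the function and the numbers in it are made up): code that behaves perfectly well for typical inputs and falls over at the lowest end of its input range.

```python
# A made-up example of the classic programming sense of "edge case":
# code that works for ordinary inputs but breaks at the extreme of a range.
def average_speed(distances_km, hours):
    # Fine for any ordinary trip...
    return sum(distances_km) / hours

print(average_speed([12.5, 30.0], 1.5))   # typical input: ~28.3 km/h

try:
    print(average_speed([], 0))           # edge case: an empty, zero-hour trip
except ZeroDivisionError:
    print("the lowest end of the range was never handled")
```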
We’ve gotten quite friendly with computers over the years. Perhaps not quite as friendly as with our cars (I’ve named my car but not my laptop), but we cannot seem to let go of the notion that our smartphones and laptops and other devices are our friends, and continue to anthropomorphize the hardware and software we use. The current AI craze has deepened that tendency: we now use words like hallucinate, plagiarize, write, draw, think and create in relation to generative AI systems. These are words that in the not-too-distant past were exclusive to humans.
We also do the opposite: we refer to biological or living things in computer terms. We talk about uploading or downloading info into/from our minds, or say our mind glitched when we can’t remember some detail we wanted to share. I myself jokingly refer to my aging hardware when I can barely keep up with my pre-teen on a 9-mile bike ride. “As long as the software works,” I say, and my fellow moms laugh, knowingly.
As entertaining as it is to spin up metaphors and analogies for the way the human mind and computational machines mirror each other, the comparison conceals a fundamental gap. It’s a gap about as deep and wide as the Mariana Trench.
And that gap is lined with edge cases.
Let’s get edgy
Not to be confused with edge computing, edge cases are those instances that lie outside of the norm, at the edges of the expected and customary. The exceptions to a desired rule. Consider the case of the autonomous car, one of the most passionately debated examples of AI’s difficulty in mastering edge cases. The algorithms that guide the car’s movements must process a dizzying amount of visual and motion data—all in real time. As any human driver knows, you can travel the same road every day for twenty years, and the driving experience on that road will be different each and every time, largely due to things that we experience as a natural part of our physical world, such as the weather, the season, and the time of day; the condition of the road; traffic flows; pedestrians, cyclists, birds, animals, and insects; and of course, the other vehicles. Not to mention our own mental, emotional, or psychological state: we might be tired or energized; anxious or optimistic; calm or angry. These things are not specific to the road we drive. They are a part of our internal and external worlds. But they do define the experience of any given period of driving on any given road.
Many of the external elements of the physical world are expected, and therefore fairly easy to program for: the fact that a car drives on a road (as opposed to in a river, I guess). Other vehicles, moving or parked. Trees lining the sides of a street. Pedestrians, standing or walking. Traffic lights and stop signs. Daylight and nighttime. But here’s the kicker: what happens when you take any one of these expected external elements and throw a twist into the equation?
These “expected elements” become edge cases.
The “pedestrian” might be a mother with a baby in a stroller and a young child holding her hand; the child sees a puppy on the other side of the crosswalk, lets go and starts running toward it. The mother is of course terrified, and runs after the child—with the stroller in tow.
The “tree” that’s supposed to stand with the other trees along the side of the street has come crashing down (hello, winter storm in California), and the road is impassable.
The “stop sign” that’s usually mounted on a post stuck in the ground is suddenly walking (no, it’s not a zombie stop sign; it’s held by a construction worker who’s helping reroute traffic).
The “other vehicle” has no driver or passenger and is driving itself in a circle in an intersection, while its owner runs after it, desperately trying to get back in (this has actually happened, in a small town in Connecticut some decades back—and likely in plenty of other small towns across America).
For a human driver, it takes all of a second or two to take in the scene and react. Granted, this assumes they’re not distracted or under the influence. But the driver’s mind is able to recognize any given permutation of a pedestrian, tree, vehicle, or stop sign, and make quick decisions about how to maneuver their own car to avoid a collision or accident. (In the case of the driverless car running in circles, I imagine everyone at that intersection had a good laugh. Little did they know “driverless” would turn into “self-driving” in the not-too-distant future…) For our purposes here, whether or not the accident is materially avoided is secondary in importance to how the mind processes these “edge case” events.
To be fair, more than likely a self-driving AI system would pick up on a large tree blocking the road and slow the car down. It should also “see” the mother chasing her toddler in the crosswalk. And yet, there are instances of self-driving cars not seeing or recognizing pedestrians, running into various obstacles in their path, small and large, and confusing red signs on buildings with stop signs. It’s this apparent inconsistency in edge case recognition that’s proving to be a thorn in autonomous AI engineering teams’ sides.
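I have no window into any particular manufacturer’s perception stack, but the inconsistency is easy to caricature in a few lines of Python. In the sketch below, the class list, confidence scores, and threshold are all invented for illustration: the system only “recognizes” what it was trained on, and a confidence threshold decides whether a walking stop sign registers as a stop sign or as nothing actionable at all.

```python
# A caricature, not any vendor's actual system: a detector that only knows
# the classes it was trained on, gated by a confidence threshold.
# Class names, scores, and the threshold are invented for illustration.
KNOWN_CLASSES = {"pedestrian", "cyclist", "vehicle", "stop_sign", "tree"}
CONFIDENCE_THRESHOLD = 0.6

def recognize(detection):
    """detection: a candidate label plus the model's confidence in it."""
    label, score = detection["label"], detection["score"]
    if label in KNOWN_CLASSES and score >= CONFIDENCE_THRESHOLD:
        return label        # recognized: the planner can react to it
    return "unknown"        # the edge case falls through the cracks

# A stop sign planted firmly by the road scores high; the same sign held by
# a construction worker, moving and partly occluded, may not.
print(recognize({"label": "stop_sign", "score": 0.93}))   # -> stop_sign
print(recognize({"label": "stop_sign", "score": 0.41}))   # -> unknown
```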
We can assume an autonomous AI system has been programmed with the same basic concept of driving that human drivers are taught: things like stopping at red lights and stop signs, driving on the right side of the road (in most countries), using the turn signal, following the speed limit, turning the headlights on at night, and so on. These scenarios form the core knowledge base of both the human driver and the AI. But the algorithm does not “see” the way we see—it processes visual and spatial data coming in from its sensors. It does not hear, feel, smell, or experience proprioception the way we do (yes, proprioception does come into play when you drive—in fact, experienced drivers have the ability to extend this sense of their body’s movement and position in space to that of their vehicle). It does not feel anxious if it’s running late or impatient if a light stays red a little too long, and it’s not prone to road rage (thank heaven for that).
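That “core knowledge base” can be caricatured as a short list of explicit rules; what it cannot contain is a rule for every twist the world adds to them. Here is a toy Python sketch under that assumption (the rule names, scene fields, and fallback action are hypothetical, not anyone’s actual control logic).

```python
# A toy version of the "core knowledge base" described above: explicit rules
# for the expected cases, plus one blunt fallback for everything the rules
# never anticipated. All field names and actions are hypothetical.
def drive_decision(scene):
    if scene.get("traffic_light") == "red" or scene.get("stop_sign_ahead"):
        return "stop"
    if scene.get("speed_kmh", 0) > scene.get("speed_limit_kmh", 130):
        return "slow_down"
    if scene.get("night") and not scene.get("headlights_on"):
        return "turn_on_headlights"
    if scene.get("unrecognized_object_ahead"):
        return "brake_and_request_handoff"   # where the edge cases end up
    return "continue"

# A routine scene: the rules cover it.
print(drive_decision({"traffic_light": "green", "speed_kmh": 45, "speed_limit_kmh": 50}))
# A walking stop sign, a runaway stroller: whatever perception couldn't name
# lands in the same catch-all branch.
print(drive_decision({"unrecognized_object_ahead": True}))
```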
Most critically, the algorithm has neither the breadth and depth of the human mind nor the gut instinct of an experienced driver. It can only do what it’s been programmed for and trained on, and eventually what it’s capable of learning on its own. Bottom line, the manner in which a human driver reacts to any given scenario differs from that in which an algorithm processes it. In some cases, the human will prove the better driver. In others, the algorithm will—the car has sensors all around its body, for example, giving it a few extra pairs of eyes.
But even if the algorithm is able to handle some, or even many, edge cases, the reality of our world is such that there will always be some scenario that is crazier, more unusual, or less foreseeable than the ones the algorithm is able to handle. This is not to say we’ll never have an autonomous car AI advanced enough to let us hang up our car keys; but it does mean we should not labor under the illusion that self-driving cars will never make the wrong decision.
To illustrate the context a little better, here’s an edge case for you… if you’re in Iowa. Maybe not so much of an edge case in Alberta, Canada, where this photo was taken. :)
The reason I chose the autonomous AI sector to illustrate the concept of edge cases is that so many of us can relate. Raise your hand if you’ve survived at least one near-panic attack triggered by a squirrel that just couldn’t make up its little rodent mind which way to dart across the road: straight across; halfway in and stop; across and then back again to the original spot; wait till your wheels are about to crush it; or just sit there at the edge of the asphalt making you slow down for no reason. It’s one furry bundle of edge cases. Cute, not cute.
It’s the real world, baby
The reality is that the world is a massive, planetary amalgamation of edge cases. If you like, you can think of it as one giant squirrel crossing the road (since the chickens are on sabbatical). What’s more, none of these edge cases are isolated from other potential edge cases. The squirrel might not have run into the road if it hadn’t been scared by an errant soccer ball. The small child might not have tried to run toward the puppy if there had been no puppy, or if it had been a larger dog.
Except that these are not edge cases. Everything that happens, to us, near us, or even somewhere far away from us, is simply the real world, and it’s all deeply interconnected. Fundamentally, of course, the squirrel is not an edge case. It’s a mammal acting as it has for its 36 million years of existence. The bears in the photo above are not edge cases either. Nor is the uprooted tree or the handheld stop sign or the panicked mother in our discussion above.
The term “edge case” refers not to the actual animal or plant or thing, but to the frequency and (ir)regularity that its existence, activity, or occurrence represents. In other words, a squirrel crossing the road might represent an edge case in some scenarios, but a fairly routine occurrence in others. Think New York City vs Charleston, West Virginia. Or the bears in Iowa vs Alberta.
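If it helps to see that distinction as code: in the minimal Python sketch below (the observation counts are invented, and the 1% cutoff is an arbitrary assumption), whether “squirrel” gets labeled an edge case depends entirely on how often it shows up in a given context, not on the squirrel.

```python
# "Edge case" describes frequency within a context, not the thing itself.
# The observation counts and the 1% cutoff are invented for illustration.
from collections import Counter

def edge_cases(observations, threshold=0.01):
    """Return the events that make up less than `threshold` of what's seen."""
    counts = Counter(observations)
    total = sum(counts.values())
    return {event for event, n in counts.items() if n / total < threshold}

new_york_city = ["vehicle"] * 800 + ["pedestrian"] * 195 + ["squirrel"] * 5
charleston_wv = ["vehicle"] * 300 + ["deer"] * 40 + ["squirrel"] * 660

print(edge_cases(new_york_city))  # {'squirrel'}: rare here, so it's an edge case
print(edge_cases(charleston_wv))  # set(): the very same squirrel is just Tuesday
```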
It’s there that the core of the matter lies. It’s one small associative step from recognizing the source or cause of an edge case as a real, material living being or thing, to linking the term edge case with that being or thing and conceiving of them as one and the same. For all AI intents and purposes, the squirrel, the stop sign, and the child effectively become the edge cases.
There is nothing strange or wrong about employing terms that aid in identifying and resolving mathematical or engineering problems and challenges. In fact, it’s necessary. Engineering, logic, and mathematics all need their own language to function and communicate, just as do law, philosophy, neuroscience, literature, art, music, and every other human endeavor. There are entire books about semantics and the role it plays in human society.
The danger we need to watch for is when the semantics we employ start to displace our sense of our own reality. In the industrialized world, and increasingly in the developing world, technology is so deeply interwoven into nearly all aspects of our lives that we have begun to normalize tech-related terms, metaphors, and analogies. We talk about minimizing screentime before bed as part of our “sleep hygiene.” As if it were a natural part of our nightly facial. We’ve got a whole new lexicon of social acronyms, inspired—necessitated?—by the size of smartphone keyboards and the speed-typing nature of online conversations. Some people actually pronounce the acronyms LOL and OMG in their IRL conversations. (BRB might be a stretch but I’m sure someone somewhere has tried.) Say “swipe left” and “dating” and pretty much everyone will know what you mean. Cute, but maybe not so cute.
We are being systematically programmed to think, react, and interact in more and more predictable ways, to smooth the marketing funnels, help take the edge off those pesky edge cases, and feed the ever-hungry algorithmic feedback loops. (Shoshana Zuboff, the author of The Age of Surveillance Capitalism, has a few thoughts on that—700 pages’ worth, and they’re worth their weight in gold). There’s a lot more to unpack there… stay tuned for an upcoming post.
Coming full circle
And so we circle back to the argument made at the beginning of this essay, which is that the human mind and the algorithm are not as similar as the AI companies and the media would love to have us think. Do I think AI is an extraordinary tool that can do wonders for humanity and the planet (assuming, again, it’s utilized with beneficial intentions)? Absolutely. Do I think the technologists are well on their way to creating a master AGI (artificial general intelligence) that will solve all our problems? No. There is a lot more to the human experience than the processing of information and the organization and completion of tasks.
If evolution had organized biological life into the neat little data points and packets the algorithms seem to prefer, we wouldn’t be here. Accidents, mutations, variations, anomalies… this is the wondrous creativity and infinite imagination of nature, of evolution, of life itself. To call a given event or occurrence “edge case” might be technically accurate, in the context of a larger set of events or occurrences, but it sucks the life out of it.
Think of any field, profession, or sector. Healthcare. Education. Finance. Agriculture. Sports. All the arts. Everything we do and experience is a rich, organic, dynamic system in constant flux, constant motion, constant evolution. Yes, there is always a “core” that defines the basic concept and experience of that field, but enveloping it and coursing through it are waves and currents and swirls of edge cases. It’s precisely the edge cases that help define the field, sector, or profession. This is what gives flavor to life. What spices up our experience and deepens our understanding of the world and our relationships with one another.
To the engineers and technologists, I ask: are we spending too much of our precious time and resources trying to parse every aspect of the human experience into bits and pixels? Artificial intelligence is an extraordinary tool, but like any tool, it is only as good as the purpose it is given. A wheel is only a decorative circle if it’s not employed in transportation, mathematics, or manufacturing. Shredded, soaked, and pressed plant fibers aren’t much to look at until someone takes a brush or pen to them and writes a book (or prints some cash). And who doesn’t love some fine fermented grapes! AI can work wonders in finance, health, biotech, automation, manufacturing, and supply chain management—in areas where it serves as support for the human experience. When it is deployed to replace or co-opt the very bedrock of the human experience, formed by creativity, imagination, and ingenuity, it ceases to be just a tool and becomes an agent of cultural and societal decay.
The bull’s eye misses the point
We don’t need AI to control every aspect of our lives. It’s great and useful and valuable in specific sectors and use cases—what engineers call narrow AI—but the human experience is best left to, well, humans. Personally, I neither want nor need a chatbot to write my novels, stories, or posts like these for me. I will be the last human writer standing tall against the call for us storytellers to become “prompt engineers.” I also happen to enjoy the physical company of my fellow edge-case-generating humans, and I know for a fact I’m far from the only one. If everything worked like clockwork and everyone had the same cookie-cutter personality, we would all get bored in a hot minute. It’s the edge cases that keep us on our toes, make life interesting, and give us things to talk about. How often have you come home bursting to tell your partner about the perfectly non-eventful drive home you just had? Now, that time you saw a squirrel darting backwards across the road…
So the question becomes: do we allow the algorithms to bleed us dry and render us data puppets, mechanically jostled from one reaction to another by over-engineered viral posts, or do we embrace the warm, churning chaos of unpredictability that makes us who we are, algorithms be damned? After all, chaos in the natural world is what drives beauty, nonlinearity, and innovation. How very edgy.
Squirrels, capable of extraordinarily rapid random multidirectional movement, are the greatest source of sudden onset road panic in the animal kingdom. Someone really needs to do a study to prove this scientifically.