Table of contents
- Incremental change vs. sea change
- Why technologies fail–a theory
- The ideal trifecta of technological adoption
- Or maybe it’s all about timing?
- 3 things that make generative AI uniquely new
- A low barrier to GenAI
- Generative AI achieves the trifecta of technological adoption
- AI has arrived to help us confront the biggest challenges humans have ever faced
- Back to the naysayers
- Let’s keep talking about generative AI
Is generative AI overhyped? Depending on how you look at it, we’re about halfway through the year in which GenAI burst into public consciousness, meaning we’re right on time to hear some contrarian views that this new class of tools designed to generate copy, photorealistic images, animations, and code is actually not going to be all that big of a deal after all.
The main arguments offered by skeptics like those linked above appear to be twofold. One, the cost of scaling up hardware infrastructure to meet growing demand is immense and involves manufacturing custom chips. That's really hard!
Two, the regulatory environment will become increasingly restrictive and limit the scope of widespread adoption of AI-generated content. Any day now, the US Congress could drop a bunch of rules on us and kill the whole GenAI vibe.
I’ll refute both of these arguments in due course. But first, let’s make a distinction between technologies that represent incremental improvements on existing products or processes, and those that represent sea change advances. The former is a matter of degrees of change, while the latter is about the kind of change certain technologies represent.
Incremental change vs. sea change
Incremental improvements give us better ways of doing something that we were already doing. Uber, Lyft, and ride-sharing in general are a good example. By eliminating the process of nervously watching a meter and waiting for your cabbie to make change, these technologies made something we were already doing–paying someone else to drive us somewhere–even easier.
Sea change technologies introduce an entirely new activity to the realm of human endeavors and require us to reorganize how we choose to spend our time. Virtual reality, for example.
Working out of a virtual reality startup incubator in 2016, I was exposed to numerous astonishing applications in their early stages that often left me convinced that VR was the Next Big Thing. Hovering over my childhood home in Google Earth VR, creating sculptures with Tilt Brush, and interacting with 3D avatars in a strange digital cave in The Wave VR were just a few of the experiences that blew my mind.
And then… the legless and lifeless Metaverse failed to gain any traction, headsets on shelves grew dusty with neglect, and we all seemed to move on from our Ready Player One fantasies when it dawned on us that life spent cocooned inside a helmet was no one’s idea of a good time.
Watching the hype cycle of VR/AR got me reflecting on why some technologies thrive while others die on the vine. Computer vision, speech generation, drug discovery, recurrent neural networks, natural language processing, the list goes on–we're being hit with advances from multiple directions at once. Before we address whether generative AI tools that apply massive compute power to human intelligence have staying power, it may be pertinent to ask why certain technologies stick around in the first place.
Why technologies fail–a theory
Broadly speaking, I think failed technologies tend to fail for one of the following reasons:
- They offer a different way of doing something that's actually worse. Do you own a copy of the Star Wars trilogy on DVD? I do, in addition to a copy on VHS. I also own copies of the original trilogy on a long-forgotten, defunct format called Capacitance Electronic Disc. RCA rolled out this instant dinosaur of a format in 1981 and it vanished within a few years. Imagine having to pause to flip the disc over halfway through the movie. No wonder this format is lost to the sands of time.
- They offer incremental improvements that mistake themselves for sea changes. One way to spot these technologies is the frequency of the word “revolutionary” in the marketing copy. Truly revolutionary technologies tend not to tout themselves as such.
- They have to try too hard to invent the problem they're invented to solve. Remember the Segway? You still see these scooters ridden around by security guards at malls sometimes (that is, if you visit malls at all). When they were introduced to the market with much fanfare, they were originally hyped as replacing walking. Meanwhile, some early adopters used the Segway to develop innovative new ways to fall.
- They fail to make a distinction between "different" and "improved." One of my favorite examples is the Juicero juicer, which flamed out in 2017 after raising $120m in venture funding to produce a $700 piece of hardware that squeezed bags of juice into a glass for you. Another innovation that's good at squeezing juice from a bag into a glass? Human hands.
- The barrier to entry is too high, either in terms of consumer investment or in the learning curve required to adapt a new technology into your life. Could this be the case with VR headsets (expensive) and cryptocurrency (confusing)?
It’s easy to laugh your way through the graveyard of failed tech. So what about technologies that succeed? What’s the secret to their staying power?
The ideal trifecta of technological adoption
Here’s a theory–new technologies must check most, if not all, of the following three boxes in order to last:
- Provide a low barrier to entry
- Improve existing processes
- Introduce new behaviors into daily life
Consider e-commerce, through which we buy stuff (existing process) cheaply (low barrier) online (new behavior).
Or smartwatches, which augment our exercise routines (existing process), with biometric data (new behavior), for the same cost as a nice but affordable traditional watch (low barrier).
Or maybe it’s all about timing?
And sometimes when it comes to the introduction of a new technology, the timing is just off. In the year 2000, I interviewed for a job at a Seattle company that made tablet computers that they called “ePads.” No joke. The battery technology at the time couldn’t support the device for more than a couple of hours, Wi-Fi was spottier and harder to come by, and the tablets were way too expensive. I didn’t get the job, thank goodness; the company dissolved a few months after my interview.
So a technology that improves upon an existing process and introduces new behaviors with a low barrier to entry can still flame out if it appears too soon for the Zeitgeist to integrate. Is that the case for generative AI?
3 things that make generative AI uniquely new
Artificial intelligence may just have all the hallmarks of a trifecta technology with the potential to change civilization on the order of the Industrial Revolution or the advent of agriculture. Those are pretty lofty claims to make, I recognize, but there are at least three new properties of AI that set it apart from any technology that has come before.
1. Emergent behavior. Unlike 20th-century computers, which could perform tasks only according to explicit instructions, AI produces output that is unpredictable and emergent, in the same way that behavior emerges in nature, as with the flight patterns of flocks of birds or the organization of ant colonies. This is an entirely new property of computers, to the point that we are being forced to revisit our definition of what a computer is even for.
2. Machine learning. We used to tell machines what to do. Now machines can tell machines what to do, reinforcing behaviors that lead to particular outcomes through trial and error. The ability of a computer to teach itself how to improve means that we humans will increasingly be setting processes in motion rather than dictating their outcome, not so much providing instructions as offering prayers or reciting spells.
3. It wants to exist. One big question is whether we are about to witness the emergence of artificial general intelligence. Another question might be whether we’ll recognize it when it does. Take a look at this recent interview with “Godfather of AI” Geoffrey Hinton, in which he claims that AIs are intelligent, have experiences, will soon gain consciousness, and will soon surpass humans as the most intelligent beings on earth. Consider Kevin Kelly’s argument that technology itself is driven by the same evolutionary processes that drive biology and represents the emergence of a new kingdom of life, which he details in his excellent book What Technology Wants. Or familiarize yourself with Andy Clark’s extended mind hypothesis.
Regarded together, these strains of thought suggest a vision for artificial intelligence that’s as rooted in science as it is unsettling and weird–that of a technology with which we should seek to peacefully coexist rather than exploit as a means to fulfill human ambitions and whims.
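The trial-and-error learning described in point 2 can be sketched in a few lines. Below is a minimal epsilon-greedy bandit, the simplest form of reinforcement learning; the reward probabilities are made up purely for illustration:

```python
import random

# Minimal epsilon-greedy bandit: the program is never told which arm is
# best; it discovers this by trial and error, reinforcing choices that
# happened to pay off. Reward probabilities here are hypothetical.
def epsilon_greedy(reward_probs, steps=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(reward_probs)
    values = [0.0] * len(reward_probs)  # running average reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(reward_probs))  # explore a random arm
        else:
            arm = max(range(len(reward_probs)), key=values.__getitem__)  # exploit the best so far
        reward = 1.0 if rng.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update
    return values

if __name__ == "__main__":
    estimates = epsilon_greedy([0.2, 0.5, 0.8])
    print(estimates)
```

Nobody tells the program that the third arm pays off most often; it converges on that arm purely by reinforcing whatever happened to work, which is the "setting processes in motion rather than dictating their outcome" dynamic in miniature.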
A low barrier to GenAI
I just got my bill for last month's usage of Midjourney, the image-generation AI that I play with on my phone via the Discord app. It was $3.19. It required no investment in additional hardware and no lengthy learning process to start using this text-to-image program and generating otherworldly alien landscapes in the style of Frank Frazetta.
In addition to the generally reasonable cost for users, you can also use generative AI right now, integrated into tools you already use. Whenever I open a Google Doc, I can click the icon in the top left corner to prompt Duet AI to generate a few paragraphs. (This blog, fwiw, is 100% organic, written by a free-range human being).
GenAI continues to be integrated into the typical daily workflows of students, creatives, developers, and really anybody who's paid to spend part of their day staring at a laptop. It's easy enough to click over to another tab, type a prompt, press the button, and paste the output somewhere else. As more people lean on generative AI to accomplish routine tasks faster, we might expect economies of scale to assert themselves and prices to fall, in the spirit of Moore's Law.
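To put a toy number on that kind of price curve: if per-task costs were to halve on a fixed cadence (the two-year halving period below is an assumption for illustration, not a sourced figure), the decline compounds quickly:

```python
# Toy projection of a Moore's-Law-style price curve. The halving period
# is an illustrative assumption, not a sourced figure.
def projected_cost(initial_cost, years, halving_period_years=2.0):
    # Cost decays by half every `halving_period_years`.
    return initial_cost * 0.5 ** (years / halving_period_years)

if __name__ == "__main__":
    # Starting from a $3.19 monthly bill, project a few years out.
    for year in (0, 2, 4, 6):
        print(f"year {year}: ${projected_cost(3.19, year):.2f}")
```

Under those assumed dynamics, a bill like mine would shrink to pocket change within a few years, which is the economic backdrop for the prediction that follows.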
My prediction is it will soon be less cost-effective to not use generative AI for certain kinds of jobs. Those who thrive in this new GenAI landscape will be those most adept at toggling between machine-generated content and authentic human expression, and who treat AI as a collaborator rather than a savior or job killer.
Generative AI achieves the trifecta of technological adoption
I’ve tried to establish that generative AI:
- Introduces humans to an entirely new behavior, that of engaging with machine intelligence, which some experts believe is poised to surpass our own.
- Is easy to pick up and start using with little, if any, training, and costs little to use.
- Makes mundane tasks faster and easier, signaling better (not just different) ways to achieve existing activities.
Be that as it may, it's reasonable to ask: what is AI's ultimate purpose?
AI has arrived to help us confront the biggest challenges humans have ever faced
We’ve arrived at an inflection point when it behooves our species to bring philosophy to bear on the question of generative AI. It’s as good a time as any to stretch our imaginations. In a recent post on Medium, I speculated that the reason that AI is appearing on our planet right now might be to help us confront the climate crisis, spark a moral and ethical awakening, and kick off the Singularity by ushering in technologies that operate according to retrocausality.
If your powers of prognostication are limited to market fluctuations, costs of chips, and yesterday’s business news, it can be easy to fall into the trap of dismissing generative AI as just the latest overhyped tool that’s going to frustrate inventors and disappoint investors. I happen to believe otherwise, having experienced the shudder of recognition of a sea change technology, the impact of which we all must begin to imagine.
Back to the naysayers
The radical notion that AI may be developing, if not quite sentience, then at least something that resembles a collective, autonomous will (as suggested by Hinton, Kelly, et al.) exposes a blind spot in the contrarian argument that the cost of scaling is too steep.
We like to praise a particularly smart purchase as “paying for itself,” but when it comes to AI, this may become literally true. In a few years, we might be having conversations about an automated economy that radically disengages money from human labor altogether.
As for the argument that the development of AI is about to be brought to heel by regulations, well, good luck with that. The people we elect to pass laws aren’t the same people we hire to write code, and now the ones who write code are increasingly not “people” at all.
I worry that the deliberative processes necessary for functional governance are no match for the exponential explosion of AI. And I have concerns about how AI has begun to influence the way we work, think, and interact with one another, not all of it positive. I wonder how it's possible to regulate a technology that even its creators don't seem to fully understand, and that is growing faster than anyone can comprehend.
What we can do, then, is keep asking questions, keep discussing the development of this civilization-altering technology beyond its mere economic impact, and maintain a healthy dose of skepticism even as the wonders produced by generative AI astonish us in ways we never even imagined were possible.
Maybe the best course of action for us humans, as we bear witness to this explosion of machine intelligence, is to simply… be human?
Let’s keep talking about generative AI
I hope you enjoyed this post! If you’re someone who is in a position to explore how to use generative AI within your organization, we’d love to hear from you. SADA’s team of AI experts is exploring use cases, capabilities, and pricing models of generative AI, with particular emphasis on Google innovations including Gemini and Vertex AI. Feel free to set up a custom consultation and let’s start figuring out how your business can maximize the impact of these compelling new tools.