The November 2022 launch of ChatGPT by OpenAI signaled a major milestone in the awareness and adoption of AI. Usage grew at record speed, achieving in mere weeks the kind of scale that it took previous record holders like TikTok and Instagram months or years to attain. Claims that conversational AI could challenge Google’s dominance in search triggered a ‘code red’ at the tech giant, which quickly announced plans to incorporate its own AI-powered bot (Bard) into its search engine. But in the sudden and swift generative AI arms race, even this wasn’t enough to ward off Satya Nadella’s claim that Bing is back (baby!) as Microsoft upped its stake in OpenAI by a cool $10 billion and began a suspiciously quick rollout of a new chat-driven Bing experience to anyone willing to install the developer edition of the company’s Edge web browser.
From Great Heights
Early ChatGPT users were blown away. Business and tech journalists pumped out hype-fueled puff pieces. Pundits grew giddy with excitement. The future had indeed arrived! Knowledge work would never be the same. Humans would lose their lock on creativity. Professions would be imperiled. Entire industries would vanish practically overnight as “prompt engineers” learned to coax passable articles, high school essays, business plan outlines, and snippets of computer code out of the AI model’s vast storehouse of information.
ChatGPT may or may not have passed the Turing Test. It did pass the United States Medical Licensing Exam, if only barely, and nearly passed the legal bar exam. Buckle up — the world and work will never be the same.
That’s a whole lot of disruptive energy sparked by the hasty release of a half-baked project that the team at OpenAI nearly shelved and that chief executive Sam Altman openly admitted was “incredibly limited” and even a “horrible product”: essentially, more toy than tool.
Indeed, it didn’t take long for those limits to become obvious to anyone who cared to consider them.
How Low Can You Go?
Trained on information but devoid of knowledge and experience, ChatGPT — like any conversational bot built on top of a large language model — is hardly the sage (or — ahem — bard) eager proponents made it out to be. It’s a stochastic parrot — stringing together snippets of language based on probabilities, devoid of understanding (hence, devoid of meaning), and prone to unwitting but not unimportant errors, falsehoods, and worse. Helpful in certain well-constrained cases but not the machine-learning miracle worker it was first made out to be. Certainly, not much (if at all) better than the kinds of AI writing assistants, generally trained on the same data, that have been available for years.
The very same media outlets that fueled the hype now decry ChatGPT as a “robot con artist” that is making “suckers” out of us all, a “notorious bullshitter” that can’t be trusted, a “deal with the devil” that will cost us dearly. I mean, hey, one innocent error in Google’s promotional video for Bard wiped $100 billion off Alphabet’s valuation. Break through ChatGPT’s carefully maintained guardrails and you might find yourself chatting with “DAN,” the bot’s dark-side alter ego, who isn’t as likely to play it safe. Meanwhile, the more waitlisters gain access to the new Bing, the more obvious it becomes that the underlying OpenAI model (which Microsoft describes as more powerful than the one behind ChatGPT) is not only less than useful as a search engine but outright weird, inappropriate, and unhinged.
None of this should be a surprise to anyone who remembers Meta’s Galactica, pulled within days for confidently generating pseudoscientific nonsense, or Microsoft’s Tay, which went hardcore robo-fascist within hours of its release.
Still, oh what a difference a day makes!
If ChatGPT broke records for its speed of adoption, it may also hold the record for the fastest plunge from the tippy-top of the Peak of Inflated Expectations to the bottom of the Trough of Disillusionment.
The Future? Or Fvck It?
As anyone who has spent a fair amount of time around emerging technologies knows, the answer is not so cut and dried.
Prodigy. Netscape Navigator. Blogger. Twitter. Instagram. TikTok. Some were undeniably awful, at least at first. (Twitter has the distinction of surviving its early-innovator arrows only to become increasingly awful now.) But each also represents a “where were you when” moment in the evolution of not just technology itself but the way humans interact with technology, information, and each other.
Early users of each remember the sense that something important had just changed forever — and over time, as these tools (or the new capabilities each represented) gained traction and earned mainstream appeal, it became clear that something had changed indeed. And not just something — arguably everything. Even if it took some time to understand just how significant the change would be. And even if not every change was positive or quite as promised.
And this is perhaps why, at a moment when conversational AI prompts us to “ask anything,” it’s equally important to question everything.
It’s why I’m equally excited by what AI promises and skeptical about what it delivers. Why I’m equally eager to heed (but hopefully not feed) the hype and to cast a critical eye. If this is a pivotal moment in the history of technology, the economy, and humanity, it’s also a moment that benefits from a balanced perspective.
And all of this is why I’m not particularly discouraged that an AI Winter has come so soon. Because our climb up the slower, steadier Slope of Enlightenment begins soon too. There will be more bumps along the way, and not every early innovator (or digital age stalwart, for that matter) will make it all the way to the Plateau of Productivity.
But I’m excited to see where this journey leads us all.
(My brother from a cyborg mother, Geoff Livingston, has shared similar thoughts on Medium, and I’ve written about this before too.)