The past year has been a roller coaster in the world of artificial intelligence, and no doubt many people have been dazed by the pace of developments, reversals, constant hype, and equally constant fearmongering. But let's take a step back: AI is a powerful and promising new technology, but the conversation around it isn't always honest, and it generates more heat than light.
Artificial intelligence is interesting to everyone from PhDs to elementary school kids, and for good reason. Not every new technology makes us question the fundamental nature of human intelligence and creativity. And not every new technology lets us create an infinite variety of laser-wielding dinosaurs.
This broad appeal means that debates about what AI is, what it isn't, and what it should and shouldn't do have spread from trade conferences like NeurIPS, to niche publications like this one, to the covers of the magazines at the grocery store checkout. The threat and/or promise of AI (the coverage generally lacks nuance, which is part of the problem) has become a household topic seemingly overnight.
On the one hand, researchers and engineers who have worked in relative obscurity for decades on what they feel is important technology must feel validated by all this attention. But like the neuroscientist whose paper produces a headline like "Scientists pinpoint the brain's love center," or the physicist whose ironically nicknamed "God particle" sparks religious debate, it must be equally frustrating to watch one's work batted around by the hoi polloi (meaning unscrupulous commentators, not ordinary people) like a beach ball.
"AI can now…" is a very dangerous way to start any sentence (though I'm sure I've done it myself), because it's so hard to say what an AI is actually doing. It can certainly beat any human at chess or Go, and it can predict the structure of protein chains; it can answer any question confidently (if not correctly), and it can produce remarkably good imitations of any artist, living or dead.
But it's hard to say which of these things matter, and to whom, and which will be remembered in five or ten years as mere parlor tricks, like so many of the innovations we were told would change the world. AI's capabilities are widely misunderstood because they have been actively misrepresented, both by those who want to sell it or attract investment in it, and by those who fear or underestimate it.
There's clearly a lot of potential in something like ChatGPT, but those who build products with it would like nothing better than for you (or more likely a customer, or at least someone who will encounter it) to believe it's more powerful and less error-prone than it actually is. Billions are being spent to put AI at the heart of all kinds of services, not necessarily to improve them, but to automate them the way so much else has been automated, with mixed results.
Not to invoke the dreaded "they," but they (that is, companies like Microsoft and Google that have an enormous financial stake in AI's success in their core businesses, having invested so much in it) are not interested in changing the world for the better so much as in making money from it. They are companies, and AI is a product they sell or hope to sell. That's no slander against them, just something to keep in mind when evaluating their claims.
On the other hand, you have people who fear, with good reason, that their role will be eliminated not because of actual obsolescence but because some naive manager has swallowed the "AI revolution" hook, line, and sinker. People don't read ChatGPT's output and think, "Oh no, this program does what I do." They think, "This program looks like it does what I do, to people who understand neither of us."
That is genuinely dangerous when, as is often the case, your work is systematically misunderstood or undervalued. But it is a problem of management practices, not of artificial intelligence itself. Fortunately, we have bold experiments like CNET's attempt to automate its financial advice columns: the graveyard of such ill-advised efforts will serve as a grim warning to those contemplating the same mistakes in the future.
But it is equally dangerous to dismiss AI as a toy, or to say it will never do something because it cannot do it now, or because one has seen an example of it failing. It's the same mistake the other side makes, just reversed: proponents see one good example and say, "This shows it's over for concept artists," while opponents see one bad example (perhaps the very same one!) and say, "This shows AI can never replace concept artists."
Both are building their houses on sand. But clicks and eyeballs are, of course, the basic currency of the online world.
And so you have this duel of extremes that grabs attention not for being thoughtful but for being reactive and extreme, which should surprise no one, because as we've all learned over the last decade, conflict drives engagement. What looks like a cycle of hype and disillusionment is really just a see-saw atop an ongoing and deeply unhelpful debate about whether artificial intelligence is fundamentally this or fundamentally that. It feels like people in the 1950s arguing over whether to colonize Mars or Venus first.
The truth is that a lot of concept artists, not to mention novelists, musicians, tax preparers, lawyers, and members of every other profession watching AI's encroachment in one form or another, are actually interested and even excited. They know their work well enough to understand that even a very good imitation of what they do is fundamentally different from actually doing it.
Advancements in artificial intelligence are happening more slowly than you might think, not because there are no breakthroughs but because those breakthroughs are the product of years and years of work that was never as photogenic or shareable as stylized avatars. The biggest development of the past decade was "Attention Is All You Need," and it never made the cover of Time magazine. It's certainly notable when, as of this month or that, a model becomes good enough to do certain things, but think of it less as "crossing a line" and more as AI moving along a long, continuous gradient that even the most talented practitioners can't see more than a few months down.
All this is just to say: don't get caught up in either the hype or the pessimism. What AI can or can't do is an open question, and if anyone claims to know, check whether they're trying to sell you something. What people might choose to do with the AI we already have, though, is something we can and should talk about much more. I can live with a model that imitates my writing style; I'm only imitating dozens of other writers myself. But I'd rather not work for a company that uses one to calculate paychecks or decide who gets laid off, because I wouldn't trust the people who built that system. As always, the threat isn't the technology. It's the people using it.