
WASHINGTON, Feb. 2 — Excitement over ChatGPT — an easy-to-use chatbot that can deliver an article or computer code on demand and within seconds — has sent schools into a panic and made Big Tech green with envy.
But behind the headlines, ChatGPT’s potential impact on society remains more complex and unclear. Here’s a closer look at what ChatGPT is (and isn’t):
Is this a turning point?
It is quite possible that the release of ChatGPT by California-based OpenAI in November will be remembered as a turning point in introducing a new wave of AI to the wider public.
What’s less clear is whether ChatGPT was really a breakthrough, with some critics calling it a brilliant PR move that helped OpenAI score billions of dollars in investment from Microsoft.
Yann LeCun, Meta’s chief artificial intelligence scientist and professor at New York University, believes that “ChatGPT isn’t a particularly interesting scientific advance,” describing the app as a “dazzling demo” designed by talented engineers.
Speaking to the Big Technology Podcast, LeCun said that ChatGPT is devoid of “any internal model of the world” and simply generates text word by word, based on its input and on patterns it has absorbed from the Internet.
“When working with these AI models, you have to remember that they are slot machines, not calculators,” warned Haomiao Huang of Kleiner Perkins, a Silicon Valley venture capital firm.
“Every time you ask a question and pull the arm, you get an answer that may be great… or not… failures can be very unpredictable,” Huang writes at Ars Technica, a technology news site.
Just like Google
ChatGPT is powered by a nearly three-year-old AI language model — GPT-3 from OpenAI — and the chatbot uses only a portion of its capabilities.
The real revolution is human-like chatting, said Jason Davis, a research professor at Syracuse University.
“It’s familiar, it’s talkative and guess what? It’s kind of like making a query on Google,” he said.
ChatGPT’s rock star-like success has shocked its creators at OpenAI, which received billions in new funding from Microsoft in January.
“Because of the magnitude of the economic impact we’re anticipating here, more scalability is better,” OpenAI CEO Sam Altman said in an interview with the StrictlyVC newsletter.
He said: “We rolled out GPT-3 almost three years ago… so the incremental update from that to ChatGPT, I felt should have been expected, and I want to do some more reflection on why that was kind of a miscalculation.”
The danger, Altman added, was bewildering the public and policymakers. On Tuesday, his company unveiled a tool to detect text generated by artificial intelligence, amid concerns from teachers that students might rely on AI to do their homework.
What now?
From lawyers to speechwriters, programmers to journalists, everyone is waiting breathlessly to see where the disruption caused by ChatGPT will land first, with a paid version of the chatbot expected soon.
For now, the first significant deployment of OpenAI’s technology will officially come in Microsoft’s software products.
Although details are scarce, most assume that ChatGPT-like capabilities will appear on the Bing search engine and in the Office suite.
“Think of Microsoft Word. I don’t have to write the essay, I have to tell Microsoft Word what I want to write with a prompt,” said Davis.
He believes influencers on TikTok and Twitter will be the first to embrace this so-called generative AI because going viral requires massive amounts of content and ChatGPT can make chores almost instantaneous.
This, of course, raises the specter of disinformation and spam produced on an industrial scale.
For now, Davis said, ChatGPT’s reach is limited by the computing power behind it, but once that is increased, the potential opportunities, and risks, will grow exponentially.
And just as with the never-ending imminent arrival of self-driving cars, experts disagree on whether this is a matter of months or years.
Mockery
LeCun said that Meta and Google have refrained from releasing AI as aggressively as ChatGPT for fear of “ridicule” and backlash.
Earlier, quieter language-based bots — Meta’s BlenderBot or Microsoft’s Tay, for example — were quickly shown to be capable of generating racist or inappropriate content.
He said the tech giants have to think twice before launching something that “would send crap” and disappoint. — Agence France-Presse