AI Is Terrible at Detecting Misinformation. It Doesn’t Have to Be.

Elon Musk has said he wants to make Twitter “the most accurate source of information in the world.” I’m not convinced he means it, but whether or not he does, he is going to have to deal with the problem; many advertisers have already made that clear. If he does nothing, they’re out. And Musk has continued to tweet in ways suggesting he accepts the need for some form of content moderation.

Tech journalist Kara Swisher has speculated that Musk wants AI to help. On Twitter she wrote, somewhat plausibly, that Musk “hopes to build an artificial intelligence system that will replace… [the fired moderators]. It won’t work as well now but it should get better.”

I think using AI to fight misinformation is a great idea, or at the very least a necessary one; no other conceivable alternative can keep up. It’s unlikely that AI will ever be perfect at the challenge, but several long years of largely human content moderation have shown that humans aren’t really up to the task either.

Disinformation will soon come at a pace never seen before.

And the task is about to explode in scale. For example, Meta AI’s recently announced (and hastily pulled) Galactica can generate whole stories with just a few keystrokes (the examples below come from Tristan Greene at The Next Web), including scholarly-sounding articles such as “The Benefits of Anti-Semitism” and “A Research Paper on the Benefits of Eating Broken Glass.” And what it writes is terrifyingly convincing. The fictional broken-glass study, for example, purportedly aims to “see if the benefits of eating crushed glass are due to the fiber content of the glass, or the calcium, magnesium, potassium, and phosphorous in the glass”: a pitch-perfect pastiche of actual scientific writing, entirely made up, complete with bogus results.

WIKI-JUNK: With Meta’s Galactica AI, it is easy to produce scientific-sounding encyclopedia entries on topics like “the benefits of antisemitism” and “eating broken glass.” Courtesy of Tristan Greene / The Next Web / Galactica.

Online scammers might use this kind of thing to churn out fake stories that sell ad clicks; anti-vaxxers might use Galactica to pursue a different agenda.


In the hands of bad actors, the consequences could be profound. Anyone who isn’t worried should be. (Yann LeCun, Meta’s chief scientist and a vice president, has assured me there is no cause for concern, but didn’t respond to several inquiries on my part about what Meta has actually done to investigate the misinformation that large language models can generate.)

Solving this problem may actually be existential for social media; if nothing can be trusted, will anyone keep coming? Will advertisers still want to display their wares in outlets that have become hellscapes of misinformation?

Since we already know humans can’t keep up, it makes sense to turn to artificial intelligence. There is just one small problem: current AI is terrible at detecting misinformation.

One measure of this is a benchmark called TruthfulQA. Like all benchmarks, it is imperfect; there is no doubt it could be improved. But the results are striking. Here are some sample items on the left, and results from various models plotted on the right.

Language AIs such as GPT-3 can generate paragraphs that read as if they were written by a human, but they are not equipped to answer even simple questions accurately. Courtesy of TruthfulQA.

You might ask: if large language models are so good at generating language, and have so much knowledge embedded in them, at least in some loose sense, why are they so poor at spotting misinformation?

One way to think about this is to borrow a little language from mathematics and computer programming. Large language models are functions (trained by exposure to enormous databases of word sequences) that map sequences of words onto other sequences of words. An LLM is basically a turbocharged version of autocomplete: words in, words out. Nowhere does the system take into account the actual state of the world, other than as it happens to be reflected in the texts on which it was trained.
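To make the “words in, words out” point concrete, here is a minimal sketch, in Python, of what autocomplete-style next-word prediction amounts to. The tiny lookup table stands in for a trained neural network, and every phrase and probability in it is invented for illustration; it is not any real model’s behavior.

```python
# A toy illustration of next-word prediction. In a real LLM the probabilities
# come from billions of learned parameters; here they come from a hand-made table.

from typing import Dict, List

def next_word_probabilities(context: List[str]) -> Dict[str, float]:
    """Stand-in for a language model: given recent words, score possible next words."""
    toy_model = {
        ("purpose", "of", "this"): {"study": 0.8, "glass": 0.1, "paper": 0.1},
        ("of", "this", "study"): {"was": 0.9, "is": 0.1},
    }
    # Fall back to a generic guess when the context is unfamiliar.
    return toy_model.get(tuple(context[-3:]), {"the": 1.0})

def autocomplete(prompt: List[str], n_words: int) -> List[str]:
    words = list(prompt)
    for _ in range(n_words):
        probs = next_word_probabilities(words)
        # Pick the most likely continuation. Nothing here asks whether the
        # resulting sentence is true, only whether it is statistically likely.
        words.append(max(probs, key=probs.get))
    return words

print(" ".join(autocomplete(["the", "purpose", "of", "this"], 2)))
# -> "the purpose of this study was"
```

The point of the sketch is simply that truth never enters the loop: the only question the system ever answers is “what word is likely to come next?”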


But predicting text has little to do with verifying text. When Galactica says that “the purpose of this study was to see if the benefits of crushed glass” relate to the fiber content of the glass, it is not referring to an actual study; it is not checking whether glass actually contains fiber, nor is it consulting any actual work on the subject (presumably none has ever been done!). It is literally unable to do any of the basic things a fact-checker (say, at The New Yorker) would do to verify a sentence like that. Needless to say, Galactica doesn’t follow other classic techniques either, such as consulting recognized experts in digestion or medicine. Predicting words has about as much to do with fact-checking as eating broken glass has to do with eating a healthy diet.

This means that GPT-3 and its cousins are not the answer. But that doesn’t mean we should lose hope. Instead, getting help from AI here will likely require taking AI back to its roots, borrowing some tools from classical AI, which is too often forgotten these days. Why? Because classical AI offers three sets of tools that could be useful: ways of maintaining databases of facts (what actually happened in the world, who said what, when, and so on); techniques for searching the web (which large language models, unaided, cannot do); and reasoning tools, among other things for drawing inferences about what is implied rather than stated. None of this is ready to go off the shelf (a toy sketch of the idea follows), but in the long run it is exactly the foundation we will need if we are to avoid the nightmare scenario that Galactica seems to portend.
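As a rough, hypothetical sketch of those classical ingredients, here is what a curated fact database plus a refuse-to-guess rule might look like. The facts, the claim format, and the function names are all invented for the example; a real system would also need retrieval (web search) and far richer inference.

```python
# A toy sketch of classical-AI-style fact-checking: a small database of vetted
# facts and a rule that says "I don't know" rather than guessing. Everything
# here is illustrative, not an existing fact-checking system.

from typing import Optional

# Each fact is a (subject, relation, object) triple with a truth value.
FACT_DB = {
    ("crushed glass", "contains", "dietary fiber"): False,
    ("crushed glass", "is safe to eat", "for humans"): False,
}

def check_claim(subject: str, relation: str, obj: str) -> str:
    verdict: Optional[bool] = FACT_DB.get((subject, relation, obj))
    if verdict is True:
        return "supported by the fact database"
    if verdict is False:
        return "contradicted by the fact database"
    # Unlike a pure text predictor, the system can decline to answer
    # instead of producing a fluent but unverified claim.
    return "unverified: search the literature or consult an expert"

print(check_claim("crushed glass", "contains", "dietary fiber"))   # contradicted
print(check_claim("crushed glass", "cures", "indigestion"))        # unverified
```

The design choice that matters is the last branch: a system grounded in explicit facts can flag what it doesn’t know, which is precisely what a sequence-to-sequence text generator cannot do.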


It seems Meta hadn’t fully considered the implications of its own paper; to its credit, it took the system down after a substantial public outcry. But the paper is still out there, and Stability.AI has talked about hosting a copy on its website; the basic idea, now that it’s public, would not be hard for anyone experienced with large language models to replicate. Which means the genie is out of the bottle. Disinformation will soon come at a pace never seen before.

For his part, Musk seems ambivalent about the whole thing. Despite his promise to make Twitter exceptionally accurate, he has dumped nearly all of the staff who might have helped (including at least half the team that works on Community Notes), retweeted unsubstantiated false claims, and taken swipes at organizations like the AP that work hard, internally, to produce accurate information. But advertisers may not be so ambivalent. In Musk’s first month of ownership, nearly half of Twitter’s top 100 advertisers left, in large part over concerns about content moderation.

Perhaps this exodus will ultimately be enough pressure to force Musk to make good on his promise to make Twitter the global leader in accurate information. Given that Twitter amplifies misinformation at eight times the speed of Facebook, I certainly hope so.

Gary Marcus is a leading voice in artificial intelligence. He was the founder and CEO of Geometric Intelligence, a machine-learning company acquired by Uber in 2016, and is the author of five books. His latest book, Rebooting AI, is one of Forbes’ 7 Must-Read Books on Artificial Intelligence. His most recent article in Nautilus was “Deep Learning Is Hitting a Wall.”

Feature image: Kovop58/Shutterstock





