When Mark Zuckerberg testified before Congress last year, the Facebook CEO addressed lawmakers’ concerns over the dissemination of fake news during the 2016 US election by doubling down on technology.

“I am optimistic that over a five to 10-year period, we will have AI (artificial intelligence) tools that can get into some of the nuances—the linguistic nuances of different types of content to be more accurate in flagging things for our systems,” he told senators.

But if the way forward relies at least partly on solutions that aren’t human, can we be sure such systems won’t fall prey to the same biases and agendas that turned social media from a hoped-for free exchange of facts and ideas into what many now consider a serious threat to democracy? Not entirely, according to some of the programmers developing such systems.

Trust problem

There’s little debate that the phenomenon of “fake news”—real and imagined—is exacerbating deep distrust in the news media, especially in sources shared over social media platforms. Americans believe that some 65 percent of news distributed on social media is either “made up” or “cannot be verified as accurate,” according to a recent report by the Gallup and Knight foundations.

Such views aren’t unreasonable. Of 30 million tweets containing links to news outlets in the five months leading up to the 2016 election, 25 percent “spread either fake or extremely biased news,” one study found. Many of the tweets came from Russian sources, part of a deliberate effort to interfere in the race and undermine American democracy. The exposure was widespread: the tweets were viewed as many as 288 million times, according to another report.

Although a burgeoning industry of fact-checking websites with teams of analysts is working to rebut fake news, they typically take a long time to release their findings, often after the news cycle has moved on. Enter computer automation, which can scan huge volumes of text and on which fact-checkers are increasingly relying. Since the 2016 election, “we are seeing much more of an automation move in fact-checking, and a lot of this is to automate detection of false statements,” said Angie Holan, the editor of PolitiFact, a leading fact-checking platform run by the Florida-based Poynter Institute. Current methods include feeding articles into what’s called a computer-assisted reporting (CAR) text analyzer, but not every falsehood is caught. “I still find human analysts better at capturing the whole picture,” Holan said.
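To make the idea concrete, here is a minimal sketch, in Python, of the kind of first pass such a text analyzer might take: flagging sentences that contain checkable, numeric-style claims. The patterns and the flag_checkworthy function are invented for illustration and are far cruder than anything a production fact-checking tool would use.

```python
import re

# Illustrative only: toy heuristics for spotting "check-worthy" sentences.
# Real tools use trained models; these patterns are assumptions for the sketch.
CHECKWORTHY_PATTERNS = [
    r"\b\d+(\.\d+)?\s*(percent|%)",                  # statistics
    r"\b(million|billion|trillion)\b",               # large quantities
    r"\b(rose|fell|doubled|increased|decreased)\b",  # trend claims
]

def flag_checkworthy(article_text):
    """Return the sentences that look like verifiable factual claims."""
    sentences = re.split(r"(?<=[.!?])\s+", article_text)
    return [
        s for s in sentences
        if any(re.search(p, s, re.IGNORECASE) for p in CHECKWORTHY_PATTERNS)
    ]

print(flag_checkworthy(
    "Unemployment fell to 3.9 percent last year. The senator smiled."
))
# -> ['Unemployment fell to 3.9 percent last year.']
```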

AI would help the process by learning to analyze data and find patterns, rapidly producing results humans might miss.

Among the handful of groups spearheading such efforts is MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), which is collaborating with the Qatar Computing Research Institute (QCRI). The teams use various methods to analyze news media websites, associated Twitter accounts, source reputations, web traffic and other factors, then rank a source’s factuality as high, mid-level or low.
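To illustrate how such signals could feed a ranking, here is a minimal sketch of a source-level factuality classifier. Everything in it, the four features, the numbers and the training labels, is fabricated for the example; it is not CSAIL’s or QCRI’s actual feature set or data.

```python
# A sketch of source-level factuality prediction from site-wide signals.
# All features and data below are invented assumptions, not real measurements.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-outlet features: emotional extremity of article language,
# whether the Twitter account is verified (0/1), scaled web-traffic rank,
# and the share of all-caps words in headlines.
X_train = [
    [0.10, 1, 0.95, 0.01],  # established outlet
    [0.80, 0, 0.10, 0.20],  # clickbait-style site
    [0.40, 1, 0.50, 0.05],  # partisan but sourced
]
y_train = ["high", "low", "mid-level"]  # factuality labels

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

new_site = [[0.70, 0, 0.15, 0.18]]
print(model.predict(new_site))  # likely ['low'], given its similarity to row 2
```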

Another group, the FANDANGO Project, a European Union-funded effort, is designed for journalists and media professionals “to help them during the verification and fact-checking process of news pieces, images and videos that might be false, misleading or manipulated,” said María Álvarez del Vayo, a journalist at Civio—a Spanish transparency non-profit—and partner on the project.

Such projects aim to help journalists, and eventually news consumers, quickly ascertain the level of factuality that can be assigned to sources or articles. The technology can also help highlight political biases in news media.

The cutting edge of AI research centers on the development of deep learning algorithms. Previously, AI models relied heavily on classic machine learning: the use of statistical regression, clustering and other mathematical techniques to predict outcomes from trends in historical data. Human decision-making, such as choosing which features a model considers, is essential in such machine learning models.
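A toy example of that classic workflow, with fabricated data: a human picks the features, and a statistical model such as logistic regression learns weights from labeled historical examples.

```python
# Classic machine learning: humans engineer the features, the model fits them.
# The features and labels below are fabricated for illustration.
from sklearn.linear_model import LogisticRegression

# Human-chosen features per article: exclamation marks per sentence,
# number of cited sources, fraction of emotionally charged words.
X = [[0.0, 5, 0.02], [1.5, 0, 0.30], [0.2, 3, 0.05], [2.0, 1, 0.25]]
y = [0, 1, 0, 1]  # 0 = credible, 1 = suspect (labels assigned by humans)

clf = LogisticRegression().fit(X, y)
print(clf.predict([[1.8, 0, 0.28]]))  # likely [1]: resembles the 'suspect' rows
```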

By contrast, deep learning—technically still a subset of machine learning—builds on the concept of neural networks to enable AI to make predictions and decisions with far less human guidance. Loosely modeled on the human brain, neural networks are significantly expanding the capabilities of AI.
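For contrast, a toy-scale deep learning sketch in PyTorch, with assumed sizes: instead of hand-picked features, the network learns its own representation of the raw tokens. The three output classes mirror the high, mid-level and low factuality ranks mentioned above.

```python
# Deep learning contrast: the network learns features from raw token IDs.
# Vocabulary size, layer widths and the random input are assumptions.
import torch
import torch.nn as nn

vocab_size, embed_dim = 5000, 64

model = nn.Sequential(
    nn.EmbeddingBag(vocab_size, embed_dim),  # learns word representations itself
    nn.Linear(embed_dim, 32),
    nn.ReLU(),
    nn.Linear(32, 3),                        # high / mid-level / low factuality
)

token_ids = torch.randint(0, vocab_size, (1, 20))  # one 20-token article
logits = model(token_ids)
print(logits.shape)  # torch.Size([1, 3]): one score per factuality class
```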

Inherent bias

But can AI independently eliminate bias in its methods? That’s currently a topic of debate, with most in the industry agreeing the answer depends on how AI is developed.

Since the models, whether they use machine or deep learning, rely on humans to assemble and label their training data, some argue that the biases and perceptions of those developing AI will inevitably be reflected in the technology.

“Having unbiased datasets for training an AI model is definitely one of the most difficult tasks,” said Theodoros Semertzidis, a researcher at the Centre for Research and Technology Hellas (CERTH) and a partner on the FANDANGO Project.
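A fabricated illustration of the difficulty Semertzidis describes: if the labeled examples over-represent one viewpoint within the “fake” class, a model trained on them absorbs that skew as if it were signal.

```python
# Fabricated labels showing how dataset skew becomes model bias:
# three of the four "fake" examples come from one political leaning.
from collections import Counter

labels = [
    ("outlet_a", "left",  "fake"), ("outlet_b", "left",  "fake"),
    ("outlet_c", "left",  "fake"), ("outlet_d", "right", "fake"),
    ("outlet_e", "right", "real"), ("outlet_f", "right", "real"),
]

fake_by_leaning = Counter(lean for _, lean, tag in labels if tag == "fake")
print(fake_by_leaning)  # Counter({'left': 3, 'right': 1}): the skew a model inherits
```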

Holan agreed. Although AI’s potential benefit makes it a promising investment, she said, “as of now, it still requires a human touch.”

But even if AI were capable of unbiased fact-checking, it probably wouldn’t end the use of the term “fake news” to dismiss objective facts. That’s because the mere presentation of facts doesn’t necessarily sway people.

“Readers [are] reluctant to consider arguments that clash with their values and identities,” Álvarez del Vayo said. Confirmation bias, the psychological tendency to accept information that supports existing views, prompts many to dismiss facts that don’t fit their preconceptions. The push toward automation “imagines a technical solution for what is actually a human problem,” Holan said.

Still, although much remains to be done before AI is a reliable radar for detecting false and misleading reporting, developers see reason for optimism. “It can assist journalists in their day-to-day work,” Álvarez del Vayo said, “and by helping media outlets not to fall for hoaxes and distribute misinformation, it can help rebuild trust in traditional outlets.”

The postings on this site are the author’s own and don’t necessarily represent IBM’s positions, strategies or opinions.