AI Used to Spread and Counter Disinformation

Public opinion can be influenced by malicious actors trying to degrade public trust in media and institutions, discredit political leadership, deepen societal divides, and influence citizens’ voting decisions. The rise of AI-driven, algorithmically governed web platforms has led to the propagation of racial and gender biases, the reinforcement of existing beliefs through personalised content, infringement of user privacy, and manipulation of users and their data.

A double-edged sword, AI is used both to spread and to counter disinformation. Although the number of fact-checking initiatives quadrupled between 2013 and 2018, manual fact-checking cannot keep pace with the sheer volume of disinformation. In response, non-profits such as Full Fact and Chequeado have developed automated fact-checking (AFC) tools.
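To make this concrete, here is a minimal sketch of one AFC step: matching a newly seen claim against a store of previously fact-checked claims using TF-IDF cosine similarity. The example claims and the 0.3 threshold are illustrative assumptions, not the actual pipelines of Full Fact or Chequeado, which are considerably more sophisticated.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Previously fact-checked claims (illustrative stand-in for a real database).
checked_claims = [
    "Crime has doubled in the last five years.",
    "The new vaccine alters human DNA.",
]
new_claim = "The vaccine alters your DNA, officials say."

# Represent claims as TF-IDF vectors and compare by cosine similarity.
vectorizer = TfidfVectorizer().fit(checked_claims + [new_claim])
scores = cosine_similarity(
    vectorizer.transform([new_claim]),
    vectorizer.transform(checked_claims),
)[0]

best = scores.argmax()
if scores[best] > 0.3:  # illustrative matching threshold
    print(f"Possible match ({scores[best]:.2f}): {checked_claims[best]}")
```

A matched claim can then be routed to the existing verdict instead of being fact-checked from scratch, which is what makes AFC scale where manual checking cannot.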

AI solutions have proven effective at detecting and removing illegal, dubious, and undesirable content, and at identifying fake bot accounts. Google, Facebook, and Twitter rely on machine learning (ML) algorithms to spot and remove such accounts. According to Facebook, AI tools detect 99.5 percent of terrorism-related removals, 98.5 percent of fake accounts, 96 percent of adult nudity and sexual activity, and 86 percent of graphic-violence removals. Facebook also deploys tools to detect false stories that have already been debunked.
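As an illustration of the underlying approach (not any platform’s actual system), bot detection can be framed as supervised classification over behavioural account features. The features and toy data below are assumptions made for the sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features per account: [posts_per_day, follower_to_following_ratio,
# account_age_days, fraction_of_posts_containing_links] -- all illustrative.
X = np.array([
    [300, 0.01,   5, 0.95],  # bot-like behaviour
    [250, 0.05,  12, 0.90],  # bot-like behaviour
    [  4, 1.20, 900, 0.10],  # human-like behaviour
    [  7, 0.80, 450, 0.20],  # human-like behaviour
])
y = np.array([1, 1, 0, 0])  # 1 = bot, 0 = human

clf = LogisticRegression().fit(X, y)

# Probability that an unseen account is a bot.
print(clf.predict_proba([[200, 0.02, 30, 0.85]])[0, 1])
```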

Social media platforms rely on AI for the most repetitive moderation work and on human review for nuanced cases. In 2018, Facebook employed 7,500 human moderators to review user content.
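A minimal sketch of such human-in-the-loop triage, assuming a model that outputs a confidence score per item; the thresholds below are illustrative, not any platform’s actual policy.

```python
# Illustrative thresholds: high-confidence scores are auto-actioned,
# the ambiguous middle band is queued for human moderators.
def route(score: float) -> str:
    if score >= 0.95:
        return "auto-remove"
    if score >= 0.60:
        return "human-review"
    return "allow"

for score in (0.99, 0.72, 0.10):
    print(f"{score:.2f} -> {route(score)}")
```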

AI moderation has shortcomings: over-blocking of accurate content, misplaced censorship, inheritance of biases from the data it is trained on, and failure to detect nuances such as sarcasm. The complexity and opacity of “black box” ML models limit the explainability of automated algorithmic decisions, and users lack full control over the content they see.
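A toy example of the over-blocking problem: a naive keyword filter flags a fact-check that quotes a false claim just as readily as the disinformation itself. The banned-phrase list is hypothetical.

```python
# Hypothetical banned-phrase list.
BANNED_PHRASES = {"fake cure", "crisis actor"}

def is_flagged(post: str) -> bool:
    text = post.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

posts = [
    "Drink this fake cure and you'll never get sick!",  # disinformation
    "Fact-check: the 'fake cure' being shared online does not work.",  # debunk
]
for post in posts:
    print(is_flagged(post), "-", post)
# Both posts are flagged: the filter cannot tell the debunk from the claim.
```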

AI is used as artillery in ideological battles between democratic and autocratic states. Russia, China, and ISIS have used AI to spread disinformation campaigns. These campaigns utilize user profiles, often augmented with data consolidated by ventures like Cambridge Analytica, to customize the generated disinformation.

Deep-fakes

AI is also used to generate ‘deep fakes’: digitally manipulated audio or visual material realistic enough to be nearly indistinguishable from genuine recordings. Some depict politicians, such as Sarah Palin, making statements they never made. Researchers at the UN devised an AI model, trained on text from Wikipedia, that produces realistic speeches.
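To illustrate the underlying technique (not the UN’s exact setup), a general-purpose language model can already produce fluent, speech-like text off the shelf. The prompt below is an assumption; the example uses the publicly available GPT-2 model via the Hugging Face transformers library.

```python
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Distinguished delegates, the challenge before us today is",
    max_new_tokens=40,      # length of the generated continuation
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```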

Policy measures such as the EU’s GDPR return some algorithmic control to users and offer them insight into how companies use their data. Growing demands for algorithmic transparency are countered by tech companies’ unwillingness to expose proprietary code. Regulatory measures also risk handing big tech even more power to determine the content users see, and may stifle free speech.

Disinformation succeeds because it has an audience. Many European countries have launched anti-disinformation campaigns in schools. Media and information literacy programs encourage people to consume information critically and share content responsibly.

Countering mass disinformation requires funding for AI research and development. In addition to the €5 million pledged to address disinformation in 2019, the European Commission earmarked a further €25 million in 2020 for research and innovation projects that develop tools to identify content and analyse networks, and to better understand how information cascades spread across platforms.

Governments should strive to cooperate with technology companies, including social media platforms, to develop better filters that prevent the spread of disinformation. International organisations should encourage the sharing of information that identifies disinformation campaigns.