MIT researchers have developed an AI system to fight the spread of disinformation online

Researchers have developed an AI system that could help combat disinformation on social media. — ETX Studio pic



CAMBRIDGE, June 9 — Disinformation campaigns are becoming increasingly frequent on social networks, especially in the run-up to elections. Such campaigns are capable of manipulating opinion, reinforcing conspiracy theories or even swaying elections.

To counter this new danger, Massachusetts Institute of Technology (MIT) researchers have designed an artificial intelligence system capable of identifying fake news with great accuracy.

In an effort to fight online disinformation and clean up the internet, a team of MIT researchers has launched a programme called “Reconnaissance of Influence Operations (RIO).”

Their aim was to create a system capable of automatically detecting disinformation and the accounts that spread it. Its success rate is around 96 per cent.

The project first originated in 2014, when the team was studying how social media could be exploited by malicious groups.

As part of their work, the researchers noticed increased and unusual activity from accounts that appeared to be driving pro-Russian stories.

As a result, the team decided to launch the programme to study how similar techniques might be used on social media during the 2017 French presidential election.

In the 30 days leading up to the vote, the team compiled 28 million posts from more than one million accounts relating to the election.

For example, the study detected a large number of hostile accounts spreading fake news around the #MacronLeaks narrative.

The team of researchers also looked at the impact of disinformation messages. Using the RIO programme, they were able to find out whether an account was contributing to the spread of fake news and what impact this had on the wider network. RIO also has the ability to help predict how different countermeasures might stop the spread of a given disinformation campaign.

Hostile online influence operations

Edward Kao, a member of the research team, told MIT News: “If you are trying to answer the question of who is influential on a social network, traditionally, people look at activity counts. What we found is that in many cases this is not sufficient. It doesn’t actually tell you the impact of the accounts on the social network.”

To address this, Kao developed a statistical approach, outlined in the study, to determine whether an account is spreading disinformation and the extent to which it causes the network, as a whole, to change and amplify the message.

The RIO system collects relevant data, identifies potential influence operation narratives, and classifies accounts based on their behaviour and content.

It then builds a narrative network to estimate the impact of these accounts on spreading specific narratives across social media.
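The distinction Kao draws between raw activity counts and network impact can be illustrated with a toy sketch. Everything below is hypothetical and greatly simplified — RIO’s actual method is a statistical model, not this graph traversal — but it shows why a prolific account can matter less than one whose message cascades through resharing.

```python
from collections import defaultdict, deque

def downstream_reach(edges, source):
    """Count accounts reachable from `source` along reshare edges (BFS).

    A toy proxy for "impact": how far a narrative spreads beyond the
    account that posted it, regardless of how often that account posts.
    """
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    seen = {source}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen) - 1  # exclude the source itself

# Hypothetical network: account "A" posts heavily but is barely reshared;
# account "B" posts once and its message cascades widely.
reshare_edges = [
    ("A", "x1"),
    ("B", "y1"), ("y1", "y2"), ("y2", "y3"), ("y1", "y4"),
]
post_counts = {"A": 50, "B": 1}

for acct in ("A", "B"):
    print(acct, "posts:", post_counts[acct],
          "reach:", downstream_reach(reshare_edges, acct))
```

Here "A" makes 50 posts but reaches only one other account, while "B" posts once and reaches four — the kind of case where, per Kao, activity counts alone are not sufficient.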

These hostile online influence operations, as described in the study, are facilitated by the relatively low cost, near-unlimited scalability and automation with which disinformation can be spread on social media.

It remains to be seen if and how this system can be implemented in order to control the spread of fake news. — AFP
