Latest News

AI disinformation: humans worse than detection software at distinguishing real and AI-generated content

04/23/2025 04:59:35 AM News

The AI-generated Disinformation in Europe and Africa report unpacks the dangers of artificial intelligence in elections and wars, and finds that there is no 100% reliable method of detecting AI-generated content.

Sizwe sama Yende


A few weeks ago, a video of Kaizer Chiefs players smooching after scoring a goal caused a bit of a stir on social media.

The less discerning among us believed it was a real celebration, and the usual football banter ensued, even eliciting homophobic comments.

Lately, Russian President Vladimir Putin has appeared in videos purportedly showing him on holiday, getting cosy with Hollywood actress Angelina Jolie.

These are the effects of artificial intelligence (AI), which, while it brings many economic opportunities, is equally aiding disinformation already given a boost by the advent of social media networks.

AI disinformation has pervaded more serious areas of our lives, such as elections and internal and state-to-state conflicts, where, unlike a sporting event that could have been watched live on TV, it can have catastrophic consequences for people’s judgements.

Beyond deepfakes – AI-produced or manipulated video or audio content – the challenges spawned by AI disinformation are many. South African judges and magistrates recently sniffed out error-riddled documents filed with the use of ChatGPT, and the implicated lawyers have had to be reported to the Legal Practice Council.

A study by Karen Allen, a former Africa and Middle East BBC correspondent, and Christopher Nehring, director of intelligence at the Cyberintelligence Institute in Germany, conducted under the auspices of the Konrad Adenauer Stiftung Media Programme Sub-Saharan Africa, says that to date there is no “100% accurate detection method for AI-generated content”.

“Humans are worse than detection software in correctly distinguishing between AI-generated content and human-generated content,” the study, titled AI-generated Disinformation in Europe and Africa, finds.

The study indicates that the African landscape is fertile ground for AI disinformation to flourish due to low levels of trust in traditional media. AI-powered news websites, which may seek to establish equivalence with established media outlets, stand a good chance of gaining traction on the continent.

But this does not mean that Europe is immune to disinformation: evidence shows the wars in Ukraine and Gaza and the 2024 Olympics have been primary targets of AI disinformation.

“The development of artificial intelligence (AI), especially generative AI to synthetically create texts, images, audio and video have brought manifold possibilities for economic development most notably in healthcare, agriculture, education and finance,” the study says.

“However, the same technology is helping to shape the rapidly evolving world of disinformation perpetrated by local actors, nation states and their proxies.”

These digital plagues, the authors said, unfortunately continue to spread across the world and are becoming more and more dangerous.

“We urgently need more enlightenment, media literacy and counter-measures,” they said.

Moreover, the study points out the following:

·      Deepfakes have created a hotbed for fraud, including activities on the dark web, where deepfakes are used to blackmail victims, create pornographic videos and execute identity theft;

·      In SA, prominent radio and television news anchors’ images are used to give credibility to fraudulent campaigns;

·      Yahoo Boys in Nigeria are cloning military profiles to perpetrate fraud and romance scams, which create an aura of authority and authenticity, making it hard for victims to question them.

“In general, both in Africa and Europe, deepfake technology and other AI manipulations are significantly more often used in organised cybercrime, such as fraud and cyberbullying (most notably deep porn attacks targeting prominent female journalists, influencers and politicians) rather than political disinformation,” the report says.

AI-assisted disinformation has been used to influence voter behaviour and voter perceptions during elections.

Despite widespread fears, the report says, no elections in Europe over the past two years have been swung, overturned or decisively manipulated by AI-driven disinformation.

“That is, there is no empirical proof suggesting that any form of AI disinformation turned election results upside down. In Africa, it seems that only in Mauritius, where the ruling PM tried using the deepfake defence to discredit unwanted leaked information as being AI fake, did AI disinformation make a decisive difference.”

However, the PM’s attempt to use the deepfake defence backfired and caused him to lose an election that had seemed a sure win.

The report, however, warns that these results should not be interpreted as implying either that AI disinformation campaigns are harmless or that they have no impact.

“The long-term effects of massive disinformation, for example, the erosion of trust in democratic institutions, the normalisation of manipulative tactics, and deepening societal conflicts and polarisation, represent significant albeit indirect effects of disinformation and victories for malign actors.”

It adds: “This is also not to say that disinformation, including AI disinformation, did not shift voter behaviour at all.”

AI disinformation campaigns in Africa have often focused on undermining election authorities and processes.

Across 80 countries, it has become apparent that no disinformation actor or political campaign has yet leveraged all the supercharging features of generative AI for disinformation at the same time.

“This means genAI has been used in all its forms separately (for example, creation of content such as images, videos or text or creating and using fake bots) and not in combination (that is, for the fully automated, customised, personalised spread of high-quality fake content by realistic avatars and bots in combination with automated algorithmic manipulation attacks).”

The authors say that apocalyptic fears about the impact of AI disinformation may only come true if malign actors cross this line.
