Is This Real? AI-Faked Rishi Sunak Videos Invade Facebook, Spreading Misinformation
Facebook has carried more than 100 deepfake video ads featuring UK Prime Minister Rishi Sunak.
The findings come from Fenimore Harper, a company specializing in online communications, which researched the deepfaked political ads and published its results in a report.
According to the report, the deepfake advertisements, 143 in total, reached approximately 400,000 people and appeared to originate from several countries, including the US, Turkey, Malaysia, and the Philippines. The spend on the ads is estimated to have exceeded £12,929.
One striking example is a manipulated video of BBC news anchor Sarah Campbell falsely reporting a scandal involving Sunak and an app supposedly created by Elon Musk.
The advertisement claims that Elon Musk has launched an application capable of executing stock and market trades on its own, and quotes him saying, “I can personally attest to the trustworthiness of this investment platform.”
A deepfake advertisement posing as a BBC News page directed viewers to a fraudulent investment scheme. Source: Fenimore Harper.
According to Marcus Beard, the founder of Fenimore Harper, the accessibility and affordability of voice and face cloning technology have made it easy for anyone to exploit a person’s likeness for malicious purposes.
Beard also highlighted the poor moderation of paid ads on platforms such as Facebook, noting that although many of these advertisements breach Facebook’s advertising policies, only a small number have been taken down.
A UK government spokesperson said the government is working extensively to be ready to respond rapidly to any potential threat to its democratic processes, through its Defending Democracy Taskforce and dedicated government teams.
The spokesperson added that the Online Safety Act places further requirements on social platforms to swiftly remove illegal misinformation and disinformation, including AI-generated content, once they become aware of it.
The BBC, for its part, has stressed the need for vigilance against the rise of false information. A BBC representative said the organization launched BBC Verify in 2023 in response to the growing threat of disinformation. The initiative brings together a dedicated team using advanced verification techniques and open-source intelligence (OSINT) to fact-check videos, counter disinformation, analyze data, and explain complex narratives.
The representative said the BBC earns audience trust by showing how its journalists verify information and by publishing guidance on how to spot fake and deepfake content, and that the broadcaster acts quickly whenever fabricated BBC content is brought to its attention.
Meta, Facebook’s parent company, has also responded to the concerns. A Meta spokesperson said the company removes content that violates its policies, whether it is generated by AI or by a person.
The growing sophistication and frequency of AI-generated deepfakes, particularly in political campaigns, raises serious concerns about the trustworthiness of information in the age of AI.
Election outcomes under pressure due to AI-generated content
With major elections scheduled around the world in 2024 and 2025, deepfake misinformation in political campaigns is emerging as a pressing global problem.
Ahead of Slovakia’s election, fake audio clips of political figures surfaced, raising concerns about AI’s potential to manipulate political narratives, as reported by DailyAI.
Bangladesh’s national election faced similar challenges, with reports of pro-government groups using AI-generated news videos to sway public perception.
Leading technology companies such as Google and Meta have begun introducing rules for political ads that require disclosure when AI or digital alteration is used, but these measures frequently fall short.