DEEPFAKES FROM THE GAZA WAR CATAPULT FEARS ABOUT AI’S POWER TO MISLEAD – FAKE BABIES, REAL HORROR

By The Malketeer

The Gaza war has brought to light the disturbing use of deepfake images created with artificial intelligence (AI), particularly those depicting bloodied and abandoned infants. These manipulated visuals, viewed millions of times online, serve as a potent propaganda tool, exploiting people’s emotions and spreading false information about casualties and atrocities during the conflict.

While many misleading claims about the war don’t involve AI at all, technological advances are making such deception easier and more frequent. AI’s potential as a formidable weapon in misinformation campaigns is becoming increasingly evident, raising concerns about its future role in conflicts, elections, and other major events.

Experts such as Jean-Claude Goldenstein of CREOpoint predict that generative AI will make matters worse, driving a surge in manipulated pictures, videos, and audio. The Gaza conflict has already seen images recycled from other disasters alongside entirely new ones generated by AI programmes, with babies, children, and families pictured amid the chaos to heighten the emotional impact.

The effectiveness of these manipulations lies in their ability to tap into people’s deepest impulses and anxieties. Whether the image is a deepfake or an authentic photograph taken in a different context, the emotional response it provokes is the same. The more shocking the image, the more likely it is to be shared, unintentionally spreading the disinformation further.

Similar deceptive AI-generated content emerged during the Russia-Ukraine conflict in 2022, indicating the persistence of misinformation despite efforts to debunk it. With major elections scheduled in various countries, including the U.S., concerns are growing about the potential misuse of AI and social media to influence voters.

Lawmakers, such as U.S. Rep. Gerry Connolly, emphasise the need for investing in AI tools to counteract the threats posed by AI-generated falsehoods.

In response to these challenges, tech startups worldwide are developing programmes to detect deepfakes, add watermarks that verify where an image came from, and scrutinise text for specious claims that have been inserted into it.
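As a rough illustration of the watermarking idea only, the short Python sketch below hides and then recovers a provenance tag in an image’s least-significant bits using the Pillow library. The function names, file names, and the embedding scheme are assumptions made for this example; real provenance tools, such as C2PA-style content credentials, rely on cryptographically signed metadata and far more robust techniques than this toy approach.

from PIL import Image

def embed_watermark(src_path: str, tag: str, out_path: str) -> None:
    """Hide a short ASCII tag in the lowest bit of each pixel's red channel."""
    img = Image.open(src_path).convert("RGB")
    pixels = list(img.getdata())
    bits = "".join(f"{ord(c):08b}" for c in tag + "\0")  # trailing null byte marks the end
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the tag")
    stamped = []
    for i, (r, g, b) in enumerate(pixels):
        if i < len(bits):
            r = (r & ~1) | int(bits[i])  # overwrite the red channel's lowest bit
        stamped.append((r, g, b))
    out = Image.new("RGB", img.size)
    out.putdata(stamped)
    out.save(out_path, "PNG")  # lossless format, so the hidden bits survive

def read_watermark(path: str) -> str:
    """Recover the hidden tag, stopping at the null terminator."""
    img = Image.open(path).convert("RGB")
    bits = "".join(str(r & 1) for r, _, _ in img.getdata())
    chars = []
    for i in range(0, len(bits) - 7, 8):
        c = chr(int(bits[i:i + 8], 2))
        if c == "\0":
            break
        chars.append(c)
    return "".join(chars)

if __name__ == "__main__":
    # Hypothetical file names and tag, for illustration only.
    embed_watermark("original.jpg", "publisher:example-agency", "tagged.png")
    print(read_watermark("tagged.png"))  # prints "publisher:example-agency"

A scheme this simple is trivially destroyed by re-compressing or cropping the image, which is one reason production systems favour signed metadata and robust, invisible watermarks over plain bit-flipping.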

Factiverse, a Norwegian company, has created an AI programme that scans content for inaccuracies or bias introduced by other AI programmes, offering educators, journalists, and analysts a potential tool for identifying falsehoods.

However, experts like David Doermann caution that those utilising AI for deceptive purposes are often one step ahead. Responding effectively to the political and social challenges posed by AI disinformation requires a multifaceted approach involving improved technology, regulations, industry standards, and substantial investments in digital literacy programmes.

Merely detecting and removing manipulated content may no longer suffice, necessitating a broader solution to tackle the evolving landscape of AI-driven misinformation.

