Policymakers worldwide are expressing concerns about the potential misuse of AI-generated disinformation to manipulate voters and intensify divisions in the lead-up to significant elections in 2024, according to a report by the Financial Times.
These concerns have already materialized in Bangladesh, a nation of 17 crore (170 million) people headed for an election in January 2024. The campaign so far has been marked by a contentious power struggle between incumbent Prime Minister Sheikh Hasina and her rivals.
Both pro-government and pro-opposition news outlets and influencers in Bangladesh have reportedly been promoting AI-generated disinformation, especially deepfakes made using affordable tools offered by AI startups based in the US and Israel.
This trend underscores the challenges in controlling the use of such tools in smaller markets that may be overlooked by major American tech companies.
The Financial Times report quoted Miraj Ahmed Chowdhury, managing director of the Bangladesh-based media research firm Digitally Right, as saying that while AI-generated disinformation is still at an experimental stage, AI tools allow misinformation to be produced and disseminated at scale, posing a significant threat.
The Financial Times cites instances where politically motivated deepfakes or fake videos, often taking the form of news clips, are created using tools like HeyGen, a Los Angeles-based AI video generator. This tool enables users to produce clips featuring AI avatars for as little as $24 a month.
The disinformation exacerbates the already tense political climate in Bangladesh ahead of the upcoming elections.
Despite calls for action, tech platforms have responded with apathy even when confronted with evidence that these videos are fake.
A primary challenge in identifying such disinformation is the lack of reliable AI-detection tools, particularly for non-English content.
Sabhanaz Rashid Diya, founder of Tech Global Institute and former Meta executive, noted that the solutions proposed by major tech platforms, primarily focused on regulating AI in political advertisements, may have limited efficacy in countries like Bangladesh, where ads play a smaller role in political communication. She emphasized that the lack of regulation and selective enforcement by both platforms and authorities exacerbates the problem.
Diya also highlighted a greater threat: the possibility of politicians leveraging the mere potential of deepfakes to discredit information.
The ease with which a politician can dismiss a genuine piece of news as a deepfake, or claim "This is AI-generated" whenever questioned, adds a layer of confusion that challenges people's ability to distinguish truth from falsehood. As AI-generated content is weaponized, particularly in the global south, the challenge lies in addressing how it erodes the public's trust in information.
(With inputs from agencies)
from Firstpost Tech Latest News https://ift.tt/BKQIepk