What is the Potential Risk of Misinformation and Manipulation in Such a Historic Global Election Year?

Approximately four billion people, nearly half of the world's population, are headed to the polls this year across 50 countries. The outcomes of elections in countries such as the United States, India and the United Kingdom will have a significant bearing on the role of advanced technology, including artificial intelligence.

The wider accessibility of the internet and social media in many countries means that voters are more connected than ever with politicians and their platforms. People also have more exposure to news and daily updates than ever before. While this can produce a public that is better informed and more in touch with complex realities around the world, the parallel growth of misinformation and disinformation poses a tremendous challenge.

Many news outlets go to great lengths to verify information and validate sources before publishing eye-catching headlines. Others, however, are not so rigorous. This has contributed to an influx of misinformation from seemingly legitimate organisations, which has eroded public trust in the media.

Additionally, with some news outlets taking partisan stances, people can choose to get their information solely from sources that conform to their political views. By consuming only media that aligns with their existing beliefs, social groups often become more polarised: people are left with a narrow perspective that lacks alternative points of view or dialogue with others who see the world differently.

In its 2024 Global Risks report, the World Economic Forum noted: “The growing concern about misinformation and disinformation is in large part driven by the potential for AI, in the hands of bad actors, to flood global information systems with false narratives.”

Similarly, late last year, the United Kingdom's National Cyber Security Centre released a report highlighting its concerns that states such as China and Russia could use AI to manipulate voters and interfere with the country's elections. The report calls for additional safeguards, such as enhanced legal frameworks and funding for the research and development of technologies to mitigate the impact of malicious internet content.

There are concerns that disinformation, especially in the form of deepfakes, hyper-realistic media that convincingly portrays real people, could be used by malign actors to manipulate voters and exacerbate social divisions. We have already seen examples of this around Slovakia's election last year.

Days before voting was to take place, a video of one of the candidates claiming to have rigged the election began circulating on the internet. Although the video was quickly identified as fake, it had already been shared widely across social media platforms, with an unquantifiable impact on voters' behaviour. This foreshadows how easily voters could be manipulated by AI-generated deepfake content in upcoming elections around the world.

Experts are suggesting several strategies for companies and governments in response to the growing threat that disinformation poses to democratic elections in the internet age. At an individual level, improving media literacy through public campaigns and access to short courses could mitigate the impact of disinformation campaigns. Holding news outlets to a higher standard of fact-checking and source verification will also be important.

Many countries, including the United States, the United Kingdom and Australia, are working to develop enhanced regulations around technology, including AI. Such measures might include a requirement to label AI-generated content so that it is easier for users to identify while scrolling social media.

Since preventing the use of AI would be nearly impossible, legislators around the world are also lobbying major social media platforms, such as Meta, the parent company of Facebook, Instagram and WhatsApp, to do more to regulate disinformation and misinformation posted on their sites. Meanwhile, others are investing in the development of technology that can detect and address deep fake content.

The exponential growth of AI has made it difficult for governments to keep up with and regulate its use. AI and social media more broadly will undoubtedly play a role in upcoming elections around the world, as individuals, companies and governments push to ensure these technologies are used wisely.

Copyright for syndicated content belongs to the linked Source : IBTimes – https://www.ibtimes.co.uk/what-potential-risk-misinformation-manipulation-such-historic-global-election-year-1723288
