OpenAI Rules Out Use in Elections and Voter Suppression

In a decisive move to combat election misinformation, OpenAI has taken a firm stance against the use of its generative AI tools in election campaigns and voter suppression tactics.

The announcement is framed as an essential step toward protecting the integrity of the numerous key elections scheduled for 2024.

Combating misuse with innovation and policy

OpenAI has initiated a strategy to safeguard its technology from being exploited to manipulate election outcomes. The company has established a specialized team focusing on election-related concerns, merging the expertise from various departments, including legal, engineering, and policy. This team’s primary aim is to identify and mitigate the potential abuse of AI in elections.

“We have a cross-functional effort dedicated to election work, bringing together expertise from our safety systems, threat intelligence, legal, engineering, and policy teams.”

The threat of misinformation in politics is not new, yet the advent of AI technology presents unprecedented challenges. Recognizing this, OpenAI is taking proactive measures. The company plans to employ a mix of techniques such as red teaming, user engagement, and safety guardrails. Specifically, its image generation tool, DALL-E, has been updated to prevent the creation of images depicting real people, including political candidates.

“DALL·E has guardrails to decline requests that ask for the image generation of real people, including candidates.”

OpenAI continuously revises its user policies to keep pace with the evolving landscape of AI technology and its potential misuse. Its updated safety policies now explicitly restrict the development of AI applications for political campaigning and lobbying. Additionally, measures have been put in place to prevent the creation of chatbots that mimic real people or organizations.

Enhancing transparency and accountability

A key component of OpenAI’s strategy is the introduction of a provenance classifier for its DALL-E tool. This feature, currently in beta testing, can detect images generated by DALL-E. The company aims to make this tool accessible to journalists, platforms, and researchers to enhance transparency in AI-generated content.

“We plan to soon make it available to our first group of testers—including journalists, platforms, and researchers—for feedback.”

OpenAI is also integrating real-time news reporting into ChatGPT. This integration aims to provide users with accurate and timely information, enhancing transparency around the sources of information provided by the AI.

In a joint effort with the National Association of Secretaries of State in the U.S., OpenAI is working to prevent its technology from discouraging electoral participation. The partnership involves directing users of GPT-powered chatbots to reliable voting-information websites such as CanIVote.org.

Rivals follow suit in the AI race

OpenAI’s announcement has set a precedent in the AI industry, with rivals like Google LLC and Meta Platforms Inc. also implementing measures to combat misinformation spread through their technologies. This collective effort by industry leaders signifies a growing awareness and responsibility towards the potential impact of AI on democratic processes.

But is this enough? Charles King of Pund-IT Inc. raises a critical point, questioning whether these measures are timely or merely reactive. He argues that concerns about AI-generated misinformation have existed for years, and that OpenAI’s recent announcement could be seen as too little, too late. The criticism prompts a deeper reflection on the role and responsibility of AI developers in the political landscape.

“So at best, this announcement suggests OpenAI was asleep at the switch. But at worst, it resembles a hand-washing ritual that OpenAI can point to when generative AI hits the fan during upcoming global elections this year.”
