
New Delhi: This year, major world democracies including the United States, the United Kingdom and India are set to hold elections. OpenAI has implemented several policy modifications to ensure that its generative AI technologies, including ChatGPT, DALL-E and others, do not pose a threat to the integrity of the democratic process during the upcoming electoral events.

In a blog post, OpenAI has outlined measures to ensure the safe development and use of its AI systems, particularly during the 2024 elections in major democracies. Its approach involves prioritizing platform safety by promoting accurate voting information, enforcing responsible usage policies and enhancing transparency, with the aim of preventing potential misuse of AI to influence elections.

The company is actively working to anticipate and prevent potential abuses, including misleading “deepfakes,” large-scale influence operations, and chatbots impersonating candidates. OpenAI does not allow its technology to be used for political campaigning and lobbying. The company also restricts the creation of chatbots that simulate real individuals, such as candidates or local government representatives.

The San Francisco-based AI company will not permit applications that dissuade individuals from engaging in the democratic process, such as those that discourage voting or misrepresent voting qualifications. OpenAI has also revealed plans to introduce a provenance classifier aimed at helping users identify images created by DALL-E. The company has indicated that this tool will soon be released for initial testing, with the first group of testers comprising journalists and researchers.

Before this announcement, Meta, the owner of prominent social media platforms such as Facebook and Instagram, had already prohibited political advertisements from using its generative AI-based ad creation tools. That decision was based on the perceived “potential risks” associated with the emerging technology.

“We believe this approach will allow us to better understand potential risks and build the right safeguards for the use of Generative AI in ads that relate to potentially sensitive topics in regulated industries,” Meta wrote in a blog post on its website.