Here’s how ChatGPT maker OpenAI plans to stop election misinformation in 2024


NEW YORK (AP) — ChatGPT maker OpenAI has outlined a plan to prevent its tools from being used to spread election misinformation, as voters in more than 50 countries prepare to cast their ballots in national elections this year.

The safeguards outlined by the San Francisco-based artificial intelligence startup in a blog post this week include a mix of pre-existing policies and newer initiatives to prevent the misuse of its wildly popular generative AI tools, which can create novel text and images in seconds but can also be weaponized to craft misleading messages or convincing fake photos.

The steps will apply specifically to OpenAI, only one player in an expanding universe of companies developing advanced generative AI tools. The company, which announced the measures Monday, said it plans to "continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency."

It said it will ban people from using its technology to impersonate real candidates or governments, to misrepresent how voting works or to create chatbots that discourage people from voting. It also said that until more is known about the persuasive power of its technology, it won't allow users to build applications for political campaigning or lobbying.

Starting "early this year," OpenAI said, it will digitally watermark AI images created with its DALL-E image generator. This will permanently mark the content with information about its origin, making it easier to identify whether an image that appears elsewhere on the web was created using the AI tool.


The company also said it is partnering with the National Association of Secretaries of State to steer ChatGPT users who ask logistical questions about voting to accurate information on that group's nonpartisan website.

Mekela Panditharatne, counsel in the democracy program at the Brennan Center for Justice, said OpenAI's plans are a positive step toward combating election misinformation, but their effectiveness will depend on how they are implemented.

"For example, how exhaustive and comprehensive will the filters be when flagging questions about the election process?" she said. "Will there be items that slip through the cracks?"

OpenAI’s ChatGPT and DALL-E are some of the most powerful generative AI tools to date. But there are many companies with equally sophisticated technology that don’t have as many election misinformation safeguards.

While some social media companies, such as YouTube and Meta, have introduced AI labeling policies, it remains to be seen whether they will be able to consistently catch violators.

"It would be helpful if other generative AI firms adopted similar guidelines so there could be industry-wide enforcement of practical rules," said Darrell West, senior fellow at the Brookings Institution's Center for Technology Innovation.

Without voluntary adoption of such policies across the industry, regulating AI-generated disinformation in politics will require legislation. In the U.S., Congress has yet to pass legislation regulating the industry's role in politics despite some bipartisan support. Meanwhile, more than a third of U.S. states have passed or introduced bills to address deepfakes in political campaigns as federal legislation stalls.

OpenAI CEO Sam Altman said that even with all of his company's safeguards in place, his mind is not at ease.

"I think it's good we have a lot of anxiety and we're going to do everything we can to get it as right as we can," he said during an interview Tuesday at a Bloomberg event on the sidelines of the World Economic Forum in Davos, Switzerland. "We're going to have to watch this incredibly closely this year. Super tight monitoring. Super tight feedback loop."

The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP's democracy initiative here. The AP is solely responsible for all content.

Copyright 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

