
New tool unveiled by OpenAI can identify fake images made by its AI model

May 9, 2024

The proliferation of deepfakes on social networks is a growing problem driven by artificial intelligence. According to Home Security Heroes' 2023 State of Deepfakes report, the number of fake and deceptive videos created with AI grew by 550% between 2019 and 2023. In response to findings like these, companies that build AI models have begun developing tools to detect AI-generated images and videos, and OpenAI has now joined them.

OpenAI, led by Sam Altman, recently announced a tool designed to identify images created by its image generator, DALL-E 3, with roughly 98% accuracy. The tool will initially be tested by a closed group of scientists, researchers, and non-profit journalistic organizations. OpenAI emphasizes the importance of establishing common standards for sharing information about how digital content was created, so that viewers can understand a piece of content's origin and creation method.
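OpenAI has not published a public API for this detector, so the sketch below is purely illustrative: it shows the general shape of such a classifier, which analyzes an image, produces a confidence score, and compares it against a threshold. The function name, score source, and threshold value are all hypothetical assumptions, not OpenAI's actual interface.

```python
# Hypothetical sketch of a score-plus-threshold AI-image detector.
# `ai_probability` stands in for the confidence score a real detector
# would compute from an image's pixels or embedded provenance metadata.

def classify_image(ai_probability: float, threshold: float = 0.5) -> str:
    """Map a detector's confidence score (0.0-1.0) to a human-readable label."""
    if not 0.0 <= ai_probability <= 1.0:
        raise ValueError("score must be between 0.0 and 1.0")
    return "likely AI-generated" if ai_probability >= threshold else "likely authentic"

print(classify_image(0.97))  # likely AI-generated
print(classify_image(0.12))  # likely authentic
```

Note that a reported accuracy of 98% still implies some misclassifications, which is one reason independent testing matters before such scores are treated as definitive.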

At a reported 98% accuracy, the tool looks promising for detecting AI-generated images, though independent expert evaluations are still needed to validate that figure. OpenAI also plans to incorporate the detector into Sora, its text-to-video model, when it officially launches.

