Propagandists are using AI, too

By Josh A. Goldstein and Renée DiResta

Jun 8, 2024

OpenAI’s adversarial threat report is a crucial first step toward better data-sharing practices in artificial intelligence. Independent researchers have begun compiling databases of misuse, such as the AI Incident Database and the Political Deepfakes Incident Database, which allow comparisons across types of misuse and tracking of how it changes over time. Detecting misuse from the outside, however, is often difficult. As AI tools become more capable and widespread, it is essential that policymakers understand how they are being used and abused. While OpenAI’s first report offered a high-level overview and specific examples, expanding data-sharing partnerships with researchers, to provide greater visibility into adversarial content and behaviors, is an important next step.

Online users also have a significant role to play in the fight against influence operations and AI misuse. Such content has an impact only if people see it, believe it, and pass it along. In cases OpenAI highlighted, online users called out fake accounts that posted AI-generated text. In our own research, we have seen Facebook communities actively flagging AI-generated images posted by spammers and scammers, helping users less familiar with the technology avoid being deceived. A healthy dose of skepticism is increasingly valuable: pausing to check whether content is authentic and whether people are who they claim to be, and helping friends and family understand how common generated content has become, can help social media users resist manipulation by propagandists and scammers.

As OpenAI’s blog post noted, “Threat actors work across the internet.” So must we. As we enter a new era of AI-driven influence operations, we must address shared challenges through transparency, data sharing, and collective vigilance to build a more resilient digital ecosystem.

Josh A. Goldstein is a research fellow at Georgetown University’s Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. Renée DiResta is the research manager at the Stanford Internet Observatory and the author of “Invisible Rulers: The People Who Turn Lies into Reality.”
