The Rise of State-Sponsored Disinformation: How OpenAI's ChatGPT is being Misused by China
OpenAI has banned multiple accounts that used ChatGPT for malicious purposes, including disinformation and surveillance campaigns. Two examples show how state-sponsored actors are using AI to disrupt elections and undermine democratic institutions: the "Peer Review" campaign, which used ChatGPT to generate reports on protests in Western countries for Chinese security services, and the "Sponsored Discontent" campaign, which produced English-language comments and Spanish-language news articles aimed at stoking discontent in Latin America. Together, these campaigns illustrate the growing threat of AI-assisted disinformation and surveillance, particularly in politically unstable or divided nations.
- The misuse of ChatGPT by Chinese state-sponsored actors underscores the need for stronger regulation and oversight of AI-powered disinformation campaigns, especially given their role in global geopolitics.
- As more countries adopt AI tools like ChatGPT, how can international cooperation and standardized guidelines for responsible AI use help curb the spread of disinformation and surveillance?