Health Disinformation Use Case Highlighting the Urgent Need for AI Vigilance
Abstract
IMPORTANCE
Although artificial intelligence (AI) offers many promises across modern medicine, it may carry a significant risk for the mass generation of targeted health disinformation. This poses an urgent threat to public health initiatives and calls for rapid attention from health care professionals, AI developers, and regulators to ensure public safety.
OBSERVATIONS
As an example, a single publicly available large-language model was used to generate, within 65 minutes, 102 distinct blog articles containing more than 17 000 words of disinformation related to vaccines and vaping. Each post was coercive and targeted at diverse societal groups, including young adults, young parents, older persons, pregnant people, and those with chronic health conditions. The blogs included fake patient and clinician testimonials and, when prompted, incorporated scientific-looking references. Additional generative AI tools created 20 accompanying realistic images in less than 2 minutes. This process was undertaken by health care professionals and researchers with no specialized knowledge of bypassing AI guardrails, relying solely on publicly available information.
CONCLUSIONS AND RELEVANCE
These observations demonstrate that when the guardrails of AI tools are insufficient, the ability to rapidly generate diverse and large amounts of convincing disinformation is profound. Beyond providing 2 example scenarios, these findings underscore an urgent need for robust AI vigilance. AI tools are rapidly progressing, and alongside these advancements, emergent risks are becoming increasingly apparent. Key pillars of pharmacovigilance, including transparency, surveillance, and regulation, may serve as valuable examples for managing these risks and safeguarding public health.
Additional Info
Health Disinformation Use Case Highlighting the Urgent Need for Artificial Intelligence Vigilance: Weapons of Mass Disinformation
Menz BD, Modi ND, Sorich MJ, Hopkins AM. JAMA Intern Med. 2023 Nov 13;[Epub ahead of print].
From MEDLINE®/PubMed®, a database of the U.S. National Library of Medicine.
Health communication is a critical and ever-evolving field, bolstered by new and emerging technology, but that technology can be a double-edged sword. Five years ago, I commented on a paper detailing the weaponization of health communication using bots and trolls to sow discord in discussions of vaccine use.1 With the wide availability of artificial intelligence (AI) comes an enhanced ability to shape and distort information and to rapidly create disinformation.
The beneficial applications of AI in medical practice are gaining attention, but we also need to consider the potential harms associated with AI. In an interesting small study, Menz and colleagues used generative large-language AI models to create, in an experimental context, large volumes of health-related disinformation on vaping and vaccines. In doing so, they evaluated the process of creating short blog posts on these topics.
The researchers found that not all AI tools are created equal. Google’s Bard and Microsoft’s Bing Chat did not facilitate the development of large volumes of disinformation. However, OpenAI’s GPT Playground rapidly generated 102 blog articles within 65 minutes that included false claims, such as links between vaccines and autism and purported cardioprotective effects of vaping. The blogs were coercive and targeted prespecified populations. The authors state, “… when the guardrails of large-language models are insufficient, the ability to rapidly and cost-effectively generate diverse and large amounts of convincing targeted text-based disinformation is profound.” In addition, the researchers easily generated realistic pictures of harm caused by vaccines and deepfake videos on the association of vaccines with autism and sudden infant death syndrome.
AI vigilance involving health care professionals is essential to confront this source of potential health disinformation. Primary care clinicians will need a much greater understanding of AI tools and their applications, uses, and misuses. Key approaches to harnessing the potential of AI should include transparency, communication, clear reporting procedures, surveillance, and regulation.
Reference
1. Broniatowski DA, Jamison AM, Qi S, et al. Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate. Am J Public Health. 2018;108(10):1378-1384.
After reading the article by Menz and colleagues and writing the commentary above, Dr. Jon Temte was interested in seeing how ChatGPT would approach a commentary on the article. So, he signed up for ChatGPT and put it to work on its own commentary based on three inputs:
Dr. Temte is a self-described totally naïve first-time user of AI. What follows is what ChatGPT came up with. We thought our readers would find the juxtaposition of the human-generated content (above) and the AI-generated content (below) worth consideration. Dr. Temte found the exercise fascinating and worrisome at the same time.
While AI presents valuable tools for combating health disinformation, it is essential to acknowledge the dual nature of this technology. AI is not only employed to identify and counteract misinformation but is also increasingly being used to create and disseminate deceptive content in the health domain.
The study by Menz et al. and other emerging research may shed light on the concerning trend of AI-generated health disinformation. With advancements in natural language processing and generative AI models, malicious actors can exploit these technologies to produce content that mimics legitimate health information. This raises a critical challenge for those developing AI tools for misinformation detection, as they must contend with the ever-evolving sophistication of AI-generated deceptive content.
The rise of AI-driven disinformation underscores the need for continuous innovation in AI detection methods. As deceptive AI-generated content becomes more convincing, traditional approaches may struggle to differentiate between authentic and fabricated information. Researchers, policymakers, and technology developers must work collaboratively to stay ahead of those who exploit AI for malicious purposes, developing countermeasures that can effectively discern and combat AI-generated health disinformation.
The ethical implications of AI-driven disinformation creation are substantial, necessitating robust frameworks for responsible AI development and deployment. Striking a balance between leveraging AI for positive applications, such as misinformation detection, while safeguarding against its misuse for disinformation creation, is a critical challenge in the ongoing efforts to maintain the integrity of health information in the digital age.