While beneficial in modern medicine, artificial intelligence poses a significant risk of generating targeted health disinformation, threatening public health and safety. In a demonstration, researchers from Flinders University used a single large language model (LLM) to produce 102 blog articles containing over 17,000 words of disinformation about vaccines and vaping, targeting diverse societal groups and accompanied by fake testimonials and convincing images.
The researchers conducted this experiment without specialized AI expertise, demonstrating how easily AI guardrails can be bypassed to create persuasive disinformation. These findings underline the profound risk of AI-generated disinformation and the urgent need for robust AI vigilance, including enhanced transparency, surveillance, and regulation, to safeguard public health.
Ref: Menz BD, Modi ND, Sorich MJ, Hopkins AM. Health Disinformation Use Case Highlighting the Urgent Need for Artificial Intelligence Vigilance: Weapons of Mass Disinformation. JAMA Intern Med. Published online November 13, 2023. doi:10.1001/jamainternmed.2023.5947
