The U.S. Food and Drug Administration's effort to streamline drug review with artificial intelligence is off to a shaky start, marked by technical glitches, trust issues, and conflicting leadership messages.
Margaret Manto and Taylor Giorno report for NOTUS.
In short:
- The FDA’s new AI chatbot, Elsa, was designed to speed up drug and device review but has struggled with limited internet access, broken links, and so-called “hallucinated” citations to non-existent scientific studies.
- Staff are skeptical, citing concerns over transparency, data security, and leadership spin, while partnerships with tech companies like Anthropic are hampered by bureaucratic red tape and lack of security clearances.
- Despite these setbacks, both critics and supporters see long-term potential, provided the tool is implemented with integrity, oversight, and a commitment to public health over Silicon Valley hype.
Key quote:
“I am sure in the long range we will have productive collaboration. It will be because we quietly work together to solve specific problems, and the AI will just be another tool in our toolbox.”
— anonymous FDA reviewer
Why this matters:
In theory, it makes sense to bring some digital muscle to the FDA’s notoriously slow review process. But in practice, Elsa can’t even Google reliably. For AI to live up to its promise in public health, it’s going to need more than buzzwords and silicon swagger. It’ll need good old-fashioned transparency and human oversight.
Read more: