Dear Medical Writing: It's Not You, It's the Algorithm
Dear Medical Writing,
We need to talk.
It’s not that I don’t appreciate everything you’ve done. The late nights formatting tables. The delicate diplomacy of managing eleven co-authors who all believe their edits are non-negotiable. The way you transformed raw clinical data into documents that actually made regulators nod instead of frown.
You’ve been good to this industry. You’ve been good to me.
But something has changed. And I think you feel it too.
The Numbers Tell a Paradox
Let’s start with the facts that seem contradictory. The medical writing market reached approximately $4.3 billion in 2024 and is projected to grow to $10-12 billion by 2032, expanding at roughly 10-11% annually. At first glance, this seems impossible in an era when AI can draft documents in minutes.
But look closer. The FDA cleared 50 new drugs in 2024 and anticipates up to 70 approvals in 2025. Clinical trials are becoming demonstrably more complex—a Boston Consulting Group analysis of over 16,000 trials confirmed this trend. Cell and gene therapies demand extensive chemistry, manufacturing, and controls sections. Personalized medicine requires adaptive trial protocols that stratify by biomarker. The documentation burden is exploding.
Meanwhile, 83% of life sciences companies reported difficulty filling medical writing roles in 2024. Forecasts predict a 35% talent deficit by 2030.
The algorithm isn’t eliminating jobs. It’s filling a capacity gap that the industry cannot close with humans alone.
The Old Rituals
Remember when writing a clinical study report meant weeks of hunting for source documents across shared drives with folder names like “FINAL_v3_REVISED_USE THIS ONE”? Remember when the statistical outputs would arrive at 4:47 PM on a Friday, and you’d smile through the pain because that’s just how it was?
Remember the track changes? The glorious, maddening track changes. A single document passing through regulatory, medical, legal, and that one executive who hadn’t read anything but suddenly had thoughts about your executive summary. Each round a negotiation. Each comment a tiny battle.
These were your rituals. They were exhausting and inefficient and sometimes absurd. But they were yours.
In June 2024, Certara launched CoAuthor, a regulatory writing platform combining generative AI with document templates. The following month, Cognizant partnered with Yseop to accelerate scientific documentation using AI. The industry is moving fast.
The FDA Is Now Using AI to Read Your Work
Here’s something that should get your attention. On June 2, 2025, the FDA launched Elsa, a large language model built on Anthropic’s Claude, deployed agency-wide to help scientific reviewers work more efficiently.
Commissioner Marty Makary announced that tasks previously taking reviewers two to three days now take six minutes. The agency is using Elsa to accelerate clinical protocol reviews, summarize adverse events for safety assessments, and identify high-priority inspection targets. Makary’s stated vision includes “rapid or instant reviews” of drug applications.
But early reports reveal limitations. CNN and STAT News reported that Elsa has been producing fabricated citations and misrepresenting research—the same hallucination problems that plague all LLMs. FDA staff told reporters the tool cannot yet assist with formal review work because it lacks access to many relevant documents, including industry submissions.
What This Means for Medical Writers
Think about what’s happening. The regulator reviewing your submission is now using AI to read and summarize your work. If your documents are optimized for human readers but not for AI parsing, you may face unintended friction. If the FDA’s AI misinterprets your carefully crafted safety narrative, who catches the error?
This cuts both ways. On the one hand, AI-readable documents may be reviewed faster. On the other hand, if both sides are using AI—you to write, FDA to review—the human expertise that catches subtle errors becomes more valuable, not less.
The writers who thrive will be those who understand how these systems work on both ends of the regulatory submission.
What the Algorithm Cannot Do
A 2025 study by Boston Consulting Group, published in Clinical Trials, evaluated GPT-4’s ability to write sections of clinical trial protocols. The findings reveal the precise contours of AI’s limitations.
GPT-4 scored above 80% for content relevance and above 99% for appropriate use of medical terminology. The output looks professional. The jargon is correct.
But for clinical thinking and logic—whether the AI’s recommendations actually followed regulatory guidance—the off-the-shelf model scored approximately 40%. The AI confidently suggested excluding HIV patients from tuberculosis trials, directly contradicting FDA guidance that explicitly requires their inclusion.
When enhanced with retrieval-augmented generation (providing the AI access to current regulatory documents), that clinical logic score rose to approximately 80%. Still not perfect. Still requiring human oversight.
The AI cannot read the room during a sponsor meeting. It doesn’t know why the medical monitor just sighed.
It has never had to explain to a client, gently but firmly, that removing 40 pages will also remove the actual evidence.
The Uncomfortable Part
I won’t pretend this transition is painless. Companies that once needed ten writers may soon need three, armed with AI tools that can produce first drafts in seconds rather than days. BCG reports that AI-assisted writing reduces end-to-end document creation time by 25-50%, depending on the document type.
Real people with specialized skills and years of training will face real disruption.
This is not something to celebrate. It’s something to acknowledge honestly, even as we talk about evolution and opportunity.
My Prediction
Here’s what I believe will happen.
The FDA’s use of Elsa will accelerate, despite its current limitations. Other regulators—EMA, PMDA, NMPA—will follow within 18-24 months. Documents optimized for AI parsing will have an advantage.
This doesn’t hurt medical writers. It changes what they do.
When both submission and review are AI-assisted, the humans on each side become quality controllers, strategic advisors, and exception handlers. The medical writer ensures the AI-generated draft doesn’t contain the tuberculosis error. The FDA reviewer becomes the person who catches what Elsa missed.
The premium shifts from drafting speed to judgment, regulatory strategy, and the ability to verify AI outputs against evolving guidance.
The Reinvention
The medical writer of tomorrow is not a document generator. The algorithm handles that now. The medical writer of tomorrow is a document architect. A curator. A skeptic who reviews AI output with the same rigor once applied to junior writers.
You become the one who knows why certain information matters to a reviewer in Brussels versus one in Silver Spring. The one who understands that regulatory writing is not just about clarity but about strategy. The one who catches the hallucination before it becomes a 483 observation—or before the FDA’s AI misreads your intent.
The talent shortage tells us something important: the industry still needs human expertise. It just needs that expertise applied differently.
The Case for Optimism
Here’s the math that matters.
The volume of regulatory documentation is growing faster than AI can reduce costs. The talent gap is widening. The FDA itself is deploying AI because it cannot review submissions fast enough with humans alone.
Documents will be produced faster, which means treatments may reach patients sooner. Writers freed from formatting drudgery can focus on higher-value work. Small biotechs without massive budgets can compete with pharma giants on documentation quality. The projected 35% talent deficit by 2030 might be addressed through human-AI collaboration rather than through pure headcount growth.
The craft doesn’t disappear. It concentrates. It becomes more strategic, more specialized, more human in the ways that matter.
A Closing Thought
Dear Medical Writing, you are not dying. You are molting.
The exoskeleton of manual assembly and endless revision cycles is cracking. What emerges will be leaner, faster, and, frankly, less tedious.
The algorithm can do part of what you do. But 40% accuracy on clinical logic isn’t going to pass muster with the FDA—even if the FDA is also using AI that makes mistakes.
The rest, the hard part, the part that requires judgment and nuance and the occasional well-timed pushback, that remains yours.
So don’t mourn the old ways too long. They were never the point.
The point was always the work: translating science into decisions that help patients. That work continues. You just have a new collaborator now.
One that doesn’t need coffee breaks. But also one that recommends excluding HIV patients from tuberculosis trials.
With respect and only a little anxiety,
Eswar Krishnan, A Fellow Traveler in Drug Development

