When Good Writing Spreads Bad Medicine – Dr Helmy Hazmi

A writer, otherwise in the pink of health, went for a medical blood screening and discovered that her uric acid level was above the reference range. Worried, she opened her laptop and typed, “What is the best treatment for gout?” Within seconds, her screen filled with authoritative text: confident, polished, and convincing. She turned the output into an article, published it online, and shared it on social media.

Within minutes, her post gained 20 likes. In an hour, it had been shared six times.

But there was one problem: the article did not reflect what “gout” actually means, or what a high uric acid level does and does not imply.

Why This Happens

The ease with which large language models (LLMs) can generate entire articles has made them a tempting shortcut for content creators. The real challenge lies in ensuring the validity and credibility of the information they produce.

Before the rise of LLMs, health and medical information was mainly obtained from published texts, whether online or in print. These materials were generally produced by authoritative publishers, medical professionals, or trained health journalists. Any piece of information passed through multiple layers of review, editing, and fact-checking before reaching the public.

LLMs bypass that process. With a well-crafted prompt, an article can be generated instantly: polished, authoritative, and seemingly reliable. And while AI “hallucinations” are less frequent than in earlier models, they still occur.

The bigger issue is context. Without sufficient detail, LLMs tend to generate generic responses that may not apply to local realities. For example:

  • Suggesting medications available only in the United States but not registered in Malaysia.
  • Recommending tests such as PET-CT scans that are inaccessible in district hospitals.
  • Advising diets like the Mediterranean diet, which may not be affordable or realistic for local communities.

In contrast, a medically trained person understands these local limitations. Having treated patients and worked within the healthcare system, they can write realistic articles, the kind that help readers take safe, practical action.

Risks and Fallacies

Artificial intelligence has made it easier than ever for non-medical writers to produce health articles that appear credible. But fluency in language is not the same as accuracy in fact.

When information is inaccurate or incomplete, the consequences can be serious. Readers may follow unsafe advice, delay seeking proper medical care, or unintentionally spread misinformation.

And if readers do spot the flaws? Trust can be lost — not only in the writer but also in legitimate medical sources.

People searching for health information are often vulnerable. They may be anxious, uncertain, or desperate for guidance. They may hesitate to see a doctor but are quick to accept the first answer they find. This makes them more likely to believe misinformation, which can worsen their condition.

For the writer, a viral post can feel rewarding. But the sense of “expertise” gained from likes and shares can mask dangerous blind spots. This is the Dunning–Kruger effect: when limited knowledge creates a false sense of authority.

Personal anecdotes can be valuable: they can inspire, guide, and comfort readers facing similar issues. But once a writer moves from sharing experience to offering diagnoses or treatments, the writing crosses into territory that belongs to licensed professionals. A responsible health article should encourage readers to ask better questions, not push them into actions they do not understand.

Solution

Healthcare workers support empowering individuals to take charge of their own health. But the boundary between polished writing and professional medical advice must be respected.

A simple safeguard can help: always have a medical professional check the final draft before publishing.

Honesty matters too. Writers should acknowledge their own limits and always include disclaimers reminding readers to consult their doctors before making health decisions.

Finally…

Used well, AI can bridge the gap between medical knowledge and the public. It can simplify without dumbing down and make health information more accessible. But used carelessly, it can also spread misinformation faster than ever before.

The responsibility rests with the writer. Non-medical voices can contribute meaningfully to health communication — if they treat AI as a helper, not an authority, and if they approach the task with humility and care.

Because in the end, writing about health is not just about words. It is about choices, trust, and lives. And that is something no algorithm can replace. 
