Careless Whisper: AI-powered transcription tool being used by hospitals found to invent chunks of text no one ever said
By Arsenio Toledo // Oct 31, 2024

Researchers have found that an artificial intelligence-powered transcription tool being used by hospitals is frequently making up chunks of text or even entire sentences that no one ever said.

The transcription tool is known as Whisper. It was developed by tech giant OpenAI, which touted the program as approaching "human level robustness and accuracy." Nabla, a tech startup with offices in both France and the United States, built a transcription tool on top of Whisper's models that caught the eye of the healthcare sector, with more than 30,000 clinicians and over 40 health systems signing up to use it. (Related: Nearly half of FDA-approved AI-powered medical devices lack clinical validation data.)
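For context on how such tools are typically built, the open-source release of Whisper is called as a Python library. The following is a minimal sketch, assuming the openai-whisper package and a hypothetical local audio file named meeting.wav:

```python
# Minimal sketch of a Whisper transcription call, assuming the
# open-source package (pip install openai-whisper). "meeting.wav"
# is a hypothetical placeholder for a local audio file.
import whisper

model = whisper.load_model("base")       # published sizes: tiny/base/small/medium/large
result = model.transcribe("meeting.wav") # runs audio loading, 30-second windowing and decoding

print(result["text"])                    # full transcript as one string
```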

Researchers, including over a dozen software engineers, developers and academics, found major flaws in Whisper, chief among them its propensity to make up chunks of text or even entire sentences. They warned that some of the text invented by Whisper – known as "hallucinations" – can even include racial commentary, violent rhetoric and imagined medical treatments.

One researcher from the University of Michigan reported that, in a study of public meetings, Whisper hallucinated whole chunks of text in eight out of every 10 audio transcriptions that were inspected.

One machine learning engineer said he discovered hallucinations in about half of the more than 100 hours' worth of Whisper transcriptions he analyzed. Another developer said he found at least one snippet of hallucination in every one of the 26,000 transcripts he created with Whisper.

Another group of researchers noted that Whisper hallucinated at least 187 times in more than 13,000 clear audio snippets examined. They noted that the hallucinations seemed to pop up during silences in recordings.
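That observation suggests one practical mitigation. The open-source Whisper library returns per-segment metadata, including the model's own estimate that a stretch of audio contained no speech, so text decoded from near-silent audio can be flagged for human review. A rough sketch follows, again assuming the openai-whisper package; the threshold values are illustrative assumptions, not recommendations from the researchers:

```python
# Sketch: flag Whisper segments decoded from audio the model itself
# suspected was silence, since hallucinations have been reported to
# cluster there. Both thresholds are illustrative assumptions.
import whisper

model = whisper.load_model("base")
result = model.transcribe("visit_recording.wav")  # hypothetical audio file

NO_SPEECH_THRESHOLD = 0.6  # model's probability the window held no speech
LOGPROB_THRESHOLD = -1.0   # low average token log-probability

for seg in result["segments"]:
    suspicious = (seg["no_speech_prob"] > NO_SPEECH_THRESHOLD
                  or seg["avg_logprob"] < LOGPROB_THRESHOLD)
    if suspicious and seg["text"].strip():
        # Text was produced where the model suspected silence or had low
        # confidence in its own output -- route it to a human reviewer.
        print(f"[REVIEW {seg['start']:.1f}-{seg['end']:.1f}s] {seg['text']}")
```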

Nabla, OpenAI claim they are aware of issues with Whisper and are working on remedies

The AI's propensity to make up medical conditions and diagnoses represents a clear danger to patients and doctors alike, especially given how widely Whisper-based tools have been adopted in the healthcare sector.

Alondra Nelson, former director of the White House Office of Science and Technology Policy, warned that potentially tens of thousands of faulty transcriptions across millions of recordings could lead to "really grave consequences" in hospitals.

"Nobody wants a misdiagnosis," said Nelson. "There should be a higher bar" for Whisper's continued use.

Nabla said the transcription tool has already been used to transcribe an estimated nine million medical visits.

The prevalence of the hallucinations has led experts, advocates and former OpenAI employees to call for more regulation of AI. At a minimum, they want OpenAI to address the flaw.

"We take this issue seriously and are continually working to improve, including reducing hallucinations," said OpenAI spokesperson Taya Christianson. "For Whisper use on our API platform, our usage policies prohibit use in certain high-stakes decision-making contexts, and our model card for open source use includes recommendations against use in high-risk domains. We thank researchers for sharing their findings."

Nabla Chief Technology Officer Martin Raison claimed that "while some transcription errors were sometimes reported, hallucination has never been reported as a significant issue."

Raison added that Nabla recognizes that no AI model is perfect and that its tool requires medical providers to quickly edit and approve transcribed notes.

Watch this episode of the "Health Ranger Report" as Mike Adams, the Health Ranger, interviews noted tech whistleblower Zach Vorhies about the rise of AI.

This video is from the Health Ranger Report channel on Brighteon.com.

More related stories:

Therapy logs, video sessions for 1.7 million American mental health patients LEAKED to open web after data breach.

Surgical robot KILLS woman after BURNING her small intestine during colon cancer surgery.

Cigna Healthcare used AI to deny hundreds of thousands of valid health insurance claims, lawsuit alleges.

Whistleblower claims medical device company engaged in years-long bribery scheme with Veterans Administration hospital.

Remember Obama's death panels? Medicare now using AI algorithms to deny coverage to people deemed expendable.

Sources include:

Yahoo.com

TheVerge.com

Brighteon.com


