
AI Hallucinations: Bad or Misunderstood?

What's this blog post about?

OpenAI's AI-powered chatbot, ChatGPT, has raised concerns about its tendency to output false yet plausible, coherent information. This phenomenon, known as hallucination, occurs when an AI model generates untruthful information on a closed-domain task. While hallucinations can lead to errors and, in some cases, offensive output, they also point toward AI that is not merely generative but potentially creative. Because machine learning models are built around generalization, the distinction between factual content on the one hand and text structure, syntax, and delivery on the other is too fine for them to draw reliably. Hallucinations have even been put to productive use: generating synthetic MRI scans to supplement CT scans in lung tumor segmentation, and improving autonomous navigation by hallucinating obstacles.

Company
Deepgram

Date published
May 8, 2023

Author(s)
Ben Luks

Word count
1263

Language
English

Hacker News points
None found.


By Matt Makai. 2021-2024.