AI hallucinations occur when artificial intelligence systems generate outputs that are inaccurate, misleading, or completely fabricated, often due to errors in processing or interpreting data.
- Hallucinated outputs often read as fluent and confident, which makes false or irrelevant information difficult to spot.
- This poses serious risks in sensitive fields like healthcare and law, where AI systems have been caught adding information that never occurred and inventing fake legal cases to cite.
- Preventing hallucinations involves using high-quality training data, automated verification of outputs, and manual human review.
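One automated check that can complement human review is verifying citations against a trusted reference list before an output is accepted. The sketch below is purely illustrative: the `KNOWN_CASES` set, the case names, and the simplified citation pattern are hypothetical stand-ins, not a real legal database or a production-grade parser.

```python
import re

# Hypothetical trusted reference set; in practice this would be a
# lookup against a real legal citation database.
KNOWN_CASES = {
    "Smith v. Jones (1999)",
    "Doe v. Acme Corp. (2012)",
}

def flag_unverified_citations(output: str) -> list[str]:
    """Return citations found in `output` that are absent from the
    trusted set, so a human reviewer can inspect them."""
    # Match a simplified "Name v. Name (Year)" citation pattern.
    pattern = r"\b[A-Z]\w+ v\. [A-Z][\w.]+(?: [A-Z]\w+\.?)* \(\d{4}\)"
    citations = re.findall(pattern, output)
    return [c for c in citations if c not in KNOWN_CASES]
```

A flagged citation is not proof of a hallucination, only a signal that the claim needs human verification; the goal is to narrow what reviewers must check rather than replace them.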