• hedgehog · 1 month ago

    Hallucinations are an unavoidable part of LLMs, and they are just as present in the human mind. Training data isn’t the issue. The issue is that the systems built around LLMs are designed to use them for more than they should be doing.

    I don’t think anything short of being able to validate an LLM’s output without running it through another LLM will fully prevent hallucinations.
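
    To make that concrete, here is a minimal Python sketch of what non-LLM validation could look like for a narrow, structured task. The output format (a `claim` field plus `supporting_quotes`) is a hypothetical convention, not anything standard; the point is that the checks are deterministic and never invoke a second model:

    ```python
    import json

    def validate_llm_answer(raw_output: str, source_text: str) -> bool:
        """Deterministically check an LLM's JSON answer without a second LLM.

        Two checks: (1) the output parses as JSON with the expected fields,
        and (2) every quote the model claims as support actually appears
        verbatim in the source document.
        NOTE: the field names "claim" and "supporting_quotes" are a
        hypothetical output convention chosen for this sketch.
        """
        try:
            answer = json.loads(raw_output)
        except json.JSONDecodeError:
            return False  # malformed output: reject outright

        if not isinstance(answer.get("claim"), str):
            return False  # missing or mistyped required field

        # Every supporting quote must be a literal substring of the source;
        # a fabricated (hallucinated) quote fails this check.
        quotes = answer.get("supporting_quotes", [])
        return all(isinstance(q, str) and q in source_text for q in quotes)

    # Example: a fabricated quote is caught without invoking any model.
    source = "The pump operates at 3 bar and must be serviced yearly."
    good = '{"claim": "Service yearly", "supporting_quotes": ["must be serviced yearly"]}'
    bad = '{"claim": "Service monthly", "supporting_quotes": ["must be serviced monthly"]}'
    print(validate_llm_answer(good, source))  # True
    print(validate_llm_answer(bad, source))   # False
    ```

    Of course, this only works when the task can be pinned down to checkable structure; for open-ended generation there’s no equivalent deterministic check, which is exactly why the problem is hard.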