• Tyler_Zoro · 1 year ago

    What you are describing is true of older LLMs. It's less true of GPT-4, and GPT-5, or whatever it is they are training now, will likely begin to shed these issues.

    The shocking discovery that led to all of this is that this sort of LLM continues to scale in capability with the quality and size of its training set. AI researchers were convinced this was not possible until GPT proved that it was.
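
    The scaling behaviour described here matches the empirical scaling-law literature: measured loss keeps falling as a power law in both model size and training-set size. A minimal sketch, assuming the fitted loss form from Hoffmann et al. 2022 ("Chinchilla"); the constants are the published fits and are only approximate:

    ```python
    # Chinchilla-style scaling law: predicted loss falls as a power law
    # in parameter count N and training-token count D.
    # Constants are the published Hoffmann et al. 2022 fits (approximate).

    def predicted_loss(n_params: float, n_tokens: float) -> float:
        E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
        return E + A / n_params**alpha + B / n_tokens**beta

    # Bigger models trained on more data keep pushing predicted loss down:
    for n, d in [(1e9, 2e10), (7e10, 1.4e12), (1e12, 2e13)]:
        print(f"N={n:.0e}, D={d:.0e} -> loss ~ {predicted_loss(n, d):.2f}")
    ```

    The point is only that, within the ranges tested so far, the curve keeps dropping rather than plateauing.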

    So the idea that you can look at the limitations of the current generation of LLMs and make blanket statements about the limitations of all future generations is demonstrably flawed.

    • jocanib@lemmy.worldOP · 1 year ago

      They cannot be anything other than stochastic parrots because that is all the technology allows them to be. They are not intelligent; they don't understand the question you ask or the answer they give you, and they don't know what truth is, let alone how to determine it. They're just good at producing answers that sound like something a human might have written. They're a parlour trick. Hi-tech Magic 8-Balls.
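
      As a mechanical illustration of the "stochastic parrot" claim: an autoregressive LLM generates text by repeatedly sampling the next token from a learned probability distribution over continuations. A toy sketch with a hypothetical bigram table (a real model conditions on the whole context and learns its probabilities from the training set):

      ```python
      import random

      # Hypothetical toy table: P(next word | current word).
      bigram_probs = {
          "the": {"cat": 0.5, "dog": 0.5},
          "cat": {"sat": 0.7, "ran": 0.3},
          "dog": {"sat": 0.4, "ran": 0.6},
          "sat": {"down": 1.0},
          "ran": {"away": 1.0},
      }

      def sample_next(token):
          dist = bigram_probs.get(token)
          if dist is None:
              return None  # no learned continuation; stop generating
          words, weights = zip(*dist.items())
          return random.choices(words, weights=weights)[0]

      tokens = ["the"]
      while (nxt := sample_next(tokens[-1])) is not None:
          tokens.append(nxt)
      print(" ".join(tokens))  # e.g. "the cat sat down"
      ```

      Nothing in this loop models truth or understanding; it only models which continuations are probable, which is the parrot charge in a nutshell.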

      • Tyler_Zoro · 1 year ago

        > They cannot be anything other than stochastic parrots because that is all the technology allows them to be.

        Are you referring to humans or AI? I’m not sure you’re wrong about humans…

          • nulldev@lemmy.vepta.org · 1 year ago

            Have you even read the article?

            IMO it does not do a good job of disproving the claim that "humans are stochastic parrots".

            The example with the octopus isn’t really about stochastic parrots. It’s more about how LLMs are not multi-modal.