• very_well_lost@lemmy.world · 2 days ago

    What did they mean by AI in the question? Better question: what did the responders think they meant?

    Both the headline and the article made it clear that the survey was referring to generative AI — i.e., the visible art slop that gives everything that nice shovelware look.

    The survey in question is actually an ongoing project, and there’s a link to it in the article if you wanna share your own feelings.

    • vrek@programming.dev · 2 days ago

      Is it visible art? Is it written scripts? As I said in the other response, would “brushing” a forest into a game world count as generative AI?

      We really need better terminology for this stuff.

      • The_Decryptor@aussie.zone · 13 hours ago

        As I said in the other response, would “brushing” a forest into a game world count as generative AI?

        No, why would it?

        • vrek@programming.dev · 13 hours ago

          I didn’t decide where to plant the trees. I didn’t decide what type of trees to plant. The algorithm generated what it thought a forest would look like…

          Isn’t that generative? It’s not an LLM; it’s not making a tree, but combining multiple trees to make a forest.

          • The_Decryptor@aussie.zone · 13 hours ago

            If it’s putting conifers in a desert, then sure, it’s generative AI. If it’s following a predefined set of rules, written by a human, that govern tree placement and density, then it’s procedural.

            Minecraft is a good example: the rules that govern world generation are handwritten; they’re not AI.
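The procedural side of that distinction can be made concrete with a minimal sketch (hypothetical rules and names, purely for illustration): every placement decision below follows a handwritten condition, and the same seed always reproduces the same forest; nothing is learned from data.

```python
import random

def place_trees(width, height, seed=42):
    """Procedural generation: every rule here is handwritten, not learned."""
    rng = random.Random(seed)  # deterministic given the seed
    trees = []
    for x in range(width):
        for y in range(height):
            moisture = (x + y) % 10 / 10  # stand-in for a terrain property
            # Handwritten rule: trees only where it's wet enough,
            # thinned to roughly 20% density.
            if moisture > 0.5 and rng.random() < 0.2:
                trees.append((x, y, "conifer"))
    return trees

forest = place_trees(16, 16)
```

A generative model, by contrast, would have its placement tendencies learned from example forests rather than spelled out as conditions a human can read and edit.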

            • vrek@programming.dev · 13 hours ago

              This again restates my point: we need a definition of generative AI. Everyone thinks they know what it means, but most don’t agree.

    • hendrik@palaver.p3x.de · edited 2 days ago

      Alright, that wasn’t clear to me. I’m against slop as well, but that’s not really what generative AI means. The term encompasses text-to-speech output too, like for fantasy NPC characters, and some of those use reinforcement learning as well, so the lines are a bit blurry there. We’ve also got speech input in modern flight simulators, which is pretty much gen AI. And maybe procedurally generated maps or dynamically spawning mobs, depending on how exactly they’re implemented. Or, as I said, an LLM-driven spaceship computer. Fan-made translations of Japanese games often start out with machine translation… I’m against slop artwork as well, and against the weird things EA does, like replacing human playtesters with AI feedback on prototypes. That’s likely going to have the same effect AI has on other domains.

      • Dymonika@lemmy.ml · 1 day ago

        What you’re missing is that nothing we have is “AI” in the true sense of the term. LLMs, ChatGPT, etc. are not “AI”; that’s just an inaccurate buzzword being thrown around. They’re still advanced autocomplete algorithms with no inherent self-motivation, or else their hallucination rate would be continually dwindling without their maintainers’ help.

        • hendrik@palaver.p3x.de · edited 23 hours ago

          Yeah, you’re right, though I disagree on some technicalities. I think they are AI, and they even have a goal / motivation: to mimic legible text. That’s also why they hallucinate; whether that text is accurate isn’t what it’s about, at least not directly. The term is certainly ill-defined, and the word “intelligence” in it is a ruse. Sadly, it makes it more likely that people anthropomorphize the thing, which the AI industry can monetize… I’m still fairly sure there’s reinforcement learning inside, and a motivation / loss function. It’s just not the one people think it is… Maybe we need some better phrasing?
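The point that the objective rewards mimicry rather than truth can be illustrated with the base pretraining loss: cross-entropy on the next token. A toy sketch (made-up probabilities, purely illustrative): the loss is low whenever the model assigned high probability to whatever token actually followed in the training text, with no term anywhere that checks factual accuracy.

```python
import math

def next_token_loss(predicted_probs, actual_next_token):
    """Cross-entropy for a single step: low loss means the model gave high
    probability to the token that actually came next in the training text.
    Nothing here measures whether that text was *true*."""
    return -math.log(predicted_probs[actual_next_token])

# Made-up distribution over possible next tokens after some prompt.
probs = {"Paris": 0.6, "Lyon": 0.3, "banana": 0.1}
loss_a = next_token_loss(probs, "Paris")  # low: plausible continuation
loss_b = next_token_loss(probs, "Lyon")   # higher: less expected token
```

Training just pushes `loss_a`-style values down across a corpus, so the “motivation” really is to continue text plausibly, not to be correct.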

          Btw, there’s a very long interview with Richard Sutton on YouTube going into detail about this very thing: the motivation and goals of LLMs, and how they differ from traditional machine learning. I enjoyed that video and think he’s right about a lot of his nuanced opinions. Spoiler alert: he knows what he’s talking about and doesn’t really share the enthusiasm/hype towards LLMs.