Which of the following sounds more reasonable?

  • I shouldn’t have to pay for the content I use to tune my LLM and algorithm.

  • We shouldn’t have to pay for the content we use to train and teach an AI.

By calling it AI, the corporations are able to advocate for a position that’s blatantly pro-corporate and anti-writer/artist, and trick people into supporting it under the guise of a technological development.

  • eerongalA · 1 year ago

    Even more frustrating when you realize, and feel free to correct me if I’m wrong, these new “AI” programs and LLMs aren’t really novel in terms of theoretical approach: the real revolution is the amount of computing power and data to throw at them.

    This is 100% true. LLMs, neural networks, Markov chains, gradient descent, etc., on down the line: none of it is particularly new. These techniques have collectively been studied academically for 30+ years. It’s only recently that we’ve been able to throw huge amounts of data and computing capacity at them, and to spend the time tweaking said models, achieving results that were unthinkable 10-ish years ago.

    There have been efficiencies, breakthroughs, tweaks, and changes over this time too, but that’s to be expected. Largely, though, it’s the sheer raw size/scale that has only recently become achievable.
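    To make the “nothing new” point concrete, here’s a toy word-level Markov chain text generator, the sort of decades-old statistical technique being referred to. This is just a minimal illustrative sketch; the corpus and names are made up for the example:

    ```python
    import random
    from collections import defaultdict

    # Map each word to the list of words observed to follow it.
    def build_chain(text):
        words = text.split()
        chain = defaultdict(list)
        for current_word, next_word in zip(words, words[1:]):
            chain[current_word].append(next_word)
        return chain

    # Walk the chain, sampling an observed next word at each step.
    def generate(chain, start, length=10):
        word = start
        output = [word]
        for _ in range(length - 1):
            followers = chain.get(word)
            if not followers:
                break  # dead end: no observed successor
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

    corpus = "the cat sat on the mat and the cat saw the dog"
    print(generate(build_chain(corpus), "the"))
    ```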

    • Muehe@lemmy.ml · 1 year ago

      LLMs aren’t really novel in terms of theoretical approach: the real revolution is the amount of computing power and data to throw at them.

      This is 100% true. LLMs, neural networks, Markov chains, gradient descent, etc., on down the line: none of it is particularly new. These techniques have collectively been studied academically for 30+ years.

      Well, LLMs, and particularly GPT and its competitors, rely on Transformers, which are a relatively recent theoretical development in the machine learning field. Of course it’s based on prior research, and maybe there is even prior art buried in some obscure paper or 404 link, but if that’s your measure then there is no “novel theoretical approach” to anything, ever.

      I mean, I’ll grant that the available input data and compute for machine learning have increased exponentially, and that’s certainly an obvious factor in the improved output quality. But that’s not all there is to the current “AI” summer; general scientific progress played a non-minor part as well.

      In summary, I disagree that data/compute scale is the deciding factor here; it’s the deep learning architecture, IMHO. The former didn’t change that much over the last half decade; the latter did.

      • pensivepangolin@lemmy.world · 1 year ago

        Now, as I stated in my first comment in these threads, I don’t know terribly much about the technical details behind current LLMs, and I’m basing my comments on my layman’s reading.

        Could you elaborate on what you mean about the development of deep learning architecture in recent years? I’m curious; I’m not trying to be argumentative.

        • Muehe@lemmy.ml · 1 year ago

          Could you elaborate on what you mean about the development of deep learning architecture in recent years?

          Transformers. Fun fact: the T in GPT and BERT stands for “transformer”. They are a neural network architecture that was first proposed in 2017 (or 2014, depending on how you want to measure). Their key novelty is a method of implementing an attention mechanism and a context window without recurrence, which was the mechanism most earlier NNs used for that.
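          To make the attention idea concrete, here’s a minimal numpy sketch of the scaled dot-product attention at the core of transformers (my own illustration, not code from the paper; the shapes and names are arbitrary). Every token in the context window attends to every other token in a single matrix multiply, with no recurrent loop over positions:

          ```python
          import numpy as np

          def attention(Q, K, V):
              # Similarity of each query to each key, scaled by sqrt(d_k).
              scores = Q @ K.T / np.sqrt(Q.shape[-1])
              # Softmax over the context window (numerically stabilised).
              weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
              weights = weights / weights.sum(axis=-1, keepdims=True)
              # Each output is a weighted mix of all value vectors at once,
              # with no recurrence over positions.
              return weights @ V

          # Toy example: a context window of 4 tokens with 8-dim embeddings.
          rng = np.random.default_rng(0)
          Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
          print(attention(Q, K, V).shape)  # (4, 8)
          ```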

          The wiki page I linked above is admittedly a bit technical; this article’s explanation might be a bit more friendly to the layperson.

          • pensivepangolin@lemmy.world · 1 year ago

            Thanks for the reading material: I’m really not familiar with Transformers other than the most basic info. I’ll give it a read when I get a break from work.

    • pensivepangolin@lemmy.world · 1 year ago

      Okay, I’m glad I’m not too far off the mark then (I’m not an AI expert/it’s not my field of study).

      I think this also points to, and is a great example of, another worrying trend: the consolidation of computing power in the hands of a few large companies. Without even factoring in the development of true AI, or whether that can or will happen anytime soon, the LLMs really show off the massive scale of both computational-power consolidation and data harvesting by only a very few entities. I’m guessing I’m not alone here in finding that increasingly concerning, particularly since a lot of development is driving towards surveillance applications.

    • jumperalex@lemmy.world · 1 year ago

      By that logic there was nothing novel about solid-state transistors, since they just did the same thing as vacuum tubes; no innovation there, I guess. No new ideas came from finally having a way to pack cooler, less power-hungry, smaller components together.