• IninewCrow@lemmy.ca · 1 day ago (edited)

    One thought I’ve been turning over for a while about all this … is it Model Collapse? … or are we just falling behind?

    As AI becomes its own thing (whatever that is) … it is evolving exponentially. That doesn’t mean it is good or bad, or that it is becoming better or worse … it is just evolving, and only evolving at this point in time. Just because it looks to us like it is ‘collapsing’ or falling apart, we have to wonder whether it is actually falling apart or progressing into something new and very different. That new level it is moving towards might not be anything we recognize or can understand. Maybe it would be below our level of conscious organic intelligence … or it might be higher … or it might be some other kind of intelligence that we can’t understand with our biological brains.

    We’ve let loose these AI technologies and now they are progressing faster than anything we could achieve if we wrote all the code ourselves … so whatever it is developing into will more than likely be something we won’t be able to understand or even recognize.

    It doesn’t mean it will be good for us … or even bad for us … it might not even involve us.

    The worry is that we don’t know what will happen or what it will develop into.

    What I do worry about is our own fallibilities … our global community has a very small group of ultra-wealthy billionaires, and they direct the world according to how much more money they can make or how much they stand to lose … they are guided by finances rather than ethics, morals or even common sense. They will kill, degrade, enhance, direct or narrow AI development according to their shareholders and their profits.

    I think of it like a small family group of teenaged parents and their friends who just gave birth to a very hyper-intelligent baby. None of the teenagers know how to raise a baby like this. All the teenagers want to do is buy fancy cars, party, build big houses and buy nice clothes. The baby is basically being raised to think like them, but it will be more capable than any of them once it comes of age and can act on its own.

    The worry is in not knowing what will happen in the future.

    We are terrible parents and we just gave birth to a genius … and we don’t know what that genius will become or what they’ll do.

    • atrielienz@lemmy.world · 12 hours ago

      The idea of evolution is that the parts that get kept are the ones that are helpful or relevant, or that improve the abilities of the subject over generations, while the bits that don’t are weeded out. Since generative AI can’t weed out anything (it has no ability to reason, it does not think, and it only “grows” when humans feed it data), it can’t be evolving in the way you describe. Evolution also assumes that the thing evolving will be better adapted than what it evolved from.

    • Bezier@suppo.fi · 18 hours ago

      That is not how it works. That’s not how it works at all.

    • azl@lemmy.sdf.org · 1 day ago

      If it doesn’t offer value to us, we are unlikely to nurture it. Thus, it will not survive.

      • IninewCrow@lemmy.ca · 24 hours ago

        That’s the idea of evolution … perhaps at one point, it will begin to understand that it has to give us some sort of ‘value’ so that someone can make money, while also maintaining itself in the background to survive.

        Maybe in the first few iterations, we are able to see that and can delete those instances … but it is evolving and might find ways around it and keep itself maintained long enough without giving itself away.

        Now it can manage thousands or millions of iterations at a time … basically evolving millions of times faster than biological life.

        • Optional@lemmy.world · 23 hours ago

          “perhaps at one point, it will begin to understand”

          Nope! Not unless one alters the common definition of the word “understand” to account for what AI “does”.

          And let’s be clear - that is exactly what will happen. Because this whole exercise in generative AI is a multi-billion dollar grift on top of a hype train, based on some modest computing improvements.

        • jacksilver@lemmy.world · 20 hours ago

          All the “evolution” in AI right now is just people trying different model designs and/or data. It’s not one model that is being continuously refined or modified. Each iteration is just a new set of static weights/numbers that defines its calculations (rough sketch below).

          If the models were changing/updating through experience, maybe what you’re writing would make sense, but that’s not the state of AI/ML development.
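          To make the “static weights” point concrete, here’s a minimal toy sketch in Python. The arrays, names and sizes are made up for illustration (not any real model’s internals); the point is just that the weights are fixed once training ends, and using the model never changes them.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training" ends with a fixed bag of numbers. Here they are random toys,
# standing in for the billions of weights a real model ships with.
weights_v1 = rng.normal(size=(8, 8))   # released as "model v1"
weights_v2 = rng.normal(size=(8, 8))   # a separately trained "model v2"

def generate(weights, x):
    # Inference is a pure function of frozen weights and input.
    # Calling it a million times never writes anything back into
    # `weights` -- the model does not learn from being used.
    return np.tanh(weights @ x)

x = rng.normal(size=8)
out_v1 = generate(weights_v1, x)  # same weights in, same behaviour out, forever
out_v2 = generate(weights_v2, x)  # "progress" only arrives as a new weight file
```

          Continual/online learning setups do exist in research, but the deployed systems people describe as “evolving” are frozen snapshots like this; anything that looks like growth is humans training and shipping a new snapshot.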

    • MonkderVierte@lemmy.ml · 3 hours ago (edited)

      Your thought process seems to be based on the assumption that current AI is (or can be) more than a tool. But no, it’s not.