• athairmor@lemmy.world · 5 days ago

    See, the quickest way to get AI banned is for it to start telling the truth about those in power.

    • falseWhite@lemmy.world · 4 days ago

      They’ll just switch to Grok, which will encourage them to commit even more war crimes. Currently it’s based on Google’s Gemini.

  • brown567@sh.itjust.works · 5 days ago

    I vaguely remember a movie where the government makes an AI intended to defend the USA, and it starts killing off politicians because it sees them as the greatest threat to national security.

  • Warl0k3@lemmy.world · 5 days ago

    The most infuriating part is they’re so bad at this, yet they’re still getting away with it. I mean they’re just So. Dumb. and yet…

  • melsaskca@lemmy.ca · 5 days ago

    The Pentagon AI immediately notified the DOJ AI and Hegseth’s avatar was imprisoned for war crimes.

  • northernlights@lemmy.today · 5 days ago

    Honest question from someone a bit lost in the sea of news: what are the war crimes? Attacking Venezuela and sinking boats without warning?

    • resipsaloquitur@lemmy.world (OP) · 5 days ago

      Murdering civilians in general, particularly those in distress.

      Killing shipwreck survivors is the actual, literal example of a war crime in the Department of Defense’s manual on war crimes.

    • supersquirrel@sopuli.xyz · edited · 5 days ago

      The attacks are broadly a blatant attempt to start a second Iraq War, which is disgusting on the face of it.

      Specifically, though, what makes it a fact that Pete Hegseth is a murderer is that in at least some of these strikes, a powerboat was hit once with a missile, and then, while survivors were clinging to wreckage and flailing in the water, Hegseth ordered a second strike to kill them while they were helpless and in need of emergency rescue.

      This is considered a war crime/murder by most militaries and societies the world over, because it is one.

      • 1984@lemmy.today · 5 days ago

        He didn’t do it alone… The fact that a military can do this means the government is also corrupted.

        The US government is evil, that’s just a fact. But what to do about it is anyone’s guess.

        • supersquirrel@sopuli.xyz · edited · 4 days ago

          I have many problems with the US military, and I consider the US a military empire, but for the most part I do not think the professional core of the US military desires this. Even if the politics of many of those personnel are conservative on their face, they do not want this, especially once it actually starts happening. These strikes are as much an attempt to drive a wedge and set warhawks and the military against the rest of US society as they are an outward projection of violence. Republicans are desperately trying to transform the military into a rightwing murder machine and alienate it from the rest of US society, and I wouldn’t underestimate how little the US public and the bulk of the US military want that outcome, especially in such an economically desperate moment for the US. For the military to become fundamentally a rightwing murder machine, it must shed its professionalism, and a whole lot of people, whatever their politics, can see that process happening as plain as day and correctly understand it as a form of extinction for the professionalism of what they do.

          What I am trying to say is that the executive branch is FORCING the military to murder, and the military should absolutely be held accountable for fulfilling illegal orders, but yes, the key issue is that the US government is corrupted from the top down. Pete Hegseth is a wildly dangerous person to have as Secretary of War, both because he is extraordinarily incompetent and because he harbors an extremely hateful world view that will demand the systems he controls commit murder for utterly frivolous reasons.

  • Xenny@lemmy.world · 5 days ago

    An LLM advisor that pulls from REAL CASES AND LAWS, NOT ONES IT MADE UP!!!, sorts through them to advise on legal direction, and CAN THEN BE VERIFIED BY LEGAL PROFESSIONALS WITH HUMAN EYES!!! might not be too bad of an idea. But we’re really just remaking search engines, only worse.

    • MiddleAgesModem@lemmy.world · 4 days ago

      That’s what will happen. Already, paid ChatGPT will search and provide the sources it uses, and it goes well beyond basic Google searching.

      The people with the most complaints about AI seem to know the least about it.

    • Coriza@lemmy.world · 5 days ago

      You may already know this, but just to make it clear for other readers: it is impossible for an LLM to behave as described. What an LLM algorithm does is generate stuff; it does not search, it does not sort, it only makes stuff up. There is nothing that can be done about it, because an LLM is a specific type of algorithm, and that is what the program does. Sure, you can train it on good-quality data and only real cases and such, but it will still make stuff up by mixing all the training data together. The same mechanism that lets it “find” relationships in the data it is trained on is the one that will generate nonsense.

      • tetris11@feddit.uk · 5 days ago

        But you can enter real search data as a prompt and use its training to summarize it. (Or it can fill its own prompt automatically from an automated search.)

        It won’t/can’t update its priors, and I agree with you there, but it can produce novel output on a novel prompt with its existing model/weights.
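
        A minimal sketch of that “search first, then summarize” pattern. The retrieval step is real keyword scoring, but the model call itself is left out, since no particular LLM API is assumed here; `retrieve` and `build_prompt` are hypothetical names for illustration only:

```python
# Sketch of retrieval feeding an LLM prompt, assuming a keyword-overlap
# ranker as a stand-in for a real search engine or vector index.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    return sorted(documents, key=lambda d: -len(terms & set(d.lower().split())))[:k]

def build_prompt(query: str, sources: list[str]) -> str:
    """Place retrieved text in the prompt so the model summarizes real
    material instead of generating purely from its weights."""
    cited = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return f"Answer using only these numbered sources:\n{cited}\n\nQuestion: {query}"

docs = [
    "Smith v. Jones (2019): duty of care extends to subcontractors.",
    "Tax code section 12: deductions for home offices.",
    "Doe v. Roe (2021): duty of care requires written notice.",
]
top = retrieve("duty of care", docs)
prompt = build_prompt("What does the duty of care require?", top)
print(prompt)  # this prompt would go to the model; a human can check the cited sources
```

        The point is that the answer is constrained to retrieved text placed in the prompt, which is what makes the output checkable by a human afterward.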

      • MiddleAgesModem@lemmy.world · 4 days ago

        Whole lot of unsupported assumptions and falsehoods here.

        A standalone model predicts tokens, but LLM systems retrieve real documents, rank and filter results, and use search engines. Anyone who has actually used these things would know it’s not just “making stuff up”.

        It both searches and sorts.

        In short, you have no fucking idea what you’re talking about.

      • MiddleAgesModem@lemmy.world · 4 days ago

        So much better than that. It’s always amusing how much people will distort or ignore facts if something “feels right”.