• darkmode [comrade/them]@hexbear.net · 34 points · 3 days ago

    There’s no reason to take Anthropic at its word; they probably just wanted more money. This analysis is vibes-based, but given the whole Snowden thing I don’t have a reason to believe LLMs haven’t already been deployed for ongoing mass surveillance and murder.

    • caesarsushi404 [any]@hexbear.net (OP) · 22 points · 3 days ago

      Anthropic is definitely a bloodthirsty corpo, they just couldn’t keep up with the US admin.

      CEO: “Anthropic has therefore worked proactively to deploy our models to the Department of War […] Claude is extensively deployed across the Department of War”

  • InevitableSwing [none/use name]@hexbear.net · 19 points · 3 days ago

    Altman is so full of shit.

    Hours after the Trump administration’s comments, OpenAI CEO Sam Altman posted on X Friday night that the company had struck a deal with the Department of Defense to deploy its models on the department’s classified networks. Altman said the Department of Defense “displayed a deep respect for safety and a desire to partner to achieve the best possible outcome” in their interactions.

    “AI safety and wide distribution of benefits are the core of our mission,” Altman wrote. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW [Department of War] agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

    Altman also said OpenAI will create “safeguards to ensure our models behave as they should, which the DoW also wanted.” It is unclear if or how the safety-focused measures in OpenAI’s agreement differ from those in the Anthropic negotiations.

    • Awoo [she/her]@hexbear.net · 10 points · 3 days ago

      “human responsibility for the use of force, including for autonomous weapon systems”

      An autonomous weapon system that asks a human for confirmation before killing something isn’t really autonomous, is it? So why say autonomous at all?

    • BodyBySisyphus [he/him]@hexbear.net · 5 points · 3 days ago

      “Hi, we would like to use our text classification algorithm to run your murderbots”: an utterly deranged statement and hall-of-fame contender that somehow managed to impinge on our reality.