• AcausalRobotGod@awful.systems
    17 days ago

    A good chunk of philosophers do believe there are moral facts, but this is less useful for these purposes than one would think

    • froztbyte@awful.systems
      17 days ago

      yeah, it’s been absolutely hilarious to watch this play out in LLM space: so many prompt configurations and model deployments, with so very many string-based rule inputs meant to configure inviolable behaviour, that still get egregiously broken

      and afaict none of the dipshits have really internalised that maybe their approach just isn’t working