• swlabr@awful.systems · 10 months ago

    Reasoning about future AIs is hard

    “so let’s just theorycraft eugenics instead” is like 50% of rationalism.

  • -dsr-@awful.systems · 10 months ago

    Shorter: “Let’s assume that I’m a godling. I will definitely be an evil god. Here’s how.”

  • Architeuthis@awful.systems · edited · 10 months ago

    It’s not even eugenics to optimize ze genome to make ze uberbabies, OP mostly seems mad people are allowed to have non-procreative sex and couches it in a heavily loaded interpretation of inclusive fitness.

  • titotal@awful.systems · 10 months ago

    I think people are misreading the post a little. It’s a follow-on from the old AI x-risk argument: “evolution optimises for having kids, yet people use condoms! Therefore evolution failed to ‘align’ humans to its goals, therefore aligning AI is nigh-impossible”.

    As a commentator points out, for a “failure”, there sure do seem to be a lot of human kids around.

    This post then decides to take the analogy further, and be like “If I was hypothetically a eugenicist god, and I wanted to hypothetically turn the entire population of humanity into eugenicists, it’d be really hard! Therefore we can’t get an AI to build us, like, a bridge, without it developing ulterior motives”.

    You can hypothetically make this bad argument without supporting eugenics… but I wouldn’t put money on it.

    • gerikson@awful.systems · 10 months ago

      OK, so obviously “alignment” means “teach AI not to kill all humans”, but now I figure they also want to prevent AI from using all that computing power to endlessly masturbate, or compose hippie poems, or figure out Communism is the answer to humanity’s problems.

        • locallynonlinear@awful.systems · 10 months ago

        In practice, alignment means “control”.

        And the existential panic is realizing that control doesn’t scale. So rather than admit that “alignment” doesn’t mean what they think it does, rather than admit that darwinian evolution is useful but incomplete and cannot sufficiently explain all phenomena at both the macro and micro levels, rather than possibly consider that intelligence is abundant in systems all around us and that we’re constantly in tenuous relationships at the edge of uncertainty with all of it,

        it’s the end of all meaning aka the robot overlord.

    • gerikson@awful.systems · edited · 10 months ago

      It seems to be a nazified reading of “inclusive fitness”, which is a refinement of Darwin’s original idea, extended slightly to groups of individuals.

      https://en.wikipedia.org/wiki/Inclusive_fitness

      I read somewhere that human genetic evolution essentially stopped once the bicycle enabled people to cycle over to the next village to have sex instead of having to bonk their closest relatives, so I don’t really see the point, from an evolutionary point of view, of enforcing biological kinship by divine fiat, unless you’re unhealthily obsessed with “the purity of the blood”. Also, the obsession with only allowing sex for procreation is weirdly reactionary and goes directly against other evopsych fetishes, like “alphas” impregnating more females than “betas”.

      Edit: Genghis Khan is mentioned in the comments as someone who maximized inclusive genetic fitness, but hardly monogamously.

  • Steve@awful.systems · 10 months ago

    I’ll never stop being annoyed by the fact that the most wrong group managed to take such a cool name