• blarghly@lemmy.world · edited · 1 day ago

    Seems like an uphill battle, legally. I assume a good analogy would be a bar. Suppose two people meet in a bar, they consensually leave together, and then one rapes the other. Even if the bar was informed that one of these people has raped people he met in the bar before, afaik, the bar doesn't have a legal responsibility to ban him, since the bar isn't a court of law and it would be way too much responsibility to saddle every bar owner with deciding the guilt or innocence of someone.

    Otoh, even if the case doesn't pan out, it might push Match Group to be more aware of these sorts of things and implement features that actually work to reduce incidents. But still, it's a difficult problem to solve. They can't discriminate based on sex/gender, so all reports would need to be handled via the same mechanism. So imagine they implement a reporting system with harsher penalties - if you are accused of assault, you are instantly permabanned. Well, now expect things like, say, some neckbeard spamming assault reports against every woman he matches with who doesn't agree to go out with him.

    Also, iirc, the reason you can rejoin after being banned is that the apps delete all your data 6 months after you delete your account. Assuming they actually do this (I'm a little doubtful), keeping it around longer just to enforce bans would get the privacy people all riled up.

      • astutemural@midwest.social · 1 day ago

        Up until they get hacked, and then face lawsuits for improper storage of personal information. This seems like a no-win scenario for these apps. Not that I’m losing any sleep over it, mind.

        • chickenf622@sh.itjust.works · 1 day ago

          That’s the point of hashing data. When it gets stolen it’s very difficult to reverse the hash, assuming a good algorithm has been used.

      • DoctorPress@lemmy.zip · 1 day ago

        That is 100% useless without a proper way of knowing whether someone is dangerous, and reports alone mustn't define it.

        • chickenf622@sh.itjust.works · 23 hours ago

          Person is reported by multiple people. Take hashes of their PII. If someone with the same PII tries to sign up, as stated in the article, don't let them sign up if some or all of the hashes match.
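That sign-up check could look something like this (an illustrative sketch only; the `ban_list` structure, the choice of fields, and the match threshold are all assumptions, not anything Match Group is known to do):

```python
import hashlib

def field_hash(value: str) -> str:
    # Hash each PII field separately so partial matches can be detected.
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

# Hypothetical ban list: each entry is the set of field hashes
# taken from a reported account before its raw data was deleted.
ban_list = [
    {field_hash("jane.doe@example.com"),
     field_hash("5550100000"),
     field_hash("1990-01-01")},
]

def blocked(email: str, phone: str, dob: str, threshold: int = 2) -> bool:
    """Refuse sign-up when enough field hashes match any banned record."""
    candidate = {field_hash(email), field_hash(phone), field_hash(dob)}
    return any(len(candidate & entry) >= threshold for entry in ban_list)

blocked("jane.doe@example.com", "5550100000", "1985-06-15")  # 2 matches -> True
blocked("new.user@example.com", "5550199999", "1985-06-15")  # 0 matches -> False
```

The threshold is the interesting knob: requiring several fields to match cuts down on false positives from shared emails or recycled phone numbers, at the cost of letting someone through who changes most of their details.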

            • DoctorPress@lemmy.zip · 23 hours ago

            Multiple people doesn’t make it correct or truthful.

            In fact, you can already see abuses of systems like this, where people mass-report a target to kick them out, whether the reports are actually valid or not.

              • chickenf622@sh.itjust.works · 22 hours ago

              Good point. I was just trying to figure out a way to identify a known bad actor without storing their PII.