With many jurisdictions introducing age verification laws for various parts of the internet, a lot of questions have come up about implementation and privacy. I haven’t seen anyone come up with a real working example of how to implement it technically/cryptographically that doesn’t have any major flaws.

Setting aside the ethics of age verification and whether or not it’s a good idea - is it technically possible to accurately verify someone’s age while respecting their privacy and if so how?

For an implementation to work, it should:

  • Let the service know that the user is an adult by providing a verifiable proof of adulthood (e.g. a proof that’s signed by a trusted authority/government)
  • Not let the service know any other information about the user besides what they already learn through HTTP or TCP/IP
  • Not let a government or age verification authority know when a user is accessing 18+ content
  • Make it difficult or impossible for a child to fake a proof of adulthood, e.g. by downloading an already-verified anonymous signing key shared by an adult
  • Be simple enough to implement that non-technical people can use it without difficulty and without purchasing bespoke hardware
  • Ideally, not require any long-term storage of personal information by a government or verification authority that could be compromised in a data breach

I think the first two points are fairly simple (lots of possible implementations with zero-knowledge proofs and anonymous signing keys, credentials with partial disclosure, authenticating with a trusted age verification system, etc. etc.)
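
As a concrete illustration of the “anonymous signing keys” idea, here is a minimal sketch of an RSA blind signature: the authority signs an age token without ever seeing it, so it cannot later link the signature back to the signing request. This is textbook RSA with a toy-sized key, purely illustrative and not production crypto.

```python
import hashlib
import math
import secrets

# Toy RSA key pair for the "authority". Real keys are thousands of bits;
# these tiny primes are only so the demo runs instantly.
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# 1. User hashes an "over-18" token and blinds it with a random factor r.
token = h(b"over-18:" + secrets.token_bytes(16))
while True:
    r = secrets.randbelow(n - 2) + 2
    if math.gcd(r, n) == 1:  # r must be invertible mod n
        break
blinded = (token * pow(r, e, n)) % n

# 2. Authority signs the blinded value; it never sees `token` itself,
#    so it cannot later recognise the signature it produced.
blinded_sig = pow(blinded, d, n)

# 3. User unblinds, leaving a valid signature on the original token.
sig = (blinded_sig * pow(r, -1, n)) % n

# 4. Any site can check the token against the authority's public key (n, e).
assert pow(sig, e, n) == token
```

This covers unlinkability between issuance and use, but on its own does nothing about the later points (sharing, rate limiting, etc.).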

The rest of the points are the difficult ones. Some children will circumvent any system (e.g. by getting an adult to log in for them), but a working system should deter most children and require more than a quick download or a web search for instructions to circumvent.

The last point might already be a lost cause depending on your government, so unfortunately it’s probably not as important.

  • Mesa@programming.dev · 18 hours ago

    Wrote a comment recently. Age verification? Unnecessary. OS-level parental controls? Possibly worth considering.

    https://programming.dev/comment/22589550

    I am still against where all this age verification crap is coming from, and I’m against what specifically “age verification” entails; but here’s the thing: We keep saying, “It should be the parent’s responsibility to secure their kids”—and while that’s true, you can do all the talking and educating you want, but the fact is that the internet is now nigh-fully integrated with our lives, and unless you are surveilling your kid at every moment they are on the internet (don’t recommend), not every parent has the time, resources, or know-how to keep their children safe on the internet without help.

    There are some states pushing for “OS-level age verification,” and I’m not convinced the proponents for this idea know what this combination of words means—but the idea isn’t all bad. An interface for apps to query the device for a simple “can access adult content” value would be helpful for parents to better manage what their kids can access without having to hover 24/7. There is zero need for any sort of identification at any point in the process. The fact that legislation is promoting cumbersome identification collection and not the already existing idea of parental controls is evidence enough that this is designed to surveil.
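
    A minimal sketch of what such an interface might look like. All names here are invented for illustration; no real OS exposes this exact API today.

```python
# Hypothetical sketch of an OS-level parental-controls flag that apps
# can query. A parent sets it once per device; apps only ever see a
# single boolean - no ID, no age, no account.

class DeviceProfile:
    """Per-device settings a parent configures once (invented API)."""

    def __init__(self, allow_adult_content: bool):
        self._allow_adult_content = allow_adult_content

    def can_access_adult_content(self) -> bool:
        # Apps receive one bit and nothing else.
        return self._allow_adult_content

kids_tablet = DeviceProfile(allow_adult_content=False)
parents_laptop = DeviceProfile(allow_adult_content=True)

print(kids_tablet.can_access_adult_content())    # False
print(parents_laptop.can_access_adult_content()) # True
```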

    This may address the privacy concern, but the issue still remains of a centralized power deciding what is and what isn’t “safe for kids.”

    I don’t think we’re gonna get around the child internet safety conversation, and for good reason; but the conversation should be around how we can do it without jeopardizing individuals’ safety and privacy, including children.

    • dogs0n@sh.itjust.works · 17 hours ago

      That’s what the router setting to block adult websites is for… you don’t have to monitor 24/7, you have some assurance that bad sites are blocked, and you can just do regular checkups on your child.

      There is and was never a need to involve IDs, other than more control over us as a whole and being able to extract more data.

      • Mesa@programming.dev · 16 hours ago

        I think maybe the barrier could be a little higher than just disconnecting from your home’s network.

        If we accept the premise that there is currently an issue with child internet safety, then clearly it’s still an issue despite the existence of router controls. But then there’s the question of whether that premise is valid. What do you look at to determine whether “internet safety for children” is adequate? I don’t really know, so I guess I have more reading to do.

        I was gonna say something about PSAs, but no time.

  • parlaptie@feddit.org · 20 hours ago

    The problem with this question is assuming that violating privacy isn’t the entire point of age verification laws.

  • Pommes_für_dein_Balg@feddit.org · 1 day ago

    The German government ID card has an age verification function:
    It only sends one bit to the requesting service: Yes, over 18 or No, not over 18.
    And it doesn’t transmit back any data, so the state doesn’t know what services you access.
    Since you are required to have an ID card and the state knows your age, this would be a pretty good option (in Germany).
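
    A rough sketch of that flow, with textbook RSA and a toy key standing in for the real eID protocol (which uses per-card keys plus a government certificate chain and is considerably more involved). All names and values here are illustrative.

```python
import hashlib
import json

# Toy government key pair; verifiers only need the public part (n, e).
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def digest(obj) -> int:
    raw = json.dumps(obj, sort_keys=True).encode()
    return int.from_bytes(hashlib.sha256(raw).digest(), "big") % n

# The service picks a fresh nonce so a response cannot be replayed.
nonce = "d41e2a90"  # would be random per request

# The card answers with a single bit, signed together with the nonce.
answer = {"over_18": True, "nonce": nonce}
signature = pow(digest(answer), d, n)

# The service verifies offline against the public key; the state is
# never contacted, so it cannot observe which services are used.
assert pow(signature, e, n) == digest(answer)
```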

    • PosiePoser@feddit.org · 1 day ago

      Yeah this. I don’t know why people are trying to make this into some incredibly complicated multi step process.

      • Skankhunt420@sh.itjust.works · 15 hours ago

        I can’t speak for every government in the world, but as far as the major ones go, USA, Russia, China - I have less than zero percent trust in any of their companies to handle my data in a private and safe way.

        It just isn’t happening. They will dissect it, sell the data points they can, surveil you with the other data points, train AI with it and all kinds of other shenanigans. And most people know this.

        That’s why you can’t just go right in to doing that: gotta make us adults think we’re helping children and society at large first. Start with something small, just a small inconsequential right they lose (“Oh, I have to input my age to access this site”) and then raise the stakes a little (“Oh, now I have to input my picture ID, ok”).

        Until it escalates into full, inescapable 24/7 surveillance. And long before that moment, it’s already too late.

        Once they implement this, it’s already too late.

    • TechLich@lemmy.world (OP) · 1 day ago

      How does this work to protect privacy though? Wouldn’t the site need to know who you are to be able to look you up with the government?

      Or is it more like an SSO/OAuth callback style thing where you sign into the government, they send the “age bit” digitally signed, and your browser gives it back to the service? Either way, the government would know when you’re accessing 18+ material, and possibly which specific site you’re accessing? Or is there more to it?

      • Pommes_für_dein_Balg@feddit.org · 1 day ago

        The site doesn’t need to identify me, it only needs to know that a “Yes” bit was sent with a valid certificate from the government. And no data needs to be sent back to the government for that. The info is stored locally on a chip in the card.
        If a child has access to my ID card, that’s on me.

        • TechLich@lemmy.world (OP) · 23 hours ago

          Ah, I misread: it’s a card, not a service. That mostly works and is the same kind of thing as the other crypto solutions.

          Though a bad actor could still set up a service with a legit card that provides government signed anonymous “yes” responses on demand.

          I worry that the response will be to require an account and a full ID from it. Social media sites saying “we need to verify your identity to ensure you’re an adult human and to combat bots. Scan your id card…”

          Still one of the better technical solutions here though.

  • Godnroc@lemmy.world · 2 days ago

    You know how there are stores that sell restricted substances and verify your age by checking a provided ID? Have those same stores sell a cheap, sealed card with a confirmation code on it. You can enter that code online to verify with any service. The code expires a set period of time after its first use to prevent sharing and misuse.

    This system would be as secure as the restrictions on the restricted substance, such as alcohol, so it should be fine for “protecting the children”.
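
    Sketching the server-side check for such a scheme, assuming (as described above) that a code expires a fixed window after its first use. The code string and the window length are made up for illustration.

```python
import time

VALIDITY_SECONDS = 24 * 3600  # e.g. 24 hours after first redemption

# code -> timestamp of first use (None = sold but never redeemed)
codes = {"XK42-91QT-7LLC": None}

def verify(code, now=None):
    now = time.time() if now is None else now
    if code not in codes:
        return False
    if codes[code] is None:
        codes[code] = now  # start the expiry clock on first use
        return True
    return now - codes[code] < VALIDITY_SECONDS

first = verify("XK42-91QT-7LLC", now=1000.0)  # first use: accepted
later = verify("XK42-91QT-7LLC", now=2000.0)  # within window: accepted
stale = verify("XK42-91QT-7LLC", now=1000.0 + 2 * VALIDITY_SECONDS)
assert (first, later, stale) == (True, True, False)
```

    Note that the server only ever sees an anonymous code bought over the counter, so there is nothing personal to store or breach.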

    • FinjaminPoach@lemmy.world · 2 days ago

      Interesting idea. Could also give it out free with packs of beer like a golden ticket from Charlie And The Chocolate Factory.

      And all across the whole world, 18 year old men will jump for joy when picking up birthday booze - “I can finally look at boobs on the internet!”

    • dustycups@aussie.zone · 2 days ago

      +1 At least in WA there are restrictions on licenced premises recording your ID, they are meant to just check it.

    • bamboo@lemmy.blahaj.zone · 2 days ago

      Were you ever a teenager? This would be abused immediately, unless the codes were single use, and in that case it’s a non-starter.

      • Godnroc@lemmy.world · 1 day ago

        Yes, and one with unrestricted internet access. Can you elaborate on how someone underage would abuse this system? They can’t buy one at the store, can’t reuse one that has expired (so finding one won’t help), and if theft is a concern, they would just need to be secured like any other restricted good. I would say it’s at least as secure as alcohol, tobacco, or firearms.

        • bamboo@lemmy.blahaj.zone · 1 day ago

          Alcohol / tobacco / firearms can’t be digitally shared or reproduced. Imagine a high school with a mix of 14 - 18 year olds. If an 18 year old can get a valid code without hassle, they can share it with their friends who are in the same class, but are still 17. Or maybe they’ll share it with a sibling who is 16. What’s to stop it spreading from there? It will probably take just an hour for half of the school to get access to the one code. If the system assumes that kids won’t directly or indirectly share their codes with one another, then the system doesn’t understand teenage behavior and is flawed.

          • PosiePoser@feddit.org · 1 day ago

            So… the same flaw we abused to have our older friends buy us booze and cigarettes when we were underage lol I’ll still take it. You’re not going to get a perfect solution that works all the time. Point is HARM REDUCTION.

            REDUCTION.

            Not a perfect, flawless, impossible to abuse system. Just a system that helps to make it a bit more difficult and then hope that parents take care of the rest. Some will always still slip through, thems the breaks.

            Yeesh I thought I was a nerd but reading some of the replies in this thread it’s like some people never even thought how to get access to alcohol and smokes when they were underage. Never even mind porn. We had older friends buy us those magazines too.

            • bamboo@lemmy.blahaj.zone · 1 day ago

              But why create a system which inconveniences everyone, introduces privacy leakage, and which would be inadequate to curb the problem? Sure, the comparison with booze and cigarettes at point of sale sounds like it accomplishes the same thing of restricting access to adults, but one kid buying a six-pack with a fake ID can only share it with a few friends, and if they try to buy multiple kegs for a party with the whole school, there’s probably some more scrutiny, and of course the cost, which makes it unlikely. Compare this to a code which could be texted to an entire class the moment someone gets their hands on one.

              And from an implementation side, if platforms and services exist which don’t comply with the law, for example 4chan [https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/investigation-into-4chan-and-its-compliance-with-duties-to-protect-its-users-from-illegal-content], then implementing these restrictions will just push kids to the unregulated platforms. It’ll have unintended outcomes and take away control from the parents, which will do more harm than good.

              • PosiePoser@feddit.org · 18 hours ago

                I’d do what we’re already doing: https://feddit.org/post/26849555/11924777 though this card idea might work too for the sake of data privacy. Not so much for face-to-face privacy, same problem as when we got porn from magazines. Let’s remember that buying adult goods with the intent to distribute them to kids is punishable too. If some 18 year old goes to the store and buys 30 “I’m an adult” cards, there’s a good chance the clerk’s gonna put a stop to that. After that it becomes effort vs. benefit. Does one really want to go to several different stores to buy multiple cards to sell them for a slight markup? With the threat of getting caught and being charged? There’s always the slight embarrassment factor too. Of course adult access could mean a lot of other things than porn, but let’s face it: that’s what people are going to think, and that’s what people are going to think people are going to think.

                Again, the magic word is reduction.

                There will always be sites that won’t comply with any solution, there are p2p sharing methods, TOR and all that. There are always going to be nerdy kids who find their way to forbidden content. But that then becomes easier for parents to keep an eye on. Maybe get the kid off the device more so they simply won’t have the opportunity to figure it out. Make sure they only access sites that are in compliance etc. Still not a 100% fool proof solution but again, the point is reduction.

  • one_old_coder@piefed.social · 2 days ago

    I’m pretty sure there is already a cryptographic protocol that can do this, but that’s not the point. We do NOT need age verification in software; it makes no sense. We need parents to take care of their own children, because why would open-source software do the job of failed parenting? It’s a social issue, not something that can be solved with technology. Otherwise we would have put shock collars on every kid who doesn’t behave.

    • Voidian@lemmy.dbzer0.com · 2 days ago

      Great idea, let’s get parents to raise their kids.

      Now, how do we suddenly make them actually do that? Last I checked this idea has been around about as long as people have been around but it’s still not happening.

      Parenting matters, but it’s not the only layer of protection. We don’t rely solely on parents to keep kids from walking into bars or buying cigarettes, we have laws and systems to back them up. Why should the internet be different?

      • dogs0n@sh.itjust.works · 17 hours ago

        Let’s not pretend that these laws actually do protect children…

        There is always a way around something and if there’s any population to figure it out, it’s the ones with the most free time.

        The difference between going to a bar and using the internet: Showing your ID at a bar doesn’t mean it’s stored on some server possibly ready to be stolen by hackers. It also doesn’t automatically link all of your user data to your id (like it does right now) and make it easier to track your movements everywhere you go.

        These laws help no one except the elite. They restrict us, limit access to information, and eventually cause our data to be compromised.

        Bad parents exist, but does that mean we lockdown the most expansive knowledge base for everyone? I don’t believe this will stop any children of bad parents from being exposed to horrible things online. Age gates don’t stop that (because they either get bypassed or another site exposes even worse stuff without the age gate).

      • Waveform@multiverse.soulism.net · 2 days ago

        You see, if we tell parents that it’s actually super important that they raise their kids, I’m sure they will do it. Just like if we tell everyone that a vaccine for a dangerous disease is a really good idea, everyone will just settle down and go get it.

      • bluGill@fedia.io · 2 days ago

        How am I supposed to take care of my kids? My kid has gotten up at 3am and used his school device to do things I don’t want. The thing wasn’t supposed to be allowed by the school, but the bypass (a website that wasn’t blocked) isn’t one the school will find out about and block. Bypasses like that spread fast in schools.

        • Voidian@lemmy.dbzer0.com · 2 days ago

          My point is that we can’t rely on parental oversight alone, because some parents plainly won’t do it… and in your case, even actively trying may fail (it’s not your fault). And there are always going to be loopholes in every system. Clever kids will get past most verifications, and if they don’t, that likely means the verification is too invasive to be worth it. The best, though not perfect, system is parental oversight + impartial verification + platform responsibility. This will reduce but not eradicate the problem.

          • bluGill@fedia.io · 2 days ago

            Problem is an OS is not a useful part of this. My kids are perfectly able to install linux on a pi - and this is something I want to encourage in general (I don’t think they have, but they could), thus giving them root access - including access to things in the package repo that I may not approve of. It is a hard problem and I can’t always be there.

      • TORFdot0@lemmy.world · 1 day ago

        I can flash my ID to a bartender, who doesn’t need to take a copy or otherwise retain my PII to serve me. This isn’t how we do age attestation in most cases right now. We require a third party to issue and verify identity, and said third parties have been shown to be poor stewards of our identity.

    • Korhaka@sopuli.xyz · 2 days ago

      Need parents to use already existing parental controls and for society to blame parents more for incompetence

    • cynar@lemmy.world · 2 days ago

      As a parent, an extra layer of protection would be a positive. Balancing everything, and not leaving holes is hard enough, and I’ve yet to deal with the teenage phase.

      At the same time, as a Netizen, the risk of being datamined is FAR too great.

      The only way I would accept it is via zero knowledge proof type tokens. I can prove I am of age, but nothing more about me can be determined by any party.

      The current laws seem aimed at using “protect the children” to remove anonymity from the web, and are a data miner’s wet dream.

    • bamboo@lemmy.blahaj.zone · 2 days ago

      Agreed, and for every site which would comply with these rules, there are 10-100 which won’t and are not able to be controlled in the jurisdiction. Teenagers will find a way to get around restrictions, and will go to sites which are less regulated, and possibly not have the controls in place to flag grooming interactions, promoting self harm, etc.

  • psycotica0@lemmy.ca · 2 days ago

    I’m not sure if this is part of the “setting aside” stuff, but I’d ask why age needs to be verified and not simply stated.

    I’m the admin on this device; I say I’m 50, so why does the website need to check some ID to prove I’m 50? They trust what I reported, and if I lied to them, that’s on me. It shouldn’t be the website’s job to validate.

    • roofuskit@lemmy.world · 2 days ago

      Exactly, it should be a parent’s job to limit a child’s access not a website.

      • porcoesphino@mander.xyz · 2 days ago

        Yes… liquor, guns, driving, and physical punishment should solely be the parents’ choice. Wait… those caused issues and the government decided to mitigate some of the negative consequences?

        • Omgpwnies@lemmy.world · 1 day ago

          When you go to a website, do you drive to the website store and ask the salesperson for a copy? All of your examples are solved because there is a real, in-person interaction as part of the process.

          • porcoesphino@mander.xyz · 1 day ago

            I wasn’t replying to a comment about if this is solved or not, or the complexities of getting an outcome that most people are happy with. I replied to a comment that simplified the issue all the way to “it’s the parent’s choice”.

            Your comment is opening up new issues that, I agree, make enforcement while respecting privacy more difficult (but that I personally don’t think are insurmountable).

      • lcmpbll@lemmy.world · 2 days ago

        I agree, but also parents need better tools to be able to effectively limit their child’s access. App and device level parental controls are not sufficient as they currently work.

        • roofuskit@lemmy.world · 2 days ago

          Also, more and more local router parental controls come with a monthly fee. Legislation should be attacking those subscriptions for software that runs on hardware you own, not privacy.

  • Voidian@lemmy.dbzer0.com · 2 days ago

    Despite our current parliament sucking ass, I still have some general trust in my country’s government (and culture). So with that in mind:

    Our government bodies already have my basic data. Healthcare, census etc. and we use our online banking services to verify identity when accessing the data. It’s simple, and extremely widely used. I really don’t see why it would be so hard to make a relatively simple service that just gives sites that need to know a yes or no answer on if I’m over 18. They don’t need to know my birth date or any other information.

    Not let a government or age verification authority know whenever a user is accessing 18+ content

    This should be possible but of course the question is if one trusts the government to actually uphold this. Again, with my background, it’s a bit easier for me to speak.

    Make it difficult or impossible for a child to fake a proof of adulthood, eg. By downloading an already verified anonymous signing key shared by an adult, etc.

    You’ll never patch all the holes. In a perfect world, we wouldn’t be having this conversation. In a perfect world, parents would actually parent their kids and monitor their internet use. Access to adult content doesn’t even come close to being the biggest problem in many cases where some kids’ parents are fucking up their duties. Drugs, gangs, petty (and not so petty) crime come to mind. Collective responsibility would be great, but since we don’t live in a perfect world where everyone can just agree to a good idea like “take responsibility for your kids”, I’ll settle for trusting a democratic government to have some capacity to pick up those that fall.

    I happen to agree with age verification laws. This is a tangent, but I would also go a step further in saying that the MAINSTREAM internet should not be possible to use without verifying that the user is a real individual person. This would be another yes/no question via a service. Outwardly they don’t have to reveal their identity, but even JizzMcCumsocks needs to have a backend verification as a real person. Basically, if any government member uses some service with their own name and has a verification about that, that service must also have a way of verifying that any user is a real person.

    We have given Xitter way too much power and at the same time allowed anonymity. Meta services too of course, but I think Xitter is one of the worst due to easy and straightforward use. Humanity has shown that we are not equipped to handle the kind of (mis)information flow there is in these spaces. Spaces such as Lemmy can and should operate in full anonymity, as there are natural barriers to entry here, plus it’s less appealing when it’s not even really intended for the kind of use mainstream social media sites are. Here we have a collective and individual responsibility to account for the anonymity and the challenges it brings.

  • Scott 🇨🇦🏴‍☠️@sh.itjust.works · 2 days ago

    Parental Controls. Most devices have this setting. Parents need to be taught how to turn it on, and penalized when they don’t turn it on. This way there would be no centralized database that could be hacked thereby violating user privacy. Adults wouldn’t have to give up their government issued ID to websites.

    • bluGill@fedia.io · 2 days ago

      Devices have them, but they are not very good. I’m a parent, and there are things I want to block that I can’t, and others I want to allow but on different rules than their system has.

  • A1kmm@lemmy.amxl.com · 2 days ago

    is it technically possible to accurately verify someone’s age while respecting their privacy and if so how?

    With your constraints yes, but there are open questions as to whether that would actually be enough.

    Suppose there was a well-known government public key P_g, and a well protected corresponding government private key p_g, and every person i (i being their national identity number) had their own keypair p_i / P_i. The government would issue a certificate C_i including the date of birth and national identity number attesting that the owner of P_i has date of birth d.

    Now, when the person who knows p_i wants to access an age-restricted site s, they generate a second, site- (or session-) specific keypair P_s_i / p_s_i. They use ZK-STARKs to create a zero-knowledge proof that: they have a C_i (secret parameter) with a valid signature by P_g (public parameter); its date of birth is before some cutoff (DOB secret parameter, cutoff public parameter); it includes key P_i (secret parameter); they know the p_i (secret parameter) corresponding to P_i; and they know a hash h (secret parameter) such that h = H(s | P_s_i | p_i | t), where t is an issue time (public parameter; s and P_s_i are also public parameters). They send the proof transcript to the site and authenticate to the site using their site/session-specific P_s_i key.
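
    To make the parameter roles concrete, the binding hash h = H(s | P_s_i | p_i | t) could be computed like this on the prover's side. The ZK-STARK circuit itself is elided; all values are illustrative stand-ins.

```python
import hashlib

def binding_hash(site: str, P_s_i: bytes, p_i: bytes, t: int) -> bytes:
    """h = H(s | P_s_i | p_i | t). Only site, P_s_i and t are later
    revealed as public parameters; p_i stays secret inside the proof."""
    parts = b"|".join([site.encode(), P_s_i, p_i, str(t).encode()])
    return hashlib.sha256(parts).digest()

p_i = b"\x07" * 32    # long-term secret key (never revealed)
P_s_i = b"\xa1" * 32  # fresh session public key (revealed to the site)
h = binding_hash("example.org", P_s_i, p_i, t=1700000000)

# A different session key yields an unrelated hash, so two sessions on
# the same site cannot be linked to each other through h.
h2 = binding_hash("example.org", b"\xb2" * 32, p_i, t=1700000000)
assert h != h2
```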

    Now, as to how this fits your constraints:

    Let the service know that the user is an adult by providing a verifiable proof of adulthood (eg. A proof that’s signed by a trusted authority/government)

    Yep - the service verifies the ZK-STARK proof to ensure the required properties hold.

    Not let the service know any other information about the user besides what they already learn through http or TCP/IP

    Due to the use of a ZKP, the service can only see the public parameters (plus network metadata). They’ll see P_s_i (session specific), the DOB cutoff (so they’ll know the user is born before the cutoff, but otherwise have no information about date of birth), and the site for which the session exists (which they’d know anyway).

    Generating a ZK-STARK proof of a complexity similar to this (depending on the choice of hash, signing algorithm etc…) could potentially take about a minute on a fast desktop computer, and longer on slower mobile devices - so users might want to re-use the same proof across sessions, in which case this could let the service track users across sessions (although naive users probably allow this anyway through cookies, and privacy conscious users could pay the compute cost to generate a new session key every time).

    Sites would likely want to limit how long proofs are valid for.

    Not let a government or age verification authority know whenever a user is accessing 18+ content

    In the above scheme, even if the government and the site collude, the zero-knowledge proof doesn’t reveal the linkage between the session key and the ID of the user.

    Make it difficult or impossible for a child to fake a proof of adulthood, eg. By downloading an already verified anonymous signing key shared by an adult, etc.

    An adult could share / leak their P_s_i and p_s_i keypair anonymously, along with the proof. If sites enforced a limited validity period on proofs, this would limit the impact of a one-off leak.

    If the adult leaks the p_i and C_i, they would identify themselves.

    However, if there were adults willing to circumvent the system in a more online way, they could set up an online system which allows anyone to generate a proof of age and generates keypairs on demand for a requested site. It would be impossible to defend against such online attacks in general, and by the anonymity properties (your second and third constraints), there would never be accountability for it (apart from tracking down the server generating the keypairs if it’s a public offering, which would be quite difficult but not strictly impossible if it’s say a Tor hidden service). What would be possible would be to limit the number of sessions per user per day (by including a hash of s, p_i and the day as a public parameter), and perhaps for sites to limit the amount of content per session.
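
    The per-day session-limiting idea can be sketched as a “nullifier”: a hash over the site, the user's secret key, and the day, revealed as a public parameter of the proof. The site counts distinct nullifiers per day without ever learning p_i. This is an illustrative sketch, not part of any deployed scheme.

```python
import datetime
import hashlib

def daily_nullifier(site: str, p_i_secret: bytes, day: datetime.date) -> str:
    """Deterministic per-(user, site, day) value; reveals nothing about
    p_i, but repeats within a day, so the site can cap sessions."""
    data = site.encode() + b"|" + p_i_secret + b"|" + day.isoformat().encode()
    return hashlib.sha256(data).hexdigest()

secret = b"\x13" * 32  # stand-in for the private key p_i
day1 = datetime.date(2025, 1, 1)
day2 = datetime.date(2025, 1, 2)

n1 = daily_nullifier("example.org", secret, day1)
n2 = daily_nullifier("example.org", secret, day1)
n3 = daily_nullifier("example.org", secret, day2)

assert n1 == n2  # same user, same site, same day -> same value
assert n1 != n3  # next day -> fresh value, so the count resets
```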

    Be simple enough to implement that non-technical people can do it without difficulty and without purchasing bespoke hardware

    ZK-STARK proof generation can run on a CPU or GPU and could be packaged as, say, a browser add-on. The biggest frustration would be the proof generation time. It could be offloaded to a cloud service for users who trust the cloud provider but not the government or the service provider.

    Ideally not requiring any long term storage of personal information by a government or verification authority that could be compromised in a data breach

    Governments already store people’s dates of birth (think birth certificates, passports, etc.), and would need to continue doing so to issue such certificates. They shouldn’t need to store any extra information.

    • TechLich@lemmy.worldOP · 1 day ago

      they could set up an online system which allows anyone to generate a proof of age and generates keypairs on demand for a requested site

      This is the issue I have with most cryptographic solutions. There’s usually a way for someone to just share their private keys or run a service that generates valid site-specific credentials. If a user can generate something that says they’re over 18, it would be trivial to do that on behalf of others and set up an easy automated system for it. Adding some kind of rate or use limiting would just make it frustrating to use on multiple sites and add more implementation complexity on the side of the site.

      Once such a system exists, the whole thing becomes trivial to circumvent. I guess the governments could try to play whack-a-mole with some kind of revocation capability but if the resulting keypairs are anonymous, then that wouldn’t work because they wouldn’t know who is creating them.

  • Zagorath@aussie.zone · 2 days ago

    Here’s one good answer: https://crypto.stackexchange.com/a/96283

    It has the downside of requiring a physical device like a passport or some specific trusted long-running locally-kept identity store held by the user. But it’s otherwise very good.

    Another option does not require anything extra be kept by the user, but does slightly compromise privacy. The Government will not be able to track each time the user tries to access age-gated content, or even know what sources of age-gated content are being accessed, but they will know how many different sites the user has requested access to. It works like this:

    1. The user creates or logs in to an account on the age-gated site.
    2. The site creates a token T that can uniquely identify that user.
    3. That token is then blinded B(T). Nobody who receives B(T) can learn anything about the user.
    4. The user takes the token to the government age verification service (AVS).
    5. The user presents the AVS with B(T) and whatever evidence is needed to verify age.
    6. The AVS checks if the person should be verified. If not, we can end the flow here. If so, move on.
    7. The AVS signs the blinded token using a trusted AVS certificate, S(B(T)) and returns it to the user.
    8. The user returns the token to the site.
    9. The site unblinds the token and obtains S(T). This allows them to see that it is the same token T representing the user, and to know that it was signed by the AVS, indicating that the user is of age.
    10. The site marks in their database that the user has been age verified. On future visits to that site, the user can just log in as normal, no need to re-verify.
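
    The blinding in the steps above can be instantiated with a textbook (Chaum-style) RSA blind signature. A toy sketch with insecure demo primes (a real deployment would use proper key sizes and a blind-signature-safe encoding):

```python
# Toy sketch of the blind-signature flow above, using textbook RSA
# with tiny demo primes. NOT secure parameters; illustration only.
import hashlib
import math
import secrets

# --- AVS key setup (a real deployment would use e.g. 2048-bit RSA) ---
p, q = 10007, 10009                 # demo primes only
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))   # AVS private exponent

# --- Steps 2-3: derive token T for the account, then blind it ---
T = int.from_bytes(hashlib.sha256(b"account:alice").digest(), "big") % n
while True:
    r = secrets.randbelow(n - 2) + 2  # blinding factor, coprime to n
    if math.gcd(r, n) == 1:
        break
B_T = (T * pow(r, e, n)) % n        # B(T): reveals nothing about T

# --- Step 7: AVS signs the blinded token after checking age ---
S_B_T = pow(B_T, d, n)              # S(B(T))

# --- Step 9: unblind and verify the AVS signature over T ---
S_T = (S_B_T * pow(r, -1, n)) % n
assert pow(S_T, e, n) == T          # signature checks out
print("verified")
```

    The AVS only ever sees B(T), which looks random from its point of view, so it learns nothing about which account or site the token belongs to.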

    All of the moving around of the token can be automated by the browser/app, if it’s designed to be able to do that. Unfortunately a typical OAuth-style redirect system probably would not work (someone with more knowledge please correct me), because it would expose to the AVS what site the token is being generated for. So the behaviour would need to be created bespoke. Or a user could have a file downloaded and be asked to share it manually.

    There’s also a potential exposure of information due to timing. If site X has a user begin the age verification flow at 8:01, the AVS receives a request at 8:02, and the site receives a signed token in return at 8:05, then the government can, with a subpoena (or the consent of site X), work out that the user who started at 8:01 and returned at 8:05 is probably the same person who began verifying at 8:02. Or at least narrow it down considerably. Making the redirect process manual would give the user the option to delay that step, if they wanted even more privacy.

    The site would probably want to store the unblinded, signed token, as long-term proof that they have indeed verified the user’s age with the AVS. A subsequent subpoena would not give the Government any information they could not have obtained from a subpoena in an un-age-verified system, assuming the token does not include a timestamp.

    • TechLich@lemmy.worldOP · 1 day ago

      It would also reveal to the government that the user was accessing 18+ content (though not what that content is if the token is blinded).

      It also doesn’t stop the easy circumvention of an adult providing a service to children or others who don’t want to authenticate with the government:

      1. The 18+ site provides child C with a token T, which is blinded to B(T)
      2. The child sends B(T) to a malicious service run by a real adult (Mal)
      3. Mal sends the blinded token to the AVS to obtain S(B(T))
      4. Mal returns S(B(T)) to the child, who gives it to the 18+ site, which unblinds it to a legitimate S(T)
      • Zagorath@aussie.zone · 1 day ago

        It would also reveal to the government that the user was accessing 18+ content

        Yes, I did mention that. Although ironically, Australia’s social media minimum age law, and similar laws being considered around the world, would actually increase privacy in this respect. The government could have separate keys for each age of legal significance (16 and 18, in Australia) and sign with the appropriate one: either the highest threshold the user meets, or all the thresholds the user meets. The latter would give the site less information about the user’s age.

        I don’t believe it is technically possible to get around the example you shared there. Even in the real world, it’s not dissimilar to a child asking an adult to buy alcohol for them.

        • TechLich@lemmy.worldOP · 23 hours ago

          The difference from asking an adult to buy alcohol is mostly that, because the whole thing is online, they would never really need to interact with an adult.

          If the circumvention is as easy as searching for “free age verification”, typing a URL and clicking a button, then it might not be very effective.

          If it at least required them to steal Dad’s ID card or get Uncle Bob to help, that’s a different story.

          • Zagorath@aussie.zone · 22 hours ago

            Actually something just occurred to me. Because my system, unlike the one from the Stack Exchange link or the one described elsewhere in the thread using an ID card, relies on a per-site untraceable request to the government, the government would be able to detect if one user is making a suspicious number of requests. It’s reasonable for one person to make tens of requests, maybe even low hundreds over the course of a lifetime. It’s not reasonable to be making hundreds or more in a day. They wouldn’t know which sites are being accessed with it, or even what accounts on those sites. But they could set rate limits to prevent one person creating too many accounts for others, and potentially threaten legal action against them for doing so.
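
            The AVS-side detection could be as simple as counting signing requests per verified identity per day; a sketch (the limit value is arbitrary):

```python
# Sketch of AVS-side rate limiting: the AVS knows who each signing
# request comes from (it just verified their age), even though the
# blinded token tells it nothing about the destination site.
from collections import defaultdict
from datetime import date

DAILY_LIMIT = 10  # tens per lifetime is normal; hundreds a day is not
requests = defaultdict(int)

def allow_signing(identity: str) -> bool:
    # Count blind-signing requests per verified identity per day.
    key = (identity, date.today().isoformat())
    requests[key] += 1
    return requests[key] <= DAILY_LIMIT
```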

            That threat of legal action is part of the same thing that prevents children from going up to a random adult, handing them a $50 note, and asking for $20 worth of alcohol in exchange. You’re not going to prevent it at a small scale, but you can definitely prevent a small handful of people age-verifying on behalf of thousands of children.

            An additional protection could be added depending on how the age verification works. If the verification is “upload a scan of your photo ID”, then yeah, mass verification becomes possible. But if each verification requires you to hold up your photo ID next to your face, speak a specific phrase aloud (with automated lip reading attempting a rough lip-flap match), nod your head, write a specific phrase on a piece of paper, and more, all in randomised orders, it becomes a much bigger burden for someone to provide for others.

            I’m certainly not advocating this. The level of burden for legitimate users would be too high to consider it reasonable. But it would be possible. Something like this has been used in the past for things like EV code signing certificates, where a larger burden is relatively more reasonable.

  • Passerby6497@lemmy.world · 2 days ago (edited)

    Why prove it at all? An assertion from the OS should be good enough. Just have the OS ask once and send that info, when required, as a general age range. A few ranges for kids/teens, an 18–21 group, and 21+ is all the info they really need at most.

    If age verification has to be a thing, let the user supply it at install/profile creation time, and just leave it at that.

    • bamboo@lemmy.blahaj.zone · 2 days ago

      This is the way. I think this is what Apple is finally implementing, but since they took too long to do so, governments have been passing laws that require privacy-invasive measures to fill the void. Hard to say whether that will reverse itself now that a whole age-verification industry has popped up. It’s actually unclear to me whether the age-verification industry manufactured a problem just to push its solution.

      Had Apple implemented this in its Parental Controls settings, it could have headed off both the government intervention and the shady age-verification companies popping up.

  • crwth@piefed.zip · 2 days ago
    1. Apply for access to age-gated content.
    2. Ignore application for 18 years.
    3. Your account has been approved.
    • TechLich@lemmy.worldOP · 1 day ago

      This is the first perfect solution I’ve heard!

      Granted it’s a little slow but it meets all the requirements xD

  • neidu3@sh.itjust.worksM · 2 days ago

    When software imposes a requirement like this, the software should be ditched in favor of open protocols. This is why any software that relies on a closed-spec protocol should be avoided.

    You’ll never see an age verification requirement on IRC or XMPP. And any client for these protocols that tries to implement age verification will simply be left at the curbside, replaced by an alternative.