extensions.gnome.org, the hosting site for GNOME Shell extensions, will no longer accept submissions containing AI-generated code. A new rule forbidding AI-generated code has been added to the review guidelines.

Due to the growing number of AI-generated GNOME Shell extensions submitted to extensions.gnome.org, such submissions are now explicitly prohibited: the new guideline states that AI-generated code will be rejected.

  • i_stole_ur_taco@lemmy.ca · 6 points · 6 days ago

    extension developers should be able to justify and explain the code they submit, within reason

    I think this is the meat of how the policy will work. People can use AI or not. Nobody is going to know. But if someone slops in a giant submission and can’t explain why any of the code exists, it needs to go in the garbage.

    Too many people think because something finally “works”, it’s good. Once your AI has written code that seems to work, that’s supposed to be when the human starts their work. You’re not done. You’re not almost done. You have a working prototype that you now need to turn into something of value.

    • skepller@lemmy.world · 2 points · 5 days ago

      Too many people think because something finally “works”, it’s good. Once your AI has written code that seems to work, that’s supposed to be when the human starts their work.

      Holy shit, preach!

      Once you give a shit ton of prompts and the feature finally starts working, the code is most likely complete ass, probably filled with a ton of useless leftovers from previous iterations, redundant and unoptimized code. That’s when you start reading/understanding the code and polishing it, not when you ship it lol

  • buddascrayon@lemmy.world · 1 point · 5 days ago

    This is one of the things that people who use AI to vibe code don’t get. Sure your AI genned code ends up working but when you actually look at the code it’s sloppy as all fuck, with a lot of unnecessary junk in it. And if you ever have to fix it, good fucking luck finding what’s actually going on. Since you didn’t write it there’s no way for you to know exactly what it is that’s actually fucking up.

    Really you end up being no better than some homebody who copy-pasted some code they found on the internet and plugged it into their shit with no idea of how any of it actually works.

  • itsathursday@lemmy.world · 3 points · 6 days ago

    You used to be able to tell an image was photoshopped because of the pixels. Now with code you can tell it was written with AI because of the comments.

    • uncouple9831@lemmy.zip · 0 up / 3 down · 6 days ago

      Why? If the code works the code works, and a person had to make it work. If they generated some functions who cares? If they let the computer handle the boilerplate, who cares? “Oh no the style is inconsistent…” Who cares?

        • ikidd@lemmy.world · 0 points · 6 days ago

          This is Gnome we’re talking about here, they don’t GAF if extensions work or not. They’ll break them tomorrow if they feel like it.

              • lastweakness@lemmy.world · 1 point · 4 days ago

                You’re literally looking at a post that is a result of that effort… The human review process exists to try and reduce GNOME Shell extensions that could potentially break the shell. The link I posted details other steps as well, but of course you didn’t bother reading that. And again, it’s impossible to never break extensions because extensions are just scripts that monkey-patch the GNOME Shell process. Trying their best is all they can do.

                With how Reddit and Lemmy react to GNOME, you would think GNOME killed their dog or something.

            • ikidd@lemmy.world · 0 points · 5 days ago

              uninformed

              I’ve used Gnome on and off for about a quarter century. There have been devs with very popular extensions that have sworn off Gnome because of their attitude towards breaking extensions. So if they’ve suddenly become concerned about breaking things people rely on to make Gnome marginally usable after Gnome itself has removed popular features, then that’s a recent trend. So pull the other one.

              • lastweakness@lemmy.world · 0 points · 5 days ago

                Of course there are extension devs who left GNOME due to the lack of a stable API. But they were all looking for something that was inherently not possible with how extensions work in GNOME. I can’t blame them, “extensions” is a misnomer in this case after all. It’s actually more like userscripts being applied on a web page in a browser.

                If possible, take the time to read the link in my earlier comment; it should clear up a lot of the misunderstanding about GNOME devs “intentionally breaking extensions”, as most people seem to think of it.

                Given how extensions work (monkey-patching), it’s actually really impressive that most extensions haven’t broken since GNOME 45, thanks to the steps GNOME has taken to that end. Even the human review being discussed here is part of that: an extension can literally bring down a user’s shell (much like a userscript can crash a web page), so they’re trying to reduce the chances of that happening.

                GNOME has always had a bit of a communication problem. They’re working on it. But I promise you, they’re all wonderful folks trying their best, even if they fail to convey that well sometimes.
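The monkey-patching described above is language-agnostic; here is a minimal sketch in Python (GNOME Shell extensions are actually JavaScript/GJS, and the class and method names below are invented for illustration) of why an extension can take the whole shell down with it:

```python
class Panel:
    """Stand-in for a running shell component; purely illustrative."""
    def status_text(self):
        return "12:00"

_original = Panel.status_text  # saved so disable() can restore it

def _patched(self):
    # The "extension" replaces a live method on the running shell.
    return "[clock] " + _original(self)

def enable():
    Panel.status_text = _patched

def disable():
    # Forgetting this, or patching incorrectly, breaks the shell for the user.
    Panel.status_text = _original

enable()
print(Panel().status_text())   # patched behavior: "[clock] 12:00"
disable()
print(Panel().status_text())   # original behavior restored: "12:00"
```

Because the patch edits shared, live state rather than calling a stable API, there is nothing the host can guarantee about compatibility — which is the point being made about GNOME releases breaking extensions.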

        • uncouple9831@lemmy.zip · 0 points · edited · 6 days ago

          Why would that be anyone other than the original author? This sounds like a hosting service is refusing to host things based on what tool was used in creation. “Anyone using emacs can’t upload code to GitHub anymore” seems equivalently valid.

          • vrighter@discuss.tchncs.de · 1 point · 6 days ago

            In the case of AI-generated code, that is almost always the case. People say “but I review all my pet neural network’s code!” but they don’t. If they did, the job would actually take longer. Reading and understanding code takes longer than writing it.

          • imecth@fedia.io · 1 point · 6 days ago

            GNOME manually reviews every extension, and they understandably don’t want to review AI generated code.

            • uncouple9831@lemmy.zip · 0 points · edited · 6 days ago

              Oh…an actually human response. How refreshing. At least one person here got their rabies shot.

              Do they actually review it or is it like how android and apple “review” apps? And why would they be reviewing the code rather than putting it through some test suite/virus scanning suite or something? That is, this shit isn’t going away any time soon even if the bubble pops, so why not find a way to avoid the work rather than ban people who make the work “too hard”?

      • brian@programming.dev · 1 point · 6 days ago

        you shouldn’t be able to tell if someone used ai to write something. if you can then it is bad code. they’re not talking about getting completion on a fn, they’re talking about letting an agent go and write chunks of the project.

    • IngeniousRocks (They/She) @lemmy.dbzer0.com · 1 point · edited · 6 days ago

      Just an example:

      I’m a programming student. In one of my classes we had a simple assignment: write a simple script to calculate factorials. The purpose of this assignment was to teach recursion. It should be doable in 4-5 lines max, probably less. My classmate decided to vibe code his assignment and ended up with a 55-line script. It worked, but it was literally 1100% of the length it needed to be, with lots of dead functions and ‘None->None(None)’-style explicit typing where it simply wasn’t needed.

      The code was hilariously obviously AI code.

      Edit: I had like 3/4 typos here
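For comparison, the whole assignment above really does fit in a few lines of Python (a generic sketch of the recursive solution, not the classmate's actual code):

```python
def factorial(n: int) -> int:
    """Recursive factorial: one base case plus one self-call."""
    if n <= 1:
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # → 120
```

Anything much beyond this — dead helpers, layered wrappers, redundant type ceremony — is the kind of padding the comment is describing.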

    • kadu@scribe.disroot.org · 1 point · 6 days ago

      I guess the practical idea is that if your AI-generated code is so good, and you’ve reviewed it so well, that it fools the reviewer, the rule did its job and then it doesn’t matter.

      But most of the time the AI code jumps out immediately to any experienced reviewer, and usually for bad reasons.

      • refalo@programming.dev · 0 up / 1 down · 6 days ago

        So then it’s not really a blanket “no-AI” rule if it can’t be enforced when the code is good enough? I suppose the rule should have been “no obviously bad AI” or some other equally subjective thing?

  • danhab99@programming.dev · 0 points · 6 days ago

    So what does this mean? Because (at least with my boss) whenever I submit AI-generated code at work I still have to have a deep and comprehensive understanding of the changes I made, and I have to be right (meaning I have to stand behind what I say, because I can’t just say the AI solved the problem). What’s the difference between that and me writing the code myself (+ googling and Stack Overflow)?

    • theneverfox@pawb.social · 3 points · 6 days ago

      The difference is people aren’t being responsible with AI

      You’re projecting competence onto others. You speak like you’re using AI responsibly

      I use AI when it makes things easier. All the time. I bet you do too. Many people are using AI without a steady hand, without the intellectual strength to use it properly in a controlled manner

      • Hawk@lemmynsfw.com · 1 point · 6 days ago

        It’s like a gas can over a match. Great for starting a campfire. Excellent for starting a wildfire.

        Learning the basics and developing a workflow with VC is the answer.

          • Hawk@lemmynsfw.com · 1 point · 6 days ago

            Large language models are incredibly useful for replicating patterns.

            They’re pretty hit and miss with writing code, but once I have a pattern that can’t easily be abstracted, I use it all the time and simply review the commit.

            Or a quick proof of concept to ensure a higher level idea can work. They’re great for that too.

            It is very annoying though when I have people submit me code that is all AI and incredibly incorrect.

            It’s just another tool on my belt. It’s not going anywhere, so the real trick is figuring out when to use it, why, and when not to use it.

            To be clear, VC above meant version control. I should have been clearer.

      • uncouple9831@lemmy.zip · 0 points · 6 days ago

        Banning a tool because the people using it don’t check their work seems shortsighted. Ban the poor users, not the tool.

        • logging_strict@programming.dev · 0 points · 6 days ago

          They should state a justification, not merely what they are looking for to identify AI-generated code.

          The justification could be that the author is unlikely to be capable of maintenance, in which case the extension is just going to shift an inconvenience/burden onto others.

          So far there is no justification stated besides, da fuk and yuk.

          • uncouple9831@lemmy.zip · 1 point · 6 days ago

            Exactly, there are no criteria other than the reviewer getting butthurt. Granted, this is GNOME, so doing whatever they feel like regardless of consequences is kind of their thing, but a saner organization would try to make the actual measurable badness more clear.

            • logging_strict@programming.dev · 2 points · 4 days ago

              A saner organization would also hit up submitters for a reviewer’s fee. This would reduce AI spam. Barriers to entry matter.

              A reviewer’s fee is equivalent to Canonical offering customer support contracts. Obviously a person that needs to lean on AI as a crutch is just screaming out for reviewers to act as advisers. The reviewer just wielding the giant DENIED stamp is fun, but it doesn’t address the issue of noobs implicitly asking to work with a consultant.

              gnome reviewers obviously never missing an opportunity to miss an opportunity.

            • Quatlicopatlix@feddit.org · 0 points · 5 days ago

              Have you read the first paragraph of the linked article? It quotes the criteria right there: "Extensions must not be AI-generated

              While it is not prohibited to use AI as a learning aid or a development tool (i.e. code completions), extension developers should be able to justify and explain the code they submit, within reason.

              Submissions with large amounts of unnecessary code, inconsistent code style, imaginary API usage, comments serving as LLM prompts, or other indications of AI-generated output will be rejected."

              Maybe instead of commenting under every comment about this change, read the article first? AI is fine if your code is fine and you understand it. If the reviewer has to argue with an LLM because the submitter just pastes the review text into his LLM and then posts the output of said LLM back to the reviewer, it’s a huge waste of time. This doesn’t happen if the person submitting the code understands it and made sure that the code is fine.

              • logging_strict@programming.dev · 1 point · 4 days ago

                Everyone commenting has read and understood the article. Perhaps the nuance of the conversation is just going over your head. Your commentary is your personal opinion, which is outside of the source material. What you copy+pasted is exactly what we’ve commented on.

                The article never said “reviewing GNOME extensions where an LLM was used is a huge waste of time”. You are adding to what’s said. The adults are not pulling from outside the article.

                We are stating what the article lacks. We are not hallucinating. So if we are not hallucinating, then you must not be following. Reread it a few times until you get it.

                • Quatlicopatlix@feddit.org · 1 point · 4 days ago

                  Maybe read the original blog post from the GNOME dev then? The post the article references... it says right there why the AI code is a problem: it has too much unnecessary code in it, and reviewing that takes time. The author also says the submitted AI code doesn’t adhere to good practices.

        • theneverfox@pawb.social · 0 points · 6 days ago

          We do this all the time. I’m certified for a whole bunch of heavy machinery; if I were worse at it, people would’ve died.

          And even then, I’ve nearly killed someone. I haven’t, but on a couple occasions I’ve come way too close

          It’s good that I went through training. Sometimes, it’s better to restrict who is able to use powerful tools

          • uncouple9831@lemmy.zip · 0 points · 6 days ago

            Yeah something tells me operating heavy machinery is different from uploading an extension for a desktop environment. This isn’t building medical devices, this isn’t some misra compliance thing, this is a widget. Come on, man, you have to know the comparison is insane.

            • theneverfox@pawb.social · 1 point · 6 days ago

              People have already died to AI. It’s cute when the AI tells you to put glue on your pizza or asks you to leave your wife, it’s not so cute when architects and doctors use it

              Bad information can be deadly. And if you rely too hard on AI, your cognitive abilities drop. It’s a simple mental shortcut that works on almost everything

              It’s only been like 18 months, and already it’s become very apparent a lot of people can’t be trusted with it. Blame and punish those people all you want, it’ll just keep happening. Humans love their mental shortcuts

              Realistically, I think we should just make it illegal to have customer facing LLMs as a service. You want an AI? Set it up yourself. It’s not hard, but realizing it’s just a file on your computer would do a lot to demystify it