• LordCrom@lemmy.world · +19/−1 · 16 hours ago

    I was asked to create a simple script… Great, I could have knocked that out in maybe 3 or 4 hours.

    Boss insisted I use AI… Fine, whatever.

    The code it spit out was OK, but didn’t work… So I took it and started recoding and fixing the bugs.

    It took over 3 hours to get that sloppy code to a working state.

    Boss asked why it took so long, since AI works in seconds. He didn’t understand that I had to fix the crap code he forced me to use.

    Look, AI does pattern matching like a champ. But it cannot create… It doesn’t imagine…

    • Smaile@lemmy.ca · +9 · 16 hours ago

      Yup, they don’t realize it will replace them, not their workers. And if you are that manager reading this, remember: their goal is no middle class.

      That means you.

      Not your grunts, who get paid dogshit and are little more than soulless husks these days.

      You.

    • Kaz@lemmy.org · +8 · 20 hours ago

      This, because all management does is communicate, and they think it’s amazing…

      Try to get it to do complicated or edge-case things and it struggles, but management never ever touches complicated stuff! They offload it.

  • Itdidnttrickledown@lemmy.world · +11 · 20 hours ago

    I have a simple answer for why managers think it’s smart and workers think it’s dumb. The managers see all kinds of documentation from workers, and to them the AI slop looks the same. It looks the same because the managers never take the time to comprehend what they are reading.

    • bampop@lemmy.world · +3 · 11 hours ago

      I think it’s more that AI is a soulless bullshit generator with no imagination and no deep understanding, and managers tend to notice that it can do most of the work they do. There’s a lot of skill overlap with management there, so naturally they would be impressed with it.

      • Itdidnttrickledown@lemmy.world · +4 · 17 hours ago

        Without a doubt. The skill set to be in management has nothing to do with intelligence. It has to do with selfish manipulation and no empathy. That way you can be cruel without missing a second of sleep.

  • RBWells@lemmy.world · +5 · edited · 5 hours ago

    They are pushing it at my work. I spent half a day trying to train Copilot to build me a report from one PDF and one way-too-formatted Excel sheet. No go: the over-formatted Excel stumped it, and I had to clean it up first. I am booking payroll, and the fucking system we use refuses to generate a report with the whole cost. There is one report for gross-to-net, and a separate one for the employer cost that isn’t available in Excel or in any format that can be put in a spreadsheet. I need to split the total into departments and job cost codes. (ETA: the payroll system also doesn’t handle the job costing, so even after I get total cost there’s more manual work.)

    I worked with the department that sends me this trash and, glory be, there was a CSV for the gross-to-net one. I finally wrestled Copilot into getting this right, asked it “what do I ask next time to get this result the first time,” and it now does a reliable job of this, BUT:

    All it’s doing is making a report that the payroll system really and truly ought to be capable of producing. And, I guess, letting me honestly say, “sure boss, I use the Copilot.” It’s not adding anything at all, just making up for a glaring defect in the reporting available from the payroll company. Give me access to that system and I could build the report; it doesn’t need AI at all.
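    For what it’s worth, the split described above — summing a gross-to-net CSV by department and job cost code — is a few lines of plain Python with no AI involved. A minimal sketch; the column names are invented for illustration and a real payroll export would need its own mapping:

```python
import csv
import io
from collections import defaultdict

def cost_by_department(csv_text):
    """Sum total cost per (department, job_code) from a gross-to-net CSV.

    Column names here are hypothetical placeholders, not the real
    payroll system's headers.
    """
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        key = (row["department"], row["job_code"])
        totals[key] += float(row["total_cost"])
    return dict(totals)

# Tiny made-up sample to show the shape of the output.
sample = """department,job_code,total_cost
Ops,J100,1200.50
Ops,J200,300.00
HR,J100,450.25
Ops,J100,99.50
"""

print(cost_by_department(sample))
```

    The same grouping works for any “one total per department and cost code” report; the only real work is cleaning the export into a flat CSV first, which is exactly where the over-formatted Excel sheet caused trouble.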

    • Echo Dot@feddit.uk · +4 · 18 hours ago

      This is my problem with AI where I work. I can use it to get the result I want (eventually) although I have to do some editing.

      But I can also use the Python script that has been working fine for years, which gets me 99% of the way there in 15 seconds. It would be even faster, but the script is terribly unoptimized because I’m not a programmer.

  • Whats_your_reasoning@lemmy.world · +24 · 1 day ago

    I don’t work with computers or coding, yet even in early childhood education/therapy some people are pushing for AI. Someone used it to make “busy scene” pictures for students to find specific things in. I hate using them. Prior to this, we used “busy scene” images that are easy to find online, full of quirky, funny details that the kids enjoy spotting.

    But I can barely look at the slop images that were generated. So many of the characters have faces that look like wax figures left in the hot summer sun. The “toys” in the scene are nonsensical shapes somewhere between unusable building blocks and poorly-formed puzzle pieces. Looking at the previous, human-made pictures brought me joy, but this AI garbage is a mess that makes me sad. There’s no direction, no fun details to find, just a chaotic, repetitive scene. I bet the kids I work with could draw something more interesting than this.

    • Hazor@lemmy.world · +14 · 1 day ago

      I’ve never understood these use cases, pushing for generative AI in places where there’s already an abundance of human-made resources. Often for free. Is it just laziness? A case of “Why take 2 minutes for a Google search when I could take 1 minute for a generative AI prompt?”

  • Reygle@lemmy.world · +13 · 1 day ago

    Honestly at this point AI is bad and human critical thinking is the worst I’ve ever seen in my life.

    I know people I expect would collapse inward without AI holding their hands, and here’s the surprising part of this statement: I can’t wait to see it happen. I’m really holding out for the implosions and REALLY hope they happen when I’m nearby.

  • Reygle@lemmy.world · +9 · 1 day ago

    Most of my conversations with management are spent talking them out of the heinous baloney they’re convinced of because “Gemini says…” No, boss, Gemini made some shit up. Scroll past it and stop wasting my time.

  • Sam_Bass@lemmy.world · +4 · 23 hours ago

    All the sweet talk in the world ain’t gonna save their jobs when their AI babies take over.

  • TrackinDaKraken@lemmy.world · +18 · 1 day ago

    Management never has a clue what their employees actually do day-to-day. We’re just another black box to them, tracked on a spreadsheet by accounting. Stuff goes in, stuff comes out, you can’t explain that.

    • ThomasWilliams@lemmy.world · +2 · 21 hours ago

      It’s really the middle management they don’t understand, not the floor staff: the people who do all the checking and compliance, which top management now thinks can be replaced by AI.

    • luciferofastora@feddit.org · +4 · 1 day ago

      I’m vaguely on the periphery of a project to create a sort of info-hub chat-bot. The project lead was really enthusiastic about getting me on board and helping me develop my skills in that direction.

      Apparently there’s a lot of people calling the wrong departments about stuff. Think along the stereotype of people calling the IT “Help Desk” for a broken light. The bot should help them find the right info, or at least the right department.

      The issue, according to management, is that information is spread all over the place. Some departments use Confluence, others maintain pages on the intranet webserver. One has their own platform for FAQ and tickets, except it’s not actually for tickets any more, which you’ll only find out when they unhelpfully close your ticket with that remark. Wanna guess what confused users do? Right, call some other department.

      The obvious solution would be getting each department to be more transparent and consistent about their information, responsibilities and ways to reach them, possibly even making them all provide their info on some shared knowledgebase with a useful search function. But that would require people to change their stuck habits.

      So instead they develop a bot supposed to know all the knowledgebases and access them for users, answer simple queries, point them the right way for complex ones and potentially even help them raise tickets with the relevant departments. Surely, that will improve things?

      The one time I tried it, I asked it a question that would have been my area of responsibility to see if people would actually find me or at least the general department. Yeah, nah, it pointed me at someone not just unrelated to that function or department, but also responsible for a different geographical area. IDK what they trained it on, but it probably didn’t include any mentions of that topic, which is fair, given it’s still in development.

      But instead of saying “I have no information on that” or directing me to a general contact, it confidently told me to do the thing it’s supposed to fix: bother the wrong person.
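      The failure mode here — confidently routing to the wrong contact instead of admitting ignorance — usually comes down to a missing abstain threshold in the retrieval step. A minimal sketch of the idea; the scoring is naive word overlap purely for illustration (a real bot would use embeddings), and the department names are made up:

```python
def route_query(query, knowledge_base, threshold=0.35):
    """Return the best-matching (topic, contact), or None for 'I don't know'.

    Scoring is toy Jaccard word overlap; the point is the abstain
    logic at the end, not the similarity metric.
    """
    q_words = set(query.lower().split())
    best_entry, best_score = None, 0.0
    for topic, contact in knowledge_base:
        t_words = set(topic.lower().split())
        score = len(q_words & t_words) / len(q_words | t_words)
        if score > best_score:
            best_entry, best_score = (topic, contact), score
    # Below the threshold, refuse rather than guess a contact.
    return best_entry if best_score >= threshold else None

# Hypothetical knowledge base entries.
kb = [
    ("broken office light fixture", "Facilities"),
    ("vpn access request", "IT Help Desk"),
]

print(route_query("my office light is broken", kb))  # matches Facilities
print(route_query("parental leave policy", kb))      # None: no confident match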

      And the project lead wonders why I didn’t immediately jump at the offer to join his department.

      • Jtotheb@lemmy.world · +5 · 22 hours ago

        My wife, who works at a college, was recently trying to locate some information from an old college newspaper that may not have been digitized yet and used their new work AI for help finding it. It directed her to the school’s archives, but provided made-up contact info for the office, and also recommended she contact herself.

  • gravitas_deficiency@sh.itjust.works · +25 · 1 day ago

    We just had an all hands where they were circlejerking about how incredible “AI” is. Then they started talking about OKRs around using that shit on a regular basis.

    On the one hand, I’m more than a little peeved that none of the pointed and cogent concerns that I have raised on personal, professional, hobbyist, sustainability, environmental, public infrastructure, psychological, social, or cultural grounds - backed up with multiple articles and scientific studies that I have provided links to in previous all-hands meetings - have been met with anything more than hand-waving before being simply ignored outright.

    On the other hand, I’m just going to make a fucking cron job pointed at a script that hits the LLM API they’re logging usage on, asking it to summarize the contents, intent, capabilities, advantages, and drawbacks of random GitHub repos over a certain SLOC count. There’s a part of me that feels bad for using such a wasteful service in such a wasteful fashion. But there’s another part of me that is more than happy to waste their fucking money on LLM tokens if they’re gonna try to make me waste my time like that.
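    The plumbing for a stunt like this is genuinely trivial, which is part of the joke. A hedged sketch: the cron line, script path, model name, and prompt wording are all placeholders, and the payload assumes a generic OpenAI-style chat endpoint; only the request construction is shown, since actually sending it is a single HTTP POST.

```python
import json

# Hypothetical cron entry (path and schedule are placeholders):
#   0 * * * * /usr/bin/python3 /opt/scripts/burn_tokens.py

def build_summary_request(repo_url, model="placeholder-model"):
    """Build an OpenAI-style chat payload asking for a repo summary.

    The model name and prompt are illustrative; dispatching would be
    a POST of this payload to whatever API the usage logging watches.
    """
    prompt = (
        f"Summarize the contents, intent, capabilities, advantages, "
        f"and drawbacks of the repository at {repo_url}."
    )
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_summary_request("https://github.com/torvalds/linux")
print(json.dumps(payload, indent=2))
```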

  • Infrapink@thebrainbin.org · +118 · 2 days ago

    I’m a line worker in a factory, and I recently managed to give a presentation on “AI” to a group of office workers (it went well!). One of the people there is in regular contact with the C_Os but fortunately is pretty reasonable. His attitude is “We have this problem; what tools do we have to fix it?”, so he isn’t impressed by “AI” yet. The C_Os, alas, insist it’s the future. They keep hammering on at him to get everybody to integrate “AI” into their workflows, but they have no idea how to actually do that (let alone what the factory actually does); they just say “We have this tool, use it somehow.”

    The reasonable manager asked me how I would respond if a C_O said we would get left behind if we don’t embrace “AI”. I quipped that it’s fine to be left behind when everybody else is running towards a cliff. I was pretty proud of that one.

    • ravelin@lemmy.ml · +14 · 1 day ago

      As usual, I fear for the reasonable manager’s job.

      Reasonable managers usually get plowed out of the way by unreasonable C levels who just see their reasonable concerns as obstructions.

    • Strider@lemmy.world · +9 · 1 day ago

      Everyone bought in so hard that they need to make you/us use it. Otherwise it will be a financial disaster. It’s shit leaking down all the way.

      (Of course it has uses. But it’s not AGI!)

      • wonderingwanderer@sopuli.xyz · +5 · 1 day ago

        In the early stages, it had potential to develop into something useful. Legislators had a chance to regulate it so it wouldn’t become toxic and destructive of all things good, but they didn’t do that because it would “hinder growth,” again falling for the fallacy that growth is always good and desirable.

        But to be honest, some of the earlier LLMs were much better than the ones now. They could have been forked and developed into specialized models trained exclusively on technical documents relevant to their field.

        Instead, AI companies all wanted to have the biggest, most generalized models they could possibly develop, so they scraped as much data as they possibly could and trained their LLMs on enormous amounts of garbage, thinking “oh just a few billion more data points and it will become sentient” or something stupid like that. And now you have Artificial Idiocy that hallucinates nonstop.

        Like, an LLM trained exclusively on peer-reviewed journals could make a decent research assistant or expedited search engine. It would help with things like literature reviews, collating data, and meta-analyses, saving time for researchers so they could dedicate more of their effort towards the specifically human activities of critical thinking, abstract analysis, and synthesizing novel ideas.

        An ML model trained exclusively on technical diagrams could render more accurate simulations than one trained on a digital fuckton of slop.

      • RaoulDook@lemmy.world · +3 · 1 day ago

        That’s what I suspect. All the corporate bosses pushing AI to keep the bubble inflated so that their investments don’t get drowned in the crash.

        I gotta work on my 401k stuff to find ways to divest from AI + tech too.

  • fibojoly@sh.itjust.works · +21 · 2 days ago

    Our new tech lead loves fucking AI, which lets him refactor our Terraform (I was already doing that), write pipelines in GitLab, and do lots of other shiny cool things (after many, many, many attempts, if his commit history is any indication).

    Funnily, he won’t touch our legacy code. He just answers “that’s outside my perimeter” when he’s clearly the one who should be helping us handle that shit. And it’s a mission-critical part of our company. But no, outside his perimeter. Gee, I wonder why.

  • SpookyBogMonster@lemmy.ml · +3 · 1 day ago

    My workplace was holding the yearly meeting where they lay out a bunch of rules that get followed for a month, and then get forgotten about.

    And one of the things in question was attendance. The boss smugly says, “We have an AI tracker that can tell us if you’ve come in late.”

    I can’t think of anything that could give me less faith in the accuracy of such a system.