
  • Edit: I’m an idiot.

    Same here. Nobody knows what the eff they are doing, especially the people in charge. Much of life is believing confident people who talk a good game but don't know wtf they are doing and really shouldn't be allowed to make even basic decisions outside a very narrow range of competence.

    We have an illusion of broad meritocracy and accountability in life, but it's mostly just not there.


  • I work at a company that is all-in on selling AI, and we are trying desperately to use this AI ourselves. We've concluded internally that AI can only be trusted with small use cases that are easily validated by humans, or for fast prototyping work… hack-day stuff to validate a possibility, not an actual high-quality, safe, and scalable implementation… or for writing tests of existing code to increase test coverage. Yes, I know that's a bad idea, but QA blessed the result… so, um… cool.

    The use case we zeroed in on is writing well-schemaed configs in YAML or JSON. Even then, a good percentage of the time the AI will miss very significant mandatory sections, or add hallucinations unrelated to the task at hand. We can then use AI to test AI's work, several times, using several AIs. To a degree that catches a lot of the issues, but not all. So we then code-review and lint with code we wrote ourselves, which AI never touched, and send all the erroring configs to a human (rough sketch of the loop at the end of this comment). It works, but it can't be used for mission-critical applications. And nothing about the AI, or the process of using it, is free. It's also, disturbingly, nondeterministic: did it fail? Run it again a few times and it'll pass. We think it still saves money at scale, but not as much as we promise external AI customers. Senior leadership knows it's currently overhyped trash and pressures us to use it anyway, on the expectation that it'll improve in the future, so we give the mandatory crisp salute of alignment and we're off.

    I will say it's great for writing yearly personnel reviews. It adds nonsense and doesn't get the whole review correct, but it writes very flowery stuff so managers don't have to. So we use it for first drafts and then strip a lot of the BS back out of it. If it gets stuff wrong, oh well, human perception is flawed.

    This is our shared future. One of the biggest use cases identified for the industry is health care, because it's hard to assign blame when the AI gets something wrong, and the AI will do whatever the insurance middlemen tell it to do.

    I think we desperately need a law banning AI from health care decisions, before it's too late. This half-assed tech is 100% going to kill a lot of sick people.
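    To make the config pipeline above concrete, here's a minimal sketch of the validate/retry/escalate loop, assuming the config has a JSON Schema to check against. generate_config() is a hypothetical stand-in for the model call, and the schema fields are invented for illustration:

    ```python
    # Sketch of a validate/retry/escalate loop for AI-generated configs.
    # Hypothetical: generate_config() stands in for the model call, and
    # CONFIG_SCHEMA's fields are invented for illustration.
    import yaml
    from jsonschema import ValidationError, validate

    CONFIG_SCHEMA = {
        "type": "object",
        # Mandatory sections the model tends to silently drop.
        "required": ["service", "replicas", "healthcheck"],
        "properties": {
            "service": {"type": "string"},
            "replicas": {"type": "integer", "minimum": 1},
            "healthcheck": {"type": "object"},
        },
        # Reject hallucinated keys unrelated to the task.
        "additionalProperties": False,
    }

    def generate_config(prompt: str) -> str:
        """Hypothetical model call; returns a YAML string."""
        raise NotImplementedError

    def validated_config(prompt: str, max_attempts: int = 3):
        """Return a schema-valid config dict, or None to escalate to a human.

        The retry loop exists because generation is nondeterministic:
        the same prompt can fail once and pass on the next attempt.
        """
        for attempt in range(1, max_attempts + 1):
            raw = generate_config(prompt)
            try:
                config = yaml.safe_load(raw)
                validate(instance=config, schema=CONFIG_SCHEMA)
                return config
            except (yaml.YAMLError, ValidationError) as err:
                print(f"attempt {attempt} rejected: {err}")
        return None  # escalate: route the config to human review
    ```

    None of this removes the human from the loop; the schema only catches structural misses, not configs that validate but do the wrong thing.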

  • The output AI provides does not offset the tediousness of using it, not to mention that you always have to check its work thoroughly because it can tell you absolute nonsense. Google AI told me my local supermarket was open on Thanksgiving Day, and it wasn't. Then I drove to a Safeway nearby that Google said was open 24/7 on Thanksgiving, and it was closing at 6. I got what I needed with seconds to spare.

    EVERYTHING about using AI is like this. You can't trust it for shit, and when you really need it to be accurate, it won't be. Nobody needs that in their lives, and CEOs and AI freaks are glossing over how much of a pain in the arse it is to use for anything but very, very narrow use cases, where a Google search would do fine anyway. I not only don't need it, I hate it.