• Lvxferre@mander.xyz
    5 months ago

    From HN comments:

    I just used Groq / llama-7b to classify 20,000 rows of Google sheets data (Sidebar archive links) that would have taken me way longer… Every one I’ve spot checked right now has been correct, and I might write another checker to scan the results just in case. // Even w/ a 20% failure it’s better than not having the classifications

    I classified ~1000 GBA game roms files by using their file names to put each in a category folder. It worked like 90% correctly. Used GPT 3.5 and therefore it didn’t adhere to my provided list of categories but they were mostly not wrong otherwise.

    Both are best-case scenarios for the usage of LLMs: simple categorisation of stuff where mistakes are not a big deal.
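    For the curious: the sort of categorisation both commenters describe is a couple dozen lines. A minimal sketch, assuming an OpenAI-compatible endpoint (Groq exposes one) and a hypothetical category list; the off-list fallback handles exactly the GPT-3.5 misbehaviour the second commenter mentions:

```python
# Sketch of LLM-based file categorisation as described above.
# Assumes the `openai` client pointed at an OpenAI-compatible endpoint
# (Groq exposes one); model name and category list are placeholders.

CATEGORIES = ["RPG", "Platformer", "Puzzle", "Racing", "Other"]  # hypothetical

def build_prompt(filename: str, categories: list[str]) -> str:
    """Ask for exactly one category name, nothing else."""
    return (
        f"Classify the game ROM '{filename}' into exactly one of these "
        f"categories: {', '.join(categories)}. Reply with the category name only."
    )

def parse_category(reply: str, categories: list[str]) -> str:
    """Map the model's reply back onto the allowed list; off-list answers
    (the GPT-3.5 problem mentioned above) fall through to 'Other'."""
    cleaned = reply.strip().strip(".").lower()
    for cat in categories:
        if cat.lower() == cleaned or cat.lower() in cleaned:
            return cat
    return "Other"

def classify(filename: str, client, model: str) -> str:
    # client = openai.OpenAI(base_url="https://api.groq.com/openai/v1", api_key=...)
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(filename, CATEGORIES)}],
    )
    return parse_category(resp.choices[0].message.content, CATEGORIES)
```

    Spot-checking the output afterwards, like the first commenter did, is still mandatory.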

    [A] I work at Microsoft, though not in AI. This describes Copilot to a T. The demos are spectacular and get you so excited to go use it, but the reality is so underwhelming.

    [B] Copilot isn’t underwhelming, it’s shit. What’s impressive is how Microsoft managed to gut GPT-4 to the point of near-uselessness. It refuses to do work even more than OpenAI models refuse to advise on criminal behavior. In my experience, the only thing it does well is scan documents on corporate SharePoint. For anything else, it’s better to copy-paste to a proper GPT-4 yourself.

    [C] lol I can’t help but assume that people who think copilot is shit have no idea what they are doing.

    [D] I have it enabled company-wide at enterprise level, so I know what it can and can’t do in day-to-day practice. // Here’s an example: I mentioned PowerPoint in my earlier comment. You know what’s the correct way to use AI to make you PowerPoint slides? A way that works? It’s to not use the O365 Copilot inside PowerPoint, but rather, ask GPT-4o in ChatGPT app to use Python and pandoc to make you a PowerPoint.
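    For reference, the roundabout route "D" describes actually works: have Python write a pandoc-flavoured Markdown deck, then let pandoc convert it to .pptx. A sketch (file names are illustrative, and pandoc must be installed for the final conversion step):

```python
# Sketch of the "Python + pandoc" PowerPoint route described above.
# Pandoc turns Markdown into .pptx; headings start new slides.
# File names are illustrative; pandoc must be installed for the last step.
import subprocess

def slides_markdown(title: str, slides: list[tuple[str, list[str]]]) -> str:
    """Build a pandoc-flavoured Markdown deck: one '## heading' per slide."""
    parts = [f"% {title}"]
    for heading, bullets in slides:
        parts.append(f"\n## {heading}\n")
        parts.extend(f"- {b}" for b in bullets)
    return "\n".join(parts) + "\n"

def write_deck(md: str, out: str = "deck.pptx") -> None:
    with open("deck.md", "w") as f:
        f.write(md)
    # pandoc does the actual conversion:
    subprocess.run(["pandoc", "deck.md", "-o", out], check=True)

md = slides_markdown("Quarterly review", [
    ("Numbers", ["Revenue up", "Costs down"]),
    ("Next steps", ["Ship it"]),
])
# write_deck(md)  # uncomment if pandoc is installed
```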

    A: see, it’s this kind of stuff that makes me mock HN as “Reddit LARPing as h4x0rz”. If a Reddit comment opens by asserting the author’s alleged authority over a subject, and then makes a claim, there’s a high likelihood that the claim is some obtuse shit. Like this - the problem is not just LLMs, it’s Copilot being extra shite.

    B: surprisingly sane comment by HN standards, even offering a way to prove its own claim.

    C: yeah of course you assume = make shit up. Especially about things that you cannot reliably know. And while shifting the discussion from “what” is said to “who” says it. Muppet.

    Author makes good points but suffers from “i am genius and you are an idiot” syndrome which makes it seem mostly the ranting of an asshole vs a coherent article about the state of AI.

    Emphasis mine. It’s like “C” from the quote above, except towards the author of the article. Next~

    I didn’t find this article refreshing. If anything, it’s just the same dismissive attitude that’s dominating this forum, where AI is perceived as the new blockchain. An actually refreshing perspective would be one that’s optimistic.

    I’m glad to see that I’m not the only one who typically doesn’t bother reading HN comments. This guy doesn’t either - otherwise they’d know that most comments go in the opposite direction, blinded by idiocy/faith/stupidity (my bad, I listed three synonyms for the same thing.)

    I’m just going to say it. // The author is an idiot who is using insults as a crutch to make his case.

    I’m just going to say it: the author of this comment is an idiot who is using insults as a crutch to make his case.

    I’m half-joking by being cheeky with the recursion. (It does highlight the hypocrisy though; the commenter is whining about insults while insulting the author.)

    Serious now: if you’re unable to extract the argumentation from the insults, or to understand why the insults are there (it’s a rant dammit), odds are that you’d do a great favour for everyone on the internet by going offline. Forever.


    “But LLMs are intellig–” PILEDRIVE TIME!!!