• 3 Posts
  • 70 Comments
Joined 5 months ago
Cake day: July 14th, 2025

  • The main post is already badly downvoted so I probably shouldn’t even bother to engage, but this whole article is actually just showing a lack of knowledge on the subject. So here goes nothing:

    Corporations have been running algorithms for decades.

    Millennia*. We can run algorithms without computers, so the first algorithm was run way earlier than decades ago. And corporations certainly were invented before the last century.

    Markets weren’t inefficient because technology didn’t exist to make them efficient. Markets were asymmetrically efficient on purpose. One side had computational power. The other side had a browser and maybe some browser tabs open for comparison shopping.

    I suppose the author has never used all of those price-watching websites that existed before 2022. I also question how they think a price optimization algorithm is useful to a person who is trying to buy, not sell, something.

    Consider what it took to use business intelligence software in 2015. […] Language models collapsed that overhead to nearly zero. You don’t need to learn a query language. You don’t need to structure your data. You don’t need to know the right technical terms. You just describe what you want in plain English. The interface became conversation.

    You still need to structure your data, because the LLM has to be able to understand how that data is organized. In fact, it is still easy enough for an LLM to misinterpret data that inconsistently structured data is just asking for problems… not that LLMs are consistent anyway. The very existence of prompt engineering means the interface isn’t just conversation.

    The moment ChatGPT became public, people started using it to avoid work they hated. Not important work. Not meaningful work. The bureaucratic compliance tasks that filled their days without adding value to anything.

    Oh ok better just stop worrying about that compliance paperwork because the author says it’s worthless. Just dump that crude oil directly on top of the nice ducks, no point in even trying to only spill it into their pond.

    Compliance tasks are actually the most important part of work. They are what guarantee your work has worth. Otherwise you’re just an LLM – sometimes producing ok results but always wasting resources.

    People weren’t using ChatGPT to think. They were using it to stop pretending that performance reviews, status update emails, and quarterly reports required thought.

    Basically, users used it to create the layer of communication that existed to satisfy organizational requirements rather than to advance any actual goal.

    Once again with the poor examples. If you can’t give a thoughtful performance review for the people who work below you, you’re just horrible at your job. Performance reviews aren’t just crunching some numbers and giving people a gold star. I’m sure sometime in the future I could pipe in all of the quick chats I’ve had with coworkers in the office and tell an LLM to consider them when generating a review, but that’s not possible today. So no, performance reviews do actually require thought. Status emails and quarterly reports can be basically summaries of existing data, so maybe they don’t require much thought, but they still require some. This is demonstrated by the amount of clearly LLM-generated content that has become infamous at this point for containing inaccurate info. LLMs can’t think, but a thinking human could have reviewed that output and stopped it from ever reaching anyone else.

    This is very much giving me the impression the author doesn’t like telling others what they’re doing. They’d rather work alone and without interruption. I worry that they don’t work well in teams since they lack the willingness to communicate with their peers. Maybe one day they’ll realize that their peers can do work too and even help them.

    You want the cheapest milk within ten miles? You can build that.

    The first search result for “grocery price tracker” that I found is a local tracker started in 2022, before LLMs.

    You want to track price changes across every retailer in your area? You can do that now

    From searching “<country> price tracker”, I found Camel^3 which is famous for Amazon tracking and another country-specific one which has a ToS last updated in 2018. The author is describing things that could already be accomplished with a search engine.

    You want something to read every clause of your insurance policy and identify the loopholes?

    Lmao DO NOT use an LLM for this. They are not reliable enough for this.

    You want an agent that will spend forty hours fighting a medical billing error that you’d normally just pay because fighting it would cost more in time than the bill? You can have that.

    You know what? I take it all back, this is definitely proving Dystopia Inc. But seriously, that is a temporary solution to a permanent problem. Never settle for that. The real solution here is to task the LLM with sending messages to every politician and lobbyist telling them to improve the system they make for you.

    The marginal cost of algorithmic labor has effectively collapsed. Using a GPT-5.2–class model, pricing is on the order of $0.25 per million input tokens and about $2.00 per million output tokens. A token is roughly three-quarters of a word, which means one million tokens equals about 750,000 words. Even assuming a blended input/output cost of roughly $1.50 per million tokens, you can process 750,000 words for about $1.50. War and Peace is approximately 587,000 words, meaning you can run an AI across one of the longest novels ever written for around a dollar. That’s not intelligence becoming cheaper. That’s the marginal cost of cognitive labor approaching zero.

    Never mind the irony of calling computers doing work “algorithmic labour”; this is just nonsense. Of course things built entirely on free labour are going to be monetarily cheap. Also, feeding War and Peace into an LLM as input tokens is not the same as training the LLM on it.

    We are seeing the actual cost of LLM usage unfold, and you’d have to be wilfully ignoring it to think it is strictly monetary. The social and environmental impact is devastating. But since the original article cites literally none of its claims, I won’t bother either.
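
    For what it’s worth, the quoted arithmetic itself does hold up if you take the article’s figures at face value. A quick sketch of it (every constant here is the article’s assumption, not mine):

    ```rust
    // Rough check of the quoted token-cost claim. Every constant below is the
    // article's own assumption, not a verified figure.
    fn main() {
        let words_per_token = 0.75_f64;          // article's rough conversion
        let war_and_peace_words = 587_000.0_f64; // article's word count
        let tokens = war_and_peace_words / words_per_token;

        let input_only = tokens / 1_000_000.0 * 0.25; // $0.25 per 1M input tokens
        let blended = tokens / 1_000_000.0 * 1.50;    // article's blended rate

        println!("~{tokens:.0} tokens");
        println!("input-only cost: ${input_only:.2}"); // roughly $0.20
        println!("blended cost: ${blended:.2}");       // roughly $1.17
    }
    ```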

    Institutions built their advantages on exhaustion tactics. They had more time, more money, and more stamina than you did. They could bury you in paperwork. They could drag out disputes. They could wait you out. That strategy assumed you had finite patience and finite resources. It assumed you’d eventually give up because you had other things to do.

    An AI assistant breaks that assumption.

    No, it doesn’t, unless you somehow also assume that LLMs won’t also be used against you. And you’d have to actually be dumb or have an agenda that required you to act dumb to assume that.

    Usage numbers tell the story clearly. ChatGPT reached 100 million monthly active users in two months. That made it the fastest-growing consumer application in history. TikTok took nine months to hit 100 million users. Instagram took two and a half years. The demand was obviously already there. People were apparently just waiting for something like this to exist.

    Here’s a handy little graph to show how the author is wrong: Time to 100M users. I’m sorry, I broke my promise about not citing anything. Notice how all of the time spans for internet applications trend downwards as time goes on. TikTok took 9 months, and that was 7 years before ChatGPT was released. I bet the next viral app will be even faster than ChatGPT. That’s not an indicator of demand, that’s an indicator of internet accessibility. (I’m ignoring Threads because it automatically created 100M users from existing Instagram accounts in 5 days, which is a measure of their database migration capabilities and nothing else.)

    Venture capital funding for generative AI companies reached $25.2 billion in 2023 according to PitchBook data. That was up from $4.5 billion in 2022. Investment wasn’t going into making better algorithms. It was going into making those algorithms accessible.

    I’m sorry, what? LLMs are an algorithm. Author clearly does not know what they are talking about.

    DoNotPay, an AI-powered consumer advocacy service, claimed to help users fight more than 200,000 parking tickets before the company pivoted to other services. LegalZoom reported that AI-assisted document preparation reduced the time required to create basic legal documents by 60% in 2023.

    I thought LLMs were supposed to be some magic interface for individuals. The author is describing institutions. You know, the thing the author started out bashing for controlling all the algorithms and using them against the common folk who didn’t have those algorithms. This is exactly the same thing, just replace algorithm with AI.

    The credential barrier still exists. You can’t get a prescription from ChatGPT. The legal liability still flows through licensed professionals. The system still requires human gatekeepers. The question is how long those requirements survive when the public realizes they’re paying $200 for a consultation that an AI handles better for pennies.

    Indeed, that will be an interesting thing to see once AI can actually handle it better and for cheaper. Though I wouldn’t count on it anytime soon. Don’t forget that the AI at that stage will still have to compensate the human doctors who wrote the data it was trained on.

    Oh, I just about hit the character limit. I guess I’ll stop there.
    Remember folks, don’t let your LLM write an article arguing for replacing everyone with LLMs. All it proves is that you can be replaced by an LLM. Maybe focus on some human pursuits instead.


  • Metro pretending to be a victim here would be funny if it wasn’t so sad.

    In 2025, Metro had net earnings of just under $1.02 billion, up 9.4% over 2024. The thieves’ estimated damages are $3,000. That is around 0.0003% of their earnings. They make more money than that in 2 minutes.
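
    For anyone who wants to check that arithmetic, here’s a quick sketch (the $1.02 billion comes from the report, the $3,000 from the article; the rest is just division):

    ```rust
    // Back-of-the-envelope check of the Metro numbers quoted above.
    fn main() {
        let net_earnings: f64 = 1.02e9; // reported 2025 net earnings, in dollars
        let damages: f64 = 3_000.0;     // estimated damages from the thefts

        let share = damages / net_earnings * 100.0;
        println!("damages as a share of net earnings: {share:.4}%"); // ~0.0003%

        let minutes_per_year = 365.0 * 24.0 * 60.0;
        let per_two_minutes = net_earnings / minutes_per_year * 2.0;
        println!("earned every 2 minutes: ${per_two_minutes:.0}"); // ~$3,900
    }
    ```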

    All this information is sourced from Metro’s own financial report: https://corpo.metro.ca/userfiles/file/PDF/Rapport-Annuel/2025/en/annual_report_2025_EN.pdf
    I don’t understand how they have a 2025 financial report before 2025 has even finished, but it doesn’t really matter (and I can’t be bothered to figure it out). It is the most recent report.

    Statcan estimates the inflation rate between November 2024 and November 2025 to be 2.2%. Metro has more than quadrupled inflation. Unless they are somehow making their money from something other than selling goods to consumers, they are definitely charging too much.

    Grégoire also defended the company’s philanthropic efforts, saying that in 2025, Metro donated $1.15 million to food banks, and provided millions of dollars worth of food donations to other organizations.

    With their own financial report in mind, this is basically the corporate equivalent of virtue signalling. If they actually cared about making things affordable, they’d have either reduced their in-store prices enough to make their net earnings plateau or donated 100x more to food banks.



  • From my, admittedly limited, interaction with mathematicians in my life and a bit of extrapolation:

    1. Academia: teach advanced mathematics and do research in mathematics for a university. There’s still lots of unsolved problems in math and also plenty of overlap with computer science, which also has lots of research possibilities
    2. Public sector: governments of all levels need at least statisticians, if not more specific mathematics skills depending on what they’re trying to do (e.g. research, engineering, economics, etc.)
    3. Private sector: lots of engineering companies employ a few mathematicians or at least physicists who are really good at math to make sure their next bridge/plane/ocean-boiler will actually work

    There’s a lot of overlap between all three but I roughly split them up based on where I’d expect the majority of jobs like that would be (e.g. I’m sure NASA employs a good deal of mathematicians, but so does Lockheed Martin and friends). Also a lot of people get a degree in mathematics and then specialize further with a masters and/or doctorate in computer science or physics, since both of those can be quite math-heavy and are better-funded fields.


    Yeah, it’s a Voyager (the app) thing. It’s the default now when sharing a link. I’m not sure why; it seems completely useless, more expensive for the devs, and a privacy problem for everyone else (redirect links are a form of tracking).

    Friendly reminder for everyone using Voyager to turn it off. I already did.



    The strict_* set of integer functions looks interesting, though I’m unlikely to use something that panics by design. I’m sure that’s useful in programs that panic to indicate problems. Do those exist? I always treat panics as a design failure.

    Duration::from_mins() is useful for me since I’ve been doing Duration::from_secs(minutes * 60) for some things in my projects, which bugged me a bit.
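
    A minimal sketch of both points, assuming a toolchain where Duration::from_mins is actually available (and using checked_add to stand in for strict_add, since I’d rather not panic by design):

    ```rust
    use std::time::Duration;

    fn main() {
        // Duration::from_mins replaces the manual `minutes * 60` conversion.
        let old_way = Duration::from_secs(5 * 60);
        let new_way = Duration::from_mins(5);
        assert_eq!(old_way, new_way);

        // strict_add is the panic-on-overflow variant mentioned above;
        // checked_add gives the same overflow check without the panic.
        let x: u8 = 200;
        assert_eq!(x.checked_add(100), None); // overflow detected, no panic
        // x.strict_add(100) would panic here instead of returning a value.
    }
    ```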