

Metro pretending to be a victim here would be funny if it weren't so sad.
In 2025, Metro had net earnings of just under $1.02 billion, up 9.4% over 2024. The thieves' estimated damage is $3,000. That is around 0.0003% of Metro's earnings. They make more money than that in 2 minutes.
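For anyone who wants to check that math, here's a quick sketch (figures from the annual report linked below; the per-minute rate assumes earnings accrue evenly across the year, which is obviously a simplification):

```python
net_earnings = 1_020_000_000  # Metro's 2025 net earnings (just under $1.02B)
theft_damages = 3_000         # estimated damages from the theft

# Share of annual earnings lost to the theft
share = theft_damages / net_earnings * 100
print(f"{share:.4f}% of net earnings")  # → 0.0003% of net earnings

# Earnings per minute, assuming a uniform rate over the year
per_minute = net_earnings / (365 * 24 * 60)
print(f"${per_minute:,.0f} per minute")  # → $1,941 per minute
```

So yes: two minutes of net earnings comfortably covers the entire loss.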
All this information is sourced from Metro’s own financial report: https://corpo.metro.ca/userfiles/file/PDF/Rapport-Annuel/2025/en/annual_report_2025_EN.pdf
I don’t understand how they have a 2025 financial report before 2025 has even finished, but it doesn’t really matter (and I can’t be bothered to figure it out). It is the most recent report.
StatCan estimates the inflation rate between November 2024 and November 2025 at 2.2%. Metro's earnings grew at more than four times that rate. Unless they are somehow making their money from something other than selling goods to consumers, they are definitely charging too much.
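The "more than quadrupled" claim is just the ratio of the two rates (9.4% earnings growth from the report, 2.2% CPI inflation from StatCan):

```python
earnings_growth = 9.4  # % growth in Metro's net earnings, 2024 → 2025
inflation = 2.2        # % CPI inflation, Nov 2024 – Nov 2025 (StatCan)

# How many times faster earnings grew than consumer prices did
ratio = earnings_growth / inflation
print(f"{ratio:.1f}x inflation")  # → 4.3x inflation
```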
Grégoire also defended the company’s philanthropic efforts, saying that in 2025, Metro donated $1.15 million to food banks, and provided millions of dollars worth of food donations to other organizations.
With their own financial report in mind, this is basically the corporate equivalent of virtue signalling. If they actually cared about making things affordable, they'd either have reduced their in-store prices enough to make their net earnings plateau, or donated 100x more to food banks.

The main post is already badly downvoted so I probably shouldn’t even bother to engage, but this whole article is actually just showing a lack of knowledge on the subject. So here goes nothing:
Millennia*. We can run algorithms without computers, so the first algorithm was run way earlier than decades ago. And corporations certainly were invented before the last century.
I suppose the author has never used all of those price-watching websites that existed before 2022. I also question how they think a price optimization algorithm is useful to a person who is trying to buy, not sell, something.
You still need to structure your data, because the LLM has to be able to understand that structure. In fact, it is still easy enough to cause an LLM to misinterpret data that inconsistently-structured data is just asking for problems… not that LLMs are consistent anyway. The very existence of prompt engineering means the interface isn't just conversation.
Oh ok better just stop worrying about that compliance paperwork because the author says it’s worthless. Just dump that crude oil directly on top of the nice ducks, no point in even trying to only spill it into their pond.
Compliance tasks are actually the most important part of work. They are what guarantee your work has worth. Otherwise you’re just an LLM – sometimes producing ok results but always wasting resources.
Once again with the poor examples. If you can't give a thoughtful performance review for the people who work below you, you're just horrible at your job. Performance reviews aren't just crunching some numbers and giving people a gold star. Maybe someday I'll be able to pipe every quick chat I've had with coworkers in the office into an LLM and tell it to consider them when generating a review, but that isn't possible today. So no, performance reviews do actually require thought. Status emails and quarterly reports can be mostly summaries of existing data, so maybe they don't require much thought, but they still require some. That much is demonstrated by the amount of clearly LLM-generated content that has become infamous at this point for containing inaccurate info. LLMs can't think, but a thinking human could have reviewed that output and stopped it from ever reaching anyone else.
This is very much giving me the impression the author doesn’t like telling others what they’re doing. They’d rather work alone and without interruption. I worry that they don’t work well in teams since they lack the willingness to communicate with their peers. Maybe one day they’ll realize that their peers can do work too and even help them.
The first search result for “grocery price tracker” that I found is a local tracker started in 2022, before LLMs.
From searching "<country> price tracker", I found Camel^3, which is famous for Amazon tracking, and another country-specific one whose ToS was last updated in 2018. The author is describing things that could already be accomplished with a search engine.
Lmao DO NOT use an LLM for this. They are not reliable enough for this.
You know what? I take it all back, this is definitely proving Dystopia Inc. But seriously, that is a temporary solution to a permanent problem. Never settle for that. The real solution here is to task the LLM with sending messages to every politician and lobbyist telling them to improve the system they make for you.
Never mind the irony of calling computers doing work "algorithmic labour", this is just nonsense. Of course things built entirely on free labour are going to be monetarily cheap. Also, feeding War and Peace into an LLM as input tokens is not the same as training the LLM on it.
We are seeing the actual cost of LLM usage unfold and you’d have to be willingly ignoring it to think it was strictly monetary. The social and environmental impact is devastating. But since the original article cites literally none of its claims, I won’t bother either.
No, it doesn’t, unless you somehow also assume that LLMs won’t also be used against you. And you’d have to actually be dumb or have an agenda that required you to act dumb to assume that.
Here's a handy little graph to show how the author is wrong: Time to 100M users. (I'm sorry, I broke my promise about not citing anything.) Notice how the time spans for internet applications trend downwards as time goes on. TikTok took 9 months, and that was 7 years before ChatGPT was released. I bet the next viral app will be even faster than ChatGPT. That's not an indicator of demand; that's an indicator of internet accessibility. (I'm ignoring Threads because it automatically created 100M users from existing Instagram accounts in 5 days, which is a measure of Meta's database migration capabilities and nothing else.)
I’m sorry, what? LLMs are an algorithm. Author clearly does not know what they are talking about.
I thought LLMs were supposed to be some magic interface for individuals. The author is describing institutions. You know, the thing the author started out bashing for controlling all the algorithms and using them against the common folk who didn’t have those algorithms. This is exactly the same thing, just replace algorithm with AI.
Indeed, that will be an interesting thing to see once AI can actually handle it better and more cheaply. Though I wouldn't count on it anytime soon. Don't forget the AI at that stage will still have to compensate the human doctors who wrote the data it was trained on.
Oh, I just about hit the character limit. I guess I’ll stop there.
Remember folks, don’t let your LLM write an article arguing for replacing everyone with LLMs. All it proves is that you can be replaced by an LLM. Maybe focus on some human pursuits instead.