
Look on the upside folks, at least RAM, GPU, and storage are wildly expensive. That’s great for the economy and reduces the instances of people being mind-controlled by violent video games!
/s
seems like they ran out of using AI as an excuse to lay people off and “record profits”
Majority of CEOs tried to use AI on the wrong level of their company: at the bottom.
And watch them get crazy bonuses anyways and suffer no consequences.
they will just run to trump to beg for bailouts.
I support abolishing the death penalty except for two cases.
1: war criminals (think what Israel is doing and their disgusting behavior) and anyone committing crimes under the auspices of the state expecting that protection to allow them to escape. 2: high ranking political and economic figures who fuck things up on purpose for profit.
Even mass shooters and serial killers do far less damage to society than a single one of those fucks.
Someone still needs to be the executioner. Nobody should have to carry that burden.
Put them to work. And I mean basic hand labour.
Plowing fields, harvesting crops, building houses, paving roads, etc.
The AI tech companies’ losses are too huge for them to be bailed out into a stable position
crash. and. burn.
Maybe they are actively trying to make the crash happen while Trump is still in office to bail them out.
Majority of CEOs discover they are completely incompetent frauds to the point of literally deserving the death sentence
There, fixed the headline.
Majority of CEOs discover something plebs like you and I knew all along the whole time.
dbzer0 instance is always so brave, or maybe you just don’t feel fear.
Oh I feel fear.
I can just do systemic analysis and cost benefit analysis.
And I can face the facts that I see, the fear that I feel, and be brave enough to say what I really think about it.
Bravery is not a lack of fear, it is being scared as hell and doing the thing anyway.
These people, in the current system, are essentially untouchable.
And if left unchecked, they will kill millions, and then billions of people through their individual and collective compounding incompetence and greed.
The needs of the many outweigh the needs of the few.
Holy shit, really?
How could they have possibly thought AI would make them money? Lmfao. It sucks power and water just to give wrong answers or generate “art” with terrible attention to detail…
It’s because they have no idea what “AI” actually is. They think you tell it to make profits, and it just does so. Anyone who has used any kind of “AI” for an hour knows that it’s mostly just shit at everything except the absolutely most basic shit.
They have no respect for the work of their employees, so they thought that they could be easily replaced by a computer program. They were so excited by the prospect of handing over ALL of our jobs to AI, that they far overextended themselves.
Now they are going to crash and burn because they bet AGAINST every worker in America, and LOST.
I hope it hurts them really, really badly. We should respond to their financial pain by laughing at them, and taking away their fortunes and their companies, since they have demonstrated so clearly that they can’t be trusted to handle the American economy responsibly.
A government bailout was ALWAYS the plan.
Some applications of AI are pretty neat. For example the DeepL translation tool. I convinced my employer to spend money on that. And they make 55 million in profits.
But forcing AI down our throats, like Google does with those horrible auto-dubbed videos? There’s no way that will ever be profitable
DeepL isn’t what is being touted as “AI” this week, though. DeepL is based on older translation technology (by which I mean “far more reliable”).
This is a shell game. Every time there’s a wave of “AI” it’s some new tech that shills sell as the answer to “real” computer intelligence. (It can never possibly be this, of course, because we can’t even define intelligence properly, not to mention making an artificial version of it.) There’s certain levels of hype. There’s a bubble (usually far smaller than this one, of course). Then the bubble pops and we enter the next AI Winter.
The small use cases for which the new technology is actually useful, however, lose the AI moniker and are just called “software”. Like what used to be AI for doing dynamic adjustment on pictures for night shots, HDR, etc. is no longer shilled as AI. It’s just … software that my phone has built in.
So currently “AI” means “LLM” outside of some very specific academic environments. DeepL is just software that (mostly kinda/sorta) works.
DeepL is based on an LLM:
https://www.deepl.com/de/blog/next-gen-language-model
Huh. That’s new. When I first tried DeepL it was not LLM-based. That’s an intriguing development.
I will update my mental database accordingly.
Yeah there are definitely some cool uses, it seems like analysis/processing uses are pretty good, but generative ones are not.
The AI models that are used for molecular research are literally remaking our understanding of biology and medicine. I don’t know why the big AI corporations don’t point to that as an example of the benefits of AI. I guess cause that doesn’t help them to exclude the proletariat from their profits.
It probably has problems of its own, and it will likely require a scientist to fact-check anything the AI produces. It also depends on whether a journal is finicky enough to accept a paper whose experiments were done by AI. Pretty niche; I doubt it’s using the commercial ones like OpenAI’s or Google’s. It’s probably made for the specific purpose of that research field. A small subset of users, so it’s unlikely to generate profit that way, because that’s a small group of customers using a niche AI, and it’s likely proprietary to the university that made it anyway.
It definitely has its own problems, and the results are thoroughly investigated by the researchers. But yeah it’s very niche, however most models are freely shared between teams. I mean it has to be to get through peer review.
It’s simple. They thought they would fire everyone and replace them with chatbots. All the admins? Chatbots. All the bookkeepers? Chatbots. Purchasing? Chatbots. Assembly lines? Robots running chatbot software.
It never occurred to them that training on large datasets does not make for good decision-making. It just makes the chatbot more wordy.
It’s seeing a vision of the future and the technology that will transform it but not having the patience to let it happen and wanting to jump right to printing money. I think the fact that it happened to the internet should show that even incredibly useful tech can go through this process. It happened with video games, too. They see the potential but their eyes are only on the money, so they don’t have the ability to meet that potential.
And in the case of the metaverse, they killed it off entirely by wanting to build a virtual storefront and advertising space before building a virtual space people would want to visit. Facebook thought the idea would sell itself just based on pop culture, despite none of the pop culture versions involving just a headset and trackers to enter a world you can only see and hear, but not touch (even though it might inconsistently react to your touch). The tech wasn’t there but FB had FOMO and wasted billions chasing it anyways.
Exact same thing is happening with AI. LLMs improved by leaps and bounds, and once it was conversational, people with money went all in on the idea of what it could become (and probably still will, just not anytime soon and it won’t be chatbots, actually I suspect we might end up using LLMs to communicate/translate with the real AIs, though they’ll likely be integrated into them because that communication is so useful).
They don’t understand that it takes more than just having a good idea or seeing tech before it explodes, you have to have passion for that tech, a passion that will fight against the urge to release it to make money, not a passion to release it regardless to make money sooner and the intent to fix it up later.
It’s why they are trying to shove it into everything despite no one wanting it, because they think the exposure will drive demand, when it’s actually exposure to something desirable that drives real demand. And exposure is instead frustrating or dangerous because it’s often wrong and full of corporate censorship (that hasn’t once been accurate but has always been easy to bypass any time I’ve run into it).
I just wonder if MS bet the farm on it, or only bet a survivable loss. Like is the CEO just worried about his job or the entire company’s future?
I do think it’s disingenuous to downplay how effective AI can be. If you ask certain AI a question, it will give you a faster and better answer than using a search engine would, and will provide sources for further reading if requested.
And the art, whilst not as good or as ethical as human art, can still be high quality.
Being against AI is completely valid, but disparaging it with falsehoods does nothing but give the feeling that you don’t know what you’re talking about.
If you ask certain AI a question, it will give you a faster and better answer than using a search engine would, and will provide sources for further reading if requested.
I think that speaks to how bad search engines have gotten, not really to how good AI is. Google used to work. I promise! It used to not just be ads and SEO garbage, if you knew your special search operation functions you could find exactly what you were looking for every time. It’s only because they enshitified the platform that AI search even makes sense to use.
They’ll enshitify AI search soon enough and we’ll be right back where we started.
Sure, but what I am talking about outperforms any search engine in history. If you have a specific question you will get a specific answer with AI, and usually it will be correct. If you use a search engine you can come to the same answer but it will definitely take you longer.
I’m not defending the use of AI, I’m just saying, the quality of them is not the issue. They are becoming extremely high quality with their answers and usefulness. The problem is with the ethics and energy usage.
It used to be that the first couple results would answer the specific question, as long as you knew how to format the question in the correct search terms and with the correct special operations. What might take longer is refining the search to get extremely specific results, but that was usually only necessary if you’re writing a paper or something.
But you shouldn’t just trust whatever the AI says when you’re writing a paper anyway, so that’s not really different.
AI does allow you to skip all that and just ask a plain language question, but search didn’t used to take so long if you knew how to use it. It worked.
Yes it worked, and still required you to dig through the answers to find the answer yourself. That is the difference. AI will search for you and collate the results to give you the definitive answer. I’m not saying searching didn’t work, or doesn’t even work today, I’m just saying AI is more efficient and effective and pretending it isn’t is simply wrong and / or lying.
You shouldn’t just trust whatever the AI says
And you also shouldn’t just trust random things you read on the internet, so I’m not sure exactly what point you are making here. I’ve never advocated for that. I also am not sure why you keep explaining to me how good search engines used to be, seems like a strange aside considering you don’t know how long I’ve been on the internet for.
I can’t tell if you’ve forgotten how good search was, are too young to know better, or were never good at using search.
I’m telling you that you didn’t have to “dig through the answers” if you formatted the search well. It worked. You obviously couldn’t trust everything you read on the internet, but the tricky part was formatting. No digging was required once you were good enough at key words, syntax, and search functions (“” , + - site:). Search results were incredibly efficient and effective. It was amazing.
AI is now maybe as efficient and effective as search results used to be. That’s it. They ruined search and gave us AI.
And they’ll ruin AI too, just you watch.
You had to “dig through answers” as in, you got your answer, in the form of a website that you then had to click into and scan for the answer.
AI is far more efficient. I can’t tell if you are delusional or just willfully ignorant. Ask a question and in two seconds you have a succinct answer with all of the information that using a search tool (now, and in the past) would provide you.
I also don’t disagree that they will ‘ruin AI’, I’m not defending it or the creators of it in the slightest. I am simply saying it can be an extremely effective tool and it is without a shadow of a doubt better than using a search engine to get the answer to a question.
I’m starting to think that most “business” leaders have the skills of a Trump. It’s all puffery.
Have you ever talked to a CEO? Like, sit down and talk face to face? They are dumb as rocks. They are dumb as rocks and make all the money, and just move around from company to company, running them into the ground.
Yes. Yes I have. And I agree with you. The “starting to think” is an old Norm Macdonald trope.
I’m starting to think that Norm Macdonald character isn’t a very serious person.
- some exceptions for CEOs that actually founded the company they remain in charge of, but they’re in the minority for sure. There’s probably variance by sector as well.
All CEOs need to be [redacted]
Maybe I’m just too close to the non-profit side of things to see it as a simple binary.
“Non-profit” is a tax classification, not a description of their business motives. I’m not saying that all non-profits are bad. They definitely help people. But many are absolutely profit-driven, with their CEOs making a shitload of money. CEOs, by and large, profit off the exploited labor of their employees.
What a fucking retarded statement.
Yes. My last 3 were excellent businessmen and treated employees exceptionally well.
The one before that was a buffoon, but that was a tiny 13-person company. Our main vendor said, “He’s a man who has found great success despite himself.”
I work in IT, and the accuracy of “The IT Crowd” when it comes to management is scary. How they get anything done is beyond me.
They don’t get anything done they just ask other people to do stuff. Their job is to nag you and stress you out until you do stuff
During America’s “great” years a lot of the research and development was guided by long-term US policy. We are no longer guided by any sense of what’s to come. It’s a greedy grab-all free-for-all. No rules, no restrictions, just have fun and make lots of money.
If Trump can graduate with an Ivy League degree, that says a lot about the (troglodyte) Ivy League grads.
We said the same thing about bush! Clearly they lost their luster damn near a century ago.
Ivy League schools were never about education. They’ve always been about making connections with other elites.
and suckering in the actual talent/smart kids, to be exploited by those elites with connections
Ivy league degrees are only impressive if you’re poor. Any idiot multimillionaire or above can buy one for themselves.
Instead of looking for other avenues for growth, though, PwC found that executives are worried about falling behind by not leaning into AI enough.
These are the people in charge of the economy.
TIL the economy is driven by FOMO.
Always has been
Feels like it kicked into hyper-drive after Bitcoin blew up
You never lived through the dotcom bubble.
Or tulip mania
I definitely lived through the dot com bubble. My memory of that was people registering domain names like beer(dot)com and selling them for millions of dollars
Or the gourd futures thing, maybe?
Today I learned about gourd futures on Reddit wall street bets.
There’s a few other classic commodity trades in there. Orange juice rings a bell. The SPX box trade guy is really interesting. All sorts of incredible history on there
I do like to think I could have made money on that one and will always regret avoiding it. It was clearly always a bubble with nothing behind it (less than the current ai bubble), but at the same time was a huge long shot.
Like any bubble, you can make money if you’re in early enough but it’s absolutely critical to get out in time before everything comes crashing down. I try to avoid speculation bubbles because I’m not good at getting out in time. But for the bitcoin craze, I think it was much clearer than other bubbles. There was a clear transition where all the faithful were gone and it was pure speculation finance bros. Clearly it had jumped the shark but there was still opportunity to get out of the water
If it makes you feel any better, I was an early adopter, mined 36 bitcoins, got bored, heard years later that bitcoin was “a thing” now and never found my wallet.
Ha ha, so close to becoming ruling class ……. Looking at my cupboard of old computers where I want to look for old photos and stuff …. Yeah I’d lose my wallet too
It’s called futures. Do you even stonk?
It’s a faith-based economy, and when a large number of participants believe 2+2=5, there’s no limit to the fuckery you can do!
“Falling behind” is such a bullshit phrase.
FOMO for CEOs
In this context or period?
Specifically this context. Like “falling behind in school,” I get. There are grades. Some people do better while some fall behind. It’s a scalar value with a good end and a bad end.
But the AI bros are co-opting the phrase and presupposing that ALL OF TECHNOLOGY is on a linear scale where the good end is more AI shit and the bad end is less AI shit. They’re using a scalar when they should be using a vector.
I’ve mentioned it before on here, but my cousin is taking an AI class in college. He tried to convince my business-owner uncle to let him create an agent for his company’s website so he won’t “fall behind.”
My uncle owns a gas station.
Haha, alright — I totally understand what you mean. Yet another phrase co-opted by those that would rather spend their time seeming intelligent rather than being intelligent.
Send that cousin to trade school so he can tell your uncle to install EV chargers so he won’t fall behind.
I think that definition of “falling behind” is exactly the FOMO that the tech giants want to sell to all the other companies.
I think the FOMO of the tech CEOs is the same as it ever was. Money. In this case it’s “What if the magic thing that makes AI dominate the economy happens and I am not leading the pack at just that moment? I might not become one of the first trillionaires! 😭”
While it’s felt so keenly now, it has always been a part of tech. Some amazing stuff comes out, and thus everything must be amazing and you can’t afford to miss out.
In practice, if the tech is truly that great, then you can catch up real quick after waiting to understand if it is great. But everyone FOMOs constantly.
I think it’s different this time around. Tech pundits might write “If Apple doesn’t have a VR/AR product in the next six months, they’re falling behind,” but this is the first time in my memory that it’s been tech companies staring down consumers and small businesses and saying “If you don’t adapt to this technology, you will lose your job.”
It’s the “sunk cost fallacy”. They’ve already invested so much, that turning back now means a total loss…when success “might be” just around the corner, if they only invest a little more. In for a penny, in for a pound.
Not just sunk cost. They see all this press, writers and business leaders saying they have fantastic results, so obviously they themselves must be doing something wrong.
I have seen it over and over again in tech. People are told that everyone else in the world is doing great with some tech and everyone is so afraid they are “not getting” something they won’t say anything or else they will pretend they also “get it” and it is awesome.
Not even things that have marketing spend behind it, some programming paradigm or whatever. Marketing spend makes it worse, but all by ourselves everyone seems to be afraid to somehow be seen as a clueless Luddite at any second.
It’s really weird when you attend some talk where the guy talks up something or another and then in more casual private conversation basically admits straight up that he actually doesn’t see the benefit, but feels like he has to dance the dance.
It’s crazy how being a “leader” in tech just means latching on to the whatever the latest trend is while kissing maximum ass.
Almost none of these tech companies have their own vision anymore, they just mindlessly copy whatever FAANG or the current trendy startup does.
Just dealt with this forward deployed engineer bullshit created by Palantir. My company spent six months “exploring” this concept that makes absolutely no sense for us because we already have an established product with thousands of customers… And it took them that long to figure this out. But hey Palantir did it and they’re the trendy guys right now, so we should try it too so we can also be cool.
Not related to AI but we’re in fully brainwashed mode on that too.
When toxic positivity infects the minds of the people making decisions that affect everyone…
The mentality of the gambler. There’s a lot of people out there who have this addiction and just aren’t fortunate enough to have lucked into becoming a CEO. That’s how you know it’s a lottery, and being born into riches is a shortcut.
A common phrase of mine is, “the truth is, there is no bus driver at the wheel”.
I thought it was Sandra Bullock and Keanu Reeves driving this thing!
Truly the only group of people worthy of marching us into the sea.
the sea will come to them very soon.
Rich people: Debunking the idea that the rich are the smartest since about 10,000 BCE.
What they lack in smarts, they can make up for in selfishness.
Don’t forget psychopathy
And a granny state to bail them out if they lose all their money at the casino.
This metaphor made me laugh and cry at the same time. Granny state indeed. I remember seeing those old folks in Vegas and being horrified about that future. My fear created an entire economy out of that old person at the slot machine stereotype.
And caloric value.
Tech CEOs are a cancer on society and the planet. Ruining everything just to make fake numbers go higher.
Tech CEOs are a cancer on society and the planet. Ruining everything just to make fake numbers go higher.
The alternative is running everything by committee, which isn’t always practical.
You can have a single person in charge without them having the title “CEO”.
Making your title CEO is also basically accepting a lot of the cultural baggage that comes with the term.
That’s all it is, semantics and connotation. The person in charge of the whole thing is by definition CEO unless it’s a board.
Semantics and connotation are important.
It’s sad how few of you have taken the time to research or attempt to start a business. Of any sort.
People aren’t evil for selling a service and registering it as a corporation which often requires a CFO and a CEO/President, even if the business has only a couple people. A lot of these businesses struggle financially just as much as any welfare recipient.
Do you think the only people who aren’t evil are the non-industrious? It comes across as pathetic parasitic envy.
I hope we find a way to cure it
Luigi found a pretty good solution.
But, all jokes aside, the best solution is to start taxing these fuckers as hard as possible.
But, to meaningfully tax them, we need a government that they don’t own.
What we really need is to speedrun evolution, and reach the next step sooner, where greed is no longer a factor and things are more globalistic, etc. You know, something sensible. Otherwise, it’s just a cycle that repeats, even with people like Luigi appearing every now and then, and society cheering them on. The next cycle is around the corner, at any time. We have to break the cycle with radical changes to human behavior and motivations in general.
Evolution doesn’t have “steps”.
You don’t have to take things so literally. Let’s say it so that everyone is happy: evolution is on a spectrum
That wasn’t the point of my comment at all.
Evolutionary changes take millennia to happen. Even if you speed run things it will still be centuries before there’s a change. You could try to manually curate genetic changes, but we’ve tried that, it was called Eugenics and led to Nazis.
What we really need to do is accept that humans are still animals, and have all kinds of flaws due to our animal natures. We need to come up with ways of accepting and working with our animal natures. Males are inevitably going to try to do things to attract female attention so that they can “mate”. So, maybe try to make it so that the things that work are taking care of needy people, rather than driving a big pickup truck or shooting a gun. Or, in the case of CEOs, accumulating absurd amounts of money.
When we have the potential to reach post-scarcity, scarcity will become artificial solely for the purpose of wielding influence or expressing power.
That’s the world we are already living in.
They keep saying LLMs will replace jobs. Maybe we replace their jobs and just see how it goes?
Sounds good. Then, they’ll finally move away from AI and we will all stop having AI being shoved down our throats. I’m sick and tired of all these AI chatbots in places where we don’t even need them.
“Instead of looking for other avenues for growth, though, PwC found that executives are worried about falling behind by not leaning into AI enough.”
Sunk cost fallacy at work
Oh no, they are scared shitless of what happened to companies that didn’t survive the shift into digital that happened around the 2000s.
The truth is, many companies didn’t try that transition and disappeared or went from their peak to being second class. But also, lots of companies put in large amounts of money the wrong way and the same thing happened. Guess history repeats itself, and every CEO is finding out they didn’t get where they did because they’re smarter than their peers, the way they strongly believed before.
It’s gambling all the way down, thinking they will be the ones that will win big and everyone else fail.
I was thinking about this recently… and in the early 2000s for a short time there was this weird chatbot craze on the internet… everyone was adding them to web pages like MySpace and free hosting sites…
I feel like this has been the resurrection of that, but on a whole other level… I don’t think it will last. It will find its uses, but shoving glorified auto-suggest down people’s throats is not going to end up anywhere helpful…
An LLM has its place in an AI system… but without reasoning it’s not really intelligent. It’s like how you would decide what to say next in a sentence, but without the logic behind it.
The logic is implicit in the statistical model of the relationships between words, built by ingesting training materials. Essentially the logic comes from the source material provided by real human beings, which is why we even talk about hallucinations: most of what is output is actually correct. If it was mostly hallucinations, nobody would use it for anything.
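To make the point above concrete, here is a toy sketch of what a “statistical model of the relationship between words” means (the corpus, function names, and bigram approach here are all illustrative inventions, nowhere near a real transformer): a bigram counter “decides what to say next” purely from co-occurrence frequencies, with no logic module anywhere.

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count, for each word, how often every other word follows it."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def next_word(counts, word):
    """Return the most frequent follower: pure statistics, no reasoning."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = [
    "the cat sat on the mat",
    "the cat sat on the sofa",
    "the cat chased the mouse",
    "the dog sat on the rug",
]
model = train_bigrams(corpus)
print(next_word(model, "cat"))  # "sat": seen twice, beats "chased" (once)
print(next_word(model, "the"))  # "cat": the most common follower of "the"
```

Any appearance of “logic” in the output is inherited from regularities in the human-written corpus; scale the counts up by billions of parameters and you get the same basic story.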
No you can’t use logic based on old information…
If information changes between variables a language model can’t understand that, because it doesn’t understand.
If your information relies on x being true, then when x isn’t true the AI will still say it’s fine, because it doesn’t understand the context.
Just like it doesn’t understand instructions like not to do something.
Most of the things you want to know are old information, some of it thousands of years old and still valid. If you need judgement based on current info, you inject current data.
Well people use it and don’t care about hallucinations.
And they never add anything of use. They are like an incredibly sophisticated version of Clippy… and just as useless.
Clippy being useless was okay because it was the 2000s. In this time and age though? Meh.
Also, people HATED Clippy. They always hated AI.