It’s well known that he was socially awkward and autistic.
I can’t wait for the trickle down Bukkake. Any minute now!
pufferfischerpulver@feddit.org to News@lemmy.world • Apple deadnamed the Gulf of America and conservatives are triggered
15 · 1 year ago
Holy shit yes. I remember making fun of the doofus Bush Jr. But holy fuck do I wish him back compared to the current shit show.
I fucking loved Windows 8. If I could, I would still have that full-screen Start menu, the bastard.
pufferfischerpulver@feddit.org to Privacy@lemmy.ml • Flirting with Trump is flirting with Nazism - Response to Andy Yen (CEO of Proton AG) on Reddit 📢📢📢
16 · 1 year ago
It renders fine, it’s just a pain to read due to the wide aspect ratio. Either it’s too small, or you have to scroll horizontally for each line, or you have to flip your phone. None of it is optimal.
pufferfischerpulver@feddit.org to politics@lemmy.world • Utah trans girls now required to meet testosterone levels stricter than NCAA to compete in high school sports
5 · 1 year ago
Well, well, well, now you think of the eggs. Real Women just have 'em free, you know. /s
pufferfischerpulver@feddit.org to People Twitter@sh.itjust.works • How much resources are spent fact checking because bots like Mila Joy are lying?
22 · 1 year ago
Fur real? I think you’re purrbably right.
pufferfischerpulver@feddit.org to memes@lemmy.world • It burns going down, but don't worry too much about it
7 · 1 year ago
Why do I really want to drink it? The distillation removes any radioactive particles, making it safe to drink. And it just seems so fucking metal??
pufferfischerpulver@feddit.org to People Twitter@sh.itjust.works • Running out of that brain juice for the day
4 · 1 year ago
How many fingers? 🖐️
pufferfischerpulver@feddit.org to memes@lemmy.world • Shitting on Windows built-in media player if you didn't know
5 · 1 year ago
Fucking RealPlayer 😩
Wtf Rome, such tyrants. Never heard of the 2nd amendment or what?!
pufferfischerpulver@feddit.org to memes@lemmy.world • Shitting on Windows built-in media player if you didn't know
15 · 1 year ago
TBF, iTunes is a terrible player, but it made shitloads of money, so I guess they achieved what they set out to do.
And I would argue iTunes is the reason newer media player versions are shit, since of course MS saw that there was money to be made and tried to do the same.
pufferfischerpulver@feddit.org to Fuck Cars@lemmy.world • Waymo trains its cars to NOT stop at crosswalks
8 · 1 year ago
What a bullshit argument. One of the arguments for self-driving cars is precisely that they are not doing the same thing humans do. And why should they? It’s ludicrous for a company to train them on “social norms” rather than the actual laws of the road, at least when it comes to black-and-white issues like the one described in the article.
pufferfischerpulver@feddit.org to [MIGRATED TO DIFFERENT INSTANCE CHECK PIN POST] Stardew Valley@lemm.ee • That's how it become for me
4 · 1 year ago
I don’t get the game, tbh. At first it was cozy, then it turned into work. Which I know people like, some at least. But I work enough in the day to not want to work when I play.
pufferfischerpulver@feddit.org to Technology@lemmy.world • Leaked Documents Show OpenAI Has a Very Clear Definition of ‘AGI’
21 · 1 year ago
I’m not sure if you’re disagreeing with the essay or not? But in any case, what you’re describing is in the same vein: simply repeating a word without knowing what it actually means in context is exactly what LLMs do. They can get pretty good at getting it right most of the time, but without actually being able to learn the concept and context of ‘table’ they will never be able to use it correctly 100% of the time, or, even more importantly for AGI, apply reason and critical thinking. Much like a child repeating a word without much clue what it actually means.
Just for fun, this is what Gemini has to say:
Here’s a breakdown of why this “parrot-like” behavior hinders true AI:
- Lack of Conceptual Grounding: LLMs excel at statistical associations. They learn to predict the next word in a sequence based on massive amounts of text data. However, this doesn’t translate to understanding the underlying meaning or implications of those words.
- Limited Generalization: A child learning “table” can apply that knowledge to various scenarios – a dining table, a coffee table, a work table. LLMs struggle to generalize, often getting tripped up by subtle shifts in context or nuanced language.
- Inability for Reasoning and Critical Thinking: True intelligence involves not just recognizing patterns but also applying logic, identifying cause and effect, and drawing inferences. LLMs, while impressive in their own right, fall short in these areas.
pufferfischerpulver@feddit.org to Technology@lemmy.world • Leaked Documents Show OpenAI Has a Very Clear Definition of ‘AGI’
161 · 1 year ago
Interesting that you focus on language, because that’s exactly what LLMs cannot understand. There’s no LLM that actually has a concept of the meaning of words. Here’s an excellent essay illustrating my point:
The fundamental problem is that deep learning ignores a core finding of cognitive science: sophisticated use of language relies upon world models and abstract representations. Systems like LLMs, which train on text-only data and use statistical learning to predict words, cannot understand language for two key reasons: first, even with vast scale, their training and data do not have the required information; and second, LLMs lack the world-modeling and symbolic reasoning systems that underpin the most important aspects of human language.
The data that LLMs rely upon has a fundamental problem: it is entirely linguistic. All LMs receive are streams of symbols detached from their referents, and all they can do is find predictive patterns in those streams. But critically, understanding language requires having a grasp of the situation in the external world, representing other agents with their emotions and motivations, and connecting all of these factors to syntactic structures and semantic terms. Since LLMs rely solely on text data that is not grounded in any external or extra-linguistic representation, the models are stuck within the system of language, and thus cannot understand it. This is the symbol grounding problem: with access to just a formal symbol system, one cannot figure out what these symbols are connected to outside the system (Harnad, 1990). Syntax alone is not enough to infer semantics. Training on just the form of language can allow LLMs to leverage artifacts in the data, but “cannot in principle lead to the learning of meaning” (Bender & Koller, 2020). Without any extralinguistic grounding, LLMs will inevitably misuse words, fail to pick up communicative intents, and misunderstand language.
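To make the “predictive patterns over ungrounded symbols” point concrete, here is a deliberately tiny sketch. It is nothing like how a real LLM works internally (those use neural networks, not lookup tables); it's just a toy bigram model over a made-up corpus, showing how a system can often place the word ‘table’ correctly from statistics alone while having no representation of what a table is.

```python
# Toy illustration (not a real LLM): a bigram "language model" that only
# counts which word follows which, with no idea what any word refers to.
from collections import Counter, defaultdict

# Hypothetical toy corpus, chosen purely for illustration.
corpus = "the cat sat on the table the cup is on the table".split()

# Learn purely statistical associations: counts of next word given current word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training stream."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))    # -> 'table' (its most frequent follower here)
print(predict_next("table"))  # -> 'the'
# The model often "uses" the symbol 'table' in plausible positions, yet it has
# nothing connecting that symbol to an actual table -- the grounding problem above.
```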

Nah, fuck that. It’s an account with “shitpost” in the name. They can use the fucking word without an analysis of contemporary communications.