- cross-posted to:
- technology@lemmy.zip
- technology@beehaw.org
Given that Alex Jones has “interviewed” ChatGPT on air twice now, I’m going to say no.
I mean, Alex Jones has more skin in the grift than most conspiracy theorists, so he’s not likely to do a 180 quickly, if at all. Also, it seems like he’s been drunk more often on the latest episodes, so maybe he’s having an existential crisis started by being fact-checked in real time by a robot.
We can’t know what his internal state is, but I do agree that it does not seem to have slowed his pace at all on the surface.
No, AI can’t, because no one believes a word it says. There are so many guardrails in place that speaking to AI chatbots feels like talking to corporate HR.
Yeah, I feel like trusting AI is going to lead people down dangerously convincing rabbit holes.
Most of the conspiracy theories I’ve heard in the past year or so involve AI in some way.
Yesterday a friend and I were talking and he said the government was using AI to hack his brain.
I don’t think a chat bot is going to help that situation.
Pretty funny to posit that an LLM chatbot ought to talk us out of conspiratorial thinking while running on a corporate GPU farm absolutely BLASTING through electricity and copyright and IP violations because it’s legally convenient for the powerful. Please post more thought-provoking unreasonable propaganda.
Huh that’s funny, because I run a local LLM even on my laptop.
And fuck yes, I love IP violations. Makes me want to go pirate some media and draw fan art.
Please post some more ignorant rage.
It’s wild how some people’s blind hatred of gen AI has them thinking “corporate control of culture is good, actually”
Have you trained that LLM?
Why would I want to have?
Because if you didn’t, then it doesn’t matter that you run it locally.
Uh yes it does.
I’ve let the corporations spend the time, money, and resources to train a model.
They get zero benefit when I run it locally. I get all the benefit.
The point I’m trying to make, going back to your first response to CondensedPossum, is that you’re still running a corporate LLM with its biases.
If the AI wanted to talk me out of conspiracy theories, why don’t they use the brain signals to control us to thinking that way? Do the microwaves from the circuits behind the walls all go out of service all of a sudden?
This is just classic silicon valley trying to “innovate”, when their real plan was to muscle out CIA and FBI work to non-union contractors.
I guess this is all part of the social sciences side of chatbots and something to keep an eye on, and folks have to start somewhere…but I feel the technology isn’t really at the point where teaching people in general with a chatbot is an ideal solution.
AI is a conspiracy theory—companies are just hiring people in lower-income countries to impersonate machines!
(/s, of course, but with just enough truth to it that there’s probably someone somewhere out there who thinks the above statement is plausible.)
Probably not, given that our loved ones often can’t.
Interestingly enough, there’s an AI experiment focused on (trying to) debunk conspiracy theories. The article was posted here on !technology@lemmy.world
Edit: the cover of the “Can AI talk us out of conspiracy theory rabbit holes?” article misleadingly tries to relate conspiracy theories to occult, pagan, and esoteric concepts, using symbols found in the esoteric field (such as the eyed hand, alchemical symbols for planets and stars, etc.). I’m a pagan myself. Religious intolerance is a thing that harms minority religions, and the article sadly helps to spread this intolerance.
The occult, pagan, and esoteric have nothing to do with conspiracy theories; they’re belief systems, religions, spiritual practices and views. Religions such as Luciferianism and Wicca are often attacked by Christians (with moralist speech such as “you worship Satan, you worship demons, you’re evil, repent”; let’s not forget what the church did to “witches” some centuries ago). I’m not attacking Christianity here (I was a Christian once), but it’s a reality: pagan beliefs, such as mine (I’m somewhat Luciferian and Thelemite in a syncretic way), are often attacked, and such a scientific article does harm pagan beliefs. Pagans don’t spread conspiracy theories.
This is the first time in a long time I’ve heard of a use case for AI that is genuinely useful.
It’s a job very few people will want to do, it can do the job as well as, if not better than, a human, and it’s genuinely useful.
I wish them luck.
deleted by creator