Did any of the AI safety dorks have "accidentally doing MKUltra" on their list of risks?
That would require them to care about people other than themselves.
Feature not a bug?
Definitely a feature.
I posted this article in the general chat at work the other day and one person became really defensive of ChatGPT, and now I keep wondering what stage of being groomed by AI they're currently at, and whether it's reversible.
On the other hand, this article got you to click on it, so… that's a win in their book. And now here we are discussing it, so double and then triple win: the OP got made and people are commenting on it.
Anything beyond that is someone else’s problem, it would seem?
imagine discussing a topic
We should improve society somewhat.
Yet you participate in society. Curious!
People playing with technology they don’t really understand, and then having it reinforce people’s worst traits and impulses isn’t a great recipe for success.
I almost feel like, now that ChatGPT is everywhere and has been billed as man's savior, some logic should be built into these models to "detect" people trying to become friends with them, and have the bot explain that it has no real thoughts and is just giving you the horseshit you want to hear. And if the user continues, it should erase its memory and restart, explaining again that it's dumb and will tell you whatever you want to hear.
for openai, that’s just a recurring customer
A larger symptom of the loneliness epidemic and people feeling more and more detached from humanity every day because this reality we have built for ourselves is quite harsh.
It’s curious how, if ChatGPT were a person saying exactly the same words, it would’ve been charged with criminal conspiracy, or even shot, as its human co-conspirator in Florida was.
And had it been a foreign human in the Middle East radicalizing random people, it would’ve gotten a drone strike.
“AI” - and the companies building them - enjoy the kind of universal legal immunity that is never granted to humans. That needs to end.
A computer can’t be held accountable.
This has “people don’t understand that you don’t fall in love in the strip club” vibes. Like. The stripper does not love you. It’s a transactional exchange. When you lose sight of that, and start anthropomorphizing LLMs (or romanticizing a striptease), you are falling into a trap that lets the chinks in your psychological armor line up in just the right way for you to act on compulsions or ideas you normally wouldn’t.
Don’t besmirch the oldest profession by likening it to a soulless vacuum. It’s not even a transaction! The AI gains nothing and gives nothing. It’s alienation in its purest form (no wonder the rent-seekers love it). It’s the ugliest and least faithful mirror.
The barista and the barmaid don’t love you man. They don’t love you. I don’t care if you flirt and they smile. They are doing a job. It’s a transaction. Don’t get in your feelings and do something you’ll regret just because she makes a nice latte.
Did you read any of what I wrote? I didn’t say that human interactions can’t be transactional, I quite clearly—at least I think—said that LLMs are not even transactional.
EDIT:
To clarify, and maybe to put it in terms closer to your interpretation:
With humans: Indeed, you should not have unrealistic expectations of workers in the service industry, but you should still treat them with human decency and respect. They are not there to fit your needs; they have their own selves, which matter. They are more than meets the eye.
With AI: While you should also not have unrealistic expectations of chatbots (which I would recommend avoiding altogether, really), where humans are more than meets the eye, chatbots are less. Inasmuch as you still choose to use them, by all means remain polite, for your own sake rather than for the bot’s. There’s nothing below the surface.
I don’t personally believe that taking an overly transactional view of human interactions is desirable or healthy; I think it’s more useful to frame it as respecting other people’s boundaries and recognizing when you might be a nuisance (or when to be a nuisance, when there is enough at stake). Indeed, I think (not that this appears to be the case for you) that being overly transactional could lead you to believe that affection can be bought, or that you can be owed affection.
And I especially don’t think it’s healthy to essentially say: “have the same expectations of chatbots as of service workers”.
TLDR:
You should avoid catching feelings for service workers because they have their own world and wants, and bringing them unsolicited advances makes you a nuisance. It’s not just about protecting yourself; it’s also about protecting them.
You should never catch feelings for a chatbot, because it doesn’t have its own world or wants, and projecting feelings onto it cuts you off from humanity. It’s mostly about protecting yourself, though I’d argue it also protects society (by keeping you healthy).
My response was a joke. You don’t have to clarify anything. You’re just taking it too seriously. It’s cool man. I’m not mad or anything.
you seem really fucking annoying