The logical end of “the solution to bad speech is better speech” has arrived: state-sponsored social media propaganda bots versus AI-driven bots arguing back.
So is the bot not pointing out obvious lies with links to factual data, or what is your point? Can you link me to an example of a bot using shaky arguments?
And the WMD claims stood on shaky legs from the very beginning; many countries, like Germany, opposed the use of force in Iraq. Perhaps we'd have benefited from a bot correcting false narratives in real time, had this technology been available at the time.
The bot doesn't know what's “real” or not, though - it's a large language model, not a model of the real world. All it knows is what it's been told in its training data.