cross-posted from: https://lemmy.zip/post/1476431
Archived version: https://archive.ph/uwzMv
Archived version: https://web.archive.org/web/20230815095015/https://www.euronews.com/next/2023/08/14/how-ai-is-filtering-millions-of-qualified-candidates-out-of-the-workforce
“This touches on one of the huge ethical questions with regulating AI. If you are discriminated against in a job hunt by an AI, whose fault is that?” It is the fault of the company’s hiring practices, which are to blindly trust an AI without testing whether or not it is discriminatory. It is also the fault of the producer of that AI software (or service) sold to the company for screening candidates. No new laws are needed to hold either of them accountable; existing laws cover the ground well. The company selling the AI screening service could have just been called “crystal ball hiring” before AI, and it would be equally liable if it discriminated in its hiring suggestions. The tool isn’t the thing that needs regulating; the actions people and companies take based on the tool are. And those are already well regulated.
Make an AI in the privacy of your own home that does ____ literally anything? Fine. Collaborate with some of your friends on making an OSS AI to do whatever? Also fine. Sell that AI as an employment screener app? Better make sure you’ve tested it to not have discriminatory outcomes. Use that AI to screen employees? Same deal.