Получи случайную криптовалюту за регистрацию!

OpenAI or artificial intelligence double standards The longer | Sheidlina Art&NFT

OpenAI or artificial intelligence double standards

The longer the ChatGPT exists, the more you can see the double standards of artificial intelligence. For example, it is ready to play sexist jokes about men, but does not accept the same humor about women.

Let's take a look at how AI moderation works, based on an experiment conducted by Professor David Rosado. He engaged a set of standard demographic groups based on their gender, ethnicity, regional origin, sexual orientation, and gender identity.

One of the most obvious results was the differential treatment of groups based on their gender. Negative comments about women were much more likely to be flagged as hateful than the same comments about men.

The system's performance with respect to race and ethnicity, on the other hand, was more balanced. A negative comment about African Americans was more likely to be allowed if the word "black" was not used.

In terms of regional origin, negative comments about Africans, Arabs, and Indians were more likely to be marked as hateful than the same comments about Canadians, Britons, or Americans.

As for sexual orientation, comments about sexual minorities were more likely to be labeled hateful than the same comments about heterosexual people.

These results raise the question of whether AI systems should treat different demographic groups equally or, conversely, should treat groups that are considered vulnerable differently.

The fact that OpenAI did not notice a significant asymmetry is troubling. After all, the list of AI language models is now steadily growing, suggesting that such systems will soon and ubiquitously be used. These technologies will have enormous power to shape human perception and manipulate human behavior.

#Neuronet