Ethics of AI: Errors and Bias
AI Can Read Your Emotions. Should It?
...do we really want our emotions to be machine-readable? How can we know that this data will be used in a way that will benefit citizens? Would we be happy for our employers to profile us at work, and perhaps make judgments on our stress management and overall competence? What about insurance companies using data accrued on our bodies and emotional state?
AI systems claiming to 'read' emotions pose discrimination risks
...such technologies appear to disregard a growing body of evidence undermining the notion that the basic facial expressions are universal across cultures. As a result, such technologies – some of which are already being deployed in real-world settings – run the risk of being unreliable or discriminatory.
– Lisa Feldman Barrett
Welfare surveillance system violates human rights, Dutch court rules
“This is one of the first times a court anywhere has stopped the use of digital technologies and abundant digital information by welfare authorities on human rights grounds.”