AI ethics groups are repeating one of society's classic mistakes

The big picture: International organizations and corporations are racing to develop global guidelines for the ethical use of artificial intelligence. Declarations, manifestos, and recommendations are flooding the internet. But these efforts will be futile if they fail to account for the cultural and regional contexts in which AI operates.

The problem: AI systems have repeatedly been shown to cause problems that disproportionately affect marginalized groups while benefiting a privileged few. The global AI ethics efforts underway today—of which there are dozens—aim to help everyone benefit from this technology, and to prevent it from causing harm. These groups are well-intentioned and are doing worthwhile work. However, without more diverse geographic representation, they'll produce a global vision for AI ethics that reflects the perspectives of people in only a few regions of the world, particularly North America and northwestern Europe.

If unaddressed, they risk developing standards that are, at best, meaningless and ineffective across all the world's regions. At worst, these flawed standards will lead to more AI systems and tools that perpetuate existing biases and are insensitive to local cultures.

Read the full story written by Abhishek Gupta, the founder of the Montreal AI Ethics Institute and a machine-learning engineer at Microsoft, and Victoria Heath, a researcher at the Montreal AI Ethics Institute.