“One part of AI is machine learning, in which a system analyzes massive amounts of data to make decisions and recognize patterns on its own. And that data must be carefully considered so that it doesn’t reflect or contribute to existing biases,” Kelly reports. “A recent study by the M.I.T. Media Lab found facial recognition software has trouble identifying women of color. Tests by The Washington Post found that accents often trip up smart speakers like Alexa. And an investigation by ProPublica revealed that software used to sentence criminals is biased against black Americans. Addressing these issues will grow increasingly urgent as things like facial recognition software become more prevalent in law enforcement, border security, and even hiring.”
“Many of those who gathered at last week’s discussion, ‘AI Summit – Designing a Future for All,’ said new industry standards, a code of conduct, greater diversity among the engineers and computer scientists developing AI, and even regulation would go a long way toward minimizing these biases,” Kelly reports. “Technical approaches can help, too. The Fairness Tool, developed by Accenture, scours data sets to find any biases and correct problematic models.”
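Kelly's article doesn't detail how the Fairness Tool works internally, but a common way such tools surface bias in a data set is to compare favorable-outcome rates across demographic groups. The sketch below is a minimal, hypothetical illustration of one such metric, the disparate impact ratio (the "four-fifths rule" used in U.S. employment law as a rough screen) — it is not Accenture's actual implementation, and all names and data here are invented for the example.

```python
# Hypothetical sketch of a data-set bias check: the disparate impact
# ratio compares the rate of favorable outcomes for an unprivileged
# group against a privileged group. Ratios below ~0.8 are commonly
# flagged for review (the "four-fifths rule"). Not Accenture's tool.

def disparate_impact(outcomes, groups, privileged, positive=1):
    """Ratio of favorable-outcome rates: unprivileged / privileged."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate_priv = sum(o == positive for o in priv) / len(priv)
    rate_unpriv = sum(o == positive for o in unpriv) / len(unpriv)
    return rate_unpriv / rate_priv

# Toy data: 1 = favorable decision (e.g., hired);
# group "A" is treated as the privileged group.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B", "A", "B"]

ratio = disparate_impact(outcomes, groups, privileged="A")
print(f"Disparate impact ratio: {ratio:.2f}")  # → 0.25, well below 0.8
```

In this toy data, group A receives favorable outcomes 80% of the time and group B only 20%, so the ratio of 0.25 would flag the data set for correction before any model is trained on it.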
Read more in the full article here.
MacDailyNews Take: Microsoft’s Tay chatbot was the canary in the coal mine.
Apple tells U.S. Senate Democrat Al Franken it trained iPhone X’s Face ID system to not be racist, ageist, or sexist – October 17, 2017
Microsoft’s ‘Tay’ chatbot turns into a foul-mouthed racist nazi – March 25, 2016