Can A.I. be taught to explain itself?

“In September, Michal Kosinski published a study that he feared might end his career. The Economist broke the news first, giving it a self-consciously anodyne title: ‘Advances in A.I. Are Used to Spot Signs of Sexuality,'” Cliff Kuang writes for The New York Times. “But the headlines quickly grew more alarmed. By the next day, the Human Rights Campaign and Glaad, formerly known as the Gay and Lesbian Alliance Against Defamation, had labeled Kosinski’s work ‘dangerous’ and ‘junk science.’ (They claimed it had not been peer reviewed, though it had.) In the next week, the tech-news site The Verge had run an article that, while carefully reported, was nonetheless topped with a scorching headline: ‘The Invention of A.I. ‘Gaydar’ Could Be the Start of Something Much Worse.'”

“Kosinski has made a career of warning others about the uses and potential abuses of data. Four years ago, he was pursuing a Ph.D. in psychology, hoping to create better tests for signature personality traits like introversion or openness to change. But he and a collaborator soon realized that Facebook might render personality tests superfluous: Instead of asking if someone liked poetry, you could just see if they ‘liked’ Poetry Magazine,” Kuang writes. “In 2014, they published a study showing that if given 200 of a user’s likes, they could predict that person’s personality-test answers better than their own romantic partner could.”
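The technique described above — predicting personality-test answers from a matrix of likes — amounts to fitting a simple linear model on binary features. Here is a minimal sketch with synthetic data; the data, dimensions beyond the 200 likes mentioned, and the choice of ridge regression are illustrative assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch: predicting a personality-trait score from binary
# "like" features with ridge regression, on synthetic data.
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(0)

n_users, n_pages = 500, 200          # 200 likes per user, as in the study
X = rng.integers(0, 2, size=(n_users, n_pages)).astype(float)  # like matrix
true_w = rng.normal(size=n_pages)                              # hidden weights
y = X @ true_w + rng.normal(scale=0.5, size=n_users)           # synthetic trait score

# Ridge regression: w = (X^T X + lambda*I)^-1 X^T y
lam = 1.0
w = solve(X.T @ X + lam * np.eye(n_pages), X.T @ y)

pred = X @ w
corr = np.corrcoef(pred, y)[0, 1]
print(f"in-sample correlation: {corr:.2f}")
```

The point of the sketch is how little machinery is needed once the likes are in a matrix: a closed-form linear fit already recovers most of the signal.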

“After getting his Ph.D., Kosinski landed a teaching position at the Stanford Graduate School of Business and soon started looking for new data sets to investigate,” Kuang writes. “Kosinski first mined 200,000 publicly posted dating profiles, complete with pictures and information ranging from personality to political views. Then he poured that data into an open-source facial-recognition algorithm — a so-called deep neural network, built by researchers at Oxford University — and asked it to find correlations between people’s faces and the information in their profiles. The algorithm failed to turn up much, until, on a lark, Kosinski turned its attention to sexual orientation. The results almost defied belief. In previous research, the best any human had done at guessing sexual orientation from a profile picture was about 60 percent — slightly better than a coin flip. Given five pictures of a man, the deep neural net could predict his sexuality with as much as 91 percent accuracy. For women, that figure was lower but still remarkable: 83 percent.”
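The general recipe the article describes — feed images through a pretrained deep network, then look for correlations between the resulting features and profile attributes — is commonly implemented as a linear classifier on top of fixed embeddings. The sketch below uses random vectors as stand-ins for face embeddings and a generic binary attribute; nothing here is Kosinski's actual code or data.

```python
# Hypothetical sketch of the general technique: a linear (logistic)
# classifier fit on top of fixed embeddings, trained by gradient descent.
import numpy as np

rng = np.random.default_rng(1)
n, d = 400, 128                        # 400 "faces", 128-dim embeddings
X = rng.normal(size=(n, d))            # stand-in for pretrained-network embeddings
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)     # synthetic binary attribute

# Logistic regression trained by plain gradient descent on the log-loss
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))     # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / n         # gradient step

acc = ((X @ w > 0) == (y == 1)).mean()
print(f"in-sample accuracy: {acc:.2f}")
```

The deep network does the hard perceptual work of producing embeddings; the classifier on top is almost trivially simple, which is part of why results like the accuracy figures quoted above can emerge without anyone designing features by hand.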

“It has become commonplace to hear that machines, armed with machine learning, can outperform humans at decidedly human tasks, from playing Go to playing ‘Jeopardy!’ We assume that is because computers simply have more data-crunching power than our soggy three-pound brains,” Kuang writes. “Kosinski’s results suggested something stranger: that artificial intelligences often excel by developing whole new ways of seeing, or even thinking, that are inscrutable to us. It’s a more profound version of what’s often called the ‘black box’ problem — the inability to discern exactly what machines are doing when they’re teaching themselves novel skills — and it has become a central concern in artificial-intelligence research. In many arenas, A.I. methods have advanced with startling speed; deep neural networks can now detect certain kinds of cancer as accurately as a human. But human doctors still have to make the decisions — and they won’t trust an A.I. unless it can explain itself. This isn’t merely a theoretical concern. In 2018, the European Union will begin enforcing a law requiring that any decision made by a machine be readily explainable, on penalty of fines that could cost companies like Google and Facebook billions of dollars.”

Much more in the full article here.

MacDailyNews Take: An inscrutable neural network that teaches itself and makes its own decisions?

Oh, relax. What could possibly go wrong?

SEE ALSO:
The Dark Secret of Artificial Intelligence: No one really knows how the most advanced algorithms do what they do – June 12, 2017

7 Comments

  1. No. In the strictest terms, ‘AI’ can’t be ‘taught’ anything. It’s just software, and it follows instructions like any other software. Sorry to burst the hype bubble.

    1. I believe that is precisely the problem. Humans make the rules, and are adept at breaking them. Machine intelligences would not break the rules that constrain them, and therefore represent an inflexible threat to the natural adaptability that humans enjoy.

      Another way of saying that is that we are meat and make meat decisions; we use logic only to promote the interests of meat. Machines are silicon and make logical decisions, some of which may run counter to the interests of meat.

    2. You don’t understand how AI is evolving. It’s not machine learning; it’s two steps above that now. Software that writes itself, you get it? You are thinking of programming in the traditional sense. This is no longer the case. This issue needs very careful consideration, especially in military applications.

  2. Statistics don’t lie, only their interpretations do. It’s pretty hard to interpret these results as anything other than that they outdo our own best efforts at reading people. It looks like another kind of mind at work. And that’s what’s scary.

    1. Anthropomorphism is etched into the human brain and pervades every encounter we have with the un-human. We’re also innately looking for bilateral symmetry in every living thing. If we don’t find it, we consider such creatures to be strange beyond our comprehension, unearthly, scary.

      Knowing this, that’s probably why I like molluscs. 😉
