“I confessed to a certain skepticism Monday when I posted an annotated YouTube video of Steve Jobs,” Philip Elmer-DeWitt reports for Fortune.
“The sound of Jobs’ voice had been run through a so-called ‘emotion detection engine’ that provided commentary at 20-second intervals about what was really going on for Jobs behind the words he spoke — his underlying mood and attitude,” P.E.D. reports. “I didn’t see how something like that could be done without a fair bit of time-consuming field work and machine learning… That was before I chatted with the folks at Beyond Verbal and played around with the Moodies app on their home page.”
“The company has done a fair bit of field work and machine learning, processing recordings and questionnaires from more than 70,000 volunteers over the past 18 years,” P.E.D. reports. “And to prove that its results are not canned, Beyond Verbal was kind enough to provide the attached analysis of the first two minutes of Apple (AAPL) CEO Tim Cook’s iPhone keynote last month.”
See the video and the results in the full article here.