Apple transforms Turi into dedicated machine learning division to build future product features

“Following its acquisition of machine learning platform Turi earlier this month, Apple is now growing the team that will serve as the company’s new machine learning division focusing on integrating the tech into new and existing products,” Jordan Kahn reports for 9to5Mac.

“We’ve learned Apple has now started growing the team that it acquired from Turi by hiring application developers and data scientists for the new machine learning division that will remain in Seattle where Turi was originally based,” Kahn reports. “The new machine learning division will work alongside Apple’s product teams to prototype new features and improvements that could be integrated into Apple’s existing apps and future products.”

Kahn reports, “Apple is already using machine learning in many of its products that could benefit from the new team.”

Read more in the full article here.

MacDailyNews Take:

We’ve been seeing over the last five years a growth of this inside Apple. Our devices are getting so much smarter at a quicker rate, especially with our Apple-designed A-series chips. The back ends are getting so much smarter, faster, and everything we do finds some reason to be connected. This enables more and more machine learning techniques, because there is so much stuff to learn, and it’s available to [us]… We use these techniques to do the things we have always wanted to do, better than we’ve been able to do. And on new things we haven’t been able to do. It’s a technique that will ultimately be a very Apple way of doing things as it evolves inside Apple and in the ways we make products… Machine learning is enabling us to say yes to some things that in past years we would have said no to. It’s becoming embedded in the process of deciding the products we’re going to do next.

Phil Schiller, August 2016

SEE ALSO:
An exclusive inside look at how artificial intelligence and machine learning work at Apple – August 24, 2016
Apple rumored to be taking big piece of Seattle-area office market in expansion – August 12, 2016
Why Apple will become a leader in artificial intelligence – August 8, 2016
Apple buys machine-learning startup Turi for $200 million – August 6, 2016
Apple touts Artificial Intelligence in iOS and opens ‘crown jewels’ to developers – June 14, 2016
Smartphones to die out within five years, replaced by artificial intelligence – survey – December 9, 2015
Apple’s extreme secrecy retarding its artificial intelligence work – October 30, 2015
Apple hires NVIDIA’s artificial intelligence director – October 24, 2015
Apple acquires advanced artificial intelligence startup Perceptio – October 5, 2015
Apple buys artificial intelligence natural language start-up VocalIQ – October 2, 2015

12 Comments

  1. I got a call from a client this morning; let’s call them Bob’s Burgers for the duration of this anecdote. Bob says his employee David Eggs is having trouble launching Microsoft Outlook. So I connect to David’s computer, but it’s working just fine.

    So I say to Siri, “Hey Siri, Call Bob’s Burgers.” My plan was to speak to the front desk, and ask to be transferred to David. Only Siri says, “Calling David Eggs.”

    I’m like, “How did you know I wanted to talk to David?”

    Very creeped out.

  2. Anyone know anyone named “Cortana”? People wind up with names given to them by parents that have been poor choices for eternity, some worse than others. Some have last names I bet they wish they didn’t have. Siri isn’t necessarily a poor name choice, but it now has its drawbacks. Who knows, maybe something positive is there too… I just haven’t heard about it.

  3. Keep in mind: We humans have turned out to be fairly lousy at writing secure computer code. With machine learning, the computers will be writing the code. I can’t imagine them doing any better than we do, seeing as we’re teaching them how to write their own code. So much for Skynet. 😛

    1. “Fairly lousy at writing secure code.” Compared to what? Humans are the only species doing it. In stories, programmed systems built by aliens are often portrayed as fail-safe, but they have a million years’ experience to our 150. Human infants are fairly lousy at the three Rs, but they learn fast and, given enough time, become masters.

      Machine learning is less teaching than it is breeding, using biological principles to evolve rational agency and decision-making. Humans’ natural mental flaws lead to bugs or vulnerabilities in the conventional code we write, but they don’t automatically pollute our nascent, self-scrubbing A.I.
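      If it helps to picture what “breeding” looks like as code, here is a minimal sketch of a (1+1) evolutionary loop in C: mutate a candidate bit string, and keep the child only when it matches a hidden target at least as well as the parent. The bit-string target and the loop parameters are purely illustrative assumptions of mine; nothing here describes Apple’s systems, and most production machine learning today is trained with gradient methods rather than evolved, but the keep-the-fitter-variant step is the “breeding” part.

      ```c
      #include <stdio.h>
      #include <stdlib.h>

      #define LEN 16  /* length of the bit string being evolved */

      /* Fitness: how many bits of the candidate match the target pattern. */
      static int fitness(const int *bits, const int *target) {
          int score = 0;
          for (int i = 0; i < LEN; i++)
              score += (bits[i] == target[i]);
          return score;
      }

      int main(void) {
          int target[LEN], parent[LEN], child[LEN];
          srand(42);
          for (int i = 0; i < LEN; i++) {
              target[i] = rand() % 2;   /* hidden pattern evolution must find */
              parent[i] = rand() % 2;   /* random starting candidate */
          }

          /* (1+1) evolution: mutate a copy, keep it only if it is no worse. */
          for (int gen = 0; gen < 1000 && fitness(parent, target) < LEN; gen++) {
              for (int i = 0; i < LEN; i++)
                  child[i] = (rand() % LEN == 0) ? 1 - parent[i] : parent[i];
              if (fitness(child, target) >= fitness(parent, target))
                  for (int i = 0; i < LEN; i++)
                      parent[i] = child[i];
          }
          printf("matched %d of %d target bits\n", fitness(parent, target), LEN);
          return 0;
      }
      ```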

      As detailed in Thinking, Fast and Slow by psychologist Daniel Kahneman, we humans employ two forms of thinking — the conscious, intentional mind and the unconscious one where our gut feelings live along with a slew of pre-programmed biases.

      Machine-learning researchers aren’t saying, but I think they’d be fools if they tried this two-tier approach to decisioning. Sure, they’d approach something more like us humans, but this has disastrous-unforeseen-consequences written all over it. Mary Shelley had an inkling…

      1. “Fairly lousy at writing secure code. Compared to what?”

        I compare to nothing. Instead, I note the consistently poor memory management that is the plague of modern coding. Buffer overflows of various sorts are to be expected in any complex software at this point in time. I say this having studied computer security since 2005 and having written about Mac security since 2007…
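        To make that concrete, here is the textbook shape of the problem: a hypothetical C snippet (my own illustration, not code from any real product) in which an unchecked strcpy() lets input longer than the destination buffer spill over adjacent stack memory.

        ```c
        #include <stdio.h>
        #include <string.h>

        /* Classic stack buffer overflow: strcpy() copies until it hits a NUL
           byte and never checks the destination size, so any input longer
           than 15 characters plus the terminator writes past the end of
           'name' and corrupts whatever sits next to it on the stack. */
        static void greet(const char *input) {
            char name[16];
            strcpy(name, input);              /* unsafe: no bounds check */
            printf("Hello, %s\n", name);
        }

        int main(void) {
            greet("this greeting is far longer than sixteen bytes"); /* overflows */
            return 0;
        }
        ```

        The usual repair is to bound the copy, e.g. snprintf(name, sizeof name, "%s", input), or to avoid raw C strings altogether, which is exactly the memory-management discipline being lamented above.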

        Regarding the application of ‘biological principles’ to evolve our current Expert Systems level of ‘Artificial Intelligence’ into something more capable: there are attempts at artificial neural networks (a toy sketch of the basic unit follows the list below)…

        Artificial neural network

        A) We humans still don’t know how our own neural networks work. Therefore, what we are coding is highly artificial indeed.
        B) They increasingly require the use of vast ‘trusted code’ as objects because such code systems are beyond the comprehension of any one person (although perhaps there is some genius who would argue otherwise). Therefore, we end up with an expanded version of a problem inherent in object-model coding: misplaced trust. Security holes are there. Expect them.
        C) Much as I would like psychology to have progressed beyond its current dark age, it has not. I’d like to read the work by Daniel Kahneman, which I expect goes into more depth than you have quoted. But I can’t imagine his concepts have congruence with actual, physiological systems within the human brain. Psychology and psychiatry remain what I call Shot-In-The-Dark sciences. Sorry.
        D) Considering how crazy we humans are as a whole, easily manipulated by others and constantly at war, at murder, with one another… WHY would we want to teach computer ‘minds’ to be like us? This subject has been a constant in science fiction for over 70 years. And yes, we could even go back to ‘Frankenstein; or, The Modern Prometheus’ for early insight into this specific problem.
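        For anyone who has not met the term, here is a toy illustration of the idea behind an ‘artificial neuron’: a single perceptron in C, trained with the classic perceptron learning rule to reproduce logical AND. The AND task, learning rate, and epoch count are arbitrary choices of mine for illustration; real networks stack many such units and are trained very differently, but the sketch shows how little actual ‘neuron’ there is in the artificial kind.

        ```c
        #include <stdio.h>

        int main(void) {
            /* One artificial "neuron": a weighted sum of two inputs plus a
               bias, pushed through a step activation, with weights adjusted
               by the perceptron learning rule until it reproduces AND. */
            double w1 = 0.0, w2 = 0.0, bias = 0.0;
            const double rate = 0.1;                 /* learning rate */
            const int x1[4]     = {0, 0, 1, 1};
            const int x2[4]     = {0, 1, 0, 1};
            const int target[4] = {0, 0, 0, 1};      /* AND truth table */

            for (int epoch = 0; epoch < 20; epoch++) {
                for (int i = 0; i < 4; i++) {
                    int out = (w1 * x1[i] + w2 * x2[i] + bias) > 0.0 ? 1 : 0;
                    double err = target[i] - out;    /* zero when correct */
                    w1   += rate * err * x1[i];      /* nudge the weights  */
                    w2   += rate * err * x2[i];      /* toward the answer  */
                    bias += rate * err;
                }
            }

            for (int i = 0; i < 4; i++)
                printf("%d AND %d -> %d\n", x1[i], x2[i],
                       (w1 * x1[i] + w2 * x2[i] + bias) > 0.0 ? 1 : 0);
            return 0;
        }
        ```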

        My usual self-description: I call myself an optimist cynic. Aim for the best; expect the worst. I’m innately joyful, and yet the self-destructive nature of our species inspired me to foresee how we’ll kill ourselves off (if only in my imagination, which sadly continues to correspond to modern events; thus my cynicism).

        I think I’m off topic at this point and it’s time for my dinner. As ever, I wish we could chat live. Stay well!

        1. Oh, we could talk for hours about buffer overflows. Or at least I could, but I wouldn’t want to see you barf in your chowder. I gave you the book, so don’t imagine what it might contain; digest it and we can talk again.
