RUMOR: Apple’s massive new NC data center to host Nuance tech; partnership announcement due at WWDC

“Last Friday, we posted about the negotiations between Apple and voice recognition company Nuance,” MG Siegler reports for TechCrunch. “In digging into the information about the relationship between the two companies, we had heard that Apple might actually already be using Nuance technology in their new (but yet to be officially opened) massive data center in North Carolina. Since then, we’ve gotten multiple independent confirmations that this is indeed the case.”

Siegler reports, “And yes, this is said to be the keystone of a partnership that Apple is likely to announce with Nuance at WWDC next month. More specifically, we’re hearing that Apple is running Nuance software — and possibly some of their hardware — in this new data center. Why? A few reasons. First, Apple will be able to process this voice information for iOS users faster. Second, it will prevent this data from going through third-party servers. And third, by running it on their own stack, Apple can build on top of the technology, and improve upon it as they see fit.”

“Obviously, Nuance, which owns the technology, would have to sign off on all of this. And we now believe that they have. Hence, the big time partnership that should be formally announced soon,” Siegler reports. “All of this plays in nicely with our report that Siri would be a big part of the upcoming iOS 5 software. Apple is expected to show off iOS 5 at WWDC, but it would be launched this fall, as we previously reported. In order to work, Siri requires Nuance. When Apple bought Siri last year, they immediately began negotiations with Nuance to ensure that Siri was able to keep running. But it now appears that after months of tense negotiations, the two companies have decided that it was best to take the relationship a step farther.”

More info, including why Apple doesn’t just buy Nuance, in the full article here.

MacDailyNews Take: Copying Apple’s next step may be more than a case of simply stealing IP, implementing it in an uglier fashion, and then waiting for the lawsuits. Ripping off Apple’s next leap might not only be prohibitively expensive, but practically impossible.

Steve Jobs may be getting ready to say, “Let’s see you rip this off, you S.O.B.s.”

[Thanks to MacDailyNews Readers “Fred Mertz” and “Lynn W.” for the heads up.]

Related articles:
Nuance gains on report maker of voice software has held talks with Apple – May 9, 2011
Apple may be negotiating deal with Nuance for iOS 5 speech recognition – May 7, 2011
RUMOR: Apple’s iOS 5 to deeply integrate Siri’s artificial intelligence and assistance technology – March 29, 2011
Join the dots on six future Apple technologies – September 22, 2010
The story of Siri from birth to acquisition by Apple – June 14, 2010
Apple buys virtual personal assistant startup Siri; deal ratchets up competition with Google – April 28, 2010

45 Comments

  1. Whatever’s cooking at the NC data centre, it’d better be good. After Mobile Meh, Apple needs to do to the cloud what it’s done to other segments of the market with the iPod, iTunes, iPhone and iPad. The cloud solution needs to be the next big tie-in for iOS.

  2. Can you imagine a Watson-grade voice recognition capability built into OS X and iOS?

    It would be “game over” for RIM, Nokia, Box-assemblers, and probably MSFT.

    The stuff of SciFi.

    1. Whatever, fanboy. Run along to YouTube and check out the Kinect entertainment demo. It makes Apple TV and iOS look like pieces of antiquated garbage.

      1. I’m not sure comparing the Kinect to anything but the Wii and PlayStation Move is even reasonable. I’m sure it’s great, but they’re two totally different things. iOS: mobile operating system. Kinect: game system peripheral that makes you look like a dumbass.

  3. This is how it works at the moment:

    Me: “Call John Doe”
    iPhone: “Play songs by Snow Patrol”

    Speech recognition is terrible everywhere; take booking movie tickets, for example. If Apple can make it work properly, it could be awesome.

      1. Here is the funny part: I never use voice control on my iPhone. But after reading this thread, and just for giggles, I held down the home button and said, “play songs by Snow Patrol.” In a couple seconds “Chasing Cars” started. I’ll let others decide whether this is a good thing or not.

        1. I tried it for the first time, too. Since I have no Snow Patrol, I said ‘Play songs by Swervedriver’. The results were hilarious. First it called my doctor, whose name is nothing like Swervedriver. Then it tried to Facetime my friend Susan. Then it played a song by The Verve (getting close!).
          Then I simplified it a bit and said ‘Play songs by Tad’. It dutifully played ‘Delinquent’ from the 8-Way Santa album.
          So… it works perfectly as far as I can tell. >.<

          1. First, Snow Patrol is not so bad, in a diet-Coldplay kind of way.
            I find that voice control on the iPhone depends heavily on where you are and is greatly affected by surrounding noise. If they could improve the noise reduction on the microphone, this might work a little better. While I love having the feature, I rarely use it because it’s so unreliable…for now.

  4. This makes no sense. Why would Apple need a data center to provide this kind of functionality? It is much more likely that they are beefing up the capabilities of MobileMe, or maybe the App Store for the Lion launch, which is going to have nearly everyone hitting the store to get the latest, greatest version of OS X. But a data center primarily for speech recognition? Give me a break…

      1. Clueless people are the ones who believe a whole data center is needed to do basic voice recognition… All things being equal, the most likely answer is MobileMe and App Store reinforcement. Really doesn’t take rocket science to figure out…

        1. I think bluejay’s point is that a server farm on the scale of what’s been built in NC would need only a small fraction of its capacity to run this technology. I don’t think anyone, other than you, is suggesting that Apple would use the NC data center for this technology alone.

          Like many server farms, it could easily be used to support the App Store, iCloud AND voice recognition software.

          I’m not sure where you got the impression that it would be used only for Nuance.

            1. You only know speech recognition as what it is today; nobody knows what’s up Apple’s sleeve this time with voice recognition. They invent and push forward new stuff all the time. That’s my point.

    1. Most likely they’ll apply it to search. SJ has repeatedly said that search on a mobile phone is different from search on a PC; it might be easier to search for something just by saying it to the phone. I think it’s pretty cool.

  5. Hopefully, we’re not talking about the same voice recognition system United Airlines uses for its telephone reservations. That system has redefined the word “cumbersome”.

  6. Do people really want to talk to their devices? Personally I hate answering machines and robo-menus on the phone. The few times I tried my iPod’s speech recognition the songs that I asked for were not the ones it started playing. The technology has its uses but most people seem to prefer a visual GUI to voice control.

    1. I have to call insurance cos. many times a day. You are right, most of the time the voice recognition is terrible. However, once it gets to the point of being very accurate, it will be a paradigm shift like we haven’t “heard” before.

      My business partner (in his mid-60s) can’t seem to operate a simple smartphone well enough to get to his voicemail, but in his new Ford crossover (don’t know the model) he loves to voice-call his friends and play songs or internet radio. That’s because he doesn’t have to look at menus. (I know the iPhone is vastly superior to the other so-called smartphones, but he won’t change over yet.)

  7. I hope part of the agreement requires Nuance to replace the iPhone-clone image used for the Contact link with a real iPhone on their home page. More seriously, Nuance is recognized as having the best available speech recognition software. I would use more voice commands if it provided reliable results. Demonstrating voice commands on the iPhone today is like demonstrating handwriting recognition on the Newton a decade ago.
