Apple acquires DarwinAI startup amid generative AI scramble


Apple, scrambling to catch up in generative AI in 2024, has acquired Canadian artificial intelligence startup DarwinAI.

Mark Gurman for Bloomberg News:

The iPhone maker purchased the business earlier this year, and dozens of DarwinAI’s employees have joined Apple’s artificial intelligence division, according to people with knowledge of the matter, who asked not to be identified because the deal hasn’t been announced.

DarwinAI has developed AI technology for visually inspecting components during the manufacturing process and serves customers in a range of industries. But one of its core technologies is making artificial intelligence systems smaller and faster. That work could be helpful to Apple, which is focused on running AI on devices rather than entirely in the cloud.

The under-the-radar acquisition comes ahead of a big AI push for Apple this year. The company is adding features to its iOS 18 software that rely on generative AI, the technology behind ChatGPT and other groundbreaking tools… Apple has fallen behind in the generative AI market. It was caught flat-footed by the launch of OpenAI’s ChatGPT in 2022, and tech peers like Google and Microsoft Corp. have stolen the spotlight with new features.
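The "smaller and faster" work Gurman mentions generally means model compression. As an illustration only (this is textbook post-training int8 quantization, not DarwinAI's or Apple's actual technique), shrinking weights from 32-bit floats to 8-bit integers cuts storage roughly 4x:

```python
# Illustrative post-training int8 quantization -- one standard way to make
# a neural network smaller and faster for on-device inference.
# A generic textbook sketch, not DarwinAI's actual method.

def quantize_int8(weights):
    """Map float weights to int8 values plus one shared scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the int8 representation."""
    return [q * scale for q in quantized]

weights = [0.82, -1.27, 0.05, 0.33, -0.91]
quantized, scale = quantize_int8(weights)
restored = dequantize(quantized, scale)
# Each weight now fits in one byte instead of four; the round trip
# loses at most half a quantization step per weight.
```

Real compression pipelines (per-channel scales, pruning, distillation) are more involved, but the size-versus-accuracy trade-off is the same idea.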

Support MacDailyNews at no extra cost to you by using this link to shop at Amazon.

MacDailyNews Take: On-device AI is likely to be a distinguishing feature for Apple, but the company will have to effectively promote the advantage to consumers in order to boost device sales.


8 Comments

  1. For technical readers out there, what are the potential benefits of on-device AI? Yes or No…

    - Faster responses to queries
    - Better privacy/security
    - No need to access the cloud
    - No data centers to build to support AI, if it runs on-device
    - Quickly deployable and scalable without new infrastructure
    - No power consumption for cooling servers
    - Lower operational cost (for Apple)
    - Better for the environment
    - Less demand on the power grid
    - What else could there be?

    What are the compromises?
    - Access to relevant content beyond your phone for generative AI
    - Updating the OS on-device with the latest features
    - Faster chip processing power needed (which could be seen as an advantage for Apple)
    - What else?

    1. The list of advantages above sounds more like Apple pushing its existing infrastructure and capabilities as “AI” in a desperate attempt to sound relevant when jaw-dropping achievements from companies like OpenAI seem to drop weekly.

      I get the distinct impression that Apple is finally beefing up Siri’s capabilities, which would be nice, but not terribly exciting in the current world. Yesterday I watched as an AI developed an app from start to finish using the same tools most developers would use. It was uncanny. I watched new robots perform in ways I thought were still a decade away. It’s a shame Apple isn’t ahead in robotics.

      I typically have multiple chats with different LLMs open on my desktop as I work every day. It’s like having staff members I don’t have to pay. I often sit in front of ChatGPT 4, give it a few lines of instruction, and watch it generate complete Bash scripts or Python programs while I sip my coffee.

      I tell it to write an email to someone explaining why it might behoove us not to upgrade the network until after their big move this summer, and it just does it.

      The sudden invasion of generative AIs has made working fun and dramatically increased productivity. As far as I can tell, Apple doesn’t have anything like it. What a waste that stupid car was. I don’t have high hopes for the Vision Pro either. It seems it is time for some changes up top.

      1. From what I understand, machine learning is a limited subset of AI, and that’s all Apple has talked about in the past in this area. However, Siri has shown virtually no improvement in years, as if they abandoned developing it further. Surely we would have heard something significant in the dozen WWDC keynotes that have occurred since its release in 2011? Is it really possible that they simply didn’t think this area was important and so neglected it, and that whatever we see from them now will have been created in just the past year?

    1. I saw this demo and it was somewhat impressive, BUT response time was annoyingly slow and, honestly, as amazing an achievement as it was, it was not particularly valuable in a real-world application from what I could see. Nice demo, but… let’s see what develops.

      I have run many generative AI queries. Not a single response has been of any value to me. In most cases the results sounded authoritative but contained at least one critical error or omission that rendered the response useless, or the response was so generic as to be useless.

      I do see the value for programmers who can have code generated automatically, though I would think a smart programmer would check the work for accuracy and correctness. And I do see the value for generating basic, generic written content.

      Like many Apple followers, I am eager to see what this potential on-device generative AI delivers. And yes, it damn well better make Siri more useful and capable. I would imagine a lot of blood is being spilled at Apple getting this ready for the developers conference and a launch by September. There is a lot of pressure to deliver.

      As I understand it, the heavy processing power at the data center is needed to train the LLMs. Once the models have been trained, it should be quite possible to load and execute them on a local device. I’m just not sure if Apple has a way of avoiding all that energy consumption or not.
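The distinction drawn here between training and inference is the crux of on-device AI: training needs data-center scale, but running an already-trained, quantized model only needs enough memory for its weights. A back-of-envelope sketch (the 7-billion-parameter size is purely illustrative, not a claim about any Apple model):

```python
# Rough memory needed just to hold an LLM's weights at different precisions.
# A model needing ~28 GB at 32-bit floats fits in ~3.5 GB at 4 bits,
# which is what makes running it locally plausible.

def weights_gb(num_params, bits_per_weight):
    """Gigabytes required to store the weights alone (ignoring activations and cache)."""
    return num_params * bits_per_weight / 8 / 1e9

NUM_PARAMS = 7e9  # illustrative 7B-parameter model
for bits in (32, 16, 8, 4):
    print(f"{bits:2d}-bit weights: {weights_gb(NUM_PARAMS, bits):5.1f} GB")
```

Inference still costs energy on the device, but per query it is far cheaper than the one-time training run — which is presumably why Apple would still train centrally and ship the compressed result.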

      The idea of Vision X as a display is very compelling to me, btw. A 120″ screen in every room of my house that takes up no space. If you have two or more Vision X’s, then the content being watched can be shared. I believe Apple introduced that capability in the early days of Covid.

      1. It doesn’t need to get much more sophisticated than this for a variety of applications. It can be used to sort many things that would otherwise be too tedious and time-consuming for humans. Imagine this sorting and folding your laundry for you after you demonstrate it once.

        You don’t need very advanced AI when you have a basic walking version of this or Optimus. Simply having it carry 20+ lbs. of weight that you put in a special box from point A to point B will already make a world of difference for many people. Imagine Optimus as a Roomba plugged into its hub that you ask to carry something and then just follow you there, or command “go to laundry room”, “go to garage”, etc.

        Most people don’t need AI to solve complex problems for them; they need a robot that can understand and carry out basic commands in order to do repetitive tasks. This was the great failure of Siri: forget current AI, Apple couldn’t even make it trainable to do very basic things.
