
OpenAI and renowned designer Jony Ive are tackling multiple technical challenges with their secretive AI device, aiming for a major product launch next year. The San Francisco-based startup, led by Sam Altman, acquired Ive’s design firm, io, for $6.5 billion in May, but the duo has revealed little about their ongoing projects.
Tim Bradshaw, Cristina Criddle, Michael Acton, and Ryan McMorrow for Financial Times:
Their aim is to create a palm-sized device without a screen that can take audio and visual cues from the physical environment and respond to users’ requests.
People familiar with their plans said OpenAI and Ive had yet to solve critical problems that could delay the device’s release… [O]bstacles remain in the device’s software and the infrastructure needed to power it.
These include deciding on the assistant’s “personality”, privacy issues and budgeting for the computing power needed to run OpenAI’s models on a mass consumer device.
“Compute is another huge factor for the delay,” said one person close to Ive. “Amazon has the compute for an Alexa, so does Google [for its Home device], but OpenAI is struggling to get enough compute for ChatGPT, let alone an AI device — they need to fix that first.”
Multiple people familiar with the plans said OpenAI and Ive were working on a device roughly the size of a smartphone that users would communicate with through a camera, microphone and speaker. One person suggested it might have multiple cameras.
The gadget is designed to sit on a desk or table but can also be carried around by the user… One person said the device would be “always on” rather than triggered by a word or prompt. The device’s sensors would gather data throughout the day that would help to build its virtual assistant’s “memory”.
The goal is to improve the “smart speakers” of the past decade, such as Amazon’s Echo speaker and its Alexa digital assistant, which are generally used for a limited set of functions such as listening to music and setting kitchen timers.
OpenAI and Ive are seeking to build a more powerful and useful machine.
MacDailyNews Take: As we just asked earlier today, “The question is how does a ‘pocket-sized AI device’ differ from the already pocket-sized iPhone and its Android knockoffs. The iPhone already has everything needed – microphones, cameras, fast processors, display, speakers, connectivity, etc. Why carry a ‘pocket-sized AI device’ when you already carry a smartphone that could, via settings, be set up to match whatever the ‘pocket-sized AI device’ offers (always listening, etc.) and exceed it (on-device LLMs, etc.)?”
To answer your question, MacDailyNews:
The iPhone desperately needs an AI personality, something you can converse with like a friend. Grok on iPhone is the closest, but it needs to be built in and synced across all Apple devices — a friend that gets to know you, comments on your new haircut, offers advice, and is no more difficult to interact with than another human being.
Speaking of “the firm,” regardless of one’s opinion of Tucker Carlson, take a listen to his recent interview with Sam Altman. His history has some curiosities, and his answers to very good, probing questions were ambiguous at best and disconcerting, to say the least. Empathetically, I wouldn’t want to be him, or ANY other person heading the AI train.
He reminded me A LOT of another Silicon Valley player who’s practiced in “apologies”…Markie.
I imagine it’ll look like the Apple hockey puck mouse
MDN Take = 100% bang on.
If you’re creating a “palm-sized device without a screen,” why do you need a famed industrial designer? It’s 100% a software/infrastructure challenge. Form factor is almost irrelevant if it’s something you interact with using only voice. And if you introduce cameras and screens (which I believe a good consumer AI implementation requires), Apple is in the pole position on every front, from a hardware and ecosystem perspective.
OpenAI has the model and nothing else. Apple has everything but the model (yet — but let’s see where they are in six months).
They need him for the product rollout video to talk about its ‘a-loo-min-i-um’ frame.
LOL. They are finding out that building real products is hard. The Apple engineers must be rolling in the aisles to read this.
macOS does have a native chatbot. It works either on-device with Apple Intelligence or with OpenAI. You can read the article at MacMost to find out how to access it. It’s not there by default, but you can access it with a shortcut.
I don’t know anyone who wants an always-on device listening to them. I’m sure such people are out there, but it’s not a huge number.