“During the latest edition of its annual WWDC event, Apple made important strides to show developers and creators that it is finally getting serious about artificial intelligence,” MIX reports for TNW.

“The company announced its all-new Core ML framework, specifically designed to enable developers to build smarter apps by embedding on-device machine learning capabilities in them,” MIX reports. “But it seems the new system still has some learning to do.”
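For a sense of what “on-device” means here, a Core ML prediction is a single local method call; no server round-trip is involved. Below is a minimal sketch in Swift, assuming Apple’s sample MobileNet.mlmodel has been added to an Xcode project (Xcode auto-generates a typed MobileNet wrapper class for it); the names are illustrative, not taken from Apple’s announcement or Haddad’s demo.

```swift
import CoreML
import CoreVideo

// Classify a camera frame entirely on device.
// `MobileNet` is the class Xcode generates from the bundled .mlmodel file.
func label(for pixelBuffer: CVPixelBuffer) -> String? {
    guard let output = try? MobileNet().prediction(image: pixelBuffer) else {
        return nil
    }
    return output.classLabel  // top-ranked class name, e.g. "coffee mug"
}
```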

“Toying around with the Core ML beta, developer Paul Haddad took to Twitter to showcase how well the framework handles computer vision tasks,” MIX reports. “Using the new built-in screen recording tool, Haddad tested the capacity of Core ML to identify and caption objects in real-time.”
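A demo like Haddad’s pairs Core ML with the new Vision framework, which handles scaling camera frames and feeding them into the model. Here is a rough sketch of that real-time loop, again assuming a bundled MobileNet model; the capture-session setup is omitted, and this is a reconstruction under those assumptions, not Haddad’s actual code.

```swift
import AVFoundation
import Vision

// Receives live frames from an AVCaptureVideoDataOutput and prints
// the top classification for each one: the "caption" in the demo.
final class LiveClassifier: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private lazy var request: VNCoreMLRequest = {
        // Wrap the Core ML model for use with Vision.
        let model = try! VNCoreMLModel(for: MobileNet().model)
        return VNCoreMLRequest(model: model) { request, _ in
            guard let top = (request.results as? [VNClassificationObservation])?.first
            else { return }
            print("\(top.identifier): \(top.confidence)")
        }
    }()

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // Vision resizes and crops the frame to the model's expected input.
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
            .perform([request])
    }
}
```

Since everything runs locally, accuracy is bounded by the model shipped inside the app, which is consistent with the hit-and-miss results in Haddad’s recording.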

“Despite a few inconsistencies, Haddad expressed enthusiasm about the framework’s potential, noting that users need to point their cameras ‘at the right angle’ for optimal results,” MIX reports. “Given that apps relying on machine learning algorithms to caption objects in the real world misidentify their targets all the time, it is hardly surprising that Apple’s framework is getting stuff wrong here and there…”

Read more in the full article here.

MacDailyNews Take: Beta.

At least it didn’t call it a cheese grater.

And, depending on the CPU and GPU installed, the Core ML beta is actually correct.