IBL News | New York
At its annual developers conference, WWDC, held on June 5 in Cupertino, California, Apple showcased how much work it is doing in AI and machine learning, but it didn’t introduce any product or service related to generative AI or chatbots, as Microsoft, Google, Adobe, and other large companies recently did.
It’s Apple’s way: a practical approach to AI built around features, such as an improved autocorrect tool that runs on the iPhone and is based on machine learning.
Nonetheless, Apple’s new augmented reality headset, Vision Pro, attracted worldwide attention despite its high price tag of $3,500.
Apple wants AI models to run on its devices, unlike rivals who are building large data farms and supercomputers. As a product company, Apple highlights the feature itself and lets the underlying technology work behind the scenes.
One example was an announced improvement to AirPods Pro that automatically turns off noise cancellation when the user engages in conversation. Apple didn’t frame it as a machine learning or AI-based solution.
Another example is Apple’s new Digital Persona, which makes a 3D scan of the user’s face and body and can then recreate their likeness virtually during videoconferences held while wearing the Vision Pro headset.
Apple also mentioned other new features that draw on the company’s expertise in neural networks, such as the ability to identify fields to fill out in a PDF, or a feature that lets the iPhone distinguish the user’s pet from other cats or dogs and gather all the pet’s photos into a folder.
• CNBC: I tried the Apple Vision Pro mixed-reality headset — here’s what it’s like