Your future iPhone will clone your voice within 15 minutes - The Current

If you have an iPhone or iPad, you will soon be able to hear it speak in your own voice, Apple announced earlier this week.

The upcoming feature, “Personal Voice,” will prompt users to read a series of randomized text prompts aloud, recording 15 minutes of audio from which the device generates a synthetic version of their voice.
A second new tool, “Live Speech,” lets users type a phrase (and save commonly used ones) for the device to speak aloud during phone and FaceTime calls or in-person conversations.

Apple says it will use machine learning, a type of AI, to create the voice on the device itself rather than in the cloud, keeping the recordings more secure and private.

It might sound like a quirky feature at first, but it is actually part of the company’s latest drive for accessibility. Apple pointed to conditions such as ALS, in which people are at risk of losing the ability to speak.

“At Apple, we have always believed that the best technology is technology built for everyone,” said Tim Cook, Apple’s CEO.
The new “Personal Voice” feature, expected as part of iOS 17, will let iPhones and iPads generate a digital reproduction of a user’s voice for in-person conversations and for phone, FaceTime and audio calls.

Apple said Personal Voice will create a synthesized voice that sounds like a user and can be used to connect with family and friends. The feature is aimed at users who have conditions that can affect their speaking ability over time.

Users can create their Personal Voice by recording 15 minutes of audio on their device. Apple said the feature will use local machine-learning technology to maximize privacy.

It’s part of a larger suite of accessibility improvements for iOS devices, including a new Assistive Access feature that helps users with cognitive disabilities, and their caregivers, use iOS devices more easily.

Apple also announced another machine learning-backed technology, augmenting its existing Magnifier feature with a new Point and Speak capability in Detection Mode. The functionality combines camera input, LiDAR input and machine learning to read aloud the text a user points to.

Apple typically launches software at WWDC in beta, meaning the features are first available to developers and to members of the public who opt in. The features typically remain in beta throughout the summer and launch to the public in the fall, when new iPhones hit the market.

Apple’s 2023 WWDC conference begins June 5. The company is expected to unveil its first virtual reality headset among other software and hardware announcements.
