Apple iPhone Will Soon Speak In Your Voice After 15 Min Of Training: Here’s How It Works

New Delhi: As part of its global accessibility awareness campaign, Apple has unveiled new features for users with cognitive, vision and hearing impairments. Three notable iPhone features are on the way: Assistive Access, Personal Voice, and Point and Speak in Magnifier. Apple is also rolling out curated collections, additional software features, and more in select regions.

The company says the new features draw on hardware and software advances, including on-device machine learning, to protect user privacy.

Personal Voice is arguably the most significant of the new features. Aimed at users at risk of losing their ability to speak, such as those recently diagnosed with ALS (amyotrophic lateral sclerosis) or other conditions that can affect speech, it lets them create a synthesized voice that sounds like their own, right on the iPhone.

Apple explains in a blog post how users can create a Personal Voice: “Users can create a Personal Voice by reading along with a randomized set of text prompts to record 15 minutes of audio on iPhone or iPad. To protect and safeguard user privacy, this speech accessibility feature leverages on-device machine learning. It also integrates seamlessly with Live Speech, so users can communicate with their loved ones using their Personal Voice.”
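
For developers, Apple later exposed Personal Voice through AVFoundation in iOS 17. The snippet below is a minimal sketch of how an app might request access and discover a user’s personal voices; treat it as illustrative rather than a complete integration.

```swift
import AVFoundation

// Request permission to use the user's Personal Voice, then collect any
// personal voices available for synthesis. Requires iOS 17 or later.
func fetchPersonalVoices(completion: @escaping ([AVSpeechSynthesisVoice]) -> Void) {
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else {
            completion([]) // denied, unsupported, or not yet determined
            return
        }
        // Personal voices are flagged with the .isPersonalVoice trait.
        let voices = AVSpeechSynthesisVoice.speechVoices()
            .filter { $0.voiceTraits.contains(.isPersonalVoice) }
        completion(voices)
    }
}
```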

In addition to Personal Voice, Apple is introducing Live Speech on iPhone, iPad and Mac to help people with speech disabilities communicate. During phone and FaceTime calls, as well as in-person conversations, users can type what they want to say and have it spoken aloud.
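
Conceptually, the typed-to-spoken flow resembles ordinary speech synthesis. Here is a hedged sketch using the standard AVSpeechSynthesizer API, reusing the voices fetched above; it approximates the idea and is not Apple’s Live Speech implementation.

```swift
import AVFoundation

// Speak typed text aloud, preferring the user's Personal Voice when one
// is available. An approximation of the Live Speech flow, not Apple's code.
let liveSpeechSynthesizer = AVSpeechSynthesizer()

func speak(_ typedText: String, preferring voices: [AVSpeechSynthesisVoice]) {
    let utterance = AVSpeechUtterance(string: typedText)
    // Fall back to a default system voice if no personal voice exists.
    utterance.voice = voices.first ?? AVSpeechSynthesisVoice(language: "en-US")
    liveSpeechSynthesizer.speak(utterance)
}
```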

Assistive Access is designed for users with cognitive disabilities. By stripping away extraneous information, the feature distills apps to their essential functions and delivers a personalized experience that helps users focus on what matters most to them.

For example, for users who prefer to communicate visually, Messages offers an emoji-only keyboard and the option to record a video message to send to loved ones. Users and their trusted supporters can also choose between a more visual, grid-based layout for the Home Screen and apps, or a row-based layout for those who prefer text.

Simply put, Assistive Access gives iPhones and iPads a streamlined interface with high-contrast buttons and large text labels. Separately, a new Point and Speak feature in Magnifier will be available on iPhones equipped with a LiDAR scanner, helping users with vision disabilities interact with physical objects.
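
To picture the interface style, here is a purely hypothetical SwiftUI sketch of a grid of large, high-contrast buttons in the spirit of Assistive Access; the view, its items, and the styling are all invented for illustration.

```swift
import SwiftUI

// Illustrative only: a pared-down grid of large, high-contrast buttons,
// echoing the Assistive Access layout described above. Not Apple's code.
struct SimplifiedHomeView: View {
    let items = ["Calls", "Messages", "Camera", "Photos"]

    var body: some View {
        LazyVGrid(columns: [GridItem(.flexible()), GridItem(.flexible())],
                  spacing: 16) {
            ForEach(items, id: \.self) { item in
                Button(item) {}
                    .font(.largeTitle.bold())              // large text labels
                    .frame(maxWidth: .infinity, minHeight: 140)
                    .background(Color.black)               // high contrast
                    .foregroundColor(.white)
                    .cornerRadius(16)
            }
        }
        .padding()
    }
}
```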

According to Apple, as users move their finger across a keypad, Point and Speak reads out the text on each button, combining input from the camera, the LiDAR scanner, and on-device machine learning.
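
Apple has not published a developer API for Point and Speak, but the underlying pattern, recognizing text in a camera frame and speaking the string nearest the user’s fingertip, can be sketched with the public Vision and AVFoundation frameworks. Everything below is a hypothetical approximation, not Apple’s feature.

```swift
import Vision
import AVFoundation
import CoreGraphics

// Hypothetical sketch: recognize text in a camera frame and speak the
// recognized string nearest a pointed location, mimicking the Point and
// Speak idea with public APIs. Not Apple's implementation.
let pointSpeechSynthesizer = AVSpeechSynthesizer()

func speakText(near point: CGPoint, in frame: CGImage) {
    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        // Pick the observation whose bounding-box centre is closest to the
        // pointed location (both in Vision's normalized coordinates).
        let nearest = observations.min { a, b in
            distance(from: a.boundingBox, to: point) < distance(from: b.boundingBox, to: point)
        }
        if let text = nearest?.topCandidates(1).first?.string {
            pointSpeechSynthesizer.speak(AVSpeechUtterance(string: text))
        }
    }
    request.recognitionLevel = .accurate
    try? VNImageRequestHandler(cgImage: frame).perform([request])
}

private func distance(from box: CGRect, to point: CGPoint) -> CGFloat {
    let centre = CGPoint(x: box.midX, y: box.midY)
    return hypot(centre.x - point.x, centre.y - point.y)
}
```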

Alongside the new features, Apple will launch SignTime in Germany, Italy, Spain and South Korea on May 18, connecting Apple Store and Apple Support customers with on-demand sign-language interpreters.

To help consumers learn about accessibility features, some Apple Store locations around the world offer educational sessions every day of the week.