Apple has today given a preview of some of the accessibility features that will presumably be a part of iOS 17.
Live Speech will allow users to type what they want to say and have it spoken aloud during phone and FaceTime calls, as well as in-person conversations. Users will also be able to save phrases they use often.
Apple is taking this one step further with Personal Voice, a feature that will allow those at risk of losing their voice to re-create it using AI. Users record 15 minutes of audio on an iPhone or iPad, and Apple then uses machine learning to create a synthetic version of their voice.
Assistive Access will combine apps such as Phone and FaceTime, along with the Messages, Camera, Photos and Music apps, into an interface with high-contrast buttons and large text labels, plus other tools designed to make the experience easier for those who might have trouble with the regular iOS interface.
The last major new accessibility feature Apple has announced is Point and Speak in Magnifier, which will allow users to point the camera at objects around the house and have the text on them read aloud.
Other iOS 17 accessibility features will include: