On Thursday, Google announced a set of updates for Android and Chrome that add new AI-powered accessibility features. One of the headline changes is to TalkBack, Android's screen reader, which now lets users ask Gemini, Google's AI model, about what's in a photo or on their screen.
Last year, Google added Gemini to TalkBack so that blind and low-vision users could get descriptions of images, even ones with no alt text. Now users can also ask questions about those images. If someone sends you a photo of their guitar, for example, you can ask what color it is or which brand it might be. The same goes for anything on your screen: while shopping in an app, you can ask Gemini what material a shirt is made of or whether there's a discount.
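The TalkBack integration itself isn't something developers call directly, but the underlying capability, asking a question about an image and getting an answer, resembles what Google's public Gemini API already exposes. Here is a minimal sketch using the google-generativeai Python SDK; the file name, question, and model choice are illustrative assumptions, not details from Google's announcement.

```python
# Minimal sketch of image Q&A with Google's public Gemini API (google-generativeai).
# This illustrates the general capability; it is NOT the TalkBack feature itself.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # assumption: an API key from Google AI Studio
model = genai.GenerativeModel("gemini-1.5-flash")  # assumption: any multimodal Gemini model works here

# "guitar.jpg" and the question are hypothetical examples.
image = Image.open("guitar.jpg")
response = model.generate_content(
    [image, "What color is this guitar, and which brand might it be?"]
)
print(response.text)
```

In TalkBack, the same kind of question is asked by voice or keyboard and the answer is read back by the screen reader; the sketch only shows the question-about-an-image pattern in code form.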
Google also updated Expressive Captions, its real-time captioning tool on Android. It uses AI to caption what someone is saying, and it now also conveys how they're saying it.
Many people stretch out words to show feeling, saying "nooooo" instead of "no," and the captions now reflect that kind of elongated speech. You'll also see labels for sounds such as whistling or throat clearing. These updates are rolling out to English-language users in the U.S., U.K., Canada, and Australia on devices running Android 15 or later.
Another helpful change is coming to Chrome on desktop. Until now, a scanned PDF opened in Chrome was essentially just an image, so screen readers couldn't work with its contents. Chrome now applies Optical Character Recognition (OCR) to these PDFs to recognize the text inside them, so you can search it, copy it, and have a screen reader read it aloud, just as on a regular web page.
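Chrome ships its own OCR engine, and Google hasn't published its internals, but the general technique is easy to see with the open-source Tesseract engine. The sketch below, which assumes a hypothetical scanned page saved as an image, shows how OCR turns pixels into text that can then be searched, copied, or spoken by a screen reader.

```python
# Sketch of what OCR does in general, using the open-source Tesseract engine via pytesseract.
# This is an illustration of the technique, not Chrome's implementation.
from PIL import Image
import pytesseract  # requires the Tesseract binary to be installed on the system

# "scanned_page.png" is a hypothetical image of one scanned PDF page.
page = Image.open("scanned_page.png")

# Recognize the characters in the image and return them as plain text.
text = pytesseract.image_to_string(page)

print(text)  # once recognized, the text can be searched, copied, or read aloud
```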
Also, Page Zoom in Chrome on Android now lets you make text bigger without breaking the page layout. You can choose how much to zoom and save that preference for all websites or only specific ones. To use it, tap the three-dot menu in the top-right corner of Chrome.
With these updates, Google is making Android and Chrome more useful, especially for people who are blind or have low vision and for those who are deaf or hard of hearing. It's a meaningful step toward making technology easier to use and more inclusive for everyone.