Alt Text is (Actually) Enabled on Threads

Over the years, I’ve become more passionate about making the web an ever more accessible place. At a basic level, that means adding image descriptions (alt text) for images that I post. But I’ve also tinkered around with shortcuts that help bring more clarity to the things shared online. So I was intrigued when rumors spread about Threads not only building support for adding custom alt text to images, but also automatically generating a description if a custom one wasn’t added. It took some time, but Threads delivered.

For the past few hours, I’ve been toying around with VoiceOver, the technology in Apple’s operating systems that reads aloud what’s on the screen. It’s also how you can hear those image descriptions. (To enable it, go to Settings → Accessibility → VoiceOver. Or just ask Siri to turn it on.) With that new tool in my toolbelt, I could start testing.
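
For anyone curious where those spoken descriptions come from, here’s a minimal SwiftUI sketch of the mechanism: an app attaches an accessibility label to an image, and VoiceOver reads that label aloud. The view and asset names are hypothetical, and this illustrates the general pattern, not Threads’ actual code.

```swift
import SwiftUI

// Hypothetical view (not Threads' implementation): VoiceOver reads the
// accessibility label aloud instead of announcing a bare "image".
struct MealPhotoView: View {
    var body: some View {
        Image("meal-photo") // assumed asset name
            .resizable()
            .scaledToFit()
            .accessibilityLabel(
                "Eating my first meal on the floor of our new house with our dog."
            )
    }
}
```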

I started a thread with two posts. One had an image for which I added custom alt text: “Eating my first meal on the floor of our new house with our dog.” The other was the same image, except without any alt text. What would Threads describe it as? I turned on VoiceOver to find out and…

…my first lesson was in patience. While VoiceOver dutifully read out my custom description for the first image, the second one was just “a photo by hey.jarrod”. Threads apparently needed some time to process.

So I went scouring through my older posts with images and was delightfully surprised at what I found. Threads did a pretty good job of creating brief descriptions, which even helpfully start with “Maybe an image of” to convey uncertainty. It also read out any text it found within an image. Upon further testing, I think reading that found text is a feature of VoiceOver itself and isn’t exclusive to Threads, but it’s cool nonetheless!
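
If you want a feel for how that found-text reading might work under the hood, here’s a small sketch using the Vision framework’s public OCR request. To be clear, this is my guess at the general mechanism, not VoiceOver’s actual internals, and the function name is my own.

```swift
import Vision

// Sketch of on-device text recognition in the spirit of VoiceOver's
// found-text reading. Uses Vision's public OCR request; VoiceOver's
// internals may well differ.
func recognizeText(in url: URL) throws -> [String] {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate // slower, but higher quality
    let handler = VNImageRequestHandler(url: url)
    try handler.perform([request])
    // Keep the single best candidate string from each detected text region.
    return (request.results ?? []).compactMap { $0.topCandidates(1).first?.string }
}
```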

I headed to my timeline and tapped through more images there. Meta’s Llama generative AI model is doing some great work. I found a post that very accurately described a landscape photo as “Maybe an image of beach, long grass, horizon”, and another as “Maybe an image of pottery, coffee cup, long grass, landscape.” (I wish I could link to that post, but Threads automatically refreshed before I could save it and the post was forever lost to the algorithmic timeline. 🙃)
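
For illustration, a description like that could be stitched together from an image classifier’s top labels. Here’s a rough sketch using Apple’s built-in Vision classifier as a stand-in; this is emphatically not Meta’s Llama pipeline, and the function name and confidence threshold are my own inventions.

```swift
import Vision

// Stand-in for Threads' generated descriptions (not Meta's actual pipeline):
// classify the image, keep the most confident labels, and join them into a
// hedged "Maybe an image of …" sentence.
func roughDescription(for url: URL) throws -> String {
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(url: url)
    try handler.perform([request])
    let labels = (request.results ?? [])
        .filter { $0.confidence > 0.5 } // only fairly confident labels
        .prefix(4)
        .map(\.identifier)
    return "Maybe an image of " + labels.joined(separator: ", ")
}
```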

One more example. In another post, Threads described the final photo as “Maybe an image of two people, landscape, [the three different dog breeds].” Pretty good, right? Another hiccup, though: I wanted to go back and relisten to the post’s VoiceOver so I could document the specific dog breeds it found, but it had lost the generated description. The photo is once again blandly described as “an image”. The generative feature must not be stable yet, or Threads just hasn’t permanently attached the generated description to the image.

Okay, time to check on my test thread.

Voilà! After about an hour, Threads came through. Even though I hadn’t added a description of any sort, here’s what it thought my photo showed: “Maybe an image of one person, golden retriever, and pet food.”

[Photo: Eating my first meal on the floor of our new house with our dog.]
The photo in question.

As I mentioned in a follow-up reply, I can assure you that I wasn’t eating dog food. But as a basic description, I give it a pass! Furthermore, this is just the beginning for these auto-generated descriptions. The technology will only get more accurate, faster, and more detailed. Major swaths of the internet were once effectively hidden from folks using screen reading technology; on Threads, at least, everyone will now have a better idea of what’s shown in an image.

To be clear, I still think custom alt text is better for accurately conveying your intent when posting an image, and I’m still planning on manually adding a description to most images. But not everyone will do that, and this seems like a big win for everyone.

