We all know why alt text is important. If you don’t, perhaps you might care to read this little screed to get an idea. But I don’t want to talk about any of that. I’m going to talk more about getting fingers on the keyboard—about a few things that might make it a bit easier for everyone to get some descriptions attached to some images. TL;DR: Writing your own alt text is the best way to make images accessible to everyone. Asking for help is next best—either by asking a friend or a volunteer (through Alt4Me). If your posted images are text-heavy, there are lots of ways to grab that text and paste it into the alt-text field before you post.
I have Opinions about alt text, the first of which is that the best way to deliver accurate image descriptions and context framing for a person with no/low vision or a neurodivergent person who relies on alt text is to have that text written by a human. Ideally, this text would be prepared by the person who posted the image—they’d be the most likely to understand not only the image contents, but also any subtext they’re trying to convey. In some ways, that’s the hard way to do things. It requires the most mental effort on the part of the person making the post. But there’s help out there. There’s even a feed (created by Bluesky user melodywisp.bsky.social) that flags good examples of alt text for people to (maybe?) use as a basis for writing their own. Asking someone else to help is another option, whether that’s a nearby friend or a Bluesky volunteer. That’s not as good a solution—context matters, and it can be difficult to describe or to derive meaning from an image that’s being analyzed second-hand. From a volunteer’s perspective, doing this work can be like trying to copy handwriting while looking in a mirror. The result might be legible, but you might garble some words and letters and lose some of the intended meaning of the original.
I get that sometimes it’s hard to do the work and there are times when you just want to find an easier way to make your shit accessible. To that end, let’s talk about LLMs. Like ChatGPT. Gemini. Copilot. Those are probably the most well-known “AIs,” but there are quite a few now available that can be leveraged to “read” an image and to provide a description of what they “see.” That said, you should always be cautious about using these services. Sometimes they work. In this case, “work” means “can give you a baseline description that needs tweaking.” Sometimes they fail in spectacular ways, and you wind up writing the alt text from scratch anyway. Privacy is also a consideration—you need to think carefully before handing your artwork (or someone else’s, if you’re a volunteer) to a third party to be scraped for content and added to an LLM database. Having said all that, Bluesky user linusrath.bsky.social created an image description bot that was at one time useful in this regard. It was invoked by posting its username (@alttext.bsky.social) as a reply to the post containing the image that needed a description. That bot is no longer being maintained. An image transcription bot operates similarly, but its job is to work with text-heavy or text-only images. It can be found at @alt-text.bsky.social and is invoked in the same way as the description bot; it remains available for use as of this writing. Its author is holden.bsky.social, who created it specifically for the benefit of screen reader users. The primary intent was for those users to invoke it when someone else had neglected to add alt text to a post (it was not meant for general use). The recently published image description service maintained by coolhand.bsky.social at https://assisted.space/alt has also been taken offline and is unavailable as of this writing.
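If you do decide to go the LLM route, the plumbing itself is not complicated. Below is a minimal sketch, assuming the openai Python package and an API key in your environment; the model name, prompt wording, and file name are illustrative, not recommendations. Whatever comes back is a first draft for a human to edit, never finished alt text—and remember the privacy point above: the image is being sent to a third party.

```python
# Minimal sketch: ask a vision-capable LLM for a *draft* image description.
# Assumes the openai Python package and an OPENAI_API_KEY environment variable;
# the model name, prompt, and file name are illustrative only.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_alt_text(image_path: str) -> str:
    """Return a rough first-draft description that a human still needs to edit."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this image in two or three plain sentences "
                         "suitable for alt text. Do not guess at details you "
                         "cannot see clearly."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(draft_alt_text("my_post_image.png"))
```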
Luckily, transcribing walls of text turns out to be much easier than it looks. OCR (optical character recognition) has come a long way from its humble origins, and the options here are many and varied. Native or app-based text recognition now exists in various forms across most platforms, so let’s dig into those functionalities a little bit. On Android devices, Google Lens can select and copy text that appears on a device’s screen. This post by diamondqc.bsky.social outlines the steps it takes to make the magic happen. And there’s a pretty interesting article at computerworld.com that covers a number of other things that can be done with Lens that might find some utility in and/or around Bluesky. A user who’s very happy that iOS devices allow text capture right from the screen is tiamarie.bsky.social—this post goes into a little more detail about how that works. Another cool trick is to long-press on image text in iOS to make selection “handles” appear. Tapping on text selected in this way gives the option to copy that text. iOS also allows the use of shortcuts that can provide a bit more functionality and a few more options for taking advantage of its built-in OCR. Shortcuts by coolhand.bsky.social (here) and eustace.link (here) may be worth exploring. Last, there are apparently ways in both macOS and Windows to work with on-screen text recognition tools. There’s always a catch, though. It’s important to remember that the alt-text field has a two-thousand-character limit. This means you have to either (a) be creatively concise in your alt text or (b) split your descriptions and transcriptions across multiple panels. It also means that if you’re using OCR tools to add alt text, you need to check that your image descriptions don’t get truncated abruptly at that two-thousandth character. It may take a little bit of planning to get everything correctly transcribed and uploaded, but it’s worth the extra effort.
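If you’d rather keep everything on your own machine, a local OCR library can do the same job without sending the image anywhere. Here’s a minimal sketch assuming the pytesseract and Pillow packages (with Tesseract installed on the computer); the chunking helper and file name are just illustrations of one way to make sure nothing gets truncated at the two-thousand-character mark. Whatever the tool spits out, proofread it before posting—OCR still garbles words now and then.

```python
# Minimal sketch: OCR an image locally and keep the result inside Bluesky's
# 2,000-character alt-text limit. Assumes pytesseract and Pillow are installed
# along with a local Tesseract binary; the file name is illustrative only.
import textwrap

import pytesseract
from PIL import Image

ALT_TEXT_LIMIT = 2000


def transcribe(image_path: str) -> str:
    """Run OCR on one image and collapse stray whitespace."""
    text = pytesseract.image_to_string(Image.open(image_path))
    return " ".join(text.split())


def split_for_alt_text(text: str, limit: int = ALT_TEXT_LIMIT) -> list[str]:
    """Split a long transcription into chunks that each fit one alt-text field."""
    return textwrap.wrap(text, width=limit,
                         break_long_words=False, break_on_hyphens=False)


if __name__ == "__main__":
    chunks = split_for_alt_text(transcribe("screenshot_of_text.png"))
    for i, chunk in enumerate(chunks, start=1):
        print(f"--- image {i} of {len(chunks)} ({len(chunk)} characters) ---")
        print(chunk)
```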
I’m finally done talking. I hope that what you take from this is helpful to you. I especially hope it helps you create posts that are more accessible to those parts of our community that rely on alt text to experience the site properly.
Have fun. Happy Bluesky-ing.