Everyone’s heard the old saying: a picture is worth a thousand words. That’s now a literal truth on social media: text-heavy screen-caps of excerpts from online articles or note-taking apps, and reposts of lengthy screeds from other social media platforms, are making regular appearances on the skyline. But what of image descriptions for these posts? Even if you’re able to extract the entire content of the piece from your image, everything from character 2001 onward will simply disappear once it’s pasted into the image alt text field. So. What now?
Here’s where you get to make a choice. It turns out that a LOT of people are now choosing to use the alt text field for a shoulder-shrug of a notice, simply saying that there’s too much verbiage to transcribe. Sometimes the statement comes with an apology. Sometimes it comes with a link to another place where the full posted text can be retrieved. Either way, it’s not particularly useful if you’re someone who relies on alt text to glean information or context from posted images. On occasion there will be a summary of the text in the image; that’s the helpful middle ground that allows at least a partial window into the message being presented in the original piece.
Here’s option two. You can post multiple copies of the same image, or you can post placeholder images, up to a total of four images per post. With 2000 characters of alt text available per image, that gives you up to 8000 characters of description per post. And honestly, if the text you screen-capped runs past 8000 characters, it might be better to just link directly to the info anyway. Not sure what I mean? Have a look at this post on Bluesky and at the reply. The original image contained over 2000 characters of text, and the alt text limit was only 1000 characters at the time, so the reply used the original image and two placeholders to capture the entire content. That limit has since been doubled, so a post of this length would now occupy only two of the four image slots.
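If you want to sanity-check how many image slots a given chunk of text will need, here’s a tiny Swift sketch you can drop into a playground. The numbers are just the figures quoted above; nothing here talks to Bluesky itself:

```swift
// Figures from the post: 2000 characters of alt text per image, four images per post.
let altTextLimit = 2000
let maxImagesPerPost = 4

// Returns how many image slots a description of the given length needs,
// or nil if it won't fit in a single post at all.
func imageSlotsNeeded(for characterCount: Int) -> Int? {
    let slots = max(1, Int((Double(characterCount) / Double(altTextLimit)).rounded(.up)))
    return slots <= maxImagesPerPost ? slots : nil
}

// imageSlotsNeeded(for: 2300) == 2   (the original image plus one placeholder)
// imageSlotsNeeded(for: 8500) == nil (over 8000 characters; link to the source instead)
```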
As for actually extracting text, iOS has a Live Text tool that lets you do it on the fly. You can also use Shortcuts to do the same thing if that fits your workflow better (that’s actually my preferred method: OCR can make mistakes, depending on how the text is laid out and on the font used in the image, and a shortcut drops its output somewhere you can check before pasting). On Android, you can send your images to Google Lens, which offers a tool similar to Live Text. This video shows the basic elements of using Live Text or Lens for text extraction.
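If you’re curious what’s happening under the hood on Apple platforms, the text recognition behind Live Text comes from the Vision framework. Here’s a minimal Swift sketch of that kind of call; it’s an illustration of the public API, not the exact code Live Text or any shortcut runs:

```swift
import CoreGraphics
import Vision

// Run Vision's text recognizer on one image and return the recognized text.
func recognizeText(in image: CGImage) throws -> String {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate      // favour accuracy over speed
    request.usesLanguageCorrection = true     // let Vision clean up obvious OCR slips

    // Perform the request synchronously; in a real app you'd do this off the main thread.
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    // Each observation is roughly one line of text; keep the top candidate for each.
    let lines = (request.results ?? []).compactMap { $0.topCandidates(1).first?.string }
    return lines.joined(separator: "\n")
}
```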
I mentioned Shortcuts earlier. If you’re an iOS type of person, I’ve written a few that are linked below. One is a character counter: all it does is look at text-heavy images and count the number of characters it finds. There are several versions of another one that actually extracts the text from the image. If they detect more than 2000 characters, they’ll divide the text into segments of appropriate length to make it easier to use with placeholder images. That output is stored in a new entry in the Notes app. If multiple images are selected in the Photos app, the output for each image is labelled accordingly in the note the shortcuts generate.
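For the curious, the splitting idea itself is simple. This isn’t the logic inside my shortcuts (those are built from Shortcuts actions, not Swift); it’s just a rough sketch of the same approach, assuming the 2000-character alt text limit mentioned above:

```swift
import Foundation

// Split extracted text into alt-text-sized pieces, trying not to break words.
func altTextSegments(from text: String, limit: Int = 2000) -> [String] {
    var segments: [String] = []
    var remainder = Substring(text)

    while !remainder.isEmpty {
        // Take up to `limit` characters...
        var chunk = remainder.prefix(limit)
        // ...then back up to the last whitespace so a word isn't split in two,
        // unless the chunk contains no usable break point.
        if chunk.count == limit,
           let cut = chunk.lastIndex(where: { $0.isWhitespace }),
           cut > chunk.startIndex {
            chunk = chunk[..<cut]
        }
        let piece = String(chunk).trimmingCharacters(in: .whitespacesAndNewlines)
        if !piece.isEmpty { segments.append(piece) }
        remainder = remainder[chunk.endIndex...]
    }
    return segments
}

// A 4500-character extraction would come back as three pieces, ready to paste
// into the alt text of the original image plus two placeholders.
```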
Have fun. Happy Bluesky-ing.