Visual impairment and IT gadgets, ChatGPT, etc.
*The same content is also available on Spotify (Japanese only).
This is a bit personal, but the other day an acute eye condition suddenly clouded my vision and made me very sensitive to glare. Now that my vision has largely recovered and I can see comfortably again, I want to share what helped me while I had trouble seeing: the accessibility features of IT gadgets such as the iPhone.
With those accessibility features, for example, dark mode turns the background black and the text white, so even with my glare sensitivity I could still read on screen. In my case, iPhone features such as larger and bolder text and the magnifier made reading much easier.
Similarly, when working on a computer, dark mode let me see quite well, and although distinguishing colors became harder, I could keep reading documents that would have been impossible for me on paper alone.
That got me curious about the relationship between visual impairment and ChatGPT, which is such a hot topic these days, and I found that this kind of AI seems to be playing a transformative role for visually impaired people as well.
For example, Be My Eyes, an app that supports blind and low-vision people, has integrated GPT-4, the language model from OpenAI (the company behind ChatGPT), so that the AI can read text from an image and instantly give the user information about that image.
For example, if you send it a photo of the inside of your refrigerator, it can not only tell you what is in it but also work out what could be made with those ingredients and suggest a recipe.
Note that if the AI is unable to answer a question, the app automatically connects the user to a sighted volunteer for assistance.
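To make the flow above more concrete, here is a minimal sketch of how an app might package a photo and a question for a multimodal model using OpenAI's Chat Completions message format. The helper name `build_image_question` and the sample question are illustrative only; this is not Be My Eyes' actual code, and the request itself (with an API key) is left out so the snippet runs offline.

```python
import base64

def build_image_question(image_bytes: bytes, question: str) -> list:
    """Build a Chat Completions `messages` list pairing a user
    question with an inline base64-encoded JPEG image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return [
        {
            "role": "user",
            "content": [
                # The text part carries the user's question.
                {"type": "text", "text": question},
                # The image part embeds the photo as a data URL.
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
                },
            ],
        }
    ]

# In a real app, this list would be passed as `messages` to
# client.chat.completions.create(...) with a vision-capable model.
messages = build_image_question(b"\xff\xd8", "What is in my refrigerator?")
```

The fallback to a human volunteer would simply be a branch in the app: if the model's reply is empty or flagged as uncertain, route the same photo to a volunteer instead.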
Another example is TAKAO AI, a venture company founded out of a technical college, whose engine analyzes input images, documents, and web pages with a proprietary system and automatically converts them into Braille and into text suited to being read aloud.
Its recently released trial site additionally incorporates gpt-3.5-turbo, the large language model (LLM) from OpenAI in the U.S. that has become such a hot topic, and combines it with the analysis engine to achieve:
- organizing information scattered throughout a document by topic and giving the document structure
- generating text that is easier to follow when read by voice or in Braille
By the way, when I asked ChatGPT about this, it replied that it had no such information as of 2022.
Perhaps the information had not yet been reflected in ChatGPT, or perhaps I simply did not phrase my question well; after all, voice replies and image recognition were not added to ChatGPT until 2023.
And finally, a bit of a departure from IT gadgets: the Mahora Notebook, a simply designed notebook created based on feedback from people with developmental disabilities, with ruled lines that are easy on the eyes and easy to follow, and paper that reduces light reflection.
As I mentioned, after I became sensitive to glare it became somewhat difficult for me to read and write on ordinary white paper, and that is when I came across this notebook.
When I actually used it, the reduced glare made it much more comfortable than a regular white-paper notebook.
That is all I have to say about visual impairment, IT gadgets, and ChatGPT in this issue.
Reference Site: