How I discovered that the audio in Live Photos can help blind people identify and organize them

When Apple announced Live Photos along with the iPhone 6s in 2015, almost everyone I know or read thought they were nothing more than a stupid gimmick, and promptly turned them off. It took more than two years before the advantages of Live Photos started to show up, as mentioned recently by Allison Sheridan on her blog and in a post last year on How-To Geek; but if you’re blind, you’ve had something cool since day one.

When everyone else thought Live Photos were stupid and just a silly way to waste space, I immediately realized that they brought accessibility to photos in an interesting way. With three seconds of audio, someone could easily provide an audio label for those pictures. If a blind person went on vacation and wanted a few pictures for their sighted family and friends, they, or someone with them, could add audio like “Uncle John and Grandma on the beach” or “Julie standing near the Eiffel Tower at night.” The possibilities are endless. Then, when a blind person scrolls through their library and opens one of their Live Photos, they hear the audio and can rename the photo for faster browsing in the future.
A sighted person could even take Live Photos on their phone, adding audio tags, and then send them to a blind friend or family member.

I could even see a future version of iOS offering to transcribe the audio in Live Photos and use that text to rename the file; that would just be cool, and it would make my efficiency-loving side all warm and happy.
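
In fact, the building blocks for this exist today. Below is a minimal, untested sketch in Swift of how an app might pull the audio out of a Live Photo and transcribe it, assuming the Photos and Speech frameworks work as documented. The function name suggestName(for:) is my own invention, not an Apple API, and a real app would first have to request photo library and speech recognition authorization.

```swift
import Photos
import Speech

// Hypothetical sketch: transcribe the audio of a Live Photo so the text
// could be used as a name or label. Not an actual iOS feature.
func suggestName(for asset: PHAsset, completion: @escaping (String?) -> Void) {
    // A Live Photo stores its motion (and audio) as a paired video resource.
    let resources = PHAssetResource.assetResources(for: asset)
    guard let pairedVideo = resources.first(where: { $0.type == .pairedVideo }) else {
        completion(nil); return
    }

    // Export the paired video to a temporary file so Speech can read it.
    let tempURL = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension("mov")
    PHAssetResourceManager.default().writeData(for: pairedVideo, toFile: tempURL, options: nil) { error in
        guard error == nil else { completion(nil); return }

        // Transcribe the roughly three seconds of audio.
        let request = SFSpeechURLRecognitionRequest(url: tempURL)
        _ = SFSpeechRecognizer()?.recognitionTask(with: request) { result, _ in
            guard let result = result, result.isFinal else { return }
            completion(result.bestTranscription.formattedString)
        }
    }
}
```

With something like this, an audio label such as “Julie standing near the Eiffel Tower at night” could become the photo’s name automatically.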

Just another reminder that when a feature seems silly or useless, maybe it helps someone else in huge unimagined ways.

Instructions on how to get your Aftershokz Trekz headset to work with Bluetooth Multipoint

About a year ago, I wrote about how useful the Apple Watch was for me, but one problem remained: unless I used my mess-of-cables hack, I couldn’t get VoiceOver from both my iPhone and watch simultaneously.
Aftershokz released their Trekz Titanium model in early 2016, and many people in the blind community were excited because the company claimed the Titaniums had Bluetooth Multipoint. They did, but after an hour or two of trying to get it working, I gave up in frustration.

Then the Apple AirPods came out and I hoped they could do the trick, but no: the user has to switch between devices manually; they won’t do it automatically.
Then my friend Hai Nguyen Ly, who introduced me to bone conduction headphones more than four years ago, mentioned in passing last month that he’d gotten them working in Multipoint with both his iPhone and Apple Watch, so I decided to reexamine the challenge; this time I succeeded within about 30 minutes.

Here are the steps. They work for both the Aftershokz Trekz Titanium and Trekz Air models; hopefully they make sense.

1. First, reset the headset. Turn the headset off before beginning the reset.

2. Enter pairing mode by turning the headset on and continuing to hold the volume up button for 5-7 seconds.
You will hear the Audrey Says™ voice say “Welcome to Trekz Titanium,” and then shortly after, “pairing.” The LED will flash red and blue.
Audrey will say “Trekz Air” if you have that model instead.

3. Press and hold all three buttons on the headset for 3-5 seconds. You will hear a low-pitched double beep or feel vibrations.

4. Turn the headset back off.

5. Enter pairing mode again by pressing and holding the volume up/power button. Audrey will first say “Welcome to Trekz Titanium” and then “pairing.” The LED will flash red and blue.

6. Continue holding the volume up button while simultaneously pressing and holding the triangular multi-function button on the left. After about 3 more seconds, Audrey will say “Multipoint enabled.”

7. In your first device’s Bluetooth settings, select your Trekz model. Audrey will say “Connected.”

8. Turn the headset off.

9. Re-enter pairing mode by pressing and holding the volume up/power button. Audrey will first say “Welcome to Trekz Titanium” and then “pairing.” The LED will flash red and blue.
10. In your second device’s Bluetooth settings, select your Trekz model. Audrey will say “Connected” or “Device 2 Connected.”
11. Turn the headset off.

The next time you turn your Trekz headset on, it will connect to both devices. It works pretty well, though here are some things I’ve noticed.

If I move out of range of one of the connected devices and then move back into range, the device doesn’t always reconnect. Turning the headset off and back on reconnects both again.

I said Multipoint lets you connect two devices simultaneously, but that doesn’t mean you can hear audio from both simultaneously; only one plays at a time. This means that if I’m playing a podcast on my iPhone, I won’t hear anything from my Apple Watch, which has already bitten me a few times. If I pause the podcast on the phone, audio from the watch starts playing in about 2 seconds.
Beyond that, Multipoint is still quite useful. I can use either device in a meeting, at a concert, or at church; I can use either device while traveling in loud situations like heavy traffic; and I can use the watch where its built-in speaker would be too quiet to hear. Even with the limitations I’ve mentioned, I think you’ll still find using your Aftershokz with Multipoint a productivity boost.
Oh, and my mess-of-cables hack is still useful if I want to hear more than two devices; with that solution, the audio really is simultaneous.

How to accessibly and reliably spell check documents on iOS devices with VoiceOver

Although I suppose it was possible on older versions of iOS, until iOS 11 spell checking documents on iOS devices was extremely difficult with the screen reader VoiceOver. Occasionally, while browsing around a document, if VoiceOver said a word was misspelled you could maybe get suggestions if you happened to be exceptionally lucky. Now, with iOS 11, here’s a totally accessible and reproducible process. Not being able to reliably spell check documents on iOS was a big frustration for me; it meant that all I could efficiently do on the run was write rough drafts, which I then had to correct on my Mac back at home. Having found that spell checking is now totally doable on iOS 11, I am more than happy to share what I’ve found.

In the steps below I use the word “activate” because there are several ways to perform an action on iOS devices. If you’re using only the touch screen, I mean double tap; but a reader using a Bluetooth keyboard, a braille display, or the new O6 has several more ways to do it.

1. Open a document you want to spell check.

2. Make sure VoiceOver says “text field, is editing” and “quick nav off.”

3. Rotate the VoiceOver rotor left, often only one position, to “misspelled words.”

4. Swipe up or down to move through the list of misspelled words.

5. After stopping on a misspelled word you want to correct, change the rotor to “edit.” Edit should be one rotor position to the left of misspelled words.

6. Swipe up or down to “select” and activate it. VoiceOver should say “word selected,” where word is the word you selected.

7. Then swipe up or down until you get to “replace,” and activate that.

8. After a short wait, probably less than a second, VoiceOver will say a word, likely similar to the misspelled word you want to change. Sometimes VoiceOver may instead say “text field”; in that case, just swipe right to the first item in the word suggestions list.

9. If that is the word you want, activate it; if not, swipe right or left through the list of suggestions until VoiceOver speaks the word you want, then activate it.

10. The new word you chose from the list should have replaced the previously misspelled word you wanted to correct.

While looking at the list of suggested words, you may also change the rotor to “characters” and spell the words letter by letter. Yes, that works. Notifications arriving in the middle of the process may be a different matter, however.

After a few times through the process, you will probably find that it’s not as complicated as it looks. It works not only with the touch screen but also with Bluetooth keyboards, and if your braille display’s keyboard can use the rotor, it should work there too.

For someone who writes a lot while on the run, adding “misspelled words” to the rotor may be one of iOS 11’s most appreciated features.

My thoughts on looking at the sun last Friday with the BrainPort, in preparation for the solar eclipse

North America had its last total solar eclipse of the 20th century on February 26, 1979. My third grade teacher, Mrs. Love, told the class the next one would be in 2017. My 9-year-old brain almost had a meltdown trying to imagine how far in the future that would be; I remember when I was five thinking 20 was old.

Decades passed, and among the many things that happened, Dr. Paul Bach-y-Rita invented the BrainPort device, made by Wicab, and over time I became one of their primary testers.

In late summer 2009, I wondered if the BrainPort could show me the moon. It could, and I went on to see three lunar eclipses over 2014-15.

I then began to think that maybe I could see the upcoming solar eclipse too, so last Friday I successfully looked at the sun using the BrainPort device. I had to stand with my face pointing almost straight up, which was quite uncomfortable and made me slightly dizzy; next time, maybe I could use a chaise lounge and lie almost flat.
The sun was round and quite small, seemingly smaller than I remember the moon being. The interesting thing is that to see the sun I had to turn invert mode on, which means the BrainPort shows me dark objects on a light background. When I tried looking at the sun with invert off, meaning the BrainPort would show me light objects on a dark background, the bright sunlight completely overloaded the camera sensor. Conversely, when looking at the moon, invert mode needs to be off.

I remember that when looking at the lunar eclipse in September 2015, for a time there were clouds almost covering the moon. With invert off, I could see the moon; with invert on, I could see the clouds. The clouds visually seemed to be almost touching the moon, though I knew the moon was over 200,000 miles away. That may be one of the most transformative things I have experienced with the BrainPort.

Even though the eclipse will only be 83% here in Madison, it will still be another one of those transformative moments for me, as using the BrainPort device helps me better understand visual concepts.

Why blind people should care about social media and contact photos: facial recognition

I have observed in the blind community over the years that many of us seem to care little, if at all, about pictures. In the past, I admit, they didn’t do much for us most of the time, but times are changing.
Starting with TapTapSee and KNFB Reader, continuing with lesser-known apps like Live Reader, and now with popular apps like Seeing AI, blind smartphone users are finding more meaningful ways to take and use photos; but I have another use in mind that hasn’t been realized yet.

Sighted friends have told me that until recently many blind users, maybe unintentionally, often didn’t have any picture on their social media profiles at all. For some, it hasn’t mattered, as many of their followers are also blind; but with apps like Seeing AI, and more in the future, the potential of facial recognition will be huge.
If, or more likely when, Seeing AI or some similar app can use social profile or contact photos for facial recognition, just imagine for a moment how awesome that could be. A blind person could wave their phone camera around and find colleagues or friends at a restaurant, at work, at a conference, or at a family reunion. It would be somewhat like a sighted person looking around, seeing someone they haven’t talked to in years, and walking over to say hi.

At work or at a conference, maybe a blind person hears someone presenting and doesn’t know who they are. With an app like Seeing AI and a contact list with photos attached, they could quickly find out who was speaking without interrupting anyone else. Add multiple people asking questions after the talk, and the feature might be used several more times. That’s only one of the many ways I, or any other blind smartphone user, could more independently fit into an increasingly visual world; and yes, this is a form of augmented reality at work.
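
Some of the building blocks already exist on iOS. Here is a minimal, hedged sketch in Swift of the two pieces an app could use today: fetching contact photos with the Contacts framework and detecting faces in a camera frame with the Vision framework. The actual matching step is deliberately missing, because iOS has no public face-identification API; any such matcher would be the app developer’s own, and everything below assumes the user has granted Contacts access.

```swift
import Contacts
import Vision
import UIKit

// Fetch every contact photo, keyed by the contact's name. This is a real
// Contacts-framework API, though a real app should run it off the main thread.
func fetchContactPhotos() throws -> [String: UIImage] {
    let store = CNContactStore()
    let keys = [CNContactGivenNameKey, CNContactFamilyNameKey,
                CNContactImageDataKey] as [CNKeyDescriptor]
    let request = CNContactFetchRequest(keysToFetch: keys)
    var photos: [String: UIImage] = [:]
    try store.enumerateContacts(with: request) { contact, _ in
        if let data = contact.imageData, let image = UIImage(data: data) {
            photos["\(contact.givenName) \(contact.familyName)"] = image
        }
    }
    return photos
}

// Detect faces in a camera frame with Vision (iOS 11+). Comparing the
// detected faces against the contact photos would be up to the app.
func detectFaces(in image: CGImage, completion: @escaping ([VNFaceObservation]) -> Void) {
    let request = VNDetectFaceRectanglesRequest { request, _ in
        completion(request.results as? [VNFaceObservation] ?? [])
    }
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```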

It’s not just social profile pictures, though; more importantly, it’s the pictures that Android phones and iPhones associate with the user’s contact list. This means that some of us, me included, will have to ask people we know to send honest pictures of themselves to add to our phone contact lists; even just a head shot is good. It’s not a project I could even begin to finish in a day, so let’s start now. The effort we put into building a personal database of pictures of the people we know today will connect our digital tools to our human world more than we could ever imagine tomorrow.

How the 1st generation Amazon Dot needs a little help when plugged into external speakers

Like many, when the Amazon Echo came out in 2014 I thought it was unnecessary at best and silly at worst. I already had Siri, which was always with me, and the Echo was physically large and cost $179. Time passed, and when a hack surfaced last April that let you buy an Amazon Dot, I felt that $90, though still expensive, was far more acceptable than twice the price. The Dot was also nice and small, and you could plug in external speakers if you really wanted better sound.

So far I haven’t used the Dot for playing music, sports, or audiobooks; I know many do, I just haven’t yet, and I really prefer wearing headphones for any serious audio. I still thought, however, that a better-sounding external speaker was a good idea. The problem was that when I plugged the Dot into any of the three external speakers I tried, the speaker also produced a lot of annoying noise: not exactly a 60 Hz hum, more like alternator interference in a car. I was about to give up when Patrick Perdue told me on Twitter that a ground loop isolator might solve the problem. Two days later, after plugging one in between the Dot and any of those three speakers, the noise was gone and Alexa sounded great.

Then I thought it would be cool to pair the Dot with a Bluetooth speaker that has a built-in microphone, making the Dot portable at least around the home. No such luck: the Dot paired with the Bluetooth speaker just fine, but only for audio out. I tried both the Anker SoundCore XL and the Cambridge SoundWorks Angle. Oh well, can’t win them all; Alexa still sounds a lot better than before.

My thoughts on how both technology and new workflows improved my life in 2016

People can look around and see new things they bought over the last year, but if they think a little deeper they might realize how some of their workflows also changed. Yes, I bought a few new gadgets in 2016, but some of that was to support changes in my thinking and planning for better workflows in the near future.

I’ve had two talking medical thermometers in the past, the second of which quietly died in 2015. I’m not sick often, but I decided having a way to take my temperature was a good idea; instead of finding another blind-centric device, though, I bought the Pyle in-ear thermometer. It has Bluetooth, pairs with an accessible iOS app, and will even save temperatures along with date and time to my calendar; a workflow I hope not to need for some time yet. Oh, and it takes a body temperature in about 2 seconds instead of the 3 minutes old traditional thermometers needed; that is game-changingly awesome.
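
For the curious, saving a reading to the calendar is straightforward with Apple’s EventKit framework. Here’s a hypothetical sketch of how an app might do it; the function name and details are mine, not the Pyle app’s actual code, and a real app would handle the authorization result more gracefully.

```swift
import EventKit

// Hypothetical sketch: log a temperature reading as a calendar event,
// stamped with the date and time it was taken.
func logTemperature(_ fahrenheit: Double) {
    let store = EKEventStore()
    store.requestAccess(to: .event) { granted, error in
        guard granted, error == nil else { return }
        let event = EKEvent(eventStore: store)
        event.title = String(format: "Temperature: %.1f°F", fahrenheit)
        event.startDate = Date()  // the reading's date and time
        event.endDate = event.startDate.addingTimeInterval(60)
        event.calendar = store.defaultCalendarForNewEvents
        try? store.save(event, span: .thisEvent)
    }
}
```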

I replaced my corded hand vac with a cordless wet-dry vac, and I’m already finding that having no cord is a nice convenience, which might actually mean I use it more.

In the kitchen I now have the Instant Pot 7-in-1 programmable pressure cooker with Bluetooth, really the only way to go for a blind person. Today I ordered the Waring PVS 1000 vacuum sealer (refurbished, because the price of unboxed models jumped $75 when I wasn’t looking), which among other things may allow me to try some sous-vide cooking, along with preserving fresh food longer. I also got the Drop digital kitchen scale to measure food by weight instead of volume; I can now answer the question “What do you know?” with “a penny weighs 2 grams.”

Since iBooks came out with iOS 4 back in 2010, I’ve been reading almost daily on my iPhones, but when the cool automatic scrolling-reading option in VoiceOver broke in the first iOS 10 beta last June, I started using the Voice Dream Reader app, which I knew was awesome and had bought three years earlier; I just hadn’t used it much. The VoiceOver bug was fixed in beta 3, but I now read almost all long text with that app. I wish Voice Dream Reader had an Apple Watch app; then I’d have the smallest ebook reader ever, and since I’m blind and not screen-dependent, it would be awesome.

Already this year my 2009 MacBook Pro died, and thanks to one of my cool friends, I now have a maxed-out 2013 11-inch Air on loan. Another workflow change: I installed Homebrew instead of MacPorts this time around. The newer Air also means VoiceOver is busy much less of the time, so I can finally play more with Xcode; and I’ll have a working battery when I speak at CocoaConf Chicago in late April.

What kinds of changes did you see in your workflows last year, and how might new workflows help you in the year ahead? Rather than just coasting through life, it’s way better to “live on purpose.”