Instructions on how to get your Aftershokz Trekz headset to work with Bluetooth Multipoint

About a year ago, I wrote about how useful the Apple Watch was for me, but one problem remained: unless I used my mess-of-cables hack, I couldn’t get VoiceOver from both my iPhone and my watch simultaneously.
Aftershokz released their Trekz Titanium model in early 2016, and many people in the blind community were excited because the company claimed the Titaniums supported Bluetooth Multipoint. They did, but after an hour or two of trying to get it working, I gave up in frustration.

Then the Apple AirPods came out, and I hoped they could do the trick, but no: the user has to switch between devices manually; they won’t do it automatically.
Then last month my friend Hai Nguyen Ly, who introduced me to bone conduction headphones more than four years ago, mentioned in passing that he’d gotten his working in Multipoint with both his iPhone and Apple Watch, so I decided to reexamine the challenge; this time I was successful within about 30 minutes.

Here are the steps; they work for both the Aftershokz Trekz Titanium and Trekz Air models, and hopefully they make sense.

1. First you have to reset the headset. Turn the headset off before beginning the reset.

2. Enter pairing mode by turning the headset on and continuing to hold the volume up button for 5-7 seconds.
You will hear the Audrey Says™ voice say “Welcome to Trekz Titanium,” and then shortly after, “pairing.” The LED will flash red and blue.
Audrey will say “Trekz Air” if you have that model instead.

3. Press and hold all three buttons on the headset for 3-5 seconds. You will hear a low-pitched double beep, or feel the headset vibrate.

4. Turn the headset back off.

5. Enter pairing mode again by pressing and holding the volume up/power button. Audrey will first say “Welcome to Trekz Titanium” and then “pairing.” The LED will flash red and blue.

6. Keep holding the volume up button while also pressing and holding the triangular multifunction button on the left. After about 3 more seconds, Audrey will say “Multipoint enabled.”

7. In your first device’s Bluetooth settings, select your Trekz model. Audrey will say “Connected.”

8. Turn the headset off.

9. Re-enter pairing mode by pressing and holding the volume up/power button. Audrey will first say “Welcome to Trekz Titanium” and then “pairing.” The LED will flash red and blue.
10. In your second device’s Bluetooth settings, select your Trekz model. Audrey will say “Connected” or “Device 2 Connected.”
11. Turn the headset off.

The next time you turn your Trekz headset on, it will connect to both devices. It works pretty well, though here are some things I’ve noticed.

If I move out of range of one of the connected devices and then move back into range, the device doesn’t always reconnect. Turning the headset off and back on reconnects both again.

I said Multipoint lets you connect two devices simultaneously, but that doesn’t mean you can hear audio from both simultaneously; only one plays at a time. This means that if I’m playing a podcast on my iPhone, I won’t hear anything from my Apple Watch; that has already bitten me a few times. If I pause the podcast on the phone, audio from the watch will start playing in about two seconds.
Beyond that, Multipoint is still quite useful. I can use either device in a meeting, at a concert, or at church; in loud situations while traveling, like around heavy traffic; and in situations where the watch’s built-in speaker would be too quiet to hear. Even with the limitations I’ve mentioned, I think you’ll still find using your Aftershokz with Multipoint a productivity boost.
Oh, and my mess-of-cables hack is still useful if I want to hear more than two devices; with that solution, the audio really is simultaneous.
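A side note for the developer-minded: the headset itself decides which connected device owns the audio, but an iOS app can at least confirm where its own audio is currently going by looking at the audio route. Here’s a minimal sketch using AVAudioSession; the function name is mine, just for illustration.

```swift
import AVFoundation

// Prints the outputs the system is currently routing audio to, for example
// the name of a connected Bluetooth headset such as the Trekz Titanium.
func printCurrentAudioOutputs() {
    let route = AVAudioSession.sharedInstance().currentRoute
    for output in route.outputs {
        print("Output: \(output.portName)")
    }
}
```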


How to accessibly and reliably spell check documents on iOS devices with VoiceOver

Although I suppose it was possible on older versions of iOS, until iOS 11 spell checking documents on iOS devices was extremely difficult with the VoiceOver screen reader. Occasionally, while browsing around a document, if VoiceOver said a word was misspelled you could maybe get suggestions if you happened to be exceptionally lucky. Now, with iOS 11, there’s a totally accessible and reproducible process. Not being able to reliably spell check documents on iOS was a large frustration for me; it meant that all I could efficiently do on the run was write rough drafts, which I had to correct later on my Mac back at home. Having found that spell checking is now totally doable on iOS 11, I am more than happy to share what I’ve learned.

In the steps below I use the word “activate” because there are several ways to progress workflows on iOS devices. If you are using only the touch screen, I mean double tap; but a reader using a Bluetooth keyboard, a braille display, or the new O6 has several other ways to do it.

1. Open a document you want to spell check.

2. Make sure VoiceOver says “text field, is editing” and, if you use quick nav, “quick nav off.”

3. Rotate the VoiceOver rotor left, often only one item, until you reach “misspelled words.”

4. Swipe up or down to move through the list of misspelled words.

5. After stopping on a misspelled word you want to correct, change the rotor to “edit.” Edit should be one rotor item to the left of misspelled words.

6. Swipe up or down to “select” and activate it. VoiceOver should say “word selected,” where word is the misspelled word you selected.

7. Then swipe up or down until you get to “replace,” and activate that.

8. After a short wait, probably less than a second, VoiceOver will say a word, likely similar to the misspelled word you want to change. Sometimes VoiceOver may instead say “text field”; in that case, just swipe right to the first item in the list of word suggestions.

9. If that is the word you want, activate it; if not, swipe right or left through the list of suggestions until VoiceOver speaks the word you want, then activate that word.

10. The new word you chose from the list should have replaced the previously misspelled word you wanted to correct.

While in the list of suggested words, you can also change the rotor to character and spell the suggestions letter by letter; yes, that works. A notification arriving on the scene in the middle of this may be a different matter, however.

After a few times through the process, you will probably find it’s not as complicated as it looks. It works not only with the touch screen but also with Bluetooth keyboards, and if your braille display’s keyboard can use the rotor, it should work there as well.
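A side note for anyone curious about what’s under the hood: iOS gives developers a spell checking class called UITextChecker, which does something similar in spirit to what the “misspelled words” rotor exposes to VoiceOver users. Here’s a small sketch; the helper name is mine, just for illustration.

```swift
import UIKit

// Walks through a string, finds each misspelled word, and prints the system's
// replacement suggestions, much like stepping through the "misspelled words"
// rotor and choosing "replace".
func printSpellingSuggestions(for text: String, language: String = "en_US") {
    let checker = UITextChecker()
    let nsText = text as NSString
    var searchStart = 0
    while searchStart < nsText.length {
        let misspelledRange = checker.rangeOfMisspelledWord(
            in: text,
            range: NSRange(location: 0, length: nsText.length),
            startingAt: searchStart,
            wrap: false,
            language: language)
        if misspelledRange.location == NSNotFound { break }
        let word = nsText.substring(with: misspelledRange)
        let guesses = checker.guesses(forWordRange: misspelledRange, in: text, language: language) ?? []
        print("\(word): \(guesses.joined(separator: ", "))")
        searchStart = misspelledRange.location + misspelledRange.length
    }
}
```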

For someone who writes a lot while on the run, adding “misspelled words” to the rotor may be one of iOS 11’s most appreciated features.

My thoughts about looking at the sun last Friday with the BrainPort, preparing for the solar eclipse

North America had its last total solar eclipse of the 20th century on February 26, 1979. My third-grade teacher, Mrs. Love, told the class the next one would be in 2017. My 9-year-old brain almost had a meltdown trying to imagine how far in the future that would be; I remember thinking, when I was five, that 20 was old.

Decades passed, and among the many things that happened, Dr. Paul Bach-y-Rita invented the BrainPort device, made by Wicab, and over time I became one of their primary testers.

In late summer 2009 I wondered if the BrainPort could show me the moon. It could, and I went on to see three lunar eclipses in 2014 and 2015.

I then began to think that maybe I could see the upcoming solar eclipse too, so last Friday I successfully looked at the sun using the BrainPort. I had to stand with my face pointing almost straight up, which was quite uncomfortable and made me slightly dizzy; for the eclipse itself, maybe I could use a chaise lounge and lie almost flat.
The sun was round and quite small, seemingly smaller than I remember the moon being. The interesting thing is that to see the sun I had to turn invert mode on, which means the BrainPort shows me dark objects on a light background. When I tried looking at the sun with invert off, which means the BrainPort shows me light objects on a dark background, the bright sunlight completely overloaded the camera sensor. Conversely, when looking at the moon, invert mode needs to be off.

I remember that while looking at the lunar eclipse in September 2015, for a time clouds almost covered the moon. With invert off, I could see the moon; with invert on, I could see the clouds. The clouds visually seemed to be almost touching the moon, though I knew the moon was more than 200,000 miles away; it was maybe one of the most transformative things I have experienced with the BrainPort.

Even though the eclipse will only be 83% here in Madison, it will still be another one of those transformative moments for me, as using the BrainPort helps me better understand visual concepts.

Why blind people should care about social media photos, contact photos, and facial recognition

I have observed in the blind community over the years that many of us seem to care little, if at all, about pictures. In the past, I admit, they didn’t do much for us most of the time, but times are changing.
Starting with TapTapSee, KNFB Reader, and lesser known apps like Live Reader, and continuing with currently popular apps like Seeing AI, blind smartphone users are finding more meaningful ways to take and use photos; but there is another use that hasn’t been realized yet.

Sighted friends have told me that until recently many blind users, maybe unintentionally, often didn’t have any picture on their social media profiles at all. For some it hasn’t mattered, as many of their followers are also blind; but with apps like Seeing AI now, and more in the future, the potential of facial recognition will be huge.
If, or more likely when, Seeing AI or a similar app can use social profile or contact photos for facial recognition, just imagine for a moment how awesome that could be. A blind person could wave their phone camera around and find colleagues or friends at a restaurant, at work, at a conference, or at a family reunion. It would be somewhat like how a sighted person looks around, sees someone they haven’t talked to in years, and walks over to say hi.

At work or at a conference, maybe a blind person hears someone presenting and doesn’t know who they are. With an app like Seeing AI and a contact list with photos attached on their phone, they could quickly find out who was speaking without interrupting anyone else. Add multiple people asking questions after the talk, and the feature might be used several more times. That’s only one of the many ways I, or any other blind smartphone user, could more independently fit into an increasingly visual world; and yes, this is a form of augmented reality at work.

It’s not just social profile pictures though; more importantly, it’s the pictures that Android phones and iPhones associate with the user’s contact list. This means that some of us, me included, will have to ask people we know to send honest pictures of themselves to add to our contacts; even just a head shot is good. It’s not a project I could even begin to finish in a day, so let’s start now. The effort we put into building a personal database of pictures of the people we know today will connect our digital tools to our human world more than we could ever imagine tomorrow.
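For the developers reading along, the building block for this is already shipping: the Vision framework in iOS 11 can find faces in a photo or camera frame today. Matching those faces against contact photos is the part an app like Seeing AI would still need to add. Here’s a minimal sketch of the detection step only, with a function name of my own choosing.

```swift
import UIKit
import Vision

// Counts the faces that appear in a photo using the Vision framework.
// This is detection only; recognizing whose face it is would still require
// matching against something like the user's contact photos.
func countFaces(in image: UIImage, completion: @escaping (Int) -> Void) {
    guard let cgImage = image.cgImage else {
        completion(0)
        return
    }
    let request = VNDetectFaceRectanglesRequest { request, _ in
        let faces = request.results as? [VNFaceObservation] ?? []
        completion(faces.count)
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```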

How the Amazon Dot (1st generation) needs a little help when plugged into external speakers

Like many people, when the Amazon Echo came out in 2014 I thought it was unnecessary at best and silly at worst. I already had Siri, and it was always with me; the Echo was also physically large and cost $179. Time passed, and when a hack surfaced last April that let you buy an Amazon Dot, I felt that, though still expensive, $90 was far more acceptable than twice the price; it was also nice and small, and you could plug in external speakers if you really wanted better sound.

So far I haven’t used the Dot for playing music, sports, or audiobooks; I know many people do, I just haven’t yet, and I really prefer wearing headphones for any serious listening. I still thought, however, that a better sounding external speaker was a good idea. The problem was that when I plugged the Dot into any of the three external speakers I tried, the speaker also produced a lot of annoying noise; not exactly a 60 Hz hum, more like alternator interference from a car. I was about to give up when Patrick Perdue told me on Twitter that a ground loop isolator might solve the problem. Two days later, after plugging one in between the Dot and any of those three speakers, the noise was gone and Alexa sounded great.

Then I thought it would be cool to pair the Dot with a Bluetooth speaker that has a built-in microphone and make the Dot portable, at least around the home; no luck there. The Dot paired to the Bluetooth speaker just fine, but only for audio out. I tried both the Anker SoundCore XL and the Cambridge SoundWorks Angle. Oh well, can’t win them all; Alexa still sounds a lot better than before.

My thoughts on how both technology and new workflows improved my life in 2016

People can look around and see the new things they bought over the last year, but if they think a little deeper they might realize how some of their workflows also changed. Yes, I bought a few new gadgets in 2016, but some of that was to support changes in my thinking and planning for better workflows in the near future.

I’ve had two talking medical thermometers in the past, the second of which quietly died in 2015. I’m not sick often, but I decided having a way to take my temperature was a good idea; instead of finding another blind-centric device, I bought the Pyle in-ear thermometer. It has Bluetooth, pairs with an accessible iOS app, and will even save temperatures along with date and time to my calendar, a workflow I hope not to need for some time yet. Oh, and it takes body temperature in about 2 seconds instead of 3 minutes like old traditional thermometers; that is game-changingly awesome.

I replaced my corded hand vac with a cordless wet-dry vac, and I’m already finding that having no cord is a nice convenience, which might actually mean I use it more.

In the kitchen I now have the Instant Pot 7-in-1 programmable pressure cooker with Bluetooth, really the only way to go for a blind person. Today I ordered the Waring PVS 1000 vacuum sealer (refurbished, because the price of unboxed models jumped $75 when I wasn’t looking), which among other things may let me try some sous-vide cooking, along with preserving fresh food longer. I also got the Drop digital kitchen scale to measure food by weight instead of volume; I can now answer the question “What do you know?” with “a penny weighs 2 grams.”

Since iBooks came out with iOS 4 back in 2010, I’ve been reading almost daily on my iPhones, but when the cool automatic scrolling-and-reading option in VoiceOver broke in the first iOS 10 beta last June, I started using the Voice Dream Reader app, which I knew was awesome and had bought three years earlier but just hadn’t used much. The VoiceOver bug was fixed in beta 3, but I now read almost all long text with that app. I wish Voice Dream Reader had an Apple Watch app; then I’d have the smallest ebook reader ever, and since I’m blind and not screen dependent, it would be awesome.

Already this year my 2009 MacBook Pro died, and thanks to one of my cool friends, I now have a maxed-out 2013 11-inch MacBook Air on loan. Another workflow change is that I installed Homebrew instead of MacPorts this time around. The newer Air also means VoiceOver is busy much less of the time, so I can finally play more with Xcode; and I’ll have a working battery when I speak at CocoaConf Chicago in late April.

What kinds of changes did you see in your workflows last year, and how might new workflows in the new year help you in the future? Rather than just coasting through life, it’s way better to “live on purpose.”

My realization that blind users of VoiceOver have had touch screen Macs since 2009

In the early 1990s, Neal Stephenson released his now well-known book “Snow Crash.” Then in 1999 he wrote the even more famous book Cryptonomicon. He also wrote a lesser known and much shorter essay entitled “In the Beginning Was the Command Line.” In that essay Stephenson talks about interfaces, not just of computers but of every object we use, beginning with his best friend’s dad’s old car. He describes how, from the first mainframe terminals up to Microsoft Windows and Apple’s Macintosh, the way humans first interacted with the computer was through the command line. The command line is still great: it takes few resources, and even today it can potentially offer many more options at once than any graphical interface, often called a GUI.

The GUI was invented, though, for the same reason the command line replaced punch cards: the command line was far more efficient than punch cards for everyone, and later the GUI was more convenient than the command line and easier to use, at least for sighted people. Graphical interfaces meant people didn’t have to remember tons of commands and could become familiar with a system faster. The mind with sight available to it is great at making data points out of spatially presented, intersecting pieces of information, and the GUI is great at displaying information in two or three dimensions to the visually enabled mind, instead of the one dimension the command line presents. It was a great match, except for the abstractions we still have today. The arrival of Apple’s first Macintosh in 1984 blew the world away with its amazing graphics for the time. And the mouse? I’m sure many wondered why they would ever want a small furry rodent on their desk.


Along with computer mice, we also saw trackballs and trackpads, but they all still have the problem of dynamic rather than static reference.
When using a trackpad, if the mouse pointer is in the center of the screen and the user places a finger on the lower left corner of the trackpad and slides it to the right, the pointer will move from the center of the screen toward the center of the right edge; depending on the settings, the finger may have moved only half an inch, or six inches, still along the bottom of the trackpad. The mouse is even further removed by abstraction. I played with all three of these input devices during my years on Microsoft Windows, but was never productive with any of them.
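To make that difference concrete, here is a rough sketch in Swift, with made-up helper names, of the two mapping models: relative, where the pointer moves by a scaled delta and your finger’s position on the pad means nothing by itself, versus absolute, where a spot on the pad always corresponds to the same spot on the screen. The absolute model is essentially what VoiceOver’s trackpad commander, described below, gives you.

```swift
import CoreGraphics

// Relative mapping (mouse, ordinary trackpad): the pointer moves by a scaled
// delta, so where the finger sits on the pad says nothing about where the
// pointer ends up on screen.
func relativeMove(pointer: CGPoint, fingerDelta: CGVector, sensitivity: CGFloat) -> CGPoint {
    return CGPoint(x: pointer.x + fingerDelta.dx * sensitivity,
                   y: pointer.y + fingerDelta.dy * sensitivity)
}

// Absolute mapping (touch screen, trackpad commander): the finger position is
// a fixed fraction of the pad, so the same spot on the pad always maps to the
// same spot on the screen.
func absolutePoint(fingerOnPad: CGPoint, padSize: CGSize, screenSize: CGSize) -> CGPoint {
    return CGPoint(x: fingerOnPad.x / padSize.width * screenSize.width,
                   y: fingerOnPad.y / padSize.height * screenSize.height)
}
```

With the absolute version, touching the exact center of the pad always lands on the exact center of the screen, which is exactly the behavior I describe further down.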


In early January 2007, while I was having dinner with my friend Nathan Klapoetke, he was ecstatic about the new iPhone that had just been announced; at the time I cringed in fear, knowing that soon cell phones would no longer have buttons and having no idea how a blind person would use them.

Two years later, at WWDC 2009, Apple announced that VoiceOver was coming to the iPhone 3GS, and the blind community was completely blown away; no one saw that coming. Late in June 2009 I went to the Apple Store and played with VoiceOver for the first time. I’d already read the iPhone manual’s chapter on VoiceOver, so I had a bit of an idea what to do, or at least how to turn VO on. I only had an hour to play, but except for typing, reading text and getting around basic apps didn’t seem too bad; nine days later I bought one. The first text message I tried to send, though, was a complete disaster, but I still knew my world had changed for the better.

The idea that when you touched some part of the screen you were directly interacting with that icon or word made a lot of sense to people, blind and sighted alike. Even young children, before they can read, understand tapping on icons to start games they already know how to play. In some ways, the touch screen is the command line equivalent of visual interfaces. Being able to directly touch and manipulate screen elements is efficient on such a basic level that I wouldn’t be surprised at all if using touch screen interfaces activated the same parts of the brain as making something out of play dough or clay. There’s an interesting discussion currently going on about how Microsoft tried to make Windows 8 a touch-first interface and failed, and how Windows 10 now offers a touch-based interface for those who want it but still behaves like a traditional desktop. Apple, on the other hand, never tried to bring touch screens to macOS at all until the 2016 line of MacBooks with the new Touch Bar, which really isn’t a touch screen in the usual sense, and which apps can only treat as an optional extra since many Macs still don’t have it.

And now, as Paul Harvey used to say, “the rest of the story.” As most people would tell you, and as Google searches would confirm, there are no Apple computers with a touch screen; except, that is, if you’re a totally blind person using VoiceOver. The gestures VoiceOver users learn on their iPhones have been available on their Macs as well since Snow Leopard: with trackpad commander on, VoiceOver behaves very much like it does on iOS. If, with trackpad commander on, I touch the exact center of the trackpad, the mouse pointer is also at the exact center of the screen, and if VoiceOver announces an icon I want, I just double tap to activate it. All of the abstraction I struggled with when trying to use a mouse or trackpad without commander mode is gone; but here’s a rare moment where sight still gets in the way. It is so instinctive for someone who can see to visually follow where their hand is going that even if most sighted people turned VoiceOver and trackpad commander on and speech off while still looking at the screen, they would find it quite difficult to use; the screen being visually separate from the trackpad is too abstract for many of them. The trackpad is obviously much smaller than the actual screen, though since I can’t see it that doesn’t really matter anyway. Beyond that, as a VoiceOver user I’ve had a touch screen on my Mac for seven years. I, and probably most other blind users, still don’t use it as much as we probably should, or for many of us hardly at all, though I have found some ways in which it is far more efficient than the more traditional VoiceOver keyboard commands.


If I’m told that an interface control I want is in the lower left corner of the screen, with trackpad commander I can go there directly. If I’m using an interface with a list of items in the center and buttons around the edges, I can get to the list far faster than I could navigate there with the keyboard.

Tim Sniffen has published a book entitled “Mastering the Mac with VoiceOver,” in which he for the most part ignores keyboard commands altogether and teaches with trackpad commander mode instead. He trains many veterans who lost their sight while deployed, and says that after they become comfortable with VoiceOver on iOS, it’s an easy transition to their Macs. We VoiceOver users should probably listen more to Tim and learn from his experiential wisdom. And for the sighted and proud: if your vision ever degrades so far that you end up having to use VoiceOver, at least you’ll have a touch screen on your Mac.