Author Archives: kevinrenatojones

How the Amazon Basics microwave oven has replaced the much more expensive talking microwaves made specifically for the blind

Posted on Friday, September 6, 2019

In summer 1981, between my fifth and sixth grade years, my parents and I visited my oldest sister Kathi in Colorado. She had a microwave oven, and her kids, who were in grade school, could use it to cook or reheat foods. Kathi thought it would be good for me also, as it would be safer than a stove.
Later that summer Mom and Dad bought one.

Although not the panacea it was originally thought to be, the microwave is still quite useful; I use it almost every day. The model I’d been using from 2001 to 2018 was a mid-sized Panasonic, though I had to buy a second one after the first died in 2014; but hey, 12 years isn’t bad.
Still, one problem blind people have with just about any microwave is the flat control panel: if you’re blind, it’s hard to know where the buttons are. Many of us found ways to make braille labels and put them on microwave panels. Braille takes up a lot of space though, so some of us used Highmarks, or fabric paint, which is much cheaper, to save space; but there was only so much room. This meant that the more advanced features, and any features selected from a menu, weren’t usable unless memorized.
There were talking microwave ovens made specifically for blind people, but they were significantly more expensive, costing as much as $400, and often hard to find. Some also had fewer features or were lower powered.

Then last summer Amazon announced their Amazon Basics microwave oven, which you could control with any of their Amazon Echo devices.
I read reviews when they came out, but sadly many people thought it was silly or only a novelty. Why control something with your voice when the buttons are right there? They seemed not to think about how much more convenient it might be for someone with a disability, or even for someone without one. They also mentioned that the Amazon microwave was underpowered, and it is, at only 700 watts; it’s also smaller, so it may not serve a family of more than 2 people very well, but it’s a start. With all that in mind, I still felt it was worth an experiment.

It showed up, I plugged it in, and it configured itself. Because I had already configured an Amazon Echo device, it knew my wifi network, and once connected, it also set the correct time. Awesome; now my sighted friends won’t have to tell me that the clock isn’t set anymore.
I still had a friend mark the microwave with fabric paint though, and one day during an internet outage I had to press the buttons like an animal. Most of the time, though, unless I’m on a phone call, I control it with my voice.
One can say things like “Alexa, cook microwave for 3 minutes” and it will do that; you can also say “at power x,” where x is from 1 to 10. I had marked the popcorn, bacon, defrost, and reheat buttons on previous microwaves I’d owned, but that was about the most a blind person could do with most models.
With the Amazon model, you can say things as advanced as “Alexa, cook 8 ounces of broccoli,” “cook one cup (or bowl) of soup,” or “cook one cup of coffee.” Saying “heat” also works, and so does baking a potato. There are more commands; I’ve only mentioned a few here.
One might say that’s cool and all, but the expensive talking microwaves made for the blind could tell you how much time was left while cooking. That’s true, but you can get the same from the Amazon microwave as well. You can also add more time while it’s cooking; if you say “Alexa, add 1 second to microwave,” it will tell you something like “cooking 45 seconds on power 10.”

It is true that the Amazon Basics microwave is underpowered, so it takes a bit longer to cook things, but at the end of the day it’s really not that big of a deal. It is also quite small, so large dishes may not fit. I still call my experiment a success, with a few minor caveats.

General Electric also has a microwave model for about $150 that can be controlled by an Amazon device. It is larger, at 0.9 cubic feet, and runs at up to 900 watts, so it’s not as underpowered. It also has a feature where you can scan barcodes on frozen food packaging with your smartphone, after which your phone will look up cooking directions and send them to the microwave. In the reviews I read this seemed not to work so well, but future models will probably get better.

Even if one bought the GE model and an Amazon Echo Dot for around $200, they could have a talking microwave for about half the price of the previously mentioned talking models made specifically for the blind. As I have said, mainstream device plus talking cell phone, or in this case Amazon Echo device, equals talking, accessible mainstream device. Seemingly novelty features to many are game-changing accessibility for others. Designing with inclusion from the start is always the best way to go.


My thoughts and discoveries on how Amazon now uses proprietary AC adaptors on their Echo Dots.

Posted on January 7, 2019
A month or 2 ago, I read that the new third generation Amazon Echo Dot was no longer powered from micro USB but now required a proprietary AC adaptor. I remember the old days when every device had its own AC adaptor; those were frustrating times. I then got my own third gen Dot, and was annoyed. Then, three weeks ago, I read some tweets where others were also annoyed and questioned why Amazon made that change. I then realized, hey, I have an iDevices switch and it can measure energy, so I decided to do some testing and find out.
When idling, the 3rd gen Dot draws about 1.8 watts. I then played Beethoven’s 5th symphony at full volume and could only nudge the Dot up to 3 watts, even in the moments marked fortissimo. If someone could somehow double that, so the Dot drew 6 watts, or even triple it to 9 watts, that would still fit in the generally accepted ceiling of 15 watts on micro USB 3.1. Yes, even the third generation Echo Dot can run on micro USB, so it is not a technical limitation.
My thoughts are that people tried to power their new Dots from USB ports on their computers or old USB hubs, where they only got the traditional USB specification of 5.0 volts at 500 milliamps, which multiplies out to just 2.5 watts. I’m guessing Amazon got lots of nasty calls when people tried to play music and had problems. Instead of trying to tell people that the Dot required 1 amp, or throwing in a USB power adaptor, they just went proprietary.
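
Nothing in that guess is exotic; the arithmetic is just volts times amps. Here’s a quick sketch using the wattages from my measurements above (the port figures are the standard USB numbers, not anything I measured myself):

```python
# Rough USB power budgets vs. the Dot wattages measured above.
def watts(volts, amps):
    """Power in watts from volts and amps (P = V * I)."""
    return volts * amps

usb2_port = watts(5.0, 0.5)  # old computer port or hub: 2.5 W
one_amp   = watts(5.0, 1.0)  # a 1 amp USB power adaptor: 5.0 W

dot_idle = 1.8  # my iDevices measurement, idling
dot_peak = 3.0  # full-volume Beethoven

# A traditional 500 mA port tops out below the Dot's 3 W peak,
# which would explain the trouble people had playing music.
print(usb2_port < dot_peak < one_amp)  # True
```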
Yes, it’s still annoying, but I also still enjoyed figuring it out.

How to set up an iDevices switch as a blind person.

Posted on December 18, 2018

All over the internet you can find people writing phrases like “first world problems.” Here’s one most people probably haven’t heard before: “blind world problems,” i.e. problems sighted people will probably never experience.
Earlier this month, I set up my first iDevices switch, but until I knew what I was doing, it was quite a challenge. I had bought it some time ago, so I no longer had the box, and couldn’t get the number needed to set it up in the iDevices app from there. I looked all around the device for the number using Seeing AI (except the side with the plug that goes into the wall; I didn’t think it would be there) and gave up for the day. The next day, I got sighted help and was told it was on the side by the plug that faces the wall, where I hadn’t looked. So, blind readers: the 8 digit number you need for setting up iDevices is on the side of the switch facing the wall, in the space closest to where you would plug in the thing you want to control.
Once I knew that, setting up the switch was very easy. Both Seeing AI and the iDevices app could see it, and the iDevices app is quite accessible.
There are 2 ways to get the number of your iDevices switch into the app during setup. You can paste or type it into the edit box, but I just capture it with my iPhone’s camera, which, even as a blind person, is very doable. When the app wants the number and displays the camera view:
1. Make sure you have good lighting. Even if you could use a flash, which the iDevices app can’t, the number seems unreadable in low light.
2. Place the iDevices switch on a flat surface with the plug that goes into the wall facing up.
3. Hold the phone directly above the switch so the camera lens is positioned next to the plug that goes into the wall, towards where you would plug in the thing to control.
4. Slowly raise the phone straight up, about 1 cm per second. You should hear that the number capture was successful within 5 seconds or so.
Beyond that, the setup was very easy and accessible.
People say a picture is worth a thousand words, and I needed almost that many to try to explain where the number is on the switch. Let me know if it made sense.
One more thing: I used the same camera technique mentioned above when I set up my Apple Watch. It works very well there also.

My beginning explorations about less visual alternatives to spreadsheets for screen reader users

Posted on December 12, 2018

Almost 2 weeks ago I was listening to the Mac Power Users podcast with David Sparks and Katie Floyd. The episode had one of the best titles I’d ever seen, “My Life Is a Subscription.” That title could be a blog rant all by itself, but what they wanted listeners to get out of the episode was how to be aware of and manage their subscriptions. Sadly for me, they used the sighted person’s go-to, the spreadsheet. If you can see, spreadsheets are nice, visual, and convenient, but not so much when using a screen reader; especially Apple’s Numbers, because it has no go-to-cell keyboard command. Beyond that, even if it had one, like Microsoft Excel does, it still leaves it up to the user to figure out which cell to navigate to. I’d been frustrated before by how people love their spreadsheets, and even complained about this in a speech I gave at Madison UX 2014. David and Katie reminded me of this frustration, so I decided to explore whether there were less visual alternatives.
My friend Kyle Borah suggested databases, so I went there first.
The Mac has sqlite installed by default, and sqlite can run on just about anything. The “lite” really only means that it doesn’t run as a server and doesn’t allow concurrent users; for an individual person, it can work very well. SQL was originally called SEQUEL (Structured English Query Language), which is why people say “sequel” today. SQL is easier to learn than many other programming languages and will probably be where I play and blog in the future. But then I remembered I had a cool little app on both macOS and iOS called Soulver, and for this case, managing my subscriptions, Soulver was more than equal to the task.
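
Before leaving databases behind entirely, here’s roughly what a subscription tracker could look like in sqlite, sketched through Python’s built-in sqlite3 module. The table layout and the subscriptions themselves are made up for illustration, like everything else in this post:

```python
import sqlite3

# An in-memory database for the sketch; a real tracker would use a file.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE subscriptions (
    name TEXT,
    monthly_cost REAL,  -- in USD
    renews_on INTEGER   -- day of the month
)""")
con.executemany(
    "INSERT INTO subscriptions VALUES (?, ?, ?)",
    [("sub1", 10.47, 1), ("sub2", 5.00, 10), ("sub3", 2.99, 21)],
)

# Total monthly spend; no grid of cells to hunt through.
(total,) = con.execute(
    "SELECT ROUND(SUM(monthly_cost), 2) FROM subscriptions"
).fetchone()
print(total)  # 18.46
```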
Soulver is kind of like a cross between Notepad and a calculator: you write things in a sort of natural language and it does the math for you. Lines beginning with // are comments, so you can write notes to yourself. I didn’t feel like putting my real subscriptions into a public blog though, so I made some up. Besides, that way I could show currency conversions.

//fictitious subscriptions

//say that 3 times fast

//subscription 1 renews on January 1

sub1Annual = 100 GBP in USD = 125.69 USD

sub1month = sub1Annual/12 = $10.47

//subscription 2 renews on the 10th of the month

sub2month = $5 = $5.00

//subscription 3 renews on the 21st of the month

sub3month = $2.99 = $2.99

sub1month + sub2month + sub3month = $18.46
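
Out of curiosity, the same figures port to plain Python in a few lines. The GBP rate here is just the one Soulver happened to pull at the time, hard-coded for the sketch:

```python
GBP_TO_USD = 1.2569  # the rate Soulver looked up at the time, frozen here

sub1_annual = 100 * GBP_TO_USD           # about 125.69 USD
sub1_month = round(sub1_annual / 12, 2)  # 10.47
sub2_month = 5.00
sub3_month = 2.99

total = round(sub1_month + sub2_month + sub3_month, 2)
print(total)  # 18.46
```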


Here’s what the developers of Soulver call “back of the envelope calculations”.


//my fictitious  day trip to Chicago

//oh wait, I actually did something almost like this back in 2015

//some of these prices are also fictitious, though I tried to be somewhat accurate

The bus roundtrip from Madison was $62 = $62.00

Tickets to the Shedd aquarium and 2 other museums cost $65 = $65.00

Transportation on the L cost $15 = $15.00

two meals and a snack cost $30 = $30.00

//that Chicago pizza was worth it.



In this last example I didn’t use variables; I just wrote how someone might jot down notes on the fly. Most reviews I’ve seen unfortunately show off Soulver with screenshots, but that would be disingenuous of me, so I exported my examples from Soulver into .txt files and pasted them into this post. Soulver can export to HTML and PDF as well as text, which is great if you want to share calculations with others who don’t have it.


Soulver also has your ensemble of basic scientific functions; it can do quite a bit, so let’s go further. Wisconsin can get pretty cold in the winter, and I’ve actually heard some people wonder why heated air inside houses and buildings is so dry then, so here’s a bit of what you might learn in meteorology 101. Depending on the temperature, air can only hold so many grams of moisture per cubic meter; the warmer the air, the more it can hold. This is usually told to us as relative humidity, but to the meteorologist what’s more important is the dew point. The dew point is the temperature at which the air is totally saturated, i.e. at 100% humidity. So your furnace sucks in a bunch of cold air from outside, heats it up to 70 degrees F (21.1 C), and keeps your house warm; but without extra moisture added, the air only has what it could hold when it was outside. Let’s use Soulver to figure out a real life example. As I write this, the current temperature outside my window is 24 F, the dew point is 19 F, and the relative humidity is 79%. Let’s see what the humidity is when that air gets warmed up to a much more acceptable 70 degrees.
First let’s calculate with the original outside temperature of 24 F.


//calculating relative humidity from air temperature and dew point

//b is a constant used for calculating vapor pressure is in degrees C

b = 237.7 = 237.7

//air temperature and dew point must be converted to Celsius

//with no interface to input data, here’s where we do it

atf = 24 = 24

dpf = 19 = 19

//temperature conversions

atc = (atf-32)/1.8 = -4.4444444444

dpc = (dpf-32)/1.8 = -7.2222222222

//calculating saturation vapor pressure svp, and actual vapor pressure avp

svp = 6.11*10.0^(7.5*atc/(b+atc)) = 4.3967971796

avp = 6.11*10.0^(7.5*dpc/(b+dpc)) = 3.5564947151

//returning relative humidity

round(avp/svp*100) = 81


I know, it’s 2% off; this formula seems to lose a bit of accuracy when temperatures go below 0 C.
Now, just changing the atf from 24 to 70, I get 14. Now we know. This also demonstrates one thing spreadsheets are good at that databases are not: what-ifs. When I was talking about this last week with my friend Fintan, he said he uses what-ifs all the time in his spreadsheets at work, and that they were not so convenient with a database; Soulver can easily do what-ifs too, as we just saw above.

I also realized something else cool about Soulver. Being a programmer, instead of using Soulver for more advanced calculations like relative humidity, I had already written a function to do it in a Python math environment I put together 2 years ago; more blogs about that later. Soulver, besides being a notepad and, at least in functionality, a basic spreadsheet, can also fill in as a basic programmable calculator. You can’t do if-then or loops, but you can still do a fair amount; more than a TI-36, which can do just about anything numerical but has no programmability at all. Soulver could also be useful for someone who wants to figure out some math but doesn’t know any actual programming language. As we have found, Soulver is quite useful.
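
For the curious, the same calculation translates to a short Python function. This is a sketch of the formula Soulver just ran, not the actual function from my math environment, and it inherits the same slight inaccuracy below freezing:

```python
def relative_humidity(air_temp_f, dew_point_f):
    """Relative humidity (%) from air temperature and dew point (deg F),
    using the same Magnus-style vapor pressure approximation as above."""
    b = 237.7  # constant used for vapor pressure, in degrees C
    atc = (air_temp_f - 32) / 1.8  # convert to Celsius
    dpc = (dew_point_f - 32) / 1.8
    svp = 6.11 * 10 ** (7.5 * atc / (b + atc))  # saturation vapor pressure
    avp = 6.11 * 10 ** (7.5 * dpc / (b + dpc))  # actual vapor pressure
    return round(avp / svp * 100)

print(relative_humidity(24, 19))  # 81, matching Soulver
print(relative_humidity(70, 19))  # 14, the dry indoor air
```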
The big downside is that it’s only on Apple devices, but there is a similar program called Calca that works on both Apple and Microsoft Windows. It seems quite a bit more powerful, even able to do some symbolic math including linear algebra; playing with that is probably one of my January projects. The downside of Calca I can see thus far is that Soulver’s way of performing calculations is more convenient, closer to natural language, and for that Soulver still keeps a place in my toolbox.

How I can edit files on a remote server using TextEdit on my Mac through SSH

Posted on December 1, 2018

Two years ago, I wrote about how I use TextEdit instead of vi, vim, or nano when editing files in my macOS terminal. It’s still working well for me, but then I wanted it when remotely logged into other computers. TextMate can do that, but you need Ruby on the remote machine and have to move over a Ruby script, etc. Last month I got a Ubiquiti EdgeRouter X, which I have no regrets about even after taking all the Cisco classes up to CCNP, configuring Cisco routers for 8 months as an intern for the state, and running my home network off Cisco routers for the 14 years since 2004. I also recently remembered a NanoPi NEO I’d been neglecting for 2 years, and between the 2 devices I needed to edit files on them, and the router doesn’t have Ruby installed; I wanted my TextEdit. (Imagine a small child throwing a tantrum because they didn’t have their security blanket.)
I downloaded Transmit from Panic Software, which is an awesome program, but its price was also a little too awesome for my wallet, so back to the drawing board.
I googled around looking for a way to edit files remotely through SSH connections and found out about SSHFS. FUSE is a way to create a file system in user space, or in simpler language, a kind of virtual file system. SSHFS uses FUSE to create a session in which a specific folder on a remote machine appears in a specific folder on the machine in front of you, and keeps the two synced. This lets me use my TextEdit cheat to edit files on my Mac as I have been doing for 2 years, except now the changes get synced back to the remote machine through SSH. It’s pretty cool, but on a Mac you’ll need a few extra pieces to do it.
Hardcore *nix users might argue that a Mac doesn’t really have BSD Unix because it doesn’t ship with a package manager, but there are a few available. Homebrew is probably the best now, though it had to compete with several others like MacPorts and Fink for some years before emerging. You will need Homebrew to eventually get SSHFS, so here’s the process for getting it. First you’ll need the Xcode command-line tools if you don’t have them already. Paste the following into the terminal:

xcode-select --install

and decide if you only want the CLI tools or the whole Xcode install. With only 128 GB on my MacBook Air, I think you can figure out which I chose. The Xcode command-line tools are very nice to have, as they give you gcc, clang, and other programming goodies.
Next it’s time for Homebrew. Paste the one-line install command from the Homebrew home page into the terminal; it looks like /usr/bin/ruby -e "$(curl -fsSL …)" with the install script’s URL inside.
Next we will need 2 packages before we can get sshfs working.

brew cask install osxfuse

brew install sshfs

Now we can actually make it happen. 
First on your machine make a folder where you want the files from your remote machine to momentarily exist.
Then type something like:

sshfs username@server-ip:/path-to-folder ~/folder-for-remote-files

Obviously typing server-ip literally won’t work, and neither will username, but it should be easy to fill in your specific information there. The first time you do it, you will have to answer some questions from the Security & Privacy preference pane. After you say yes to them, it should work.
It seems to stay connected until the next reboot. You can do anything to those files you want and changes will sync between both devices. It’s very nice.

How I got past an accessibility snag buying a book on the Pragmatic Programmer’s site

Posted on November 28, 2018

Two days ago I was reading some tweets and saw one advertising a great deal. I know, the deal is probably over by the time most people read this post, but the books are still worth considering:
Brian P. Hogan: Hey folks. You can pick up my new CLI book “Small Sharp Software Tools,” my Exercises for Programmers book, or my tmux “productive mouse-free development” book in ebook form at 40% off for a limited time.

They make great gifts.

And I would appreciate the support.

Although graphical interfaces can sometimes be useful, I always have a terminal window open, so I looked at his book and decided buying it would be an upgrade to my command-line know-how. Too bad the site didn’t have more accessibility know-how.

After some stumbling around I made an account on the Pragmatic Programmer’s site and had Brian’s book in my cart, but couldn’t get to the checkout screen.

Brian suggested I write to Pragmatic’s support email, and I got back a very helpful response.

Hi Kevin:
I’m sorry for the trouble that your having.
We’ve had this come up in the past. Could you try going directly to
while logged in to your account. Let me know if that works.

Kind Regards,
Pat the Gerbil

That link worked great, and I was able to breeze through the rest of the buying process.

Although fixing this accessibility snag would be the ultimate solution, at least there is a work around, and I wanted to share it with prospective screen reader using buyers.

And now my gentle message to the rest of the readers, who are sighted. As long as ebooks don’t use screenshots to show terminal commands or program code, they are completely accessible to blind readers, and paying the same price as sighted readers is totally OK by me. “Small Sharp Software Tools” has only a few screenshots, and as far as I can tell all of the terminal output and commands are raw text. Hopefully most, if not yet all, books from ebook publishers use as much raw text as possible. There are some serious geeks in the blind community, and the more accessible your ebooks, the more we’ll buy them.
If a programming book has all of its code in screenshots, the book is completely unusable to blind readers, and they are totally excluded from the knowledge the author is sharing. I bought a book some years ago in which all of the code was in images, so beyond not being able to learn from it, that book also cost me around $30. I suppose screenshots are nice and pretty, but they’re also inaccessible to screen reader users. To any readers of this post who are also authors: thank you for considering this when writing your books in the future.

My thoughts about what blind people see, or don’t see. Spoiler: it’s not a short answer :)

Posted on November 7, 2018

Being totally blind since birth, I periodically get asked what I see, as if sighted people can’t begin to imagine what not seeing anything would be like; this is, in fact, the case. Before I had ever been in an airplane I used to have dreams about flying in one, and they were all fantastically wrong. My brain had no accurate data to base those dreams on, so it just made things up. While we’re at it, a congenitally blind person will probably not dream visually, because they have no visual memories to draw on. A recent study, however, shows that it is still hypothetically possible. Blind people who lost their sight as early as age 5 or 6 can dream visually for the rest of their lives, and many in this situation do; and now we’re getting back to the original question.

Damon Rose, a blind journalist with the BBC, has also answered this question, but very differently from what I experience. He had sight until age 13 and went blind for a different reason than I did; so beyond the fact that his primary visual cortex somewhat developed, and that his optic nerves may still have some residual connections, he also has visual memories his brain can still play with.

I was born 3 months premature and became blind in an incubator from retinopathy of prematurity: destroyed optic nerves, eyes that never grew beyond the size of a 2 year-old’s, a brain that eventually remapped. People have respectfully said to me, “If I blindfold myself I can still see black, so is that what you see?”

Seeing blackness means you can distinguish between light and dark, which means you can perceive visually; so I thought about this for a while and came up with an analogy that I think makes sense.

Say you had a radio, and it picked up a bunch of stations. We could say those stations were different colors, and the white noise or static between the stations was black; except I can’t call it black noise, because that actually exists and means near-absolute silence. Then one day all of the stations disappear with no explanation. You still turn on the radio occasionally to see if they come back, but they never do; now you’re just figuratively hearing black. Then one day you learn those stations will never come back, so you unplug the radio and throw it away. Now you aren’t even hearing black; you’re hearing nothing at all, as far as the radio is concerned, which actually would equate to black noise, and the time you had been using to listen to radio stations is now free for other things.

This is basically what the visual cortex in my brain, and those of other congenitally blind people, does. Once enough time goes by with no input from the eyes through the optic nerves, the brain begins to remap; neuroplasticity happens. As babies, humans have many billions of neurons all waiting to develop and totally impressionable; and since 90% of a sighted person’s sensory input is visual, that’s where many of them go. When there is no visual input, they get programmed to do other things. When a totally blind person reads braille, for example, the same activity goes on in the primary visual cortex that happens when a sighted person reads print. Blind people are often thought to hear better than those who can see, but deaf people are also far more observant visually than sighted people who can hear; in part, the brain’s resources devoted to the lost sense redistribute and enhance those that are left. Enhanced sometimes means increased, but not always, as in the case of children who lose a sense after age 2 or 3; but in those cases the brain is at least more focused on the senses that remain.

So in my case, what do I see? Absolutely nothing, which is not black; black would have been like the static on that radio, or the number zero. Consider for a moment that zero is far more meaningful than nothing in mathematics; it is the center of the number line, after all.