IDSE201 – Revision 4 of a thermostat
As shown above, these are the revisions of the thermostat mentioned in my previous posts. I’d like to share my design process with you so that you can see how I got from flow diagram to interface and iterated on that interface.
The basis of our research was to explore a Honeywell thermostat and provide a flow diagram which displayed the possible user actions. This particular model of thermostat gave an incredible amount of precise control to the user and installer. What resulted is a bit of a “dragon’s tail” step-by-step wizard flow for installers.
A certain amount of abstraction is necessary for sanity and effective communication, so the ‘Installer Options’ wizard was simplified to a single node among the numerous menu options.
The first step after researching an existing system is to create your ideal version of the same system. In my case, many of the controls were dropped as I found them to be superfluous. I wanted a very stripped-down system where the user is presented with very few options. I also wanted an intelligent system which takes advantage of modern technologies. These opinions are expressed via my ideal flow diagram.
I immediately thought to separate the controls into two systems. Ideally, I only wanted an Arduino in the wall with no accessible controls, but Professor Matt’s prevailing wisdom reminded me that some users occasionally lose their phones or forget them in inconvenient locations.
The mobile phone interface made a lot of sense to me, as users almost always have their phones closer at hand than the nearest thermostat control. Why not allow users the opportunity to control their thermostat remotely? Secondarily, the complexity of the schedule control in the Honeywell unit convinced me that it belongs on a phone, where users are already familiar with scheduling systems.
Ideally the system would also be able to calculate the efficiency of the home’s insulation (R factor) and use local weather data to predict necessary changes to the heating or cooling schedule.
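One way such a system might estimate insulation quality is to fit Newton's law of cooling to indoor readings taken while the HVAC is idle: the slower the house drifts toward the outdoor temperature, the better insulated it is. The sketch below is purely hypothetical (the function name and all sample numbers are invented for illustration, not part of my design):

```python
import math

# Hypothetical sketch: estimate a home's thermal time constant (a proxy for
# its overall insulation / R value) from two indoor readings taken while the
# heat is off, using Newton's law of cooling:
#   T_in(t) = T_out + (T_in(0) - T_out) * exp(-t / tau)
def thermal_time_constant(t_in_start, t_in_end, t_out, hours):
    """Return tau in hours; a larger tau means better insulation."""
    ratio = (t_in_end - t_out) / (t_in_start - t_out)
    return -hours / math.log(ratio)

# Invented example: indoor temp fell from 70°F to 66°F over 4 hours
# with 40°F outside.
tau = thermal_time_constant(70, 66, 40, 4)
print(round(tau, 1))  # time constant in hours, roughly 28 here
```

With a tau estimate in hand, the system could combine it with a local weather forecast to decide how far in advance to start heating or cooling before a scheduled setpoint change.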
After creating an idealized model, it was up to me to determine the interface I wanted to expose to the users. Initially I had wanted to present identical controls for the fan and the temperature. A quick visual pass proved that using the same up and down arrows actually confused the users rather than providing a consistent feel. The red and blue arrows were also not entirely clear in their function. Lastly, the user did not have a clear enough indication that the system was reacting to their input. I had also wanted to obscure the exact temperature, reasoning that users don't care whether it's 70° or 72° so much as whether they are warm or cool enough, but this ultimately only served to confuse them.
For my next revision, I created clearly different controls for the temperature and fan. I also used hinting via partially obscured icons to give the user a feeling that other spaces were accessible if only they swiped the screen. This is a fairly modern interaction and it hasn’t entirely caught on. For this revision I also needed to move beyond my intuition and have some measurable means of ascertaining the usability of my system. The tools I used for measurement are paper prototypes, think-aloud testing, and a SUS score.
I created screens for the Hero Flows (ideal user actions) and a few common mistaken actions in Adobe Illustrator. I then printed out sheets of screens and cut them out individually.
I ended up with quite a few screens as I have two interfaces to test, both the wall unit and the mobile app.
I needed to test this with users who were not familiar with my system. To remove any undue guidance, I used think-aloud testing where a participant receives no guidance other than a singular task from a list and simply blurts out their thoughts in a stream-of-consciousness style. An example task would be “You are too cold. Change the temperature from 70° to 72° so that you are comfortable.”
Users would say things like “Well, I guess I would click on the fan icon if I wanted to change the fan speed.” I would wait until they actually touched the paper, then present them with a new piece of paper which reflected their decision. All without saying a word. This unnerves a lot of people, as they feel they are being tested as much as, if not more than, the interface. To avoid unnecessary stress, it is always good to thoroughly preface the test with guidance and assurance.
A few techniques worked very well for me. I started by telling the user that I ‘would like to play a game with them’ rather than asking something like, ‘Would you like to test my UI?’ This kept the atmosphere playful and light. I also told the user that they were welcome to quit at any time; it’s important to give them an out. I presented an example interaction by putting an interface in front of them and actually touching the paper myself, and I reassured them that if they had any difficulty, it was because I had made a mistake in the interface.
Two mistakes I made which you should try to avoid: giving overly positive feedback when a goal is achieved (failure will sting that much more by contrast) and failing to explain the think-aloud process thoroughly. Let them know they will hear a prompt of “Please keep talking” if they fall silent for more than a couple of seconds.
After the user completed the task list, I presented them with the SUS questionnaire, a sheet which allowed them to rate their experience. The questions alternate between positive and negative statements, so participants had to pay attention and could not provide a straight-column score without trashing the interface’s usability rating. There’s some slightly tricky arithmetic with this system, but the results speak for themselves. I received an average score of 87.916, which is quite reasonable.
In a larger company, the SUS Score is more conventionally used as a point of persuasion. Typically there is a before-and-after scoring to prove that usability improved in a measurable fashion.
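For readers who haven’t run the SUS numbers themselves, the arithmetic is straightforward once written down: odd-numbered items contribute the rating minus one, even-numbered items contribute five minus the rating, and the sum is multiplied by 2.5 to land on a 0–100 scale. A minimal sketch in Python (the sample responses below are invented for illustration, not my participants’ actual data):

```python
# Standard SUS scoring: ten items rated 1-5, alternating positive and
# negative statements, scaled to 0-100.
def sus_score(responses):
    """responses: list of 10 ratings, each 1-5 (strongly disagree..agree)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly ten ratings between 1 and 5")
    total = 0
    for i, r in enumerate(responses):
        if i % 2 == 0:
            total += r - 1   # odd-numbered (positive) item: rating - 1
        else:
            total += 5 - r   # even-numbered (negative) item: 5 - rating
    return total * 2.5       # scale the 0-40 sum to 0-100

# An invented participant who mostly agreed with the positive statements:
print(sus_score([5, 2, 4, 1, 5, 2, 5, 1, 4, 2]))  # → 87.5
```

Averaging each participant’s score across the whole group is what produces a figure like my 87.916.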
Despite my high score, there were some serious flaws. Some users could not understand the partially obscured icons or the notion of swiping between spaces. Other users did not appreciate the mix of icons and text, as that makes internationalization and localization efforts more difficult and assumes a level of literacy on the part of the user. The most damaging flaw, however, was the lack of a confirmation on the schedule flow. When a user completed the task of setting the new schedule, they would tap the home button on the interface, which would leave the app instead of committing the change. A simple ‘Done’ button would alleviate that issue.
With this latest revision, I have made some considerable improvements.
Firstly, the icons are all in a fixed position which perfectly mirrors the wall interface. Learn how to use one, and you’ll know how to use the other. No more hinting or spaces. The user has every option available to them at all times.
The second most obvious change was to bring back the Schedule shortcut at the bottom of the interface. This is a nod to the genius of Forecast.io; if you haven’t tried it on a mobile device, do so now. You’ll thank me later. Their spring-up space for the ‘Next 7 Days’ forecast is a great way to expose a complex bit of information behind an unassuming control.
I also added a ‘Done’ button on the ‘Add Schedule’ flow to specifically guide the user to commit their change.
In the latest revision of my annotated wires, I specifically call out new animations on the icons to create fun moments of interaction as the user switches between controls. The thermostat fills from the bottom, the fan blows the other controls away, and the system control gear spins while sliding the other controls away.
While I am dying to get a demo spun up via Meteor, I have been told to hold off for one more week to clean up my specification a little more.
We’ll see if my willpower holds out over the Thanksgiving break…
In case I haven’t mentioned this before, please provide feedback via the comments or one of my numerous methods of contact. Thanks!