Honeywell Thermostat Redesign: A Final Crack at It!
As a user, I am willing to learn the navigation of an interface to a certain degree the first time that I use it. After that, I expect the learning curve to drop significantly: once I've learned the primary behaviors that make the system work, I should be able to apply them to secondary functions in the interface. The iPhone navigation system is a great example of this: the swipe gesture that unlocks the phone (indicated with an arrow) is the same motion that you then use to navigate through the pages. The system has trained you and given you a mental model that you can carry throughout the rest of the system.
“Mental model” was a term that we beat to death throughout the iterative process of redesigning the overly (to put it kindly) complex and complicated Honeywell thermostat. A “mental model” draws on the previous experiences that a user has had with a system. In the case of the iPhone, Apple really created its own mental model for operating a smartphone, introducing you to a behaviour that would be repeated throughout the system from your first interaction with the phone. With the thermostat, I was bringing forward the previous experiences that I'd had using a thermostat (and later my users were doing the same). Pictured below is the beautiful Austin Energy-issued “energy saver” thermostat that I use daily in my own home. It's so awful that I've never actually figured out how to use it. You have to dive so deep into numerous screens to manually enter each day's schedule that by the time you're on Wednesday, you have the sinking, frustrating feeling that you're an hour away from done and not even really sure that you're doing it right.
That is exactly the opposite of the feeling I wanted my user to have when using my system. I wanted them to feel an awareness and control of all space, temperature, and time at any given moment. I started out my first iteration not really being able to articulate this strategy. I knew that I wanted very clear and direct paths to the spaces within the system, and to greatly reduce their number. I started by creating a concept model of the existing system, and then of how I wanted to redesign it. A concept model is an abstracted visual model of the spaces in a system and how they relate to each other. If you look at the post that I wrote about these models, it's apparent how I wanted to simplify the user's navigation by reducing the number of spaces. I shaved a lot of function out of my design that I viewed as clunky and, frankly, not useful.
I also wanted to add a “learning” feature: it would automatically record the climate control behaviour of the user and then play back the recorded behaviour as a learned schedule. I loved this idea because I was using the mental model that I had learned from my own terrible thermostat at home: scheduling a thermostat is more trouble than it's worth. So, great, I'll have the system do it for me. I have to admit, though, that I was really defaulting to “ignorance is bliss,” and I later realized that I was actually choosing perceived control over actual control. If I can design for real control, I think that's ideal. My first wireframes, which are a very basic tool for creating an interface's skeleton design so that we can test and rate its function with users, featured the learning function and several climate control spaces to navigate through.
My first wireframes actually tested pretty well. We use a user testing method called “Think-Aloud Testing,” which I detail in a previous post here. This method requires the user to speak aloud each decision they make while trying to reach the goal in each flow of the system. It is great because they say out loud what they are thinking when they choose the right path to reach the goal and, even better, the WRONG path, called a Critical Incident. A Critical Incident occurs when the user cannot complete their goal within a system; it is great feedback for the designer. It was through this testing that I stumbled across a great and slightly depressing insight: a design that works and scores well is not necessarily a good design! I detail this realization here.
Upon realizing that my own design, which scored well with testers, was not a system I was interested in using, I really had to assess my priorities for the next iteration. I remember Matt Franks, our professor and leader in this venture, saying at the beginning of the quarter that the first iteration would be the hardest, because we'd be creating each frame in Illustrator, and the following iterations would just be edits on that version. Reflecting back, I think I made six completely different designs and one measly iteration! I did get a lot faster at iterating, though, and my design acuity really improved (if I do say so myself :).
I realized that what I really wanted was to have all of the information about what was happening in my thermostat in front of me at all times. The current temp and the schedule needed to reside in the same space. I wanted to feel like I could SEE what was happening at the current time, and at any given time in the schedule, with a swipe or a drag, NOT a million and one click-throughs and guesses. To conceptualize this space, I used one of the mental models from my daily life.
When I’m not iterating on thermostat wireframes and doing design research, I am an artist and I teach college students photography. This involves a lot of Photoshop. One of my favorite features of Photoshop that I am constantly preaching about to my students is the Curves adjustment layer. It looks like this:
The histogram in the background shows every tone in the photograph. The diagonal line traversing the space can be moved to manipulate any of those tones. This is done using points on the line: you click on one with your cursor and pull it up or down to reshape the line, and therefore the tones that correspond to that point. I liked the idea of having that kind of finger-to-screen control in a thermostat interface. Tap the line. Push it up to raise the temp. Pull it down to lower it. Release, and you're done. It seemed intuitive to me, so I resolved to try the Curves mental model in my user interface. My first crack at using it can be viewed HERE.
I was excited to test it, thinking that I was really getting closer to the interface that I wanted to design and, as a user, wanted to use. The testing failed miserably. My users did not intuitively want to move the dot to change the temperature; they simply didn't know what to do with it.
I anticipated a pretty bleak review session with Matt, but he pointed out that I had used a mental model I was familiar with from Photoshop, one my users probably didn't share, and that there was no visual indication on the screen that those dots corresponded to actual degrees and to temporal space. Duh!
This was an exciting turning point in realizing how to make my interface more “human”: I needed to help the user see, on the interface itself, that the space they are navigating through is actually time and temperature, and that cue is a mental model unto itself.
So, without further ado, I present my final iteration of my thermostat:
From this screen, the home screen (which is also the only screen), you can change the current temp and the scheduled temp, either by tapping the top menu of weekdays or simply by scrolling forward; you then add a point at the day and time you desire by double-tapping the screen. All of the buttons on the left-hand side are touch buttons; each indicates on by turning gray and off by turning white. The screen goes dark when the system is off.
Shown below is the flow to change the current temperature. When the user double-taps the current temp bubble (the large one with the number in it), a hotdog-shaped track appears that allows the user to drag the bubble up and down. The sliding number also juts out to the side so that it can be read while the user's finger is covering the number inside the bubble.
When the user removes their finger, the track disappears, leaving the set temp in the bubble and the current running temp just below it in gray. See below:
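For readers who think in code, the drag interaction above can be sketched as a simple mapping from finger movement to a set temperature. This is only an illustrative sketch of the idea, not the actual implementation; the function names, the drag sensitivity, and the 50–90 °F clamp are all my own assumptions.

```typescript
// Hypothetical sketch of the drag-to-set-temp flow described above.
// PIXELS_PER_DEGREE, MIN_TEMP, and MAX_TEMP are assumed values, not
// numbers from the real design.

const PIXELS_PER_DEGREE = 8; // drag sensitivity: 8 px of drag = 1 degree
const MIN_TEMP = 50;         // lower clamp, degrees F
const MAX_TEMP = 90;         // upper clamp, degrees F

interface DragState {
  startTemp: number; // temp shown in the bubble when the finger went down
  startY: number;    // finger y-position at the start of the drag
}

// Map the current finger position to a set temperature:
// dragging up (smaller y) raises the temp, dragging down lowers it.
function tempFromDrag(drag: DragState, currentY: number): number {
  const deltaDegrees = (drag.startY - currentY) / PIXELS_PER_DEGREE;
  const raw = Math.round(drag.startTemp + deltaDegrees);
  return Math.min(MAX_TEMP, Math.max(MIN_TEMP, raw));
}

// Example: starting at 72 degrees and dragging the bubble 40 px upward
const drag: DragState = { startTemp: 72, startY: 300 };
console.log(tempFromDrag(drag, 260)); // 77
```

Releasing the finger would simply commit the last value returned, which matches the "release and you're done" behaviour I borrowed from Curves.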
This flow is designed, like the iPhone swipe maneuver that I detailed at the beginning of my post, to be a quick and easy behaviour for the user to learn and then repeat to navigate the rest of the system. I think it's important to note that a user is OK with a slight learning curve at the beginning of using a system, as long as it is short and easily overcome. The thing that I learned in this round of user testing, however, is that a user can't learn the behaviour of a touch screen and react to it on a set of paper wireframes. Paper doesn't react to touch. So the user looks at the interface and guesses which behaviour is appropriate, instead of touching the screen, seeing its reaction, and then responding to that.
So, when my users tried to change the current temp, some assumed the arrows meant "tap up or down" because the screen couldn't react. It would be like trying to test the slide feature on the front of the iPhone and tapping the arrow that points to the right instead of sliding it, because the system couldn't react either way to correct your behaviour!
So, this is where I am in my system design. I'm happy with the space of my system: the user can see all of the information at hand and manipulate it in what feels like a physically and mentally intuitive way. However, I feel at a bit of an impasse about the gap between the inevitable learning curve of a new system and the inability of paper wireframes to react in order to teach the user those behaviors.
A link to the full PDF of my final wireframes is HERE