At Austin Center for Design, one of the nicest things you can hear is Jon Kolko saying, “You made a thing. I’m proud of you.” You made a thing? A major tenet of the curriculum is learning to make artifacts as tools to ask questions and make arguments. Every step of our seven-week-long, iterative assignment to redesign the smartphone app for Austin’s public transit system, CapMetro, has been about exploring how to create artifacts that ask a question and gather the information needed for that stage of the process.
The first week we created a system flow with screenshots from the existing CapMetro app, and then created a concept map of the existing app and a concept model of our proposed redesign. A concept map is a visualization of the structure of a system, used to understand complex interactions that cannot all be viewed simultaneously in the actual product. We started here because this was the right thing to make for the questions we were trying to ask, namely: How does the existing app work, and what is wrong with it? What should we do instead? If we had started by recreating and then improving upon a single screen from the CapMetro app, we would have created an artifact excellent for asking questions about CapMetro’s typographic and layout choices, but one that would not provide useful information about the overall structure of the system.
The second week we started creating wireframes for our proposed new system. Wireframes represent the layout of a screen, focusing on functionality and interactions rather than visual design. We have iterated on our wireframes each week for the last six weeks, updating our concept model along the way to reflect each round of changes.
At week three we had iterated on the wireframes once, and it was time to go out and get feedback from real people. To do this we used a method of user testing called the think-aloud protocol with paper printouts of our wireframes. The user is given a task to perform using the wireframes. While completing this task, the user is instructed to “think out loud,” verbalizing each action as it is taken. During the test the researcher acts as the “computer,” bringing up the appropriate screen in response to each of the user’s actions.
Each subsequent week of iterating and testing has given me the opportunity to see how the fidelity of my wireframes, their closeness to the actual thing the artifact represents, affects the type of feedback I receive. In addition to documenting at least three critical breakdowns to address each week, I have drawn a high-level takeaway from each round of testing, and many of these takeaways have involved fidelity. A few weeks ago my takeaway was about the need for better visual consistency and hierarchy in my typography and graphic elements throughout the system, a fidelity issue. Last week, I realized that I was trying to test navigation at a level that depended on the user having a spatial understanding of how on-screen and off-screen elements related to one another, a relationship largely communicated through animation that was not coming through on paper screens. For my final user tests I created an on-screen, clickable prototype so I could model some of those animations.
[Link to clickable prototype here: http://invis.io/9A1W9OYYJ ]
What I found from my user testing is that the fidelity of the clickable prototype created a number of issues of its own. Because the prototype looked very real, and some of the gestures felt like a real application, users were derailed when it did not behave the way a real application should. This was especially true for the keyboard, which was just an image with certain sections mapped as hot spots linking to other screens, and for the map, which was also a static image and so did not allow for zooming in and out.
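The actual prototype was built in InVision (linked above), which handles all of this through its interface, but it may help to see why a click-through prototype breaks down in exactly this way. Here is a minimal sketch in TypeScript, with all screen names, image paths, and coordinates hypothetical: every screen is just a static image with rectangular hot spots, and a tap either lands on a hot spot and swaps in the linked screen, or does nothing at all.

```typescript
// Minimal model of a click-through prototype. All screen names, image
// paths, and coordinates here are hypothetical, invented for illustration.
interface HotSpot {
  x: number;      // left edge of the tappable region, in pixels
  y: number;      // top edge of the tappable region, in pixels
  width: number;
  height: number;
  linkTo: string; // id of the screen this hot spot navigates to
}

interface Screen {
  id: string;
  image: string;       // path to the static wireframe image
  hotSpots: HotSpot[];
}

const screens: Record<string, Screen> = {
  home: {
    id: "home",
    image: "wireframes/home.png",
    hotSpots: [{ x: 0, y: 600, width: 320, height: 48, linkTo: "nextBus" }],
  },
  nextBus: { id: "nextBus", image: "wireframes/next-bus.png", hotSpots: [] },
};

let current = screens["home"];

function tap(x: number, y: number): void {
  // A tap either lands inside a hot spot rectangle and swaps the screen,
  // or it does nothing at all. There is no third behavior.
  const hit = current.hotSpots.find(
    (h) => x >= h.x && x < h.x + h.width && y >= h.y && y < h.y + h.height
  );
  if (hit) {
    current = screens[hit.linkTo];
    console.log(`Showing ${current.image}`);
  }
  // Keys tapped on an image of a keyboard, or pinches on an image of a
  // map, all fall through here and are silently ignored.
}

tap(160, 620); // hits the hot spot and "navigates" to the Next Bus screen
tap(160, 100); // misses every hot spot; the prototype appears unresponsive
```

That silent “does nothing” branch is exactly what my users hit when they typed on the keyboard image or tried to pinch-zoom the map.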
“Wait, the keyboard is broken…” – User 2
Despite these issues, I got useful validation on my updates to the real-time bus arrival feature, called Next Bus (see the Next Bus Flow below), and feedback that led to a final tweak in my ongoing attempt to integrate step-by-step written directions to a destination with the map of the route (see screen 27 and screen 29 from the Search Flow).
“I don’t expect to swipe [between things] on a map.” – User 4
So, that’s it for this quarter. I made a thing, and I’m proud of me. I have learned about the specifics of designing a transportation application and about the tools and techniques available for wireframing, and I have continued to hone my ability to craft artifacts that elicit useful feedback.