Evaluation and Redesign

Last quarter in our Ideation class, we created wireframes for a redesign of the myAT&T account management application. Moving into the third quarter, for our first evaluation project, we performed additional think-aloud user testing on our wireframes, along with a heuristic analysis and a cognitive walkthrough. The full set of screens the evaluations were performed on can be found here.

The AT&T application was chosen because it is in need of some serious TLC. The current state is confusing for users to navigate, and it makes it difficult for them to perform tasks they had initially been able to do with the application. Our group surveyed users to find their biggest pain points with the application, and also looked through reviews to see what people were saying. Most of what we heard were things like “it’s an unending maze” or “It’s very difficult to find various device plans and you cannot change the options when you do find them.” These are serious issues, because those tasks are exactly what the application is meant to support.

Creating a set of screens and flows with tasks to accomplish was the next step in redesigning a better account management experience. Much of this initial wireframing was spent experimenting with the placement of different options and with ways to surface the important pieces of information without the overload present in the current application. By performing think-aloud user testing with each iteration, the design was molded by real-world users and their opinions on how applications should function. Think-aloud user testing is one of the most effective forms of evaluation. Testing in this way allows a designer to understand the interaction a user has with the screens, as well as gaining insight into what users are thinking as they navigate. The hope is that as the user accomplishes specific tasks, they will speak out loud, describing what they’re doing and how they feel about the flow.

Through the most recent round of user testing, the design I created was tested by four users. The feedback I received was valuable, mostly because the things pointed out as issues in the design were things I had overlooked and not even considered to be problems. The biggest issue was the need to normalize language, with Ken telling me “I don’t know what suspending a device is, but this is how I do it,” and another user telling me “I hope this doesn’t charge me more for the same plan I already have, I have no way to see my total.” These pieces of information, along with the other recordings of these users, give insight into the missed pieces and the opportunities for further iteration and improvement.

The feedback received directly shaped the screens below, which are posted along with their predecessors. The redesigned screens use language more indicative of the task users are trying to complete, and the Autopay slider has been moved to a new screen so it is not accidentally toggled.

[Screens: suspend device 18, change plan 3]

The screen on the left shows the device options view, and on the right you see the plan changes. Both screens are crowded, and the issues the users complained of are evident.

[Screens: deviceop, editplan, minutes, minutes1]

The leftmost screen here is the redesign of the device view, and the other three are the plan change screens. The plan changes have been exploded into separate screens and show the new costs and what is being paid for more clearly.

Think-aloud user testing is only one of the tools used to evaluate the application design, though. Along with the think-aloud protocol, I performed a cognitive walkthrough on my application flows to find problems with the usage and language they contain. A cognitive walkthrough uses a set of four questions to evaluate each screen for its efficacy. These are:

Will the user try to achieve the desired effect?
Will the user notice that the correct action is available?
Will the user associate the correct action with the effect they’re trying to achieve?
If the correct action is performed, will the user see that progress is being made to the solution of their task?

These four questions allowed a deeper dive into the user’s interaction with each screen, and the walkthrough backed up the user testing results. Each “no” response is recorded and rated by severity and frequency to document how big an issue it is. Severity defines when it should be fixed (immediately, before release, or later for smaller problems), while frequency shows how often the mistake occurs and helps identify patterns of poor design strategy. It also showed how badly the design needed iteration after iteration to get it right.

In addition to the issue description, severity, and frequency, there is a column in the test for redesign notes, so that by the time the walkthrough is complete there is a set of principles to move forward with in redesigning the application. Using this tool alongside user testing was valuable because it allowed a new, more objective view of the design I had created, as well as a closer inspection of the elements contained within it.
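To make the format concrete, a single, purely illustrative walkthrough entry might read something like this:

Screen: Change plan
Question 4: No; the confirmation does not show the new monthly total
Severity: Major; fix before release
Frequency: Appears across several of the task flows
Redesign note: Surface the full plan cost before the user confirms the change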

The cognitive walkthrough returned a good deal of criticism of the modals, and some flows were exploded because of it: screens were created where they should have been, and the modals that should have been screens were eliminated. The cognitive walkthrough was also the first time I stepped back from the application and asked why I did the things I did. It made me realize how important spacing is, and how important it is to make the information on a screen digestible. You can see that in the earlier iteration everything was crammed together, with modals flying everywhere. In the redesign, most modals have been eliminated except where necessary, and all of the text and buttons have gotten enough breathing room to feel less bloated and crammed.

[Screens: change plan 4, paybill 2, viewstatement 9]

The modal screen on the left caused a lot of trouble for users during testing, and was consistently flagged in the walkthrough because it does not clearly indicate movement through the flow. The walkthrough also showed that the full statement view was disconnected from the rest of the flow, as it is two taps deep behind the same button name.

[Screens: editplan, confirmedit, payment, statement]

The screens on the left correspond to those on the left above, and the two on the right correspond to the statement view on the right above. Both sets of screens were designed with more space, more indicative terms, and a more effective design language so that users can quickly understand the effect of their action.

Evaluation does not stop there, however. The next step was to perform heuristic analyses and discuss the findings with the evaluators. Evaluators individually assess the application against a set of ten heuristics (listed below), and then come together to discuss redesign strategy and any other issues they may have seen during their evaluation. Similar to the walkthrough, the issues recorded are rated with the same severity and frequency ratings, along with a potential resolution for each issue found.

A heuristic analysis is a test in which an evaluator goes through the application at least twice. The first pass is to get a general understanding of flow and movement within the application, and the second is a more detailed look at the specific interactions within the tasks to be completed. The test relies on set criteria, or best practices of application design, looking for things like consistent text, normal language, and appropriate wayfinding for the user. It is performed, ideally, by at least three evaluators. After the tests are completed, a debrief is held where overall issues with the application are discussed alongside the successful pieces of the flows. The ten heuristics are:

Visibility of system status
Match between system and the real world
User control and freedom
Consistency and standards
Error prevention
Recognition rather than recall
Flexibility and efficiency of use
Aesthetic and minimalist design
Help users recognize, diagnose and recover from errors
Help and documentation

This list helps determine whether an application design will allow users to navigate it intuitively, recover from errors, and learn the system easily, while providing a consistent and visually pleasing experience. These evaluations were performed by my classmates and me, to return the most informative results possible. The heuristic analysis further reinforced the issues users brought up in testing, and also surfaced some that were not touched upon in the cognitive walkthrough or the user testing.
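A heuristic finding is written up in a similar format, with the violated heuristic attached. As an illustrative example, the hidden password requirements might be logged something like this:

Heuristic: Error prevention
Issue: Password requirements are not visible until an invalid password is rejected
Severity: Major
Potential resolution: Show the requirements on the entry screen, before the user types a new password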

The heuristic analyses on my flows returned nearly 100 points of interest to take into consideration when moving forward. The biggest findings were the lack of visibility of password requirements until you make a mistake, and the overwhelming feeling users get when trying to edit their plan details. Both of these have been redesigned, though they are possibly not much better than they were before. Below, the screens are shown with the prior iteration above the most recent.

[Screens: paybill 2, changepassword 32, changepassword 34]

The screen on the left highlights where the evaluators believed the Autopay toggle was too conspicuous and easy to flip unintentionally. To the right are the password change flows, which were problematic because users could not see the requirements before entering a new password.

[Screens: payment, autopay, password, passwordreq]

The order is the same as above, but Autopay has been moved to its own screen, and password requirements are now shown while the new password is being entered.

Overall, the three evaluations of the redesigned application show an overuse of modals, an overuse of jargon that is unclear to a typical user, and a lack of space. The most valuable experience from testing was the realization that I quite enjoy performing usability testing. More importantly, it has made me take a step back while creating, and think more thoroughly when designing an interaction someone may actually see. The full set of redesigned screens to this point can be viewed here. They are currently a work in progress, and will probably deviate from their current form.

Looking forward to the coming weeks, there is time for further iteration and development, along with testing to refine the interface into something attractive, usable, and intuitive. Soon, developers will be working with these flows to explain what is and is not possible, and evaluating them against other criteria such as expense and simplicity of development. They will help with estimating timelines for production and testing beyond the current wireframe state. I am excited to understand the way development works and the challenges it presents, and to code my own piece of the application.