myAT&T Mobile App Redesign: Evaluation

Last quarter, we redesigned the myAT&T app. Smartphones have become more and more important to our everyday lives, yet managing a mobile phone account remains frustrating and confusing.

The Design Process

Before we started designing, we sought to understand the key purpose of the app. To do so, we surveyed about 20 people to assess what was most important to them when managing their account.

We found that the user wants to:

  • feel confident that she has the right data plan for her needs: not too much data (paying for data she never uses) and not too little (getting hit with data overage fees);
  • be able to easily upgrade her phone; and
  • never miss a bill payment.

Given this understanding, we decided to design for the following flows: 1) managing data, 2) changing your plan, 3) upgrading your phone, 4) paying your bill, 5) setting up autopay, and 6) managing the security settings of your account (e.g., changing your password or updating your email address).

Evaluation

Once we completed our designs, the next step was to evaluate our work. This process included three standard evaluations:

  1. The Cognitive Walkthrough
  2. Heuristic Evaluation
  3. Think Aloud Testing

The Cognitive Walkthrough

Developed in the 1990s, the Cognitive Walkthrough is a method for evaluating the learnability of a product, based on a theory of how people solve problems in unfamiliar situations.

When confronted with a new, unfamiliar situation, people solve problems through exploration and progressive refinement of their behavior, generally following these steps:

  1. We determine what controls are available for us to take action.
  2. We select the control that makes the most sense towards incremental progress.
  3. We perform the action using the control.
  4. We evaluate the results of our action to see if we have made positive, negative, or neutral progress.

The evaluation is based on these steps, requiring the evaluator to ask the following questions:

  1. Will the user try to achieve the right effect? The user of the system has an end goal in mind, but needs to accomplish various tasks to complete it. Will they even know to perform the specific steps along the way?
  2. Will the user notice that the correct action is available? Is the option visible and on the screen, or at least in a place the user will likely look?
  3. Will the user associate the correct action with the effect the user is trying to achieve? Is the label or icon worded in a way that the user expects?
  4. If the correct action is performed, will the user see that progress is being made towards their goal? Is there any feedback showing that the user selected the right option or is making forward momentum?
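
These questions are asked for every step of every task. As a concrete aid, here is a minimal sketch, in TypeScript with illustrative names (not part of the formal method), of how an evaluator might record the four answers per step:

```typescript
// Illustrative sketch: one record per step of the task under evaluation.
// The four boolean fields mirror the four Cognitive Walkthrough questions.
interface WalkthroughStep {
  action: string;                      // e.g. 'Tap "Set up bill payment"'
  willTryRightEffect: boolean;         // Q1: does the user know to do this?
  noticesCorrectAction: boolean;       // Q2: is the control visible?
  associatesActionWithEffect: boolean; // Q3: does the label match the intent?
  seesProgress: boolean;               // Q4: is there feedback afterwards?
  notes?: string;
}

// A step fails the walkthrough if any of the four answers is "no".
function failingSteps(steps: WalkthroughStep[]): WalkthroughStep[] {
  return steps.filter(
    (s) =>
      !s.willTryRightEffect ||
      !s.noticesCorrectAction ||
      !s.associatesActionWithEffect ||
      !s.seesProgress
  );
}
```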

Heuristic Evaluation

The Heuristic Evaluation involves comparing an interface to an established list of heuristics – best practices – to identify usability problems.

The method was established by Jakob Nielsen in the 1990s. Although technology has transformed dramatically since then, these heuristics are grounded in human behavior and still apply today.

They include:

  1. Visibility of system status. The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
  2. Match between system and the real world. The system should speak the users’ language, with words, phrases, and concepts familiar to the user, rather than system-oriented terms.
  3. User control and freedom. Users often choose system functions by mistake and will need a clearly marked ‘emergency exit’ to leave the unwanted state without having to go through an extended dialogue. Essentially, the design should support undo and redo.
  4. Consistency and standards. Users should not have to wonder whether different words, situations, or actions mean the same thing. The design should follow software/hardware platform conventions.
  5. Error prevention. Even better than good error messages is a careful design which prevents a problem from occurring in the first place.
  6. Recognition rather than recall. Make objects, actions and options visible. The user should not have to remember information from one part of the dialogue to another.
  7. Flexibility and efficiency of use. Accelerators – unseen by the novice user – may often speed up the interaction for the expert user to such an extent that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
  8. Aesthetic and minimalist design. Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
  9. Help users recognize, diagnose and recover from errors. Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
  10. Help and documentation. Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large.
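
In practice, each finding is usually logged against the heuristic it violates, often with a severity rating (Nielsen suggests a 0–4 scale). A minimal sketch of such a log, with illustrative names rather than any standard tooling:

```typescript
// Illustrative sketch: logging Heuristic Evaluation findings.
// Severity follows Nielsen's 0-4 scale (0 = not a problem,
// 4 = usability catastrophe).
type Severity = 0 | 1 | 2 | 3 | 4;

interface Finding {
  heuristic: string; // e.g. "Consistency and standards"
  screen: string;    // where the problem was observed
  description: string;
  severity: Severity;
}

// Sorting by severity lets the team fix the worst problems first.
function prioritize(findings: Finding[]): Finding[] {
  return [...findings].sort((a, b) => b.severity - a.severity);
}
```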

Think Aloud Testing

Think Aloud Testing evaluates the usability of your work by encouraging a user to think out loud as they use your product or service.

It is important that the user not reflect on why they do what they do, but simply talk aloud as they do it. Reflection engages a different, more deliberate mode of thinking, so introspection would not measure how intuitive the design actually is.

As the user tests the product, the designer pays particular attention to any expressions of surprise or frustration, and to any design suggestions; she also notes, of course, whether the user completes the task with ease, with difficulty, or not at all.
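
One lightweight way to keep these observations usable afterwards is to tag them as they happen. A minimal sketch, with invented field names, of such a session log:

```typescript
// Illustrative sketch: tagging Think Aloud observations during a session
// so they can later be matched against hypotheses from the other methods.
type ObservationKind = "surprise" | "frustration" | "suggestion" | "task-failure";

interface Observation {
  kind: ObservationKind;
  task: string;         // e.g. "Set up autopay"
  timestampSec: number; // offset into the session recording
  quote: string;        // what the participant said, verbatim
}

// Tally observations by kind to see where a design struggles most.
function countByKind(obs: Observation[]): Map<ObservationKind, number> {
  const counts = new Map<ObservationKind, number>();
  for (const o of obs) {
    counts.set(o.kind, (counts.get(o.kind) ?? 0) + 1);
  }
  return counts;
}
```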

Benefits of using all three evaluations

While all three tests stand on their own, there is great benefit to using all three together. For example:

  • Think Aloud participants may have trouble articulating why something was confusing or frustrating; Heuristic Evaluation and Cognitive Walkthrough can provide hypotheses
  • Heuristic Evaluation and Cognitive Walkthrough can help the designer know what to look for during Think Aloud
  • While the Cognitive Walkthrough, moving view to view, may judge a flow seamless, the Heuristic Evaluation and Think Aloud help the designer see the design as a whole: something that works in theory within a single flow or screen may still confuse the user if it is inconsistent with other flows or with accepted practices

Key Findings

By taking the app redesigns through these evaluations, we found the following key issues:

  1. Design did not always give a clear sense of place
  2. Intention of screen was sometimes convoluted
  3. Design did not help prioritize action

Design did not always give a clear sense of place

A glaring example of how the original designs did not give a sense of place was in the flow from “Home” to “Set up bill payment.”

[Image: MyAT&T Evaluations 2017-01-18.002]

When the user taps on the “Set up bill payment” button, not only does a modal pop up, but they are also immediately taken to the Billing tab. This violates the heuristic of Consistency and Standards. Typically, when a modal pops up, the rest of the system stays in the same place. Here, however, the app takes the user to a completely different place in the system, and there is no clear sense of what would happen if the user tapped the “x” on the modal: would she be taken to the Billing home screen, or back to the Home screen?

[Image: MyAT&T Evaluations 2017-01-18.003]

Based on this evaluation, I changed the flow to run from the Home screen, to Billing, to Set up Bill Payment, and I removed the Set up Bill Payment flow from the modal.
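
The underlying issue can be modeled as navigation state. A minimal sketch, in TypeScript with invented screen names (not the actual app code), of the before and after:

```typescript
// Illustrative sketch: modeling the navigation state that made the
// original flow ambiguous.
type Tab = "Home" | "Billing" | "Usage";

interface NavState {
  activeTab: Tab;
  screen: string;       // current screen within the active tab
  modal: string | null;
}

// Original: one tap both switched tabs and opened a modal, so closing
// the modal had no predictable "back" destination.
function openBillPaymentOriginal(_state: NavState): NavState {
  return { activeTab: "Billing", screen: "BillingHome", modal: "SetUpBillPayment" };
}

// Revised: Set up Bill Payment is an ordinary screen reached from
// Billing, so each transition lands in one well-defined place and
// going back is unambiguous.
function goToBilling(state: NavState): NavState {
  return { ...state, activeTab: "Billing", screen: "BillingHome" };
}

function goToSetUpBillPayment(state: NavState): NavState {
  return { ...state, screen: "SetUpBillPayment" };
}
```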

[Images: MyAT&T Evaluations 2017-01-18.004, 2017-01-18.005, 2017-01-18.006]


Intention of screen was sometimes convoluted

The original Home screen provided a strong example of a convoluted design. Most important to the user is that she isn’t paying too much for data. In the original Home screen, there is an alert that the user is at risk of exceeding her data limit, but, because the data usage and billing cycle are shown separately on the screen, it’s hard to gauge the seriousness of the risk.

[Image: MyAT&T Evaluations 2017-01-18.008]

To address this issue, I merged the data and billing cycle into one visual.
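
The merged visual works because it relates data consumed to time elapsed in the billing cycle, which is what lets the user judge how serious the risk is. A minimal sketch of that projection logic (illustrative names and numbers, not the production app):

```typescript
// Illustrative sketch: projecting end-of-cycle usage from the pace so far.
interface UsageSnapshot {
  usedGb: number;      // data consumed so far this cycle
  limitGb: number;     // plan allowance
  dayOfCycle: number;  // e.g. day 21
  cycleLength: number; // e.g. 30 days
}

function projectedUsageGb(s: UsageSnapshot): number {
  const dailyPace = s.usedGb / s.dayOfCycle;
  return dailyPace * s.cycleLength;
}

// 3.5 GB used by day 21 of a 30-day cycle on a 4 GB plan projects to
// 5 GB by the end of the cycle: a real overage risk that the separated
// data and billing-cycle widgets made hard to see.
const snapshot = { usedGb: 3.5, limitGb: 4, dayOfCycle: 21, cycleLength: 30 };
console.log(projectedUsageGb(snapshot)); // 5
```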

[Image: MyAT&T Evaluations 2017-01-18.009]


Design did not help prioritize action

Throughout the design, we found many places where the system did not prioritize choices: in many flows, the user would need to think carefully about each option, rather than being guided by the system.

For example, if the user wanted to manage her data, the system gave her five different options, with no explanation of what each option might provide, nor any prioritization of what might be the best choice.

[Image: MyAT&T Evaluations 2017-01-18.014]

[Image: MyAT&T Evaluations 2017-01-18.012]

To address these issues, I updated the flow to include only three options, with an explanation for each.

[Image: MyAT&T Evaluations 2017-01-18.015]


To see a full list of findings for each evaluation, please see the links below.

myAT&T Evaluation 2017-01-24 – Cognitive Walkthrough

myAT&T Evaluation 2017-01-24 – Think Aloud Protocol

myAT&T Evaluation 2017-01-24 – Heuristic Evaluation