AT&T App User Testing

Previously, this AT&T app redesign was developed over a period of four weeks through a few rounds of iteration. The next step in the design process is user testing. In broad terms, this means reviewing and testing the app from the perspective of a user, which allows the designer to build empathy for the people the app is meant to serve. Over the past two weeks, three user tests have been conducted and reviewed, and solutions have been integrated back into the app. Below are the different methodologies used, the breakdowns they uncovered, and potential solutions for those breakdowns.

The three tests I conducted were:

  • Cognitive Walkthrough
  • Heuristic Evaluation
  • Think-Aloud Protocol

Each of these tests highlights a different aspect of usability, allowing the results to cover a wide array of usability issues.

The first of these tests was the Cognitive Walkthrough. This type of user test evaluates the prototype’s ease of learning. More plainly stated, it identifies where breakdowns might occur for a first-time user performing a standard task. This type of usability test is integral to establishing a theory of how a first-time user’s initial interaction with the app will go.

To execute this usability test, I printed out each screen and established my six tasks: set up Autopay, make a payment, change my plan, suspend a device, change my password, and upgrade a device. Then I established who my potential first-time user was: any individual who has the AT&T app and uses it for its various features. After this, I set up scenarios for myself to help empathize with the user, then ran through each flow asking myself a series of questions. These questions were:

  • Will the user try to achieve the right effect?
  • Will the user notice that the correct action is available?
  • Will the user associate the correct action with the right effect they are trying to achieve?
  • If the correct action is performed, will the user see that progress is being made towards a solution to their task?

These questions help evaluate each step necessary to perform a task, and whether or not a first-time user would be able to connect each of those steps to their overarching task. As I reviewed each screen, whenever one of the questions revealed an issue, it was logged in an Excel sheet. This sheet included several descriptive fields so that, when reviewing later, I could easily remember what needed to be changed. These fields included: Screen Identity, Problem Description, Severity, Frequency, and Proposed Solution. Below is an image of the workspace I used to perform these reviews.

 

IMG_0100
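For anyone who would rather keep this kind of log in code than in a spreadsheet, here is a minimal Python sketch of what a single entry could look like. The field names mirror the spreadsheet columns listed above; the class name and example values are hypothetical (the sample row is based on the Devices issue described below), not my actual logged data.

    # A minimal sketch of one row of the cognitive walkthrough issue log.
    # Field names mirror the spreadsheet columns; values here are illustrative.
    from dataclasses import dataclass

    @dataclass
    class WalkthroughIssue:
        screen_identity: str      # which screen the breakdown occurs on
        problem_description: str  # what a first-time user would struggle with
        severity: int             # e.g. 1 (cosmetic) to 4 (blocks the task)
        frequency: int            # how often the issue is likely to be hit
        proposed_solution: str    # the change to make in the next iteration

    example = WalkthroughIssue(
        screen_identity="Devices",
        problem_description="List of devices offers no actionable next step",
        severity=3,
        frequency=2,
        proposed_solution="Add key actions directly to the Devices page",
    )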

 

From this review I found a number of issues with the learnability of the prototype, and I’ve come up with potential solutions for them. The three main issues I’d like to focus on are as follows:

 

Problem: My “Devices” screen only included a list of devices, and when I reviewed how this would help with a user’s overall task, there was a disconnect. A user may open the “Devices” page and not see any actionable next steps.

Devices_Action_V1-01

Solution: In order to combat this disconnect, I added a few key actions that can be started from the “Devices” page. This allows the user to connect their task with the screen they are currently viewing.

 

Devices_Action_V1-02

Problem: Within my “Change Plan” task, a customer won’t immediately understand that they must tap the “Lock Plan” button to finalize their changes to the plan.

LockPlan_v1-01

Solution: To manage customer expectations, I added a brief line of directions explaining that the customer needs to tap “Lock Plan” once they are satisfied with the changes.

LockPlan_v1-02

 

Problem: Finally, the placement of the Autopay signup as a subsection of “Bill” seemed like it would clash with a user’s pre-established notions of how apps are typically organized.

Profile_Autopay_V1-01

Solution: To mitigate that potential breakdown, I added the option to set up Autopay under the “Profile” screen.

Profile_Autopay_V1-02

 

This was an excellent test for more fully understanding how a first-time user would review the actions available to them, and for analyzing whether those steps would truly help them complete a task. To access my full feedback, here is the link: Cognitive Walkthrough - Complete Feedback

 

The second type of usability testing performed was the Heuristic Evaluation. This tested the app against a list of 10 established usability principles. These are known aspects of well-designed products that facilitate seamless, smooth interactions between user and system. Below are the 10 principles, each with a short explanation:

  1. Visibility of system status- There needs to be some kind of communication during wait times so the user understands the system is still working even when nothing significant has visually changed.
  2. Match between system and the real world- By aligning a product with aspects of the real world, the system becomes far more familiar to the user from their very first experience.
  3. User control and freedom- The user should be able to make the changes they want to the system, but also have the freedom to undo those changes if needed.
  4. Consistency and standards- Following industry standards establishes familiarity with the system and improves the user’s overall experience.
  5. Error prevention- To prevent the user from accidentally making a change to their account that could cause damage, additional messaging needs to be incorporated into the system.
  6. Recognition rather than recall- Lessen the memory burden on the user; instead, let instructions and imagery carry that weight as the user moves through the system.
  7. Flexibility and efficiency of use- Creating shortcuts within a system is encouraged; this lets users who are more familiar with the system work more efficiently than novices.
  8. Aesthetic and minimalist design- Keeping screens uncluttered is generally the preferred style.
  9. Help users recognize, diagnose and recover from errors- If issues do arise in the system, there need to be error messages that help the user pinpoint and resolve them.
  10. Help and documentation- If users need to reach out for help with the system, there must be a way for them to access additional help and documentation.

 

Similar to the Cognitive Walkthrough, to execute this test I reviewed each screen against each of the 10 heuristics. Problems were again logged in an Excel sheet, though with additional fields for evidence and the heuristic violated, so that I could easily recognize which heuristic was being broken.

A Heuristic Evaluation can be performed by multiple evaluators. In this case, several other students also performed heuristic evaluations on my screens. The benefit of having multiple evaluators is a higher likelihood of identifying the majority of the usability problems in a prototype. This does have diminishing returns, though: studies have shown that after around 8 evaluators, few new usability issues are brought up, as the short sketch below illustrates.
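To make that diminishing-returns point concrete, here is a small Python sketch of the commonly cited Nielsen and Landauer model of problem discovery. The 31% per-evaluator detection rate is the average they reported across projects, not a number measured on this prototype, so treat the output as a rough illustration rather than a prediction for this app.

    # Nielsen & Landauer model: the share of existing usability problems found
    # grows as 1 - (1 - L)^n, where L is the average per-evaluator detection
    # rate (about 0.31 in their data) and n is the number of evaluators.
    def proportion_found(evaluators: int, detection_rate: float = 0.31) -> float:
        return 1 - (1 - detection_rate) ** evaluators

    for n in (1, 3, 5, 8):
        print(f"{n} evaluator(s): ~{proportion_found(n):.0%} of problems found")

    # Output:
    # 1 evaluator(s): ~31% of problems found
    # 3 evaluator(s): ~67% of problems found
    # 5 evaluator(s): ~84% of problems found
    # 8 evaluator(s): ~95% of problems found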

The top three issues I saw on my prototype are as follows:

Problem: Within the flow of the Upgrade Phone task, I didn’t have a screen that allowed a customer to review the full order before finalizing it. This broke two heuristic principles more than any other: Consistency and standards, and Recognition rather than recall.

 

Solution: To resolve this, I went back into the prototype and added this screen, which includes all the information needed to review an order.

ReviewOrder_V1-01

Problem: There were multiple issues around the word choices I had been using. For example, I was using language like “Passcode” instead of “Password”, “Update” instead of “Upgrade”, “Payment Methods” instead of “Payment”, and “Billing Information” instead of “Account Info”.

Solution: I reviewed how each of these words was being used and what I wanted the user to expect from these titles. Then I looked into the best-practice terms for these types of information and implemented those words instead.

Billvs.Billing_V1-01

 

Problem: The final issue was already mentioned: the “Lock Plan” button. It was again confusing for evaluators, and it broke the second principle, Match between system and the real world.

Solution: As before, I altered the screen to include instructions.

 

This test ultimately forced me to review each screen and critically analyze why different pieces were there and whether they needed to stay. Now the screens no longer contain unnecessary information or inconsistencies.

The Heuristic Evaluation was a time-consuming task, but after the initial execution it became easier. As more of these details are addressed, the screens and flows become more usable. The consolidated and completed feedback is located at this link: Heuristic Evaluation - Collection

 

The last and final user test I executed was the Think-Aloud Protocol. This test reviews how real users react to the app. It is meant to identify where users lose comprehension of the app, or where they lose their understanding of what is happening within it. The key difference between this test and the other two is that it puts the app in front of real users and asks them to speak out loud about what they are doing as they do it. This works because studies have shown that when a person verbalizes their stream of consciousness, they do not disrupt their own flow of comprehension. So, as designers, when an individual talks through an interaction with an app, we can identify where lapses in comprehension arise, what their reactions are, and which practices are working well. It’s an extremely informative tool, and it requires very little in the way of cost or materials.

To perform this test, I gathered my printouts of the app and recruited volunteers to complete the six established tasks. I reassured them that they could quit at any time and that this was really a test of the app’s design, not a reflection of them. I explained the tasks, then watched them work through my prototype. Below are the two biggest issues that came up during testing.

IMG_0105

 

Problem: The first was the placement of Autopay. Each of my Think-Aloud volunteers had trouble finding where Autopay was within the app. One individual ended up searching for nearly five minutes for this button. He narrated his own thought process during the test: “I imagine it’s under Profile, so click on Profile…now I’m thinking I don’t know, not Plan, not Bill, maybe ‘Billing info’?…I’d expect it’d be under Settings, which seems like Profile, which it’s not there.” It was the perfect opportunity for me to understand where he thought this piece of the app should be located.

 

Solution: To combat this, just as stated previously, I moved Autopay to the Profile pages and kept its flow there.

Problem: Secondly, individuals had issues understanding that they needed to lock their plan after making changes to it. One individual said, “Wait, am I done? Do I need to do anything else? What is ‘Lock Plan’?” Again, this helped me understand where their comprehension of my app broke down.

Solution: Again, the solution I implemented was to add a short set of directions at the top of the screen.

 

This was the final user test I performed, so afterwards I began consolidating the feedback and identifying where the loss of comprehension arose in my tests. This is an ongoing process, and I know the next iteration is just around the corner.

 

After performing each of these tests, I’ve learned how to incorporate more of these usability principles into my designs, and how to more fully imagine myself as a first-time user. Both of these new perspectives will help me integrate usability into my designs earlier in the process. I’ve also become more comfortable administering Think-Aloud Protocol tests, a comfort I’m sure will only grow with more practice.
Currently, I’m still working on integrating all of the results back into the rest of my screens, but I feel confident that, with just these main changes, the app is already far better off than it was before the testing. Below is a link to a presentation that runs parallel to this document.

 

Usability Testing