Making myAT&T Designs a Reality – Part 1

If you’ve been following the students enrolled in AC4D, you’re aware that we have been working on redesigning the myAT&T mobile experience. If you’re just now tuning in, please click here to see the process we’ve used, which includes understanding the current space, generating ideas, and creating wireframes. The next step in this process is to switch mental states: to step out of the designer role and take on more of the role of a product manager and developer, in order to ship (or submit) our app.
As a designer, putting on the hat of a developer and product manager is valuable in the same way that approaching design in a human-centered way is valuable: it provides insight into other perspectives that will ultimately inform your design and create a better overall experience. More specifically, collaborating with a developer will frame how “expensive” a certain component, feature, or flow will be. This creates opportunities for consolidation and exposes technical constraints that may otherwise have been invisible from a designer’s perspective.
After establishing the resources needed to create the experience, we use a product manager’s skillset to prioritize the most important features. This allows the team to ship a product faster, without having to create and perfect the entire experience up front.

 

For this project I collaborated with Chap, an AC4D alumnus and current developer, to estimate the resources needed for my app. To prepare, I laid out each of my screens as a single flow, with individual features outlined and annotated. Doing this helps identify the overall flow from screen to screen and makes the system more digestible to a developer who may not have been working alongside the designer.

Here is an example of what one flow looks like. Click here to see all of the flows.

[Image: Plan flow]

Next, Chap and I went through each flow screen by screen and feature by feature as I explained the system to him. This turned into a conversation about the system from a developer’s perspective, especially when it came to back-end functionality (where the app will get its information, such as how much data the user has used for the month). It brought to the surface how a seemingly simple feature can balloon into a much larger one, which can then be reevaluated against how important it is to the experience.
During this conversation, Chap and I also logged in a spreadsheet how long he estimated each of these features would take to develop. The spreadsheet’s columns consisted of screen ID, feature description, points, days estimated to create (with two developers working 8 hours each day), any relevant notes, and the price associated with each feature. We used a point system to establish the number of days it would take to create each feature. A point represents a “man day”: a whole day of one developer working 8 hours. Points are normally documented using the modified Fibonacci sequence: 0, .5, 1, 2, 3, 5, 8, 13… The problem with this type of estimation is that it is typically very inaccurate; this kind of work is very difficult to estimate unless you have years and years of experience, and it is usually underestimated. Despite Chap having plenty of experience, he accounts for this by subtracting .25 points of output per developer per day, as well as adding an additional 30 percent to each estimate.
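To make the mechanics concrete, here is a minimal sketch of how point estimates might roll up under the rules described above (my illustration, not Chap’s actual spreadsheet; the feature values are hypothetical):

```swift
import Foundation

// Hypothetical feature estimates in points (modified Fibonacci values).
// 1 point = one 8-hour "man day" of development.
let featurePoints: [Double] = [0.5, 1, 2, 3, 5, 8]

let rawManDays = featurePoints.reduce(0, +)   // 19.5 man-days
let paddedManDays = rawManDays * 1.3          // +30% buffer -> 25.35

// Each developer is assumed to lose 0.25 points of output per day.
let outputPerDevPerDay = 1.0 - 0.25           // 0.75 points per day
let developers = 2.0
let calendarDays = paddedManDays / (developers * outputPerDevPerDay)

print(String(format: "%.1f calendar days with two developers", calendarDays)) // ~16.9
```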

For this project we estimated the time based on two developers working full time (40 hours a week). Overall, Chap estimated my app would take about 53 days with two developers working full time. The benefit of this is being able to glance at the spreadsheet and quickly see which features consume the most resources, information that can then be passed to a product manager for prioritization.

Here is what part of the spreadsheet looks like. Click here to see all of it.

[Image: estimation spreadsheet]

We didn’t have a designated product manager for this project as we did a developer, so we are taking on that role ourselves. The challenge for the product manager comes from the fact that we only have 20 days before we need to ship the product!

This is where the skillset of a product manager becomes extremely valuable. They help prioritize which features are most important to the user in order to establish a v1 release, followed by a roadmap and timeline in which the rest of the features are added to the app. Part of this exercise will be to modify the existing flows based on this prioritization to meet that deadline, while still retaining as much value as possible. Stay tuned for the next blog post to see how this looks.

My AT&T App Redesign: Features, Capabilities, Controls, and Development Meetings

In the last blog post about the My AT&T redesign, I had completed three usability tests, evaluated the results, and begun to integrate those findings into my application. A link to the post is here. Since then, the project has turned to a more concrete phase, where cost and time have begun to play a role. Within the past two weeks, I’ve redesigned my application (both the actions within it and its aesthetics), identified the features, controls, and capabilities within it, and met with a software developer, Chap, to get an estimate on the time it will take to develop. This article takes an in-depth look at each of these steps.

After I presented my findings from user testing, I received feedback that my application lacked a “realness” to it. So I asked Jon to meet with me and discuss my app at length. Between my presentation and my meeting with Jon, I also designed three examples of different application styles for my home screens and asked my mentor to give me feedback on them. Below is a PDF of the three styles.

[Image: three home screen style explorations]

My mentor replied with resources for inspiration, a fresh perspective on the three styles, and a nod to the third version as her preference. After reading her feedback, I decided to use the third option as the style for the next iteration of my application.

While I was still working on creating more visually pleasing screens, I met with Jon. He suggested I focus on two specific details of the application:

  1. Making the application look more like an iOS application, pulling from toolkits and existing resources to create the screens rather than building my own visuals and buttons in Illustrator.
  2. Concentrating on what is happening within the app, questioning why I’m including certain flows and actions rather than just accepting “how things have always been done.”

An example of how the application lacked a sense of realness is the slider bar I created for the Plan flow: I had built my own slider bar instead of just pulling from an iOS toolkit. Below is an image of my old slider bar alongside the newer, standard iOS one I’ve placed in my application.

[Image: old custom slider and standard iOS slider]
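To illustrate why the standard control is so much cheaper, here is a minimal sketch (assuming UIKit; the plan values are placeholders) of wiring up the stock UISlider rather than drawing a custom one:

```swift
import UIKit

class PlanViewController: UIViewController {
    let dataSlider = UISlider()

    override func viewDidLoad() {
        super.viewDidLoad()
        // The standard control: rendering, touch handling, and
        // accessibility all come for free.
        dataSlider.minimumValue = 1    // GB, hypothetical plan range
        dataSlider.maximumValue = 30
        dataSlider.value = 6
        dataSlider.frame = CGRect(x: 20, y: 200, width: 280, height: 30)
        dataSlider.addTarget(self, action: #selector(planChanged), for: .valueChanged)
        view.addSubview(dataSlider)
    }

    @objc func planChanged(_ sender: UISlider) {
        print("Selected plan: \(Int(sender.value)) GB")
    }
}
```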

 

Jon explained that by creating my own icons and buttons I’d made things more difficult for a software developer, because standard iOS elements have already been coded; developers can simply reuse them rather than writing new code for those elements. By creating my own, I would be making the developer write brand-new code, which adds unnecessary cost. If I had an element that truly needed to be created from scratch, I could have asked a developer to build it specifically, but I felt the elements from the iOS toolkit covered all my needs within the application.

An example of Jon’s second point was the password flow. Previously, I had the user enter their new code and type it twice, and I had certain requirements on what the password had to contain. Jon reminded me that I needed to be more conscious of what I’m requiring of users. In the next iteration of the application, I stripped away all the requirements for changing a password and included only what I thought was necessary for a user to feel their password was changed.

After meeting with Jon, I reviewed my printouts and went through each screen, marking up what I wanted to keep, remove, and add, and what needed to be on each screen. This evaluation forced me to review what I was asking of users in each flow and decide whether to keep it or trash it. Below is an image of what this process looked like.

[Image: annotated printouts]

 

After reviewing every element of the contents, I began to tackle the organization of the application, moving flows between the four sections of Billing, Usage, Devices, and Account. Once I had a plan that contained all the flows I wanted, and where I wanted them, I was able to start building screens.

In a week, I built every screen for each flow with proper iOS elements and more consciously crafted screens. For iOS styling, I downloaded a toolkit to pull elements from and found resources online that explained the spacing requirements. I referenced the annotated printouts when deciding which elements needed to stay on each screen. I was also able to go back to my inspirations from Q2 and pull in some ideas from that time, too.

After creating the new app, I needed an estimate of the time it would take a developer to build it. I had learned in class that when developing an application, the real cost is associated with the features and the overall time it takes to build the application’s various elements. To meet with my developer, I needed to identify these elements and explain their purpose within the application.

I took all the flows and identified each screen with a number. Then, on each screen, I identified the features and controls by name. A feature is something that adds value to the user’s experience; for example, the pie chart containing an account’s data usage. The counterbalance to features are the controls, which are elements needed for the structure of the application, such as the accordion information-storage style used throughout. Once I had named and identified each feature and control, I printed them out and set up a meeting time with the developer, Chap.

During our meeting, Chap and I discussed each flow individually, identifying the features and controls in the context of the flow. As we went through the flows, we identified potential error states that needed to be addressed, places of overlap between screens, and some of the more difficult elements of my app along with the solutions he saw as possible. As I identified the various elements, Chap documented the screen name and element with a point amount representing the cost of the feature. One point equaled one day of development. Chap then added 30% padding time so that he had some room if he needed it; for each estimated day, Chap now had 1.3 days to complete the feature. He then divided the total estimated time between two developers, since we assumed the development would be executed by a team of two. At the end of our meeting, Chap shared the Excel document with me so I could reference each cost and element. In total, my application would take 41.17 days to create with two developers working 8 hours a day. Here is a link to the file: Application Estimates
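To make the math concrete with hypothetical numbers: a feature Chap sized at two points becomes 2 × 1.3 = 2.6 padded man-days, which the two-developer team can deliver in roughly 1.3 calendar days.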

The two most expensive features in the application were the password options: first, the option to replace your password with the iPhone’s Touch ID, and second, the option to opt out of having a password. Each of these was estimated to take around three and a half days for two developers. For perspective, my other features were estimated to take, on average, only about 87% of one two-developer day. This high estimate was due to two pieces of work: first, Chap explained that he’d need to find out whether it was even possible to have these options within the application, and second, there was the time associated with actually building out these elements. His explanation of these two steps helped me understand where the costs were coming from and what that time would be spent doing.
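For a sense of what the Touch ID option involves once its feasibility is confirmed, here is a minimal sketch of the core API call (my illustration, not part of Chap’s estimate):

```swift
import Foundation
import LocalAuthentication

func logInWithTouchID(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?

    // Part of the estimated cost: discovering whether biometric
    // authentication is even available for this use case.
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                                    error: &error) else {
        completion(false)
        return
    }

    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Log in to your account") { success, _ in
        DispatchQueue.main.async { completion(success) }
    }
}
```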

Another interesting element Chap highlighted during our meeting was that, when coding all of these pieces, he would need to work with various teams within AT&T to write the proper code to extract the information associated with the accounts. One element from my application that required direct communication with AT&T is the data usage pie chart. Since the information displayed is so specific to the account, Chap would need to code the right request to display the various percentages of usage, and that information would need to come directly from AT&T. This also means there is a general cost to working on any app: time the developer spends familiarizing themselves with the other resources they will potentially be working with.
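As a hedged sketch of the kind of request Chap described (the endpoint, path, and JSON shape here are entirely made up; the real contract would come from AT&T’s internal teams):

```swift
import Foundation

struct DataUsage: Decodable {
    let usedGB: Double
    let totalGB: Double
}

func fetchDataUsage(accountID: String,
                    completion: @escaping (DataUsage?) -> Void) {
    // Hypothetical endpoint standing in for AT&T's real account API.
    guard let url = URL(string: "https://api.example.com/accounts/\(accountID)/usage") else {
        completion(nil)
        return
    }
    URLSession.shared.dataTask(with: url) { data, _, _ in
        let usage = data.flatMap { try? JSONDecoder().decode(DataUsage.self, from: $0) }
        completion(usage)   // feeds the pie chart's percentages
    }.resume()
}
```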

Since my meeting with Chap, I’ve updated the screens to reflect the changes we discussed, and I’ve updated my flows so that they contain both the screen title and the cost associated with its buildout. Below are links to each section’s flows, with the costs associated with them.

Home & Billing Flows

Devices Flow

Plan Flows

Account Flows

Since this project is under a time constraint of four weeks, I now need to reduce my development time. This means going through my application and deciding which features and controls will be kept and which will be eliminated for the first launch and added back in later. I’m looking forward to working out a way to deliver a valuable application to the client while still maintaining our timeline of four weeks of development work.

Scoping Development – Timeline & Resources

-Tying the process together-

In my previous blog post I discussed the processes associated with evaluating a mobile application. The application we are redesigning is the iOS version of myAT&T. From the usability evaluations of our wireframes, we furthered the redesign, correcting all of the usability issues we found. Our main goal was to bring it to a developer and get it estimated for cost and timeline. Development is exciting because you start the conversation about making it all real.

The core takeaways from the evaluations were as follows:

  • The experience feels discombobulated. There is little to no feeling of continuity; the user always feels slightly unsure if they’re doing it right.
  • Attributes and/or features are not effectively communicating their purpose. The design does not provide clarity.
  • Visibility, control, and freedom are huge overarching issues. Each screen is separate, with separate actions, and it is relatively arduous to go back a couple of steps to change something.
  • Hierarchy and priority are not clearly visualized. Everything feels the same, and the design does not draw the user’s attention to the next step.

I tied together all of this feedback and redesigned my tasks to address these findings. From there it was time for development.

I took my giant printouts up to the 14th floor of a downtown office building and sat down for a development estimation meeting. In preparation, I compiled all of the flows, showed where the user interacts with each screen, and broke out all of the key components, controls, and features within the application onto a separate print. In describing the attributes and functions of each component, feature, and control, I started to see where I had gone wrong. It is hard to foresee issues when immersed in the core of one phase of a process (design), but now that the conversation was turning towards development, I could easily identify what would be a misstep in terms of engineering my design. In design, I felt like I had been gearing everything to the user (good, right? Not so good for development) and attempting to make the flow as seamless as possible. Essentially, I was basing my decisions about what to build on the prior screen and the task at hand. Now development comes in, which means money and time are ringing loud and clear.

As I sat down to this meeting I was excited to hear what the developer would have to say about my designs.

I gave him the spiel:

“We are redesigning the mobile experience of the AT&T iOS application. The reason this is a significant build is that, currently, the experience feels like someone duct-taped multiple apps together. You can accomplish the same tasks from multiple different angles, there are conflicting navigation attributes, etc. My value promise for this build is founded on the idea that someone’s mobile phone account management application is not one they want to “hang out” on. Sure, it needs to be good-looking in a trustable way, but a user should be able to accomplish their task quickly and simply, with as few steps as possible.” And with that brief background, we began.

While walking him through the flows, he pointed out all of the controls I had made custom that could use standard iOS features, which means the code is already written and open source. Less build time for simple things = a faster delivery to the user, plus the ability to put development resources towards more important functions. So I said yes to his suggestions. Further down the line I can bring in custom elements if I wish, but at this stage, to provide value to the user, custom features are not essential. This also helped because my design leveraged the same simple attributes for different uses.

I had built out radio buttons:

[Image: custom radio buttons]

I had built out a selector:

[Image: custom selector]

I had built out a different kind of selector:

[Image: a second custom selector]

All of these can be streamlined into the usage of an iOS picker, similar to the one below, for all of these use cases:

[Image: standard iOS picker]
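A minimal sketch of what that consolidation looks like in code, assuming UIKit and hypothetical option lists; one standard UIPickerView can back all three of the selection cases above:

```swift
import UIKit

class PlanPickerViewController: UIViewController, UIPickerViewDataSource, UIPickerViewDelegate {
    let picker = UIPickerView()
    let options = ["2 GB", "6 GB", "10 GB", "Unlimited"]  // placeholder choices

    override func viewDidLoad() {
        super.viewDidLoad()
        picker.dataSource = self
        picker.delegate = self
        view.addSubview(picker)
    }

    func numberOfComponents(in pickerView: UIPickerView) -> Int {
        return 1
    }

    func pickerView(_ pickerView: UIPickerView, numberOfRowsInComponent component: Int) -> Int {
        return options.count
    }

    func pickerView(_ pickerView: UIPickerView, titleForRow row: Int,
                    forComponent component: Int) -> String? {
        return options[row]
    }
}
```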

From these changes, a design language seems to be emerging:

If there is one choice…  it’s a click.

If there are multiple choices…  it’s a swipe.

A click is commitment

A swipe implies freedom

The developer’s next question stumped me because it had slipped my mind; I hadn’t even considered it: “How long do you want it to take before it ‘times out’ and logs you out of the app?” This is an interesting piece of the story because, in terms of the pathway through the flows, it was meaningless. To the overall customer experience, however, it was highly important. Imagine if you were in the middle of buying your new iPhone and got kicked off because you were trying to pick between silver and rose gold. Not cool! He said it really wouldn’t take a different amount of time to code a different time-out duration, but it did need to be defined. Good to know.
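As the developer said, the mechanism is trivial either way; the design decision is the duration. A minimal sketch, with a placeholder five-minute value I had not actually decided on:

```swift
import Foundation

class SessionManager {
    private var timeoutTimer: Timer?
    let timeoutInterval: TimeInterval = 5 * 60   // placeholder, not a decided value

    // Call on every user interaction to restart the countdown.
    func registerActivity() {
        timeoutTimer?.invalidate()
        timeoutTimer = Timer.scheduledTimer(withTimeInterval: timeoutInterval,
                                            repeats: false) { [weak self] _ in
            self?.logOut()
        }
    }

    private func logOut() {
        print("Session timed out: returning to the login screen")
    }
}
```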

*Learning: In design you can’t just focus on your specific task; you must push your mind to consider all of the surrounding influences and potential scenarios.*

We then assessed my custom features:

Usage/usage progress bars:

[Images: usage progress bars]

A phone selection carousel:

[Image: phone selection carousel]

We isolated these from the estimation of the other flows and gave them a timeline by themselves. Another thing: there are billions of people on the planet, and someone has probably made something similar to your idea before. Don’t reinvent the wheel; use the resources that already exist. It is fine, and awesome, to craft your own designs, but make sure that is realistic for the scope of the project. Is the company you are working with strong in branding, which makes custom design a very important consideration? Or are you trying to get the product shipped out the door, where the importance lies more in the functionality and initial value to the user? This is something I couldn’t have been aware of before building my first set of screens for an app.

The output from all of the above is an estimation for the scope of the work, both price and time. We were allotted two (theoretical) developers working full time. So…

Resources: 2 Developers @ 40 hours per week

Salaries: $200,000 each

Estimation (in man days):

60 days to complete the project (30 days per man)

Timeline: 6 weeks

Price: $44,609.67 in total
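(For reference, the price works out to a day rate of roughly $743.49 per man-day: $44,609.67 ÷ 60. I assume that rate was derived from the $200,000 salaries, though the exact working-days divisor wasn’t part of our conversation.)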

To explain the work: I have run my system through three types of evaluations (heuristic evaluation, cognitive walkthrough, and think-aloud protocol); I met with a developer and came to understand the way development is thought about and executed; and now I am moving into the phase where I need to think about the features I’m delivering and why. This phase aggregates the importance of all of the prior phases and makes me see how important it is to take all of these things into account at once. Which, from my understanding, is why at the Austin Center for Design we are learning them all as one fluid skillset. Exponentially valuable details are lost when a project gets handed off from one step to the next. In conversation with a developer, when asked a question, I find myself referencing all the way back to the research I did with users of the current experience to discover their most painful interactions with the myAT&T application. That reference spans three disciplines.

I am looking forward to the next time I begin an app build, because from project launch I will be considering the user, their needs, and what they think about and are used to (design research); the goals of the service, the criteria and principles for creation, and designing for and within those parameters and requirements (interaction design); as well as development and the pace and price at which my designs would be built (product management). It’s all coming together!

The full collection of artifacts (task-oriented screen flows, features and components broken out of the flows, estimations on cost/time, summaries, insights, and conclusions) is here: Development estimation – read out.

The Development Dash

Moving through the development process has been both challenging and incredibly interesting. Application system support was my job for quite some time, and seeing and working with the people who create these systems, for the first time, is a valuable experience. While working on this application design from the last iteration, I kept in mind the simplicity of using standard controls as opposed to custom controls and features that would require more time to build and more resources to be expended. This phase of the project is about estimating the time for development and planning the subsequent releases that will finalize the application.

To be clear, this estimating process covers only the iOS version of the application. For this, I am working on a timeline with two developers, working full-time, over the course of one month. This is for the initial launch of the minimum viable product, or MVP. Generally, a separate team of developers would be working on an Android version as well, roughly doubling the estimated hours for developing both.

In meeting with Mark, I gained insight that helped ground my design in reality. His opinions and recommendations are important in shaping the way the application is going to be planned for release. It was apparent while working with him that his criticisms, questions, and opinions on the design were grounded in hands-on experience creating these systems, rather than my hopeful and optimistic view. Mark’s help pointed me in the direction I need to go from here. There were many notes and changes to be made to the screens, and a bit of frustration in making some decisions to cut things from the application.

[Images: annotated screen printouts]

The decision to use mostly built-in controls for navigation and functionality paid off, as these are simple to code and create on screen. The few custom controls I did have were discussed thoroughly, and Mark and I worked together to down-select the features, controls, and components present in the end design. This process was difficult: picking apart the screens to see which pieces were the most difficult to code and implement, as well as the time ramifications of keeping them in the design.

Working with the developer to down-select and thin-slice is something I found to be very important. Our assignment pointed us to doing this alone, but while talking with Mark, the conversations about controls and features, and how to redesign, came organically in the collaborative setting. I was grateful to have such an experienced developer working with me, explaining each piece and question as we went through and ensuring we were on the same page.

One example of this is the password reset flow below, with the original screens on the left and the redesign on the right. It involved a popup and dynamic icons that appeared when a user typed their password. Initially these were important design elements, but the work required for this piece was too large for the value it delivers. It may be a cool way to show password adherence, and it may be effective, but there are far simpler ways to display this information. In the end, a more standard password-adherence option was chosen to replace the initial design. This brought the development time down by roughly two days and condensed the password reset and email update options from six days to four, allowing them to be released alongside another piece of the application instead of as a standalone feature set.

[Images: password reset flow, original (left) and redesign (right)]
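To show the scale of the simplification, here is a minimal sketch of what a standard adherence check can reduce to; the specific rules are placeholders, not AT&T’s actual requirements:

```swift
import Foundation

// Returns true when the password meets the (hypothetical) baseline rules.
func passwordMeetsRequirements(_ password: String) -> Bool {
    let longEnough = password.count >= 8
    let hasDigit = password.rangeOfCharacter(from: .decimalDigits) != nil
    return longEnough && hasDigit
}
```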

The Change Plan and Features data option was changed as well, though I did stick with a custom control: instead of using a picker, Mark agreed that the style of selection below was clear and gave users a low chance of errors. On the back end, though, when I began estimating the time it would take to add these components, it forced a serious reconsideration of using the standard controls integrated into the OS. Having two days dedicated to a simple payment structure seems a bit wasteful in the grand scheme of design vs. launch. Below are the two screens we were deciding between. I have yet to build out a screen with a picker for each of the elements, but I plan to do so to see how it feels. The screenshots below show the old design (the three screens above) and the updated design (those below).

[Images: old Change Plan screens]

[Images: updated Change Plan screens]

Another piece removed was the PayPal integration. It was certainly a nice thought, but without the server-side framework to support it, coding against the PayPal API was an unnecessary time-sink. To remedy this and add some more freedom to payment options, Mark and I decided to work with the OS payment method, Apple Pay. Integration with this system should generally be a simpler task, since it integrates directly with iOS, a more familiar development platform. Below are the initial screen for PayPal and the screen for Apple Pay (left and right, respectively).

[Images: PayPal screen (left) and Apple Pay screen (right)]
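Part of why Apple Pay is the cheaper path is that the payment request and sheet are stock PassKit. Here is a minimal sketch, with a hypothetical merchant identifier:

```swift
import PassKit

func makeBillPaymentRequest(amount: NSDecimalNumber) -> PKPaymentRequest {
    let request = PKPaymentRequest()
    request.merchantIdentifier = "merchant.com.example.myatt"   // hypothetical
    request.supportedNetworks = [.visa, .masterCard, .amex]
    request.merchantCapabilities = .capability3DS
    request.countryCode = "US"
    request.currencyCode = "USD"
    request.paymentSummaryItems = [
        PKPaymentSummaryItem(label: "AT&T Bill", amount: amount)
    ]
    return request
}

// Check device support before offering the option at all.
let applePayAvailable = PKPaymentAuthorizationViewController.canMakePayments()
```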

To get to the smallest piece of product for launch, I went back to the reviews and survey information from our research and sought out what users really needed the application to do. Most people seemed most worried about viewing their data usage per device and paying their bill. These two pieces, along with logging in, are the most valuable parts of the application: they provide people with the information they want and need to make decisions about managing their cell phone plan. The other pieces are scheduled for a later release, simply due to resource and time constraints. Here is the tentative roadmap for release as it stands to this point.

From here, the goal is working towards a more cohesive release schedule. Figuring out which pieces of the application should be released when is an important detail. Scheduling developers for specific tasks is also a challenge: their unfamiliarity with both the API language and one another presents a unique challenge to every development team. A more collaborative effort between the two developers would potentially lead to a more productive team, but it eats time on the front end. Then again, different work styles and an incoherence between coding styles could present problems later on.

Identifying and working through these unique challenges and thoughts has made me more attentive to the details in my design, as well as to the potential problems and difficulties development presents to a designer and developer. These challenges manifest differently, but come down to the same basics: whether a design is feasible, and whether it will work as intended. The most important takeaway from this entire process has been the near necessity of developers being in direct and open contact with a design team. Without open communication, the design may not be implemented correctly, or there may be decisions and cuts made to the design without an opportunity for discussion or proper redesign. It feels like there are many moving parts to keep track of, and in reality, there are.

Product Management Part 1: Development Estimation

For the past few months, we’ve worked to redesign AT&T’s mobile app. The process has included research, concepting, sketching, wireframing, evaluation, and iteration. The next step of the process is product development: bringing the app into reality.

Design vs. Product Development

As designers, we are user-centered; we focus on people, their goals, and the way the design will support their experience in achieving those goals. Design is all about visualizing an ideal state and sketching our way to that future.

Once we get into production, however, we must also be technology-centered. “Shipping,” or product development, requires the product manager to be pragmatic as she brings the product to market.

While the design process is all about the user’s behavior and feelings, product management zeroes in on the system’s components, features, and controls.

Shipping

The first step to shipping is consulting with a developer. While designers love to play in ambiguity, developers tend towards “well-defined” problems, strive for efficiency, hate to do the same work twice, and guard their time.

Although these two mental frameworks may seem antithetical, the collaboration helps the designer move from “blue sky” to “realistic,” consider reusable components (leading to efficiencies), and get ready for sizing.

Sizing

Sizing is a method for assigning relative time estimates to build unique features, components, and controls. Sizing helps guide priorities, identify total time on task for a development effort, and steer redesign efforts to quickly launch the product.

Sizing, despite often being wildly inaccurate, 1) provides clarity to stakeholders who have to commit to delivery (to a board of directors, for example); 2) provides clarity to marketers, who need an order of magnitude for delivery dates in order to build campaigns around them; 3) gives the entire team a view of the product, so they understand it thoroughly; and 4) forces detailed interactions between the designers and developers.

Sizing Method

To size my wireframes, I walked my developer through my flows, and he estimated how long it would take him to create each component, screen by screen. He gave his calculations in “man days” (a man day being equal to one day of work for one person), and estimates ranged anywhere from a half day to three days.

Then, I asked my developer to estimate how long it would take two developers to create each flow. To my surprise, instead of simply halving his calculations, he divided them by 1.5, to account for sick days, inefficiencies, or unforeseen roadblocks.

As, again, sizing is often inaccurate, I then took his calculations and added 30% padding to each, a common practice among product managers. You can see the calculations illustrated below, as well as in this spreadsheet: myAT&T redesign estimation – Sally

[Images: sized flows for login & homescreen, autopay, change password, and phone upgrade (parts 1 and 2)]
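(Worked through with hypothetical numbers: a flow sized at 6 man-days becomes 6 ÷ 1.5 = 4 calendar days for the two-developer team, and 4 × 1.3 = 5.2 days after padding.)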

As of the latest calculations, production would take 51.5 man days. The next challenge will be updating the design to only require the allotted 40 man days. Stay tuned for Part 2: Thin Slicing & Road Mapping.

The dilemma of empathy in design — an interview with Chelsea Hostetter

As designers, we are aware of the importance of infusing empathy into the design process, but we rarely stop to reflect on the emotional reverberations and obstacles that empathy brings with it. In our interview, Chelsea (AC4D 2014) speaks about those challenges in designing for and with the queer community in Austin, a group that she is a part of and strongly supports. Chelsea is well known in the AC4D community for her abilities as an illustrator, designer, and patient teacher, purposefully conveying emotion and meaning through sketching.

How did you find out about AC4D?

In 2008, I was working as a graphic design intern at a financial company called MPower Labs. My boss was Suzi Sosa, who is Jon Kolko’s friend and one of the investors in AC4D; she mentioned it offhand. Right after I had gotten out of undergraduate I looked at it again and I felt like I wasn’t 100% ready for graduate school. Around the same time, I applied to the Center for Cartoon Studies (CCS), an exclusive graduate school for professional cartoonists, and they accepted me. I remember thinking, well, I could avoid this shitty economy and go to graduate school in Vermont for cartooning, but there was something in me that said, “no, you should stay in Austin and see what happens.” So I declined to go to CCS, and ended up staying here in Austin and working on my professional career.

Fast forward to 2012, four years later. Alex Wykoff was my coworker at a tech company, and he wanted to be a designer. He told me about this school called the Austin Center for Design and said we should apply together. I felt like the first time was just a coincidence; the second time was a sign. When the universe knocks at your door twice, you go with it. So I decided to apply with Alex. I was excited that we both got in, and we were both nervous about it, but I was very happy to be able to work with him throughout the AC4D experience.

What got you to say definitively you were attending?

I went to the Austin Center for Design bootcamp and realized that there was so much more utility and purpose to design than I previously thought (shameless plug: one’s coming up!). I had been looking at other schools like RISD and SCAD, and the work I saw people producing was really pretty, and technically very impressive, but a lot of it didn’t have much feeling or emotion behind it. Or even purpose, really. I would see student projects where people would design these beautiful 3D-printed bikes, but they weren’t meant for people to ride. They weren’t meant for consumption; they were meant to be art pieces. While I think that’s nice, I’m very utilitarian in professional design. If it doesn’t function as it should but looks nice, it’s useful for beauty but not much else. When I spoke with Jon, and then saw the types of people that Austin Center for Design attracted, I realized that this was the right decision for me instead of any of the other schools. I knew that, A, the coursework was going to be difficult, and I like to seek out challenges; and B, the social entrepreneurship piece was a piece I really cared about. It wasn’t about making art pieces that go into the world and are forgotten. It was about making an impact and making a difference.

Tell me about your year at AC4D.

It was hard! (laughs) One of the things that struck me about AC4D is that the curriculum is really at the same pacing and difficulty level as Carnegie Mellon University’s (where I did my undergraduate degree). This meant there were high expectations set for us that I genuinely wanted to meet. Q1 was okay; Q2 was very difficult. Alex and I were embedded in the trans and queer community of Austin, and when you’re part of the LGBTQ community and also talking with members of that community about their needs, it’s really hard not to feel like you want to take action right then and there. There were a lot of emotions I felt around the research that were just difficult to process as a queer person myself.

In Q2, I went to the Transgender Day of Remembrance (November 20th). The Transgender Day of Remembrance is an event held for the community to remember the lives of the transgender victims of homicide within that year and to communally grieve. So throughout all of our interaction with the trans and queer community, the underlying ribbon we found is that somebody usually knows someone who has died whether through murder or through suicide. It’s tragic, and it’s awful, but that is the day to day of our community. As I was listening to the many names called out of lives lost that year, I started sobbing. Watching one person grieve is harrowing, but watching an entire community grieve, and feel connected to people you haven’t even met, is something that is completely, deeply, and soulfully impactful. At the time we dove really deep into the research but it struck a very deep, dear, personal chord with me. I previously considered myself a member of the queer community but through that research, I realized that I hadn’t been a good ally of the trans community like I previously thought. I didn’t have enough information or empathy to properly support our trans community then. Now, I feel much more confident in being an ally, but I’m learning new things every day. It changed my worldview and allowed me to become a more supportive person. That’s how much the research can change you.

Q3 was really difficult, and Q4 was the most difficult. It’s challenging to take your insights from research and turn them into something with which people can empathize, but it is often extremely challenging, when you’re doing something socially impactful, to take that and put it into an idea that’s going to make money. Because sometimes you wonder if you’re doing the right thing for the community; there’s a societal belief, I think, that if you’re making money and benefiting off a community, that makes you a bad person. So I had a lot of conflicting feelings throughout that entire time. I still believe in social entrepreneurship, but especially with the income disparity of the queer community, I don’t know how that would work. I wouldn’t want to charge an already financially burdened community to use something that might help them find friends. So in hindsight, I had expected the rigor of the classwork, but I did not expect the emotional rigor that I would go through with my work at AC4D.

How did this empathetic research end up inspiring your ultimate project idea?

We started with our focus being simply the trans and gender-variant community of Austin. We broadened it slightly to the queer community of Austin, which is slightly different from the lesbian and gay community of Austin. The queer community tends to include bisexuals, folks who are trans, gender queer, gender variant, asexuals: anyone who doesn’t exactly fit the norm of heterosexual or homosexual.

We recognized that there were a lot of resources online for people in the queer community to talk together, but ultimately, a lot of people in the queer community couldn’t find other members, even in Austin. There were some pockets here and there, but it was rare to see people who had formed a pocket of community around themselves. A fair number of people we talked to felt isolated from the community, often through a series of events or the way they had chosen to live their life. There are folks out there who identify as transgender but at some point pass as another gender, and some of them feel they would rather pass as a binary gender (male or female). While that’s a choice they make for themselves, I don’t think the community should be isolating or vilifying them for making that choice, or vice versa, where “passing” as a binary gender is idolized or looked up to in a hurtful way.

So, this opportunity manifested itself somehow into a mobile application? How did your initial research translate into content or feature requirements for the interface to make it approachable and build community in a way that might be distinct from, for example, joining a Facebook group?

Queery is the name of the application. We devised it specifically for the queer community to talk about targeted subjects that were very relevant to them. These targeted subjects would include coming out and other queer related issues; if people wanted to know, for example, what shops were trans and queer-friendly. Those discussions were already happening online, but more often than not, we saw that people — the people who felt most positively in the community — were the ones who had a physical mentor there with them, saying, “Hey! Here are the shops you go to,” or “I have this doctor, or therapist, they’re very trans and queer-friendly, here let me show you to them.” Someone to sort of walk them through the steps. So, we devised Queery as a way for people to become mentors within the queer community who could be physically there for their mentees as well as a way for people to receive mentorship within the queer community.

It works like this: You are invited to queery by a friend. You create a profile and select a topic that you want to talk about and then it will select for you someone who is nearby who also wants to talk about that topic. The application will pick a coffee shop that is located between the two of you (without showing you the location of the other person) and then the two of you meet at that coffee shop. This serves as a neutral location to talk about whatever you want to talk about — that conversation topic is just a starter — but the intent is for people who have not met each other before to meet in-person in a coffee shop and get to know each other. It helps people on the outskirts of the community feel more connected to people within the community and helps us meet one another in person when we might have only met someone online. It’s a safe, secure way to meet people within the queer community that you might not have otherwise known about.

queery wireframes.

After the coffee shop meeting, the system allows people to rate each other and indicate whether it was a helpful session, whether the person was nice, etc. The ratings help moderate the online community; if, for instance, someone rude or trolling entered the community, the system would remove them. The only way to get into the queery system is similar to Gmail’s beta launch: you give an invite to another person. People who are already part of a community start inviting their friends, then their friends invite their friends; at least one person might vaguely know another, and it allows those large pockets of community to be connected together through a friend of a friend of a friend. If you have someone at the epicenter of a community, they might be connected with someone at the farther branches, and it allows them to pull that person at the edge back into the community. That, to us, was the biggest thing, and the biggest thing we found in the research: if you have a strong community and a group of people supporting you in person, then you feel like you have a good place in the world. If you struggle with depression or gender dysphoria, you are less likely to act on impulses when you have the physical presence of a community, because you are thinking about your own impact. The biggest risk to someone who is struggling or marginalized is being isolated from their community.

How did prototyping this system change the system or your understanding of the community?

One of the things I realized through prototyping is that when you present people with a binary electronic system for community, they react in unusual ways. For instance, in the pilots we did, we were able to match people up with one another for conversations, but we found out later that it was actually better if a third person introduced them. What mattered was that there was a third person there to moderate if things went sour. Because this is a high-risk community, the risk of “going sour” isn’t as mundane as having a disagreement; sometimes fights like these end in a deadly way. Prototyping surfaced even more challenges we needed to meet so that participants felt safe and comfortable.

Prototyping queery.

Through prototyping, we also realized that there is a discrepancy between people giving information to the system and their desire to get information from the system. When we talked to people about their profile questions vs. the questions they’d want to ask other people, the questions they wanted to ask were ones they themselves did not feel comfortable answering: questions like what their transitioning status is, whether they have a partner, what their sexual orientation is, etc. This isn’t information someone would be comfortable giving out to a stranger, and yet they reported that they would feel more comfortable if a stranger offered this information to them.

Prototyping also validated the benefit of queery: when people walked away from the conversations, they felt better, felt there were more people in the community who understood them, and felt connected to the Austin community. In one of our pilots, someone from the trans community and someone from the asexual community met, and the person from the trans community realized that they genuinely did not understand what asexuality was and how it functioned, because they themselves were not asexual. The person from the asexual community realized that although they considered themselves part of the queer community, they had only ever hung around people who were heterosexual-leaning or otherwise part of the majority, and they now understood what it felt like to be in the queer community.

The last thing we learned from piloting was that we shouldn’t just pair like with like, but should also make unexpected pairings, because it’s beneficial for people to understand the breadth and depth of the queer community and to get to know someone they haven’t talked to before. It helps people understand their differences while rallying around the fact that the queer community is just as diverse as its many, many members.

Let’s talk a little about post-AC4D life. Tell us about your experience at frog design and your insights reflecting back on your time at AC4D.

I currently work as an interaction designer at frog design. I began by contracting with them; they liked me and I liked them, and we decided to stick with one another, which has been a wonderful partnership. The reason I decided to pursue becoming a better designer instead of pursuing social entrepreneurship is that when I left AC4D, I genuinely felt I had to be a better designer in order to serve the queer community better. So my focus at the time, and I think my focus even now, is to hone my skills so that I become more useful to others.

I don’t think that queery would have worked in the state it was in after AC4D, because I wasn’t as embedded in the queer community as I am today. I am now a regular member of several queer community meet-ups and work with an internal group at frog that promotes diversity and inclusion. I realize, in my specific case, that in order to be helpful and beneficial and really design for that community, you have to be embedded in it. In my current position within the community, I feel far more able to help people. It has also allowed me to pursue things I thought I would not be able to pursue: some of the work I am doing is designing physical spaces, and it’s very exciting to be part of the wide breadth of work we’re doing here at frog.

Chelsea leading a design workshop.

The other thing here is the people. At the very end of the day, I got into Austin Center for Design because of the people, and the reason I’m at frog is because of the people: there are a ton of incredibly talented and wonderful individuals with whom I have been able to share programs, share the stage at Interaction16, and share my work, becoming better along the way. The work we’re doing at frog is definitely targeted towards a different audience, but all of the work I’ve been doing, I feel, has the end goal of helping the queer community, even if it’s one small thing at a time. I just had a really great conversation with a good friend of mine who knows someone who is coming out, and I was so happy to provide them with advice and information to be supportive during that time. I’m honored that they think of me that way. I didn’t have that kind of support, and a lot of the people in the community we’ve talked to in Austin don’t have that support. The more I’m involved in the community, and the more people know I am involved, the more questions get asked, the more potential link-ups can happen, and the more people can become connected.

Is there anything else you’d like to share?

Yeah! One of the things AC4D taught me is that in the world of technology, and in the world of design affected by technology, things tend to be spoken of in absolutes. For instance, Snapchat uses shareable design and all of a sudden everyone wants to know how to integrate shareable design into their application; or Facebook excels at being a viral social platform and now everyone wants to bring social into their applications. All designers and researchers at some point in their career get to the point where they say, “I thought this was totally a certain thing, I thought this was black and white, but maybe I was wrong.” So my philosophy as a designer is to force yourself to look at everything in shades of gray rather than black and white, because black and white is sexy and something people are naturally drawn to, but people have a hard time finding the balance.

I think the greatest designers are the ones who achieved balance in their work: a balance of form and function. I get tired of hearing people say that one buzzword is going to be the magic bullet, because life isn’t that simple; it’s really complex. Just like the queer community, you can’t put someone in a box and expect them to conform to everything in that box. It’s not possible. Everything will bleed out of the box. We should stop putting people in boxes and allow people to be the weird, wonderful amoebas that they are.

Austin Center for Design is a not-for-profit educational institution on the East Side of Austin, Texas that exists to transform society through design.

myAT&T Re-design

-Getting into evaluation and usability engineering methods-

A brief recap of what has happened to bring us to the evaluation conversation

Last quarter we took a head-first dive into the myAT&T mobile application. We talked to people about their most painful interactions with this application and plotted our findings against each other. [Image: pain points chart] Through this ranking of difficulty, we came to understand the most common pain points. From there we went into the app itself and studied the ways those actions are executed in the current experience. It was a patchwork of an experience; there were multiple ways a simple ‘+’ button was used, to give one example. It truly seemed like developers had stitched together multiple different attributes created at different times.

We then went into sensemaking: plotting the verbiage that was particularly important to this specific experience and the six tasks we had identified. We plotted these nouns in terms of their relationships with each other. These connections needed to be thought about critically. It was not just a site map; it was connections both tangible and intangible. What was the back and forth of attributes to accomplish common tasks? [Image: noun matrix v2] After this initial sensemaking, we made concept maps, which I blogged about; you can find that here. This allowed us to map the connections within the app in a way that imprinted them into our minds. This activity is exponentially important so that, moving forward with creation, ideas, and redesign, we could quickly reference the important details and how they connected to other key components to enable action. After fully understanding the ins and outs, we consciously tried to strip away all constructs of what the experience currently looks like. Divergent thinking. Push your mind into new territory. What would simplify? How could this action be streamlined? How can we innovate? In this phase, we allowed ourselves to be unrealistic: no constraints. Translating this expansion back into the applicable context is difficult, but needed, to be effective. Here is the blog post I wrote walking you through that portion of the process, from divergent thinking into initial paper prototype sketches.

Bringing this comprehension into digital space for the first time, I felt confident. I was quick to learn that I did not yet have the capability to see all of the details. There are things one cannot be aware of until their attention is explicitly pointed towards the details that are important. And often the most important details are the hardest ones to see.

Evaluation Explanation

Which brings us to evaluation and product management. In this class, we are learning how to evaluate the learnability, usability, and, essentially, effectiveness of a digital interface. The methodologies we are learning offer the best-known balance of effectiveness and low cost. The methods are called Cognitive Walkthrough, Heuristic Evaluation, and Think Aloud Protocol. In combination with each other, these methods provide a holistic view into the issues that prevent ease of use for a digital interface.

Cognitive walkthrough is a method for observing and assessing the learnability of a product. This method is based on a theory about the way people analyze and solve problems. The way I have started to think about it is like the signage of a grocery store: if our user wants some Cheerios, what words are they going to look for? What other things would they naturally associate as surrounding products? It is performed by walking through the digital experience, screen by screen, and asking the following series of questions:

  1. Will the user try to achieve the right effect?
  2. Will the user notice that the correct action is available?
  3. Will the user associate the correct action with the effect the user is trying to achieve?
  4. If the correct action is performed, will the user see that progress is being made towards their goal?

From this, a designer gleans the knowledge needed to make sure that the experience invites the user into it in a way that makes sense to them.

Think Aloud Protocol and Heuristic Evaluation are for gauging the overall usability of the interface. Think aloud protocol is a process that a designer facilitates with actual users who have never seen the product before, to create an unbiased set of responses. The user is asked to run through a prototype and verbalize the actions they are taking and the thought process that gets them there. To give an example, if I were opening a door, my thought process might go something like this: “Alright, I am going through the door, I see the door handle, I know that I need to turn it, I turn it, go through it and close the door behind me.” The only words the evaluator uses are “Please keep talking,” if the user falls silent. This verbiage is so that they don’t go into introspection; they continue to focus on the interface rather than thinking about their answer or whether it is good enough. The reason you want to avoid the introspective effect is that it activates a different part of the brain than operational, task-oriented thoughts do. The way I think about this one: it is like the grocery store signage designer observing real shoppers walking through her/his system. Is it being used the way the designer hypothesized?

Everybody’s brain works differently, in different patterns, with different values. Then, on the contrary, there are those things that make sense to everyone, that everyone can relate to, understand, and abide by. Which leads us into Heuristic Evaluation. A metaphor for viewing a digital product through this lens: it’s like the rules of the road. The heuristics of the road. You have your different kinds of lines (double solid yellow, dashed, etc.) that mean different things. There are guidelines for what to do and what not to do. Similarly, there is a common language, and a common list of best practices, for mobile applications. Do’s and don’ts. You can build the most gorgeous car ever, but if your product is the width of two lanes of road, it will not be effective and will never be used. The same goes for heuristics in mobile applications. The heuristics to hold a creation up against are as follows:

  1. Visibility of system status
  2. Match between system and the real world
  3. User control and freedom
  4. Consistency and standards
  5. Error prevention
  6. Recognition rather than recall
  7. Flexibility and efficiency of use
  8. Aesthetic and minimalist design
  9. Help users recognize, diagnose and recover from errors
  10. Help and documentation

These are immensely useful for understanding applications in general and the common language behind the creation of any digital product. These are the kind of details, again, that are the most important. So important, in fact, that when they are well executed, the user doesn't even notice they are something someone had to think about.

The utilization of evaluation

As explained, the three kinds of evaluation are valuable for different reasons. I used them in a particular sequence that I found to be very effective.

During last quarter, we did multiple iterations of our screens, and between the last few iterations we used Think Aloud Protocol. This quarter, I knew that I had some screens missing. For example, when something needed to be typed in with a keyboard, I had the keyboard appear fully expanded, without a screen showing the tapping action that precedes the keyboard appearing. The screen on the far left, below, is the iteration before evaluation. I then added the middle screen to precede the screen on the right, which mitigates the keyboard issue and also protects users' passwords; I realized that showing a user's password from the get-go was a major security issue.

Screen Shot 2017-01-19 at 9.11.04 PM
Screen Shot 2017-01-19 at 9.10.04 PM

So, for this reason, and knowing that Cognitive Walkthrough is for checking the learnability of a product, I used this method to find the places in the experience where I needed to add less glamorous, connective screens such as the keyboard example. This worked well because it opened my eyes to other details that felt abrupt: the things I hadn't prepared the user for.

After adding these screens and noting learnability issues, I moved on to Heuristic Evaluation. I had five evaluators look at my screens and log their feedback in a spreadsheet. As I got comfortable with this evaluation, I no longer had to reference the list of heuristics; I would just start to see the issues and why they manifested that way. I think learning these methods alongside actually building an app is such a valuable cadence for learning because the lessons apply on both sides of the equation: evaluation and creation. I am excited to rebuild the rest of my screens for this AT&T redesign, and even more excited to begin creating my own application from scratch with all of these "rules of the road" ingrained in the way I look at interfaces now.

Lastly, I did a few Think Aloud Protocols and saw that in a lot of ways users were doing what I thought they would do, and in a lot of ways they weren't. Work to be done! Improvements to be made!

Breaking down evaluation findings

I’m going to start with a high-level description of the synthesized takeaways and then break them down in terms of where they came from and how they inform the redesign.

Takeaways:

The experience felt discombobulated. There is little to no feeling of continuity. The user always feels slightly unsure if they’re doing it right.

Attributes and/or features are not effectively communicating their purpose. The design does not provide clarity.

Visibility, control and freedom are huge overarching issues. Each screen is separate with separate actions and it is relatively arduous to go back a couple steps to change something.

Hierarchy and priority are not clearly visualized. Everything feels the same. The design does not draw the users’ attention to the next step.

From Cognitive Walkthrough I learned a lot about wording. Phrasing is critical to making the right actions findable. This also ties in with difficulties in navigating: navigation flaws are very often linked to the wording used to inform movement. Navigation was also inhibited by screen length, mostly in regards to where screens were getting cut off. I did not think about screen length or scrolling at all in my first phases of design, so this was a huge lesson. And lastly, Cognitive Walkthrough was an excellent method for finding screens that were missing, as I said before.

From the Heuristic Evaluations I began to see the gestalt of failure in the overall visibility of the app, and of where users were within tasks with multiple steps. I found that my users often did not have full control of their experience. There were actions and buttons that were the end-all, be-all of a task; the user did not have full freedom, and I was mandating that they fit into my 'template' of actions. And lastly, from Heuristic Evaluation I more fully understood the importance of users being able to recognize something as opposed to recalling it from previous usage. This is especially significant in the context of the AT&T app because I assume users don't interact with the experience more than once per month. Complement this with the fact that users probably want to get in and out fairly quickly, and recognition should be highly valued.

Lastly, Think Aloud Protocol made me understand more thoroughly the power and importance of word choice. What is applicable to the greatest number of people? I approached the initial creation of the application with a very minimalistic mindset. I felt, and to a large degree still feel, that in this day and age simplicity is hugely important. However, when it comes to sensitive information, money, or actions that are difficult to reverse, customers are very tentative and protective of their space. Therefore, enough information needs to be handed to them that they feel they have full comprehension of what is happening. And sometimes, in the right circumstances, that is a short text block. The combination of explanation and action is important.

Redesign based on evaluation

I will illustrate the redesigns I made in the context of a flow of screens for the action of 'Upgrading a device'. I chose this flow because it housed all of the issues identified above in some capacity.

First let me break down some of the changed attributes…

I changed the attribute that compares the data used by individual users on the account to the amount of data the account has as a whole. This change was informed by users misunderstanding the comparison in its initial, boxier, vertical format, which didn't match the other ways that timelines and data were visualized.

This first visual is how the billing cycle and data are displayed:

Screen Shot 2017-01-18 at 6.26.55 PM

Take that into account when looking at how individual users' data usage is compared. The formatting is utterly different…

Screen Shot 2017-01-18 at 6.26.21 PM

Which brought me to the following design:

Screen Shot 2017-01-18 at 6.27.53 PM

I changed the boxy attribute to be the same shape as the other barometers, to ease the mental digestion of the information.

In terms of wording, I changed quite a few things. To give a few examples, I changed:

Screen Shot 2017-01-18 at 6.24.43 PM

To

Screen Shot 2017-01-18 at 6.24.28 PM

Because I learned, in Think Aloud Protocol, that this could be perceived as a positive amount of money rather than an amount that needed to be paid.

Another wording change was from ‘Devices’ to ‘Devices & Users’ on the tab bar.

Screen Shot 2017-01-18 at 6.23.50 PM

To

Screen Shot 2017-01-18 at 6.24.08 PM

This was significant because people were going to the "Plans and Usage" tab for actions regarding individual users' data, rather than to the 'Devices' tab. Which is fine, but not intended, and in one circumstance it was detrimental to achieving the desired action.

To give you a full sense of the changes I made and how they manifest, here are my old and revised flows back to back.

First is the old version of ‘Upgrading a device’:

Upgrade a device 1 - original
Upgrade a device 2 - original

Then here is the redesign:

Upgrade a device 1 - redesign
Upgrade a device 2 - redesign

The umbrella goal of the revision is to make backtracking easy. Buying a phone is a complicated transaction with a lot of steps and a lot of details, and it is satisfying to be able to see all of the things you have chosen while you are making your next choice.

To summarize my takeaways, this is how I will be thinking about these methods of evaluation going forward.

Cognitive walkthrough:

What invites users into the experience in a way that makes sense to them?

 

Think aloud protocol:

What doesn’t make sense at all?

 

Heuristic Evaluation:

Structure and guidelines – what are people used to?

 

In combination, these provide clarity and insight into how to make products that people love to use.

You can see my full presentation AT&T Evaluation Deck here.

Evaluating my myAT&T Mobile Redesign

In previous blog posts I have detailed how the class of 2017 has been tasked with redesigning the current myAT&T experience. We conducted research to identify the top pain points and features we were to incorporate into our designs, and built concept maps to gain a high-level understanding of the space. We then applied divergent thinking to focus on the experience without being distracted by granular details. Next we started manifesting our ideas by sketching out screens to get them into a more tangible form, then increased the fidelity by digitizing our sketches into wireframes.
The next step in the process is to perform evaluation through three primary methodologies: think aloud protocol, cognitive walkthrough, and heuristic evaluation. After spending so much time in the designs it's easy to become tunnel-visioned and miss opportunities to make the designed experience better for the end user. This potentially limiting viewpoint is exactly what makes the evaluative methodologies above necessary.
The first methodology I conducted is "think aloud protocol," which evaluates the usability of my designs by encouraging a user to think out loud as they complete specific tasks with the product or service. The intention is to understand how users navigate my designs and, more importantly, the thoughts behind what the user is doing and why. There may be resistance to conducting this exercise due to the assumption that verbalizing internal thoughts will interrupt the task at hand. Experiments conducted to validate this methodology found that there is no effect on thought sequences, as long as there is no introspection. This is why the only thing a facilitator of this exercise can say is, "please keep talking." It avoids prompting any introspection that wouldn't normally exist within the user.

I created an InVision prototype and recorded the user's comments, reactions, and screen activity.

Justin
Overall, I conducted two rounds of think aloud protocol: one round with five users, and one round with three users. Between the two rounds I iterated on my designs, applying feedback, then conducted the second round to validate whether or not my new designs were successful. New opportunities presented themselves during the second round as well.
Next, a "cognitive walkthrough" was performed. Cognitive walkthrough is a methodology for evaluating the learnability of a product, based on a theory of problem solving in unfamiliar situations. Think of it as examining the designs through the mentality of a user who has never seen the product before. Although this was not done with actual users, it helps frame the mind to look at the system from the point of view of creating something that doesn't require any training to use. A cognitive walkthrough is performed by focusing on the primary tasks a user would want or need to perform within the app. These tasks include:

1. Compare your current usage to other data plans, then change your data plan accordingly.
2. Upgrade your current device to the new iPhone 7.
3. Suspend your phone because it was stolen while you were on vacation.
4. Process an insurance claim because you cracked your screen.
5. Make a payment on your current wireless bill and enroll in auto-pay.
6. Update your account’s password.

While examining each task screen by screen, I asked four questions to arrive at opportunities to improve the learnability of the design. The questions include:

1. Will the user try to achieve the right effect?
2. Will the user notice that the correct action is available?
3. Will the user associate the correct action with the effect that the user is trying to achieve?
4. If the correct action is performed, will the user see that progress is being made towards a goal?

If any of these questions reveals a problem, I log it in a spreadsheet with a unique identifier, a unique screen ID (labeled prior to examination), a problem description, severity (1-5, 5 is high), frequency (1-5, 5 is common), and a proposed solution. This makes it easy to tie specific problems to specific screens and to prioritize the opportunities that would have the most significant impact on the overall experience.
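
For illustration, here is a minimal sketch of that log as a small data structure. The example entries are hypothetical (loosely paraphrasing findings discussed later in this post), and ranking by severity times frequency is an assumed convention, not something the method mandates:

```python
# A minimal sketch (my own illustration) of the walkthrough log described
# above. Column names mirror the spreadsheet; severity x frequency is one
# common way to rank findings by impact.
from dataclasses import dataclass

@dataclass
class Finding:
    finding_id: str        # unique identifier
    screen_id: str         # unique screen ID, labeled prior to examination
    problem: str           # problem description
    severity: int          # 1-5, 5 is high
    frequency: int         # 1-5, 5 is common
    proposed_solution: str

    @property
    def priority(self) -> int:
        # Severe, common problems float to the top.
        return self.severity * self.frequency

# Hypothetical entries, loosely based on findings discussed in this post.
log = [
    Finding("CW-01", "S-04", "No device-management options on the account tab",
            4, 3, "Add a 'manage devices' option that links to the devices tab"),
    Finding("CW-02", "S-09", "'View device details' reads as hardware specs",
            3, 4, "Consolidate the two device screens into one"),
]

for f in sorted(log, key=lambda f: f.priority, reverse=True):
    print(f"{f.finding_id} ({f.screen_id}) priority={f.priority}: {f.problem}")
```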

Cognitive Walkthrough Thumbnail

 

The last methodology used during this process is called "heuristic evaluation." The purpose of this exercise is to compare an interface to an established list of heuristics, or best practices, to identify usability problems. These 10 heuristics were identified by Jakob Nielsen, in collaboration with Rolf Molich, in 1990, and they are generally regarded as good principles to follow. Even though they were established over 25 years ago, they still hold true against today's usability standards. Heuristic evaluation is arguably more detail-oriented, as you identify each individual control on every screen and compare it to these heuristics, which include:

1. Visibility of system status
2. Match between system and the real world
3. User control and freedom
4. Consistency and standards
5. Error prevention
6. Recognition rather than recall
7. Flexibility and efficiency of use
8. Aesthetic and minimalist design
9. Help users recognize, diagnose, and recover from errors
10. Help and documentation

To capture these areas of opportunity, a spreadsheet very similar to the one used in cognitive walkthrough is used. The difference is an extra column capturing which of the 10 heuristics a specific control violates, making it easy to refer back to. The spreadsheet serves the same purpose of prioritizing effort where it would have the largest impact throughout the system.
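
A short sketch of what that extra column buys you: tagging each finding with the heuristic it violates makes it trivial to count which heuristics fail most often across the system. The data below is hypothetical:

```python
# Hypothetical heuristic-evaluation log entries: (screen ID, heuristic violated).
from collections import Counter

heuristic_log = [
    ("S-02", "Visibility of system status"),
    ("S-05", "Consistency and standards"),
    ("S-07", "Consistency and standards"),
    ("S-07", "Recognition rather than recall"),
]

# Which heuristics fail most often? A quick signal for systemic problems.
counts = Counter(heuristic for _, heuristic in heuristic_log)
for heuristic, n in counts.most_common():
    print(f"{heuristic}: {n} violation(s)")
```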

Heuristic Evaluation Thumbnail

 

After the three evaluation methods were conducted, I took the culmination of my findings and decided which changes could provide a significant improvement to the overall experience. Although there was plenty of room for improvement, I chose the following three opportunities.
The first improvement I decided to make was by far the simplest and easiest to implement, but it still has very beneficial implications. When users were asked to both suspend a device and submit an insurance claim, their first tendency was to navigate to the account tab. When there were no options for device management, they paused, confused about what to do next. Eventually they realized that tapping the devices tab would give them the necessary options. To solve for this I simply added a 'manage devices' option within the account tab, which redirects the user to the devices tab.

First Improvement

The second improvement I decided to take advantage of is also in the devices section. Before I conducted evaluation, I placed a CTA (call to action, or button) below the currently selected device which would drill down to more device options. There are a couple of opportunities here. For one, the device information at the top of the second screen repeats information the user was already exposed to and doesn't add any extra value. In addition, the phrasing "view device details" could be misinterpreted to mean device hardware information. To solve both issues I consolidated the two screens into a single screen, which eliminates the confusing phrasing and immediately provides the options the user may want.

Second Improvement

The third opportunity I decided to work through was the visualization of data usage for the month. There were a few opportunities within this space as well. One was simplifying the current billing cycle information and the data usage information into a single chart; users were having a difficult time processing the way I displayed the information within a bar. Comparison was another problem: comparing your current usage to the available plans wasn't as clear as I'd like it to be, and users had to take time to process what was on the screen rather than simply scanning it. Lastly, I noticed that users were trying to tap on each individual data plan bar to update their data plan, so I added a button for each plan. I think there is still opportunity to make this specific screen even more user-friendly, which I will continue to explore.

Third Improvement

Moving forward I will continue to work on these key opportunities, but also on a number of smaller details that will make the overall experience more user-friendly. As a class, we will also be meeting with a couple of professional developers to gain better insight into the feasibility of our ideas and to help create a plan of prioritization.

If you would like to see my presentation, you may view it here.

myAT&T Mobile App Redesign: Evaluation

Last quarter, we went through a process of redesigning the myAT&T app. Although smartphones have become more and more important to our everyday lives, actually managing your mobile phone account has remained frustrating and confusing.

The Design Process

Before we started designing, we sought to understand the key purpose of the app. To do so, we surveyed about 20 people to assess what was most important to them when managing their account.

We found that the user wants to:

  • feel like she has the right data plan for her needs, that she doesn’t have too much data (so she’s paying for data that she’s not even using), or too little (so she gets hit with data overage fees);
  • be able to easily upgrade her phone; and
  • never miss a bill payment.

Given this understanding, we decided to design for the following flows: 1) managing data, 2) changing your plan, 3) upgrading your phone, 4) paying your bill, 5) setting up autopay, and 6) managing the security settings of your account (e.g., changing your password or updating your email address).

Evaluation

Once we completed our designs, the next step was to evaluate our work. This process included three standard evaluations:

  1. The Cognitive Walkthrough
  2. Heuristic Evaluation
  3. Think Aloud Testing

The Cognitive Walkthrough

Developed in the 1990s, the Cognitive Walkthrough is a method for evaluating the learnability of a product, based on a theory of problem solving in unfamiliar situations.

When confronted with a new situation, people solve problems through exploration and progressive refinement of their behavior, generally with the following steps (sketched in code after the list):

  1. We determine what controls are available for us to take action.
  2. We select the control that makes the most sense towards incremental progress.
  3. We perform the action using the control.
  4. We evaluate the results of our action to see if we have made positive, negative, or neutral progress.
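
As a toy illustration of that loop (my own framing, using a hypothetical navigation map and crude label matching), the user can be modeled as greedily tapping whichever visible control looks most like their goal:

```python
# A toy model of the explore-and-refine loop: at each screen the user
# (1) sees the available controls, (2) picks the label that best matches
# the goal in their head, (3) taps it, and (4) keeps going until nothing
# further looks actionable. All names here are hypothetical.
def explore(screens, start, goal_words):
    screen, path = start, [start]
    while screens.get(screen):                 # 1. any controls available?
        controls = screens[screen]
        # 2. crude stand-in for "makes the most sense": substring matching
        choice = max(controls, key=lambda c: sum(w in c.lower() for w in goal_words))
        screen = choice                        # 3. perform the action (tap)
        path.append(screen)                    # 4. evaluate and continue
    return path

# Hypothetical navigation map: screen -> labels of its tappable controls.
screens = {
    "Home": ["Plans & Usage", "Devices", "Billing & Payments"],
    "Billing & Payments": ["Make a payment", "Set up autopay"],
    "Make a payment": [],
}
print(explore(screens, "Home", {"payment"}))
# -> ['Home', 'Billing & Payments', 'Make a payment']
```

The point of the caricature is question 3 below: if a control's label shares no language with the user's goal, this kind of greedy matching walks right past it.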

The evaluation is based on these steps, requiring the evaluator to ask the following questions:

  1. Will the user try to achieve the right effect? The user of the system has an end goal in mind, but needs to accomplish various tasks to complete it. Will they even know to perform the specific steps along the way?
  2. Will the user notice that the correct action is available? Is the option visible and on the screen, or at least in a place the user will likely look?
  3. Will the user associate the correct action with the effect that the user is trying to achieve? Is a label or icon worded in a way that the user expects?
  4. If the correct action is performed, will the user see that progress is being made towards their goal? Is there any feedback showing that the user selected the right option or is making forward momentum?

Heuristic Evaluation

The Heuristic Evaluation involves comparing an interface to an established list of heuristics – best practices – to identify usability problems.

This evaluation was established by Jakob Nielsen in the 1990s. Although technology has transformed dramatically since then, these heuristics are based on human behavior, and they still apply today.

They include:

  1. Visibility of system status. The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
  2. Match between system and the real world. The system should speak the users’ language, with words, phrases, and concepts familiar to the user, rather than system-oriented terms.
  3. User control and freedom. Users often choose system functions by mistake and will need a clearly marked ‘emergency exit’ to leave the unwanted state without having to go through an extended dialogue. Essentially, the design should support undo and redo.
  4. Consistency and standards. Users should not have to wonder whether different words, situations, or actions mean the same thing. The design should follow software/hardware platform conventions.
  5. Error prevention. Even better than good error messages is a careful design which prevents a problem from occurring in the first place.
  6. Recognition rather than recall. Make objects, actions and options visible. The user should not have to remember information from one part of the dialogue to another.
  7. Flexibility and efficiency of use. Accelerators – unseen by the novice user – may often speed up the interaction for the expert user to such an extent that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
  8. Aesthetic and minimalist design. Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
  9. Help users recognize, diagnose and recover from errors. Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
  10. Help and documentation. Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large.

Think Aloud Testing

Think Aloud Testing evaluates the usability of your work by encouraging a user to think out loud as they use your product or service.

It is important that the user does not reflect on why they do what they do, but simply talk aloud as they do it. Reflection uses a different part of the brain, and any introspection would not speak to the intuitive measure of the design.

As the user tests the product, the designer pays particular attention to any communication of surprise, frustration, or design suggestions; she also notes, of course, if the user is unable to complete the task with ease or at all.

Benefits of using all three evaluations

While all three tests stand on their own, there is great benefit to using all three together. For example:

  • Think Aloud participants may have trouble articulating why something was confusing or frustrating; Heuristic Evaluation and Cognitive Walkthrough can provide hypotheses
  • Heuristic Evaluation and Cognitive Walkthrough can help the designer know what to look for during Think Aloud
  • While Cognitive Walkthrough, moving view to view, may assess a flow as seamless, Heuristic Evaluation and Think Aloud help the designer see the design as a whole: although something may theoretically work within one flow or screen, it may confuse the user if it is not consistent with other flows or accepted practices

Key Findings

By taking my app redesigns through these evaluations, I found the following key issues:

  1. Design did not always give a clear sense of place
  2. Intention of screen was sometimes convoluted
  3. Design did not help prioritize action

Design did not always give a clear sense of place

A glaring example of how the original designs did not give a sense of place was in the flow from “Home” to “Set up bill payment.”

MyAT&T Evaluations 2017-01-18.002

When the user taps on the "Set up bill payment" button, not only does a modal pop up, but they are also immediately taken to the Billing tab. This is a violation of the heuristic principle of Consistency & Standards. Typically, when a modal pops up, the rest of the system stays in the same place. However, in this case the app takes the user to a completely different place in the system, and there is no clear sense of what would happen if the user tapped the "x" on the modal — would she be taken to the Billing home screen, or back to the Home screen?

MyAT&T Evaluations 2017-01-18.003

Based on this evaluation, I changed the flow to go from the Home screen, to Billing, to Set up Bill Payment; and I removed the Set up Bill Payment flow from the modal.

MyAT&T Evaluations 2017-01-18.004 MyAT&T Evaluations 2017-01-18.005 MyAT&T Evaluations 2017-01-18.006

 

Intention of screen was sometimes convoluted

The original Home screen provided a strong example of a convoluted design. Most important to the user is that she isn’t paying too much for data. In the original Home screen, there is an alert that the user is at risk for exceeding her data limit, but, because the data and billing cycle are separate on the screen, it’s hard to gauge the seriousness of the risk.

MyAT&T Evaluations 2017-01-18.008

To address this issue, I merged the data and billing cycle into one visual.

MyAT&T Evaluations 2017-01-18.009

 

Design did not help prioritize action

Throughout the design, we found many examples of no clear system prioritization — for many of the flows, the user would need to think carefully about each choice, as opposed to being guided by the system.

For example, if the user wanted to manage her data, the system gave her five different options, with no explanation of what each option might provide, nor any prioritization of what might be the best choice.

MyAT&T Evaluations 2017-01-18.014

MyAT&T Evaluations 2017-01-18.012

To address these issues, I updated the flow to only include three options, with an explanation for each.

MyAT&T Evaluations 2017-01-18.015

 

To see a full list of findings for each evaluation, please see links below.

myAT&T Evaluation 2017-01-24 – Cognitive Walkthrough

myAT&T Evaluation 2017-01-24 – Think Aloud Protocol

myAT&T Evaluation 2017-01-24 – Heuristic Evaluation 

AT&T App User Testing

Previously, this AT&T app redesign was developed over a period of four weeks, with a few rounds of user testing along the way. The next step in the design process is more formal user testing. In broad terms, this means reviewing and testing the app through the intentions of a user, which allows a designer to build more empathy for users as they design. Over the past two weeks, three user tests were conducted and reviewed, and solutions were integrated back into the app. Below are the different methodologies used, the breakdowns they surfaced, and potential solutions for those breakdowns.

The three tests that I conducted include:

  • Cognitive Walkthrough
  • Heuristic Evaluation
  • Think-Aloud Protocol

Each of these tests highlights a different aspect of usability, allowing for the results to cover a whole array of usability issues.

The first of these tests was the Cognitive Walkthrough. This type of user test evaluates the prototype's ease of learning. More plainly stated, it evaluates where the potential breakdowns would be for a first-time user performing a standard task. This type of usability test is integral to establishing and understanding a theory of how a first-time user's first interaction with the app will go.

To execute this usability test, I printed out each screen and established my six tasks: set up autopay, make a payment, change my plan, suspend a device, change password, and upgrade a device. Then I established who my potential first-time user was: any individual who has the AT&T app and uses it for its various features. After this, I set up scenarios for myself to help empathise with the user, then ran through each flow asking myself a series of questions. These questions were:

  • Will the user try to achieve the right effect?
  • Will the user notice that the correct action is available?
  • Will the user associate the correct action with the effect they are trying to achieve?
  • If the correct action is performed, will the user see that progress is being made towards a solution to their task?

These questions help evaluate each step necessary to perform a task, and whether or not a first-time user would be able to connect each of these steps with their overarching task. After reviewing each screen, whenever there was an issue between the questions and the screen, it was logged in an Excel sheet. This sheet included several descriptive fields so that, when reviewing later, I could easily remember what needed to be changed. These fields were: Screen Identity, Problem Description, Severity, Frequency, and Proposed Solution. Below is an image of the workspace I was using to perform these reviews.
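
As a rough sketch, that log could be written as a CSV file that Excel opens directly. The field names follow the list above; the example row is a hypothetical paraphrase of the Devices finding described below:

```python
# A minimal sketch, assuming the walkthrough log lives in a CSV file.
import csv

FIELDS = ["Screen Identity", "Problem Description", "Severity",
          "Frequency", "Proposed Solution"]

rows = [{
    "Screen Identity": "Devices-01",
    "Problem Description": "Devices page lists devices but offers no next steps",
    "Severity": 4,
    "Frequency": 3,
    "Proposed Solution": "Add common device actions to the Devices page",
}]

# Write a header row plus one row per logged problem.
with open("cognitive_walkthrough_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```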

 

IMG_0100

 

From this review I found a number of issues with the learnability of the prototype, and I've come up with potential solutions for them. The main three issues I'd like to focus on are as follows:

 

Problem: My screen for Devices only included a list of devices, and in reviewing how this would help with a user's overall task, there was a disconnect: users opening the "Devices" page would not see any actionable next steps.

Devices_Action_V1-01

Solution: In order to combat this disconnect, I added a few choice actions that can be started from the “Devices” page. This allows the user to connect their task with the current screen they are viewing.

 

Devices_Action_V1-02

Problem: Within my "Change Plan" task, a customer won't immediately understand that they must tap the "Lock Plan" button to finalize their changes to the plan.

LockPlan_v1-01

Solution: To manage the customer expectations, I added a brief line of directions explaining that the customer needed to tap on “Lock Plan” once they were satisfied with the changes.

LockPlan_v1-02

 

Problem: Finally, the placement of the Autopay signup as a subsection of "Bill" seemed likely to clash with a user's pre-established notions of how apps are set up.

Profile_Autopay_V1-01

Solution: So to mitigate that potential breakdown, I added the option for Autopay to be set up under the “Profiles” screen.

Profile_Autopay_V1-02

 

This was an excellent test for helping me more fully understand how a first-time user would review the actions available to them, and whether those steps would truly help them complete a task. To access my full feedback, here is the link: Cognitive Walkthrough – Complete Feedback

 

The second type of usability testing performed was the Heuristic Evaluation. This tested the app against a list of 10 established usability principles: known attributes of well-designed products that facilitate seamless, smooth interactions between user and system. Below are the 10 principles, each with a short explanation:

  1. Visibility of system status- There needs to be some kind of communication during wait-times so the user understands the system is still working even though no significant visual aspects have changed.
  2. Match between system and the real world- By aligning a product with aspects of the real world, the system becomes far more familiar to the user on first experience.
  3. User control and freedom- This allows a user to be able to make the changes they want to their system, but also allows them the freedom to undo these changes if needed.
  4. Consistency and standards- Following industry standards to establish familiarity of the system to the user will improve the user’s overall experience.
  5. Error prevention- In order to prevent the user from making an accidental change to their account that could cause potential damage, additional messaging needs to be incorporated into the system.
  6. Recognition rather than recall- It’s encouraged to lessen the weight of memory on the user, instead have the instructions or images carry that weight as the user moves through a system.
  7. Flexibility and efficiency of use- Creating shortcuts within a system is encouraged; they let users who are more familiar with the system work faster than those who are more novice.
  8. Aesthetic and minimalist design- Keeping a screen less cluttered is generally the preferred style.
  9. Help users recognize, diagnose and recover from errors- If issues do arise on the system, there needs to be error messages that help the user pinpoint the issue, and resolve it.  
  10. Help and documentation- If users do have the need to reach out for help with the system, there needs to be a way for them to access other outlets of help too.

 

Similar to the Cognitive Walkthrough, to execute this test I reviewed each screen against each of the 10 heuristics. Problems found were also logged in an Excel sheet, though with an additional field, "Evidence, Heuristic Violated," so that I could easily recognize which heuristic was violated.

The Heuristic Evaluation can be done by a multitude of evaluators. In this case, several other students also performed heuristic evaluations on my screens. The benefit of having multiple evaluators is a higher likelihood of identifying the majority of the missed or failed heuristics within a prototype. This does have diminishing returns, though: studies have shown that after around 8 evaluators, few new usability issues are brought up.
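
That diminishing-returns curve has actually been modeled: Nielsen and Landauer estimated the share of problems found by n evaluators as 1 - (1 - L)^n, where L is the chance a single evaluator catches a given problem, averaging roughly 0.31 in their data. A quick sketch, treating that value as an assumption:

```python
# Nielsen and Landauer's model of problem discovery across evaluators.
# L is the probability one evaluator finds a given problem; 0.31 is the
# rough average from their data, assumed here for illustration.
L = 0.31
for n in range(1, 11):
    found = 1 - (1 - L) ** n
    print(f"{n} evaluator(s): ~{found:.0%} of problems found")
```

With that value, five evaluators catch roughly 85 percent of problems and eight catch about 95 percent, which is consistent with the cutoff described above.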

The top three issues I saw on my prototype are as follows:

Problem: Within the flow of the Upgrade Phone task, I didn't have a screen that allowed a customer to review the full order before finalizing it. This broke two heuristic principles more than any others: Consistency and standards, and Recognition rather than recall.

 

Solution: To resolve this, I went back into the prototype and added this screen, which includes all points needed to review an order.

ReviewOrder_V1-01

Problem: There were multiple issues around the word choices I had been using. For example, I was using language like "Passcode" instead of "Password", "Update" instead of "Upgrade", "Payment Methods" instead of "Payment", and "Billing Information" instead of "Account Info".

Solution: I reviewed how each of these words was being used and what I wanted the user to expect from these titles. Then I reviewed the best-practice words for these types of information and implemented those instead.

Billvs.Billing_V1-01

 

Problem: The final issue was one already mentioned: the "Lock Plan" button. It again was confusing for evaluators, and broke the second principle, matching the system with the real world.

Solution: Again as a resolution, I altered the screen to include instructions.

 

This test ultimately forced me to review each screen and critically analyse why different pieces were on the screen and whether they needed to stay. Now the screens do not contain unnecessary information or inconsistencies.

The heuristic evaluation was a time-consuming task, but after the initial execution it seemed to become easier. As more of these details are addressed, more of the screens and flows become more usable. The consolidated and completed feedback is located at the link: Heuristic Evaluation – Collection

 

The last and final user test I executed was the Think Aloud Protocol. This test reviews how real users react to the app. It is meant to identify where users lose comprehension of the app, or where they lose their understanding of what's happening within it. The key difference between this test and the other two is that it puts the app in front of real users and asks them to speak out loud about what they are doing as they do it. This works because studies show that when a person verbalizes their stream of consciousness, they do not disrupt their own flow of comprehension. So as designers, when an individual speaks through an interaction with an app, we can identify where any misses in comprehension arise, what their reactions are, and which practices we are doing well. It's an extremely informative tool, and requires very little in the way of cost or materials.

To perform this test, I collected my printouts of the app and a volunteer to complete the six established tasks. I reassured them that they could quit at any time and that this was really a test of the app's design, not a reflection of them. I explained the tasks to them, then began to watch them flow through my prototype. Below are the two biggest issues that came up during testing.

IMG_0105

 

Problem: The first was the placement of Autopay. Each of my Think Aloud volunteers had issues finding Autopay within the app. One individual ended up searching for nearly five minutes for this button. He illustrated his own thought process during the test: "I imagine it's under Profile, so click on Profile…now I'm thinking I don't know, not Plan, not Bill, maybe 'Billing info'?…I'd expect it'd be under Settings, which seems like Profile, and it's not there". It was the perfect opportunity for me to understand where he thought this piece of the app should be located.

 

Solution: To combat this, just as I stated previously, I moved Autopay to the Profile pages and kept its flow there.

Problem: Second, individuals had issues understanding that they needed to lock their plan after making changes to it. One individual said, "Wait, am I done? Do I need to do anything else? What is 'Lock Plan'?" Again, this helped me understand where their comprehension of my app was coming from.

Solution: Again the solution I’ve implemented is to add a short set of directions at the top.

 

This was the final user test I performed, so afterwards I began consolidating the feedback and understanding where the lack or loss of comprehension in my tests arose from. This is a process, and I know the next iteration is just around the corner.

 

After performing each of these tests, I've learned how to incorporate more of these usability principles into my designs, as well as to more completely imagine myself as a first-time user. Both of these new perspectives will help me integrate more usability into my designs earlier in the process. I've also become more comfortable administering Think Aloud Protocol tests, and I'm sure that comfort will only increase with more practice.
Currently, I'm still working on integrating all of the results back into the rest of my screens, but I feel confident that my app is far better off with just these main changes than it was before the testing was done. Below is a link to a presentation that runs parallel to this document.

 

Usability Testing