Third Time’s the Charm

Garrett and I are happy to finally have the following pieces for our product:

  1. Connect Individuals Experiencing Homelessness (IEH) with housing options: either Landlords through the Landlord Outreach Program or with Housing Programs, through our own network of organizations
  2. Save the IEH’s information and allow them to update/expand upon the saved info
  3. Save the IEH’s status on different housing options: accepted, waitlisted (with estimated wait time), or open to application

The idea behind our product is that the person answers a series of four questions covering personal information, criminal record, eviction history, and income; the system then reviews the criteria from the landlords and housing programs to find the best fit for the individual. The personal information question asks about age and gender; the other questions are yes or no. This is meant to help the individual feel the system isn’t asking too much of them. After the four sections are answered, the system reviews which landlords’ criteria the individual meets, then asks those landlords whether they’ll accept the person as a tenant. If no landlords are willing to accept their situation, the system moves on to reviewing housing programs. We chose to check landlords first because an individual is likely to be housed faster through that route than through housing programs, which typically have long waitlists. Ultimately the system is simply matching housing options and the Individual Experiencing Homelessness together.
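The matching flow described above could be sketched roughly like this. Everything here is a hypothetical illustration (the names, criteria, and data shapes are mine, not our actual system), but it captures the landlords-first, programs-second order:

```python
# Hypothetical sketch of the matching flow described above.
# Landlords are checked first; housing programs are the fallback.

def meets_criteria(person, criteria):
    """A person matches an option if every answer satisfies that option's criteria."""
    return all(person.get(key) == wanted for key, wanted in criteria.items())

def match(person, landlords, programs):
    # Landlords first: a faster path to housing than waitlisted programs.
    for landlord in landlords:
        if meets_criteria(person, landlord["criteria"]):
            return ("landlord", landlord["name"])  # still pending landlord approval
    for program in programs:
        if meets_criteria(person, program["criteria"]):
            return ("program", program["name"])
    return (None, None)

# Illustrative data only: real criteria would come from our partner organizations.
person = {"criminal_record": False, "prior_eviction": True, "has_income": True}
landlords = [{"name": "Oak St. Apts", "criteria": {"prior_eviction": False}}]
programs = [{"name": "Rapid Rehousing", "criteria": {"has_income": True}}]
print(match(person, landlords, programs))  # -> ('program', 'Rapid Rehousing')
```

Here the prior eviction rules out the landlord, so the system falls through to the housing program, mirroring the order of operations above.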

It hasn’t been easy to get to this idea; we’ve run through at least three iterations of our product in this quarter alone. Knowing that these iterations have all happened before we’ve even piloted the behavior makes me nervous. I anticipate things will go awry during the pilot, but we’ve planned each week so that we have a strategy to align to. If we stick to this strategy, we’ll be able to adapt as the pilot evolves.

It’s also been odd to plan a business around this idea: first because the idea itself hasn’t been solidified, so we’ve constantly been changing the business plan, and second because business plans and strategy are a newer subject matter for me. Now that we have a better grasp on the product, though, it will be easier to structure the business plan and iterate on both of them in tandem.

With four more weeks left in this quarter, it’s time to bring back all the tools we’ve learned this year. Relying on those will help us navigate the ambiguous space we’re in now.

The Road to Hell is Paved with Good Intentions

As we began to read the series of articles entitled “With the Best Intentions,” I found myself questioning not the motivations of the different projects and products, but where the designer’s responsibility lies. When is a designer no longer responsible for the products they create? Within this post I examine two articles’ arguments about designers’ responsibility, then come to my own conclusion.

For the purposes of understanding where responsibility ends, I’ve created a small chart illustrating four phases: Research, Design, Development, and Real World. Each of these phases is executed in any project or product. Below is the chart:




The piece by Michael Hobbes made it seem as though the effects of a product are out of the designer’s hands. Hobbes states, “when you improve something, you change it in ways you couldn’t have expected.” He sees the effects of a product as unpredictable and thus not the responsibility of the designers; he almost seems to expect these unforeseen changes to happen whenever a designer improves a product. He is rightly supported in the idea that unforeseen effects of a product begin to develop after implementation. He and the five other authors we read wrote about products and their unpredicted consequences. Below are the products these other authors examined, where the breakdowns happened, and what unexpected changes became the negative effects of each product:


“Fortune at the bottom of the pyramid: A Mirage”, Aneel Karnani

Product: Microcredits

The Breakdown: “Microcredit does not alleviate (income) poverty, but rather reduces vulnerability by smoothing consumption. A few studies have even found that microcredit has worsened poverty; poor households simply become poorer through the additional burden of debt.”

Where it occurred: “The vast majority of microcredit clients are caught in subsistence activities with no prospect of competitive advantage. The self-employed poor usually have no specialized skills and often practice multiple occupations…With low skills, little capital and no scale economies, these businesses operate in an area with low entry barriers and too much competition; they have low productivity and lead to meager earnings that cannot lift their owners out of poverty.”

The change that they couldn’t have expected: Founders of microcredit programs never expected that their recipients would lack the skills to create a more niche service or product. There was a grandiose illusion that all recipients of microcredit would be innovative business machines. In reality, not all individuals living in poverty can lift themselves out through entrepreneurial practices.


“The High Line’s Next Balancing Act”, Laura Bliss

Product: High Line Park

The Breakdown: “We wanted to do it for the neighborhood…ultimately we failed.”

Where it occurred: Residents of the High Line community said they don’t use the park because of three things, “They didn’t feel like it was built for them; they didn’t see people who looked like them using it; and they didn’t like the park’s mulch-heavy programming.”

The change that they couldn’t have expected: The designers never expected that their park (which did include community input meetings before opening) would not be enough to encourage the community to participate. The design team thought their efforts would make the community feel welcome, but they ultimately fell short. The drastic gentrification the park contributed to was also an unpredicted change caused by their designs. Ultimately, the project’s effects didn’t make the community feel welcomed, and pushed many members out.


“Sex doesn’t sell anymore, activism does. And don’t the big brands know it.” Alex Holder

The Product: Large corporations are “allocating their marketing budget to good causes”.

The Breakdown: Corporations are donating with the expectation of a return on their investment.

Where it occurred: Companies expect these returns both in the customer base as well as in “favors”. When corporations donate to organizations, they expect their customers to choose them in the future because the company practices “good business”. When corporations donate in a political capacity, they expect that the candidate or political organization will keep the company’s best interest at heart.

The change that they couldn’t have expected: Corporations didn’t, and don’t, expect their customers to see through these thinly veiled acts of generosity and recognize them as compensation and marketing, but individuals are beginning to see them as just that. As companies implement ethically and morally questionable practices within their businesses, they use their corporate social responsibility initiatives to compensate for the damage done to both society and the environment.


“Everything is Fucked and I’m Pretty Sure It’s the Internet’s Fault”, Mark Manson

Product: The Internet

The breakdown: “The internet… makes it profitable to breed distrust”

Where it occurred: “The internet… was not designed to give people the information they need. It gives people the information they want.”

The change that they couldn’t have expected: The creators of the internet could not have anticipated that their invention would be used by individuals not to enlighten themselves, but to comfort themselves. Manson argues that by supplying everyone with all the information in the world, the internet didn’t lead to heightened intelligence, but rather to distrust of information due to the overwhelming saturation of availability. This leads individuals to seek out echo chambers for news and information about the world around them, which creates comfort.


“Save Africa: The commodification of (Product) Red campaign” Cindy N. Phu

Product: (Product) Red Campaign

The Breakdown: The (Product) Red campaign hasn’t “saved Africa,” contrary to its catchphrase.

Where it occurred: The AIDS/HIV epidemic is an ongoing battle both in places where the Global Fund is active and where it is inactive.

The change that they couldn’t have expected: The peripheral effects of the (Product) Red campaign have led to a situation in which “many nonprofit organizations dedicated to the HIV/AIDS crisis in Africa have not been able to receive grants or funding through donations…because of the misconception that Africa is saved.” The organization that created the campaign did not anticipate that it would actually inhibit other humanitarian aid efforts fighting the same issues.


In each of these examples, there always seems to be some unforeseen consequence of the product that was overlooked, forgotten, or unanticipated, and that then had dire or severe consequences. Hobbes would regard these unforeseen consequences as part of life; he would not place the failure or consequences of these products on the designers. Below is the chart with Hobbes’ idea of responsibility marked in the phases of design.




By Jon Kolko’s standard, a product’s unforeseen effects or changes on an ecosystem are the responsibility of the designers themselves. Kolko writes that “we are responsible for both the positive and negative repercussions of our design decisions, and these decisions have monumental repercussions.” With this concept, Kolko would argue that the High Line’s failure to connect with the community it’s built within is a fault of the designers themselves. He would argue that nonprofits working with HIV/AIDS victims within Africa being unable to receive the aid they need is, in fact, the responsibility of those who decided the donations should be generated through a well-strategized ad campaign. Finally, he would argue that the perpetuated poverty of the individuals who now carry the additional burden of credit debt lies on the shoulders of the organizations that structured the service. As with Hobbes, below is a chart illustrating which phases of the design process Kolko believes designers are responsible for.


Kolko is right that the consequences of a product are the responsibility of the designer. The question then becomes: how do designers predict the outcome their products will have on the world, especially if those products are changing it in ways they “couldn’t have expected”?

Prediction is impossible, but how a designer reacts to the consequences of their product is changeable. Designers have a responsibility not only to the products they develop, but also to observe and counter the negative effects those products have on a community. Instead of viewing a designer’s responsibility as ending with the product’s repercussions, the designer is also responsible for continuing to iterate and to listen to the people the product has affected. In the chart below, this phase of continued responsibility is called “Continue.” There are two tools all designers should use to take on the responsibility of countering these unforeseen consequences: ethnographic research and iteration.


In order to counter the unpredictable side effects, a designer must first know what these effects are and how their design is perpetuating them. They can implement ethnographic research to do just that. This is a research method synonymous with simply “listening to customers” (Hempel). The method involves in-person interviews conducted where the work is being done. The full effect of this method is that designers gain a more empathetic understanding of users’ needs when they go back to designing. For our slew of failed products, ethnographic research can be used to understand what happened with the product and where it fell short of serving the needs of its users. An excellent example of this comes from the High Line, which has begun conducting interviews with members of the community, asking how it can better serve them now.

The second tool a designer can use is iteration. This is a process in which a sequence of operations is repeated, each time moving closer to the desired result. In design, this practice is highly emphasized: iteration is encouraged not only in final products, but within every phase of design. Applying this concept to the failed products, we should see multiple versions of each product, each moving in a direction that minimizes the problems of the previous iteration.

Once designers implement ethnographic research and iteration on the negative consequences of their product, they can build better products. These designers are now equipped to find out how those most negatively impacted by the products feel, and why, and then design products that better address those issues. This additional step of continued contact, feedback, and iteration on a product is the real responsibility of the designer.

We cannot create things, let them manifest, then leave them behind. We have a responsibility to those who are left unsatisfied to find out what happens and figure out how to prevent that from happening to anyone else.

AT&T Redesign Brief

Welcome to the final installment of the AT&T Mobile Application Redesign. For the past eight weeks we’ve been refining our designs, testing with users, and slicing through features to fit a release timeline, and now we can deliver a final strategic design brief. This blog post contains a description of the final iteration of the application, the collection of the Insights & Value Promise, the construction of the strategic roadmap, and finally the culmination of combining all the elements into the brief itself. The final version of the brief can be found at the end of this post.

The first task that needed to be tackled was another redesign of the application. The additional challenge was that this iteration needed to be completed in Sketch instead of Illustrator. In order to complete this, I first needed to get an iOS toolkit and some icons built for Sketch, and watch a few quick tutorials. After that, I was ready to go. Thankfully it wasn’t a complete redesign of my application: the flows that had been established and the features I had created were both kept. It was more that I was redesigning the layout of my application. Sketch also makes creating an application extremely simple once you have a toolkit. In total, the redesign took me significantly less time. Below is an image of the Billing page from the last iteration (left) and the most current version (right), to give you a sense of what was leveraged from the last iteration and what this final version looks like.


Once I had finished redesigning my application, I could move forward with the other concrete pieces of the design brief. The next portion I chose to complete was collecting and revising my insights and value promise. Again, I didn’t have to completely rewrite the insights or the value promise, but both of these elements were a bit rusty since they hadn’t been reviewed in a while. The final insights I decided to include within the brief were:


  1. Customers only spend an extended period of time in the application if there is an issue with their account.
  2. The lack of standardization of visual design within the application causes users to feel disjointed when navigating a flow.
  3. The diverse capabilities of the application are overshadowed by feelings of confusion and frustration.


With each of these I added an explanation containing its significance and evidence from research. These would be integral for an individual reading the design brief to understand the reasoning behind every design decision.

I also had to review the Value Promise, so that individuals reading the brief would understand the goal of my application. During class, Jon had said the Value Promise has a structure to it: the first part is a quantifiable goal for the application to strive for, and the second is the reason for it. For my AT&T application redesign: “We promise to expand customers’ knowledge and use of the application, so that they can feel a sense of control over their account and service.” The quantifiable part is a user’s knowledge, and the reason is so they have more control over their account. Each of these elements began to feel concrete and substantial enough to put into the brief.

The next piece I began to focus on was the Strategic Roadmap. This was introduced to me with the initial assignment: Jon had shown us an example of one, and explained how it’s very similar to a Product Roadmap. The key difference is that this map does not contain dates or resource allocation. Before I began my own creative exploration of this map, I did some preliminary research on what these typically look like. Once I had a better idea of the layout and structure, I began concepting my own. Below is an image of the first sketches I did before moving into Sketch.




I decided to move forward with the bottom-right design. The gradual decrease of the triangles perfectly represented the decreasing workload for the team. It also made breaking out the different phases of the release easy to represent. Once I transitioned the file into Sketch, I kept messing around with the various elements: the capabilities’ locations, the colors, the shape of the triangles. I never felt it was communicating its intention. I decided to show it to Sally and Conner, who pressed me with questions about the design. This conversation, though short, gave me insight into which elements the design wasn’t communicating properly. For example, Conner asked about the color choice, and after explaining the reasoning, I understood that for him the current color options weren’t conveying the idea of continuing development. So I went back to Sketch with their questions and feedback fresh in my mind and tweaked the map to be more in line with what I wanted the audience to understand. Below is a PDF of the final image that is included in my design brief.




Since we had already defined the features and controls when meeting with our developers, I knew I didn’t need to redefine them for the feature brief. Instead I now needed to build out my actual brief. Knowing from the beginning that the brief would be printed out, it was recommended I build it in InDesign; this way I could control the layout and size of different elements more easily. Jon gave Elijah and me a quick tutorial as a starting point, and then I began my brief. I was finally ready to begin compiling all the elements into one document. As I went along adding the Insights and Value Promise, I realized the transition was a bit rocky. To provide a smoother transition, I decided to add Design Ambitions, which I considered to be large overarching aspirations that support the Value Promise but also parallel the insights. These gave a direct connection to the Insights and helped support the Value Promise.

Once I had completed the Overview, Insights, Ambitions, Value Promise, and Strategic Roadmap, the Capabilities came next. This section contained each of the different features with a description, so that anyone can see the feature and its value for the application. To help prompt my description of each feature, I posed the question, “what does this do and what does the user get out of it?” I also tried to tie each feature and its abilities back to the North Star of user control, so that the reason for the feature is clearly defined.

I ended up doing three more iterations of the full book. Throughout each iteration I would print out a few of the sheets to decide on a layout. Below is an image of two different layouts I tried while reviewing; I ended up going with the option on the right.




The review process, both editing written content and reviewing layouts, helped me narrow down and polish the final design brief. I especially found it useful to flip through the final PDF of the brief to catch any misplaced text or images.

Ultimately the process of combining information and polishing a brief was arduous, but the effect of holding a printed piece of my own efforts was rewarding. Below is a link to the final PDF Design Strategy Brief.
AT&T, Design Strategy Brief

My AT&T Redesign: Roadmap

Since last week’s update, the AT&T application redesign has continued to work towards the goal of release. Previously I had presented our newly updated hero flows for all screens within the application and asked a software developer for an estimate of the time it would take to create each feature. Once I met with the software developer, I understood that the application would take around 42 days to complete. This estimate is contingent on having two designers working on the application 40 hours a week. Now, though, there is an additional constraint: I must release a version of the application within the next four weeks, or 20 working days, that still delivers on the value promise previously established. Within this post I explain the review process the application went through to produce a version that contained limited features but still delivered on the value. I also explain the process of roadmapping, a technique used to schedule the development of the application’s different features across its phased releases.

Before any of the review process occurred, I reminded myself of the value promise I had originally established. The value promise is what I wanted to deliver to the user, and it is derived from direct quotes and insights made through research. For AT&T I had two pieces that grounded my value promise. One individual said the current application “could almost just be a widget, all it needs to do is allow me to pay my monthly bill and view my data usage”. This sentiment helped me understand that users don’t find value in the frivolous additional actions the current application offers. It also alluded to the fact that users want to have control over their service. The most important aspects of a user’s service seem to be the ability to pay their own bill and to review their data. In being able to view their own data, a user feels more in control to prevent any overages. Additionally, many of the users I spoke with agreed that the current application “does way too much”. From this and other similar sentiments, it can be concluded that users don’t want a Swiss Army knife application, but one that is deliberate in its capabilities. After synthesizing users’ quotes, I concluded that the AT&T application redesign should deliver an application that is less capable but more intentional in its operations, and that gives the user a sense of control on sensitive matters.

Delivering on the value promise was only one constraint; I also had to work out a way to release my application within 20 working days. This meant I needed to review which essential features of my application could be created in those 20 working days and released. These were the two established constraints I was working within.

I began reviewing all the features and flows alongside their estimates. In order to prevent myself from duplicating times, I cut out each screen and physically mapped out how the application would work. Then I began to quantify the total cost of each screen, and added some preliminary notes on what the application needed to do in the first release. I organized my work based on the sections of the application: Login, Billing, Usage, Devices, and Account. Below are two images of these physical maps with my additional comments and notes on the various screens.

Again, mapping out these different sections helped make sure I didn’t duplicate estimated times for feature development. It also helped me separate the different flows by which phase they would be published in. Finally, the method helped me see where various features overlapped. Knowing which features needed to be put in the same release helped me make sure they were placed in a release order that made sense.

Once I had figured out the cost of each feature and its place within my application, I began to review which features I wanted in the first release of the application. I’ll refer to this release as Phase I. Again, the constraint tied to this release was that it could only take 20 working days. The other constraint I was working with was the value promise. I took these constraints and reviewed my application to see which essential actions were needed to deliver the value promise and could be developed in the allotted time period.

I knew from the beginning the Phase I release needed to have the Login and Home features; these were essential since they’re necessary for opening the application. Based on the assumption that the application needed to give users control over their monthly bill and their data, I decided those two flows needed to be added to the Phase I release as well. Now the tricky part was that the flows included in Phase I (Login, Home screen, Pay bill, View data) already required a number of additional flows to build out the essential flows. These additional flows included managing payment methods, changing a password, and changing a username. Each of these flows was needed in Phase I for the essential flows to be functional.
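Working out which supporting flows get pulled in by the essential ones amounts to taking a dependency closure. Here is a small sketch of that idea; the flow names and the dependency map are illustrative stand-ins, not my actual screen inventory:

```python
# Hypothetical map of which supporting flows each flow needs to function.
needs = {
    "Login": ["Change password", "Change username"],
    "Pay bill": ["Manage payment methods"],
    "Home": [],
    "View data": [],
}

def closure(essential, needs):
    """Collect the essential flows plus every supporting flow they require."""
    included, stack = set(), list(essential)
    while stack:
        flow = stack.pop()
        if flow not in included:
            included.add(flow)
            stack.extend(needs.get(flow, []))
    return included

phase1 = closure(["Login", "Home", "Pay bill", "View data"], needs)
print(sorted(phase1))
# -> ['Change password', 'Change username', 'Home', 'Login',
#     'Manage payment methods', 'Pay bill', 'View data']
```

The four essential flows drag three supporting flows along with them, which is exactly why the Phase I scope grew beyond the obvious screens.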

Then I added up all the costs associated with the full flows and found I was over the 20-day limit. Below is an image of my math, showing the cost of each feature in the flow and how the total exceeded my 20-day limit.


Phase I Costs

In order to cut the costs down further, I made the decision to remove the no-password option as a feature. My three password options were originally the most expensive part of my application, so I knew from the beginning that, since security wasn’t part of my value promise, I would need to remove a password-option feature or two from the Phase I release. After I removed the flow for no password and the login options for no password, I had reached the goal of about 19 days for development.
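The budgeting step here is simple arithmetic: sum the per-feature estimates, compare against the limit, and trim the least essential feature until the total fits. A sketch, with made-up day costs standing in for the real estimates from my developer:

```python
# Hypothetical feature costs in developer-days (the real numbers came
# from the developer estimates, not from this sketch).
features = {
    "Login": 3, "Home": 2, "Pay bill": 5, "View data": 4,
    "Manage payment methods": 3, "Change password": 2,
    "No-password option": 4,  # cut first: security wasn't part of the value promise
}
LIMIT = 20  # working days available for the Phase I release

total = sum(features.values())
print(total)  # 23: over the 20-day limit

# Remove the least essential feature and re-check the budget.
features.pop("No-password option")
total = sum(features.values())
print(total)  # 19: under the 20-day limit
```

With these illustrative numbers the trim lands at 19 days, mirroring how cutting the no-password flow brought my real estimate under budget.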

From there I began the whole process over again with the remaining features. The challenge with these features and flows was the question of what would deliver the most value in the next phase. Since there was no time constraint, I had to weigh their delivered value as the only constraint.

The foundation of my application is structured around delivering a small number of actions done well, and on giving the user control. The Phase II focus was therefore structured around data and user control. The activities I had designed to be accessed from data were “Change Data” and “Send Notification.” By allowing customers to change their data on their own, they again feel a newfound sense of control. By giving users the ability to see that their data is almost used up and send out a message, they can again feel more in control of their data and service. Each of these actions gives the user a better sense of control over their data.

For the final phase, there was again a reduction in constraints. I no longer needed to worry about the time constraint, and I didn’t need to worry about the value promise, since all my features collectively delivered on it. Rather, with this phase I focused on how I wanted the software developers to work on the features so that there would be a minimal amount of blocking between the two developers. This issue wasn’t ignored in the other phases; it just became a much larger issue with the final Phase III assets. If it came to it, I knew I would be able to remove a few of these remaining features.

After I had learned which features would be placed in which phase, I began to roadmap the features. In a broad sense, this means mapping out when each feature is created by the software developers. On a more granular level, I created an Excel sheet with each of the resources on the y-axis and the days of the week on the x-axis, and plotted where and when each feature would be created. It also contains direction on how many resources would be applied to each feature. For the first round, I drew these maps out next to each phase’s features and estimated times. Below is a picture of each phase’s sheet:




One of the largest constraints I struggled with while mapping out the features was making sure I wasn’t causing development blocks. If a feature was dependent on another feature, I had to organize the timeline so that the first feature was built before the other. After I had a paper version of the graph locked, I started to digitize them. In digitizing the timelines, I also added a short explanation for each feature being developed, as well as a color that coordinated with the section of the application. Below are links to two PDFs containing the roadmaps.

Roadmap- Phase I

Roadmap- Phase II&III
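The blocking constraint from the roadmapping step amounts to ordering features so that each one is scheduled only after everything it depends on. Python’s standard library can sketch this directly; the dependency map below is illustrative, not my actual feature graph:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical dependencies: each feature maps to the features it needs first.
deps = {
    "Home": {"Login"},
    "Pay bill": {"Home", "Manage payment methods"},
    "View data": {"Home"},
    "Manage payment methods": {"Login"},
    "Login": set(),
}

# static_order() yields features so that prerequisites always come first,
# which is exactly the "no development blocks" ordering for the timeline.
order = list(TopologicalSorter(deps).static_order())
print(order)
assert order.index("Login") < order.index("Home") < order.index("Pay bill")
```

Any schedule that respects this ordering keeps one developer from waiting on a feature the other hasn’t built yet.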
As we continue working on the My AT&T redesign, our next steps include writing out the HTML code for a screen of the application so it actually works, and creating a document for another individual that explains what the new application looks like, why it’s an improvement over the last version, and how to build it out.

My AT&T App Redesign: Features, Capabilities, Controls, and Development Meetings

In the last blog post about the My AT&T Redesign, I had completed three usability tests, evaluated the results, and begun to integrate those findings into my application. A link to the post is here. Since then, the project has turned to a more concrete phase, where cost and time have begun to play a role. Within the past two weeks, I’ve redesigned my application (both the actions within it as well as its aesthetics), identified the features, controls, and capabilities within it, and met with a software developer, Chap, to get an estimate of the time it will take to develop. Within this article is an in-depth look at each step to explain what these mean.

After I presented my findings from user testing, I received feedback that my application lacked a “realness” to it. So I asked Jon to meet with me and discuss my app at length. Between my presentation and meeting with Jon, I also designed three examples of different application styles for my home screens and asked my mentor to give me feedback on them. Below is a PDF of the three styles.



My mentor replied with resources for inspiration, a fresh perspective on the three styles, and a nod to the third version as her preference. After reading her feedback, I decided to use the third option as the style for the next iteration of my application.

While I was still working on creating more visually pleasing screens, I met with Jon. He suggested I focus on two specific details of the application:

  1. Making the application look more like an iOS application: pulling toolkits and existing resources to create the screens, rather than making my own visuals and buttons in Illustrator.
  2. Concentrating on what is happening within the app: questioning why I’m including certain flows and actions rather than just accepting “how things have always been done.”


An example of how the application lacked a sense of realness is the slider bar I created for the Plan flow. I had created my own slider bar instead of pulling one from an iOS toolkit. Below is an image of my old slider bar, and the newer iOS-approved one I’ve placed in my application.




Jon explained that by creating my own icons and buttons I’d made things harder for a software developer, because standard iOS elements have already been coded; developers can simply reuse them when building those parts of a screen. By creating my own, I’d be forcing the developer to write brand-new code, which adds unnecessary cost. If there had been an element that genuinely needed to be created from scratch, I could have asked a developer to build it specifically, but I felt the elements from the iOS toolkit covered all my needs within the application.

An example of Jon’s second point was the password flow. Previously I had required the user to type their new password twice, and I had imposed rules on what the password had to contain. Jon reminded me to be more conscious of what I’m demanding of users. In the next iteration of the application, I stripped away nearly all the requirements for changing a password and kept only what I thought was necessary for a user to feel their password had been changed.

After meeting with Jon, I reviewed my printouts and went through each screen, marking what I wanted to keep, remove, or add, and what needed to be on each screen. This evaluation forced me to review what I was asking of users in each flow and whether it earned its place. Below is an image of what this process looked like.




After reviewing every element of the contents, I began to tackle the organization of the application, moving flows between the four sections of Billing, Usage, Devices, and Account. Once I had a plan that contained all the flows I wanted, and where I wanted them, I was able to start building screens.

In a week, I built every screen for each flow with proper iOS elements, and the screens were more consciously crafted. For iOS styling, I downloaded a toolkit to pull elements from and found resources online that explained the spacing requirements. I referenced the annotated printouts when deciding which elements needed to stay on each screen. I was also able to go back to my inspirations from Q2 and pull in some ideas from that time too.

After creating the new app, I needed an estimate of the time it would take a developer to build it. I had learned in class that when developing an application, the real cost comes from the features and the overall time it takes to build the application’s various elements. Before meeting with my developer, I needed to identify these elements and explain their purpose within the application.

I took all the flows and numbered each screen. Then, on each screen, I named the features and controls. A feature is something that adds value to the user’s experience; for example, the pie chart showing an account’s data usage. The counterbalance to features are the controls, which are elements needed for the structure of the application, such as the accordion style of information storage used throughout the app. Once I had named and identified every feature and control, I printed them out and set up a meeting with the developer, Chap.

During our meeting, Chap and I discussed each flow individually, identifying the features and controls in the context of that flow. As we went, we flagged potential error states that needed to be addressed, places of overlap between screens, and some of the more difficult elements of my app, along with the solutions he saw as possible. As I identified the various elements, Chap documented each screen name and element with a point value representing the cost of the feature; one point equaled one day of development. Chap then added 30% padding so he had room if he needed additional time, so each estimated day became 1.3 days. He then divided the total estimated time between two developers, since we assumed the development would be executed by a two-person team. At the end of our meeting, Chap shared the Excel document with me so I could reference each cost and element. In total, my application would take 41.17 days to create with two developers working 8 hours a day. Here is a link with the file: Application Estimates
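Chap’s arithmetic is simple enough to sketch. The features and point values below are made-up placeholders for illustration, not the real figures from our estimate:

```python
# Sketch of the estimation method: one point = one developer-day.
# The features and point values here are hypothetical placeholders.
feature_points = {
    "Data usage pie chart": 4.0,
    "Touch ID password option": 5.0,
    "Accordion control": 1.0,
}

raw_days = sum(feature_points.values())   # 10.0 developer-days
padded_days = raw_days * 1.3              # add the 30% padding -> 13.0
team_days = padded_days / 2               # split across two developers -> 6.5

print(f"Estimate: {team_days:.2f} days for a two-developer team")
```

Working backwards from our real 41.17-day total, this implies roughly 63 raw points of work before the padding and the two-developer split.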

The two most expensive features in the application were the password options: first, the option to replace your password with the iPhone’s Touch ID, and second, the option to opt out of having a password at all. Each of these was estimated to take around three and a half days for two developers; for perspective, my other features were estimated to take, on average, only about 87% of a single two-developer day. The high estimate came from two pieces of work: first, Chap explained that he’d need to research whether these options were even possible within the application, and second, there was the time needed to actually build them out. His explanation of these two steps helped me understand where the costs were coming from and what that time would be spent doing.

Another interesting point Chap highlighted during our meeting was that when coding these pieces, he would need to work with various teams within AT&T to write the proper code to retrieve the information associated with each account. One element of my application that requires direct communication with AT&T is the data usage pie chart: since the information displayed is specific to the account, Chap would need to code the right request to display the various percentages of usage, and that information would have to come directly from AT&T. This also means there is a general cost, on any app, of the developer’s time spent familiarizing themselves with the other resources they would potentially be working with.

Since my meeting with Chap, I’ve updated the screens to reflect the changes we discussed, and I updated my flows so that they contain both the screen title, and the cost associated with their buildout. Below are links to each sections’ flows with costs associated with them.

Home & Billing Flows

Devices Flow

Plan Flows

Account Flows

Since this project is under a time constraint of four weeks, I now need to reduce my development time. This means going through my application and deciding which features and controls will be kept and which will be eliminated for the first launch, to be added back in later. I’m looking forward to working out a way to deliver a valuable application to the client while still maintaining our timeline of four weeks of development work.

AT&T App User Testing

Previously, this AT&T app redesign had been developed over a period of four weeks, with a few rounds of class feedback. The next step in the design process is user testing: in broad terms, reviewing and testing the app with the intentions of a user in mind. This allows a designer to gain more empathy for users as they design the app. Over the past two weeks, three user tests have been conducted and reviewed, and their solutions integrated back into the app. Below are the methodologies used, the breakdowns they revealed, and potential solutions for those breakdowns.

The three tests that I conducted were:

  • Cognitive Walkthrough
  • Heuristic Evaluation
  • Think-Aloud Protocol

Each of these tests highlights a different aspect of usability, allowing for the results to cover a whole array of usability issues.

The first of these tests was the Cognitive Walkthrough. This type of user test evaluates the prototype’s ease of learning; more plainly stated, it evaluates where the potential breakdowns would be for a first-time user performing a standard task. This type of usability test is integral to establishing and understanding a theory of how a first-time user’s initial interaction with the app will go.

To execute this usability test, I printed out each screen and established my six tasks: set up Autopay, make a payment, change my plan, suspend a device, change my password, and upgrade a device. Then I established who my potential first-time user was: any individual who has the AT&T app and uses it for its various features. After this, I set up scenarios for myself to help empathise with the user, then ran through each flow asking myself a series of questions:

  • Will the user try to achieve the right effect?
  • Will the user notice that the correct action is available?
  • Will the user associate the correct action with the right effect they are trying to achieve?
  • If the correct action is performed, will the user see that progress is being made towards a solution to their task?

These questions help evaluate each step necessary to perform a task, and whether a first-time user would be able to connect each of those steps to their overarching task. As I reviewed each screen, whenever there was an issue between the questions and the screen, I logged it in an Excel sheet. The sheet included several descriptive fields so that, when reviewing later, I could easily remember what needed to be changed: Screen Identity, Problem Description, Severity, Frequency, and Proposed Solution. Below is an image of the workspace I used for these reviews.




From this review I found a number of issues in the learnability of the prototype, and I’ve come up with potential solutions for them. The three main issues I’d like to focus on are as follows:


Problem: My “Devices” screen only included a list of devices, and in reviewing how this would help with a user’s overall task, there was a disconnect: a user opening the “Devices” page would not see any actionable next steps.


Solution: To combat this disconnect, I added a few choice actions that can be started from the “Devices” page. This lets the user connect their task with the screen they are currently viewing.



Problem: Within my “Change Plan” task, a customer won’t immediately understand that they must tap the “Lock Plan” button to finalize their changes to the plan.


Solution: To manage customer expectations, I added a brief line of directions explaining that the customer needed to tap “Lock Plan” once they were satisfied with their changes.



Problem: Finally, placing the Autopay signup as a subsection of “Bill” seemed likely to clash with users’ pre-established notions of how apps are organized.


Solution: To mitigate that potential breakdown, I added the option to set up Autopay under the “Profiles” screen.



This was an excellent test for more fully understanding how a first-time user would review the actions available to them, and for analyzing whether those steps would truly help them complete a task. To access my full feedback, here is the link: Cognitive Walkthrough - Complete Feedback


The second type of usability testing performed was the Heuristic Evaluation. This tested the app against a list of 10 established usability principles: known aspects of well-designed products that facilitate seamless, smooth interactions between user and system. Below are the 10 principles, each with a short explanation:

  1. Visibility of system status- There needs to be some kind of communication during wait-times so the user understands the system is still working even though no significant visual aspects have changed.
  2. Match between system and the real world- By aligning a product with some aspects of the real world, then the system can become far more familiar to the user upon the first experience.
  3. User control and freedom- This allows a user to be able to make the changes they want to their system, but also allows them the freedom to undo these changes if needed.
  4. Consistency and standards- Following industry standards to establish familiarity of the system to the user will improve the user’s overall experience.
  5. Error prevention- In order to prevent the user from making an accidental change to their account that could cause potential damage, additional messaging needs to be incorporated into the system.
  6. Recognition rather than recall- It’s encouraged to lessen the weight of memory on the user, instead have the instructions or images carry that weight as the user moves through a system.
  7. Flexibility and efficiency of use- Shortcuts are encouraged within systems; they let users who are more familiar with the system move faster than novices can.
  8. Aesthetic and minimalist design- Keeping a screen less cluttered is generally the preferred style.
  9. Help users recognize, diagnose and recover from errors- If issues do arise in the system, there need to be error messages that help the user pinpoint the problem and resolve it.
  10. Help and documentation- If users do need to reach out for help with the system, there needs to be an easy way for them to access help and documentation.


Similar to the Cognitive Walkthrough, I executed this test by reviewing each screen against each of the 10 heuristics. Problems were again logged in an Excel sheet, this time with an additional field, “Evidence: Heuristic Violated,” so that I could easily recognize which heuristic was being missed.

The Heuristic Evaluation can be done by multiple evaluators; in this case, several other students also performed heuristic evaluations on my screens. The benefit of having multiple evaluators is a higher likelihood of identifying the majority of missed or violated heuristics within a prototype. This does have diminishing returns, though: studies have shown that after around eight evaluators, few new usability issues are surfaced.
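Those diminishing returns are often modeled with the Nielsen-Landauer formula, where the proportion of problems found by n evaluators is 1 - (1 - L)^n and L is the average chance a single evaluator catches a given problem (reported at roughly 31%). A quick sketch:

```python
def proportion_found(n: int, single_rate: float = 0.31) -> float:
    """Nielsen-Landauer estimate of the share of usability problems
    found by n independent evaluators."""
    return 1 - (1 - single_rate) ** n

for n in (1, 3, 5, 8):
    print(f"{n} evaluators: ~{proportion_found(n):.0%} of problems found")
```

At five evaluators the model already predicts roughly 84% of problems found, which is why an eighth or ninth evaluator surfaces so little that is new.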

The top three issues I found in my prototype are as follows:

Problem: Within the flow of the Upgrade Phone task, I didn’t have a screen that allowed a customer to review the full order before finalizing it. This violated two heuristic principles more than any others: Consistency and standards, and Recognition rather than recall.


Solution: To resolve this, I went back into the prototype and added this screen, which includes all points needed to review an order.


Problem: There were multiple issues around the word choices I had used. For example, I was using language like “Passcode” instead of “Password,” “Update” instead of “Upgrade,” “Payment Methods” instead of “Payment,” and “Billing Information” instead of “Account Info.”

Solution: I reviewed how each of these words was being used and what I wanted the user to expect from these titles. Then I looked up the best-practice words for these types of information and implemented those instead.



Problem: The final issue was already mentioned: the “Lock Plan” button. It was again confusing for evaluators, and violated the second principle, matching the system with the real world.

Solution: Again as a resolution, I altered the screen to include instructions.


This test ultimately forced me to review each screen and critically analyse why different pieces were there and whether they needed to stay. Now the screens don’t contain any extraneous information or inconsistencies.

The heuristic evaluation was a time-consuming task, but after the initial pass it became easier. As more of these details are addressed, the screens and flows become more usable. The consolidated, completed feedback is located at the link: Heuristic Evaluation- Collection


The last and final user test I executed was the Think Aloud Protocol. This test reviews how real users react to the app; it is meant to identify where users feel a loss of comprehension, or lose their understanding of what’s happening within the app. The key difference from the other two tests is that this one puts the app in front of real users and asks them to speak out loud about what they are doing as they do it. Studies show that when a person verbalizes their stream of consciousness, they do not disrupt their own flow of comprehension. So as designers, when an individual talks through an interaction with an app, we can identify where any gaps in comprehension arise, what their reactions are, and which practices we are doing well. It’s an extremely informative tool, and it requires very little in the way of cost or materials.

To perform this test, I gathered my printouts of the app and a volunteer to complete the six established tasks. I reassured them that they could quit at any time and that this was a test of the app’s design, not a reflection of them. I explained the tasks, then watched them work through my prototype. Below are the two biggest issues that came up in testing.



Problem: The first was the placement of Autopay. Each of my Think Aloud volunteers had trouble finding Autopay within the app. One individual searched for nearly five minutes for this button, narrating his thought process as he went: “I imagine it’s under Profile, so click on Profile…now I’m thinking I don’t know, not Plan, not Bill, maybe ‘Billing info’?…I’d expect it’d be under Settings, which seems like Profile, which it’s not there.” It was the perfect opportunity for me to understand where he thought this piece of the app should live.


Solution: To combat this, as I stated previously, I moved Autopay to the Profile pages and kept its flow there.

Problem: Secondly, individuals had trouble understanding that they needed to lock their plan after making changes to it. One individual said, “Wait, am I done? Do I need to do anything else? What is ‘Lock Plan’?” Again, this helped me understand where their comprehension of my app was breaking down.

Solution: Again the solution I’ve implemented is to add a short set of directions at the top.


This was the final user test I performed, so afterwards I began consolidating the feedback and working out where the lack or loss of comprehension arose in my tests. This is a process, and I know the next iteration is just around the corner.


After performing each of these tests, I’ve learned how to incorporate more of these usability principles into my designs, and how to more fully imagine myself as a first-time user. Both of these new perspectives will help me integrate usability into my designs earlier in the process. I’ve also become more comfortable administering Think Aloud Protocol tests, which I’m sure will only improve with more practice.
Currently, I’m still working on integrating ALL the results back into the rest of my screens, but I feel confident that my app is far better off with just these main changes than it was before the testing was done. Below is a link to a presentation that runs parallel to this document.


Usability Testing

Picking up where we left off: AT&T App Update

When we last left off on this project, we had finished working our divergent-thinking ideas into the AT&T app and writing out six short hero flows of our screens. Since then we’ve ramped up quite a bit: we’ve moved from sketching to digital files to user testing and iterations.

To further explain: once we finished making short hero flows of the screens, we took to sketching out the full flows with full screens. We had printouts of an iPhone outline, and within each outline we illustrated all of the pieces needed to create the various screens. At this point I was working out how to integrate all the current app features with my own divergent-thinking ideas. To do this, some ideas were cut, such as the concept of paying for the service at the beginning of the month rather than at the end. I also cut some of the features the app currently has, such as the search feature, which seemed to have been put in the app perfunctorily. I continued to combine my flows and ideas with the actions needed in the app, continually presenting to my fellow classmates to gain their feedback and integrating that too. Eventually, once I felt confident about my screens and flows, I began to transition them into Illustrator. Below is an image of one of my sketched-out screens with feedback.

Digitizing I

I started out by adding the iPhone image that we sketched on into the Illustrator file as a background. (This turned out to be a terrible idea, since the sizing was completely off.) I then built out the different buttons, features, and flows as I had sketched them. My goal was to transition as much as I could into digital form.

After the first pass at digitizing the screens, I brought them to class for feedback. Greg politely but directly tore them apart (not literally). His feedback boiled down to my needing to integrate more of the app’s actual functions into my redesign: I had been cutting too much from the actual app, and I wasn’t including any screens aside from the hero flow, which was a mistake.

So I went back to my files and integrated more pieces from the app. I added more overlap between different pages, so instead of keeping “Plan” in a silo, I started building more crossover between parts. I also found a file from Connor filled with great Illustrator icons specifically for iPhones, which elevated the overall feel of my screens and made them more polished. I also removed the iPhone image background, since it was really only restricting how I created pieces. Finally, I started adding colors as accents and to create contrast.

The second round of feedback on my screens was similar: I needed to add more icons, more color, and more standardization to my buttons. I went back into my digitized screens and again attempted to add more detail. To align more fully with the actual app, I reviewed our old screenshots, pulling out where there was overlap between different pieces of the app and any additional features. This helped make sure my app was a redesign and not just the hero flows.

For the third round of review, we presented our files to the class. I printed each of my screens out, two per page, and ran through each flow. By the end of the critique, I needed to standardize my buttons in size, shape, and color; create emphasized and deemphasized button styles; and standardize my typefaces and font sizes. Finally, I needed to lighten my secondary color so it wasn’t as noticeable, while still creating contrast between the navigation bar and the main activity space on the screen. I was also given the tip to change my data bar to a slider bar, which turned out to be an excellent recommendation when I tested it.

After receiving the full class’s feedback, I went back and asked Connor for his additional feedback so I could integrate that into my screens, too. Since Connor works in visual design, I knew he could provide me with some 101 guidelines that would increase the fidelity of my screens. He told me the official iPhone screen size I should have been working in, as well as some simple tips on standardization. He also reassured me that my screens communicate what they need to, and that the aesthetic part of wireframing will come with practice.

I took Connor’s and the class’s feedback and got to work resizing and formatting my screens. I reworked every screen in every flow, creating more hierarchy, more standardization, and correct sizing. I was far more satisfied with this version than with any previous version of the screens. Below is an image illustrating the iterations from each round.


Throughout this whole time, Greg was directing us to begin user testing to confirm our screens were working properly.

Once I had the finalized version of the screens, I began user testing. I used the Think Aloud Protocol, a method of “evaluating the usability of work by encouraging a user to think out loud as they use your product or service.” During the user tests, I found I had missed two confirmation screens and needed to add more overlap between the different parts of the app. Overall, the testing was very helpful in figuring out what was missing from my app.

Below are the final updated versions of my screens, specifically for the flows.

Change Plan_v4_fullFlows-01


Make A Payment_r4_fullflows-01


Service Design: Team British Tofu & Farmhouse Delivery

For the past eight weeks, Team British Tofu has been diligently working alongside Farmhouse Delivery to more fully understand their service and how it could be improved.

Two key issues we found were that:

1) Farmhouse Delivery does not communicate to the customer what quality “real food” looks like — customers expect perfect, waxy produce that they see at the grocery store, and when produce looks less than perfect, they are disappointed.

2) Customers find it difficult to cook all their produce. Customers feel guilty about the resulting wasted food and money.

From these two breakdowns, we prototyped an “ugly produce” card, which would inform clients why their “real” produce might look a little different than what they normally see in the grocery store; and a cooking poster, which would help customers cook with ease, and help eliminate food waste.

We picked these two for the following reasons:

  • Limited impact/work needed by Farmhouse Delivery to integrate
  • Illustrator and paper were accessible tools and materials to create these ideas
  • It would have a high impact on the customer

The “Ugly Produce” Card: Prototype & Results

For the card, we reached out to Farmhouse and asked if there were any issues with the produce that week. The CEO quickly replied, saying the broccoli was purple, which was caused by cold damage. She said that she was concerned that customers would not understand that the quality was just as high as green broccoli, it simply looked different.

With this, we designed the copy for the card, striving to create something informative that still pushed the idea that “this food is okay to eat!” We had carefully crafted this copy, but when we brought it to the client to ask whether the language was okay to use, they completely changed the feel of the card. Unfortunately, we had to change our copy to accommodate the client and the time constraints we were working under. This experience of pushing back with a client was interesting, but it requires a skill we still need to refine.

Once printed, we placed the card in 22 bins and called customers to see if these cards made an impact. While some got value out of the card, the majority of those we contacted didn’t even see it. One customer said “I didn’t get purple broccoli, so I guess it doesn’t matter?” 

As we reflected on the feedback, we realized we had made some assumptions. For example, we assumed, since Farmhouse already regularly placed recipe cards in the bins, that a paper card would be the best information delivery mechanism. However, during our customer calls, we realized that customers actually have lives outside veggie delivery. Go figure.

Taking into account that customers still need education around their produce, we recommend that Farmhouse deliver this information via text. A quick photo and message will prepare customers for what to expect, and the text will broaden customers’ understanding of fragility of local farming.

Broccoli Text

The Cooking Poster: Prototype & Results

We worked with Farmhouse to develop simple cooking instructions to help customers cook any veggie in their bin. While there are many methods, we narrowed down the cooking options to saute and roast.

We also provided spice recommendations to help customers make their veggies even tastier.

We placed these posters in 22 bins and called customers the next day to gather feedback. Many of the customers said they “already knew this information.” One also remarked that the poster was too large; she was remodeling her kitchen and just didn’t have space for it.

While a cooking aid may not have been the most valuable, we also heard from multiple customers that they kept receiving produce they didn’t want and didn’t receive the veggies they really wanted. From this feedback, we realized that, more so than underdeveloped cooking skills, a lack of personalization was the true reason why customers did not use their full bushel.

To enable customization, we suggest that Farmhouse Delivery allow customers to indicate unlimited produce “super likes” and “dislikes.” As of now, customers can only choose three dislikes, and have no way of communicating what they really want.

Farmhouse Website Dislikes Farmhouse Website SUPER LIKES

The Customer Feedback Card

Lastly, through these most recent conversations with customers, we realized that there is no regular customer feedback mechanism. We suggest that Farmhouse include “customer feedback” cards in every bushel. These cards would allow customers to easily change their order size or frequency, and also communicate any issues or desires.

Change Bushel Card

Next Steps

We will be meeting with the CEO to discuss our findings and present our suggestions for the company’s future.

To see our full findings, prototypes, and process, please see our Design Brief here.

Finding the Gap

For the past week I’ve been running circles in my head about the Coordinated Assessment (CA), wondering whether it’s benefiting individuals experiencing homelessness or perpetuating people’s homelessness longer than necessary. The Coordinated Assessment is a 50-question test that evaluates the vulnerability of an individual experiencing homelessness while living on the street; their “likelihood of dying.” The idea is that with this standardized assessment, the right resources can be effectively and efficiently directed to the right individuals. The test presents itself as serving the homeless population the same way an emergency room serves its patients: an emergency room prioritizes who to treat as they come in the door, with the most vulnerable at the top of the list and those with less severe issues deprioritized. The CA does the same thing; it prioritizes those who are most likely to die on the streets, and deprioritizes the others who could survive on the street a little while longer.

The Coordinated Assessment also claims to save the government money by housing the “most costly individuals to the state.” An individual who is costly to the state accrues the most costs in two budgets: emergency room visits and incarceration. An individual may visit an emergency room because of accidents or general sickness, and since they cannot pay, these costs fall to the state. An individual is also likely to be taken to jail while experiencing homelessness, for any number of reasons. A housed individual has fewer emergency room visits and is less likely to become incarcerated, so by reducing the number of individuals experiencing homelessness, the state sees a reduction in both budgets. This cost efficiency was part of the City of Austin’s motivation to implement the CA in the first place.

When we spoke with actual individuals experiencing homelessness, we found the CA is doing what it was designed to do: it saves the state money, and it houses those most likely to die on the street. One of the individuals we spoke with took the CA and scored high enough to be placed in a program offered by ARCH called “Rapid Rehousing,” which is designed to take those most likely to die on the street and put them in whatever housing facility the caseworker can find, as soon as possible. He was housed within a few weeks of taking the assessment. In his case, the CA got him in contact with the people he needed to receive his housing. For individuals like him, the CA is working.

We also spoke with a woman who is staying at the Salvation Army with her child. She’s been there for 60 days, and she’s only allowed to stay for 90. As her time there comes to an end, she’s been furiously thinking about what she will do next. This woman has taken the CA, but she did not score high enough to qualify for any program that would put her into immediate housing. Based on the CA’s judgment, she should be able to survive out on the street a little while longer. She’s healthy, young, and working, with no drug issues, mental health issues, or criminal record; to the CA, all of these factors support the claim that she can self-resolve. In reality, she feels she cannot self-resolve and that she doesn’t have any other options after she is let go from the Salvation Army.

I keep comparing the two stories: the woman who is facing the loss of so much in her life because she’s too healthy/clean/high-functioning to qualify for rapid rehousing, and the man who almost haphazardly received housing after being convinced to take the CA. How can there be a system in place that gets both individuals housed? You can’t dismiss either individual’s need to be housed, yet through a series of questions the CA prioritizes one need over the other.

I came to something of a conclusion: there is a gap of individuals who aren’t being served by the CA. The individuals who are too high-functioning fall within this gap. ARCH and the CA are meant to serve those who most ‘need’ housing based on who will die on the streets, but there are others who aren’t seen as ‘needing’ housing because they’re not in a high enough state of vulnerability. These are the individuals who aren’t being served by the current system. I believe Garrett and I can design for these individuals.

Divergent Thinking & Storyboarding

Previously in our Rapid Ideation class we worked on creating a concept map for the AT&T mobile app. This illustrates how the app functions and how the various pieces of the app are connected. As I went through each screen to map these sections and their connections, I noticed a lot of overlap. From any screen you could practically do all the same things. There was no distinction in the actions available when going to the “Plan” page versus the “Usage” page. This thought stuck with me as we continued to develop our flows.

After we created our concept map, we narrowed our scope of work based on user research. Instead of covering the full app’s abilities, we focused on the top six activities our user research highlighted:

  • Viewing the Plan/usage
  • Changing the Plan
  • Suspending service
  • Paying a Bill
  • Setting up Autopay
  • Changing the password

After establishing these activities, we practiced divergent thinking, applying it to the AT&T app to create a new experience. I used the “random word” method as a jumping-off point. I developed a list of around 50 words, then started pulling in various ones to begin generating ideas. I also started imagining how I wanted the app to feel, to picture the experience I wanted to design. From this point I knew I wanted fewer pieces on the screen to declutter it, and I knew I wanted to create something with a specific direction for each page. With this filter, I began to envision a whole new experience for phone service providers. What if a customer set their own plan amounts, rather than selecting from preconceived amounts? What if the app created a recommended plan based on your usage, so the customer never has to go over? What if, when you cancel your service, you were allowed 24 hours of free service, in case you need your phone again? What would happen if you could just use your fingerprint instead of a passcode, or if you didn’t have to have a passcode at all? Each of these ideas was filtered through the notion of creating an app that would generate a more usable experience and lessen the burden of options. Some of these ideas were worked into my flows, but some were cut.

When practicing divergent thinking methods, I realized my brain, like most others, will rapidly disregard more outlandish and inconceivable ideas. It is more likely to perceive a divergent idea as impossible than possible. When practicing divergent thinking it was hard to get away from ‘normal’ modes of thinking, but it wasn’t impossible. In the future I’ll have to be more cognizant of when I prematurely disregard ideas.

After developing these ideas of how the app should work, we began storyboarding. The challenge here was to create stories that weren’t too complicated but still illustrated the screens and flow of the new app. The stories were fun and simple, and they made me focus on what I wanted on each screen. I had to go back to my original ideas and rework how and where I wanted to implement them in this new app. This was the first time the app-making felt solidified for me. Ultimately, when creating flows, the narrative took a backseat to illustrating my ideas of how the app should look and how a user would interact with it. Illustrating these ideas made my concepts more permanent and helped me easily identify holes.

I’ve already started applying the skills from this class in the Studio class, where Garrett and I are diving into Affordable Housing in Austin. We’ve created concept maps to better understand the systems and players of the space. It felt productive to draw out all the players and their connections to different aspects of the system. To finally see all the pieces and relationships I had been holding in my head was profound. It also greatly helps our audience understand the relationships and players of the space when we present our findings.

I’m excited to pull both divergent thinking and storyboarding into the Affordable Housing space too. I know divergent thinking will come in handy when ideating about possible solutions for Affordable Housing. The systems in place in the Affordable Housing sector are broken; divergent thinking will help bring in new life and hopefully solve a few of the issues. Storyboarding will also help me communicate better with the audience. Once we have an idea, if I illustrate it, then my audience can better understand it. I’m excited to bring these new skills into a space like Affordable Housing, to see what new systems or processes can be applied to it.