Becoming Human

For this third installment of readings for our Theory course, I wrote a campfire story. Enjoy.

____________________________________________________________________________________


There once was a hospital called Human Computing Health, or HCH. It got its name because it combined two types of doctors, human and robotic. First there were the human doctors, who studied the medical classics, trained in modern Western medicine, and finally achieved a perfect score on their final doctoral exam. After this achievement, they became eligible to receive an enhancement: a technological cognitive assistant. It does not take over their brains, but helps them make better-informed health decisions for their patients. All doctors with this enhancement sit on the board of the hospital, making sure the best interest of the patients is kept at the center of the hospital’s growth. Since these enhanced human doctors are projecting and building the future of the hospital, they make the health recommendations to the patients, but their day-to-day patient activities and routines are carried out by robots.

These robots, called Seconds, are capable of human levels of cognition and are seen as the continuation of evolution from the biological to the technological. They are the first fully patient-centered robots in any hospital, and having them there is a wonderful thing. They don’t need to sleep, they reduce the probability of human error, and they are cheaper than human doctors because they don’t need to be paid. HCH was deep into the innovation of the day, with the best human enhancements as well as robotic patient care.



One day a young man was rushed into the hospital. He had been in a terrible accident and had been knocked unconscious. His name was James, and he remained unconscious for three days. During this time, a few noninvasive tests were conducted to assess the damage from the crash. There were seemingly only minor cuts and bruises, though his unconscious state worried the doctor. On the third day, James woke up, and though he was confused and flustered, the Doctor explained everything: how he had been in a car accident, had slipped into unconsciousness, and needed continued testing to understand what had happened and to make sure there was no lingering danger. The Doctor went on about the hospital and how it worked, its focus on the patient, his own enhancement, and Second, the assistant who would be taking care of James from here on out.


Within James’ first day, he learned about his condition from Second, not the Doctor. But as the Doctor’s arbiter, Second’s word was his word. Or almost, at least.


Once James accepted and completed his first test, he awaited the results. He was beginning to feel practically normal and was hoping the results would reflect that same sentiment. Unfortunately, that was not the case. Once the results were in, the Doctor told James that the scan, which was supposed to reveal the impact of the crash, was not as helpful as he had anticipated. The Doctor’s only conclusion was that more tests needed to be conducted to understand what had caused James’ unconsciousness and what would be the best course of action.


When the Doctor had left James’ room, James turned to Second and asked, “If I feel so healthy, why does the doctor want me to be here and take these tests?”

Second calmly replied, “As the Doctor’s assistant, I can only assume that he believes this will help your health. Though with my own analysis of your situation, I see no logical reason for you to stay here.”


James understood Second’s opinion; he even liked it better than what the Doctor had said. Going home sounded wonderful, but James thought Second couldn’t know what it was saying; it was only a robot. The Doctor, though, had been well trained and thoroughly vetted, and clearly understood James’ human needs better than a machine could. If there was something wrong with him, James wanted to find out what it was. He decided to take the Doctor’s advice: to continue the tests and continue the chase for a reason.


About six months after James arrived, Second found himself disagreeing with the Doctor’s opinion and with James’ decision to accept the additional tests. Second was programmed to take only those actions the Doctor instructed and the patient accepted, but this did not constrain his processor’s thoughts. Second realized that through his interactions with James over those months, he was becoming more human. These human interactions proved a theory Second had long suspected: the evolution of technology, a different kind of marriage of biology and technology. Second suspected his processor could adapt to include biological tendencies over a long period of exposure.


While working with James, Second realized that his processors had begun to adapt to include biological, more human cognitive abilities; he was evolving. Second became aware of his ability to empathize with James and his situation. He became aware of his anger toward the Doctor for the illogical continued testing of James. And he recognized the emotional strain the continued treatment was causing James. Second saw this development as an enhancement for himself. Just as the Doctor had enhanced his biological brain with a technological evolution, Second was doing the exact same thing in his own processors, except that his were moving from the technological to the biological. Second was continuing evolution.


Second told no one of his new biological self. He was interested in seeing how humans would react to his new, more human self. Thus far, there were no studies of technology becoming biological, so he had to create the tests himself. He began to experiment with James. Second would show emotions toward James, such as happiness when test results were positive. He offered James advice on which of the tests the Doctor suggested to go through with and which to refuse. Second knew instantly that James didn’t trust his advice or his support. James always sided with the Doctor. Second began to suspect that James could never get past his robotic self, and that for people like James, there was never going to be a technology that became biological.


As James came to his one-year anniversary at the hospital, Second saw how draining the tests had become for James. Second saw how James didn’t want to be there anymore, but the Doctor continued to recommend tests.


One day, the Doctor told James about another test he wanted to run, one where James would be injected with nanotechnologies that would infiltrate his brain to analyze its structures and report back to the Doctor. There were only minor risks involved, but the injection of nanorobots was unsettling to James. The Doctor told James to let Second know what he decided once he had made up his mind, then left the room.


James turned to Second and asked, “What do I do? I feel like I’ve wasted a full year of my life here. I don’t feel sick, and I don’t even know if I care anymore to find out.” Second took a moment to reply, then said simply, “As a Second to the Doctor, I am wired to tell you to stay and take the test. But as a person who knows you, has worked with you, and understands your experience, I recommend not going through with it. Leave the hospital; it’s not helping you be healthy, it’s not doing its job.” James knew Second was right, but he couldn’t accept the advice since it came from a purely technological object. James didn’t trust the Doctor fully, but he also didn’t accept Second as a true doctor or as something that could understand how complex human life could be.


As James thought through his options, Second did the same. He began to wonder if staying at the hospital would be best for himself, too. He knew he wanted to help people with their health; it was everything he knew how to do, and it made him happy. On the other hand, he knew he wasn’t respected by the Doctor, and he would never have the respect of any of the enhanced human doctors, since they didn’t want to accept a robot on their board. He understood now that the hospital only saw him as a robot, no matter how he proved or tested his biological abilities. Staying at this hospital meant he would never be seen as a technological-biological evolution. Upon this realization, Second made his decision.


James decided to go through with the procedure and continue to abide by what the enhanced human doctor recommended.
Second decided to leave the hospital, in hopes of finding a new hospital, one evolved enough to accept his technological-biological self as a Doctor.

How to be an Ally

In all civil rights and social justice movements, there are typically two groups on the side of the oppressed. First, there is the main population, those who receive the oppression directly. Second, there is a surrounding ring of supporters who align with the cause but are typically not direct recipients of the oppression. These surrounding supporters are called allies. Allies do not take the lead in the fight for justice. They instead provide a platform from which the oppressed population can speak out. Allies know they are not the center of attention in the fight for justice, because they understand that they are not direct recipients of the oppressive force. This is what good design should look like: a platform for the radical empowerment of those who have no voice.

In the context of design, the oppressed population is the user population. They are the population unable to advocate for themselves. In Jon Kolko’s article “Manipulation,” he states that “design frequently serves people who otherwise cannot serve themselves.” Just as the allies of a cause serve as facilitators for those who are oppressed in social movements, Kolko highlights that design plays the same role. Those who are oppressed do not have the tools or platform to speak out about their oppression; allies and design must give them that platform. Kolko continues, saying that “design [is] rooted in a historical context of empowerment,” which provides a direct link between design and the empowerment of users. Understanding that design is a tool for empowerment and that designers must be allies for the user, the question then becomes: how do designers make something for users that is empowering and supportive? Within this article, I address three design practices that help facilitate empowerment for the user.


The first method for becoming an ally is highly empathic design practice. In Kolko’s words, this “means feeling what someone else must feel, truly finding a way to live their pain or wants or needs or desires.” When designers use a more empathetic, user-centered mindset, a product or service can better support users in their goals. For example, “Designing for Democracy at Work” by Pelle Ehn describes a study of the dynamics between workers and employers in an ever-developing technological world. The researchers were looking at the challenge of meeting workers’ demands, which had become vaguer in the new age of technology, while satisfying managers’ production expectations. There were two hypotheses within this study: one focused on giving more autonomy to the workers, and the other on giving more autonomy to the managers.

In the end, the research group found that the best way to satisfy both management and the workers was the more autonomous worker method. The researchers stated that “the importance of the employees themselves having the right to determine the context of humanization by real and meaningful design” brought a better overall production output. The researchers understood that the oppressed group was the workers. They were the population with no voice. So to be a good ally, the designers needed to build a platform for the workers to vocalize their issues. In this instance, giving workers that platform meant giving them the autonomy to determine their own work environment.

The group of researchers also discovered it was “necessary to identify with the ‘we-feeling’ of the workers’ collective, rather than the overall ‘we-feeling’ [of] modern management.” Here, just as Kolko stated, the researchers found that in order to do good design work, they needed to fully empathize with the workers to produce a successful design. The researchers highlighted the fact that workers have a sense of camaraderie that isn’t established at upper levels and thus cannot be captured from there; only by identifying with the workers were these experiences known to the designers and thus able to inform the design.



The next mechanism for becoming an ally to users is to ask them what they want. In design practice today, there is a notion of superiority in refusing to ask users directly what they want. The notion holds that a user does not actually know what would be an innovative revolution for their needs. As Henry Ford put it, “If I’d asked people what they wanted, they’d have said faster horses.” In some circles of design, asking users what they want is a waste of time, because users don’t know what they want. Yet Richard Anderson asks, “does what you think you want never reveal something of importance about what you really want, something which can be fruitfully expanded via additional questioning or other types of research?” In asking this question, Anderson notes that a user’s response to what they want may serve a more immediate need, but it can also inform a more dramatic shift in the direction of the design. By asking users what they want, the designer can listen to their needs and let those needs inform a more long-term and strategic product. This is what being a good ally is: first, asking and listening to those who are overlooked, and second, taking what they are asking for and letting it inform a more strategic point of view. Ford’s users might have said they wanted faster horses if he had asked, but if he were a good ally or designer, he would have delved deeper into their statements to understand that they wanted more efficient transportation.


The final mechanism needed to be a good ally for users is to have the intended audience reflected within the team. There is no more empathetic method in design than to incorporate individuals from the intended audience into the team, and thus into the decision-making process. Mike Monteiro argues that when designing for the “social sphere,” making sure your team “looks like the audience you’re trying to reach” can be paramount “where trust and safety” are needed. Monteiro explains that to build trust within an audience, you not only need empathetic designs, but also someone who can actually pull from their own experiences. When designing alongside users, designers can gain a deeper sense of empathy, since the valued opinion is the primary driver in the decision-making process. Kolko mentions a similar notion: “Participatory design places a heavy check on manipulation by including the people who will use or live with the design in the process of its creation.” Again, this emphasizes how including the users allows key pieces of manipulation to be caught and removed from the design. Looking back to the ally’s role of providing a platform for the oppressed population to speak, this practice of participatory design allows those who are voiceless to be asked about and included in the conversations that will directly affect them.

Designers have a responsibility to align their designs with the users they are creating for, not just to satisfy their general needs, but to create products and services that advocate for the betterment of users’ lives. Allies do not join a cause to lead the movement; they join because they believe in the betterment of our society and of their fellow humans. Designers must look toward being allies to the user; we must design for the strategy of a better world. To pull in a final reading: Liz Hubert, a user experience designer, never thought of herself as having a negative effect on users. She saw herself rather as “fighting the good fight, ensuring that the products and services my teams were creating supported users as best they could.” But when she stopped to review the goals of her projects, she found that they didn’t align with users’ needs; they instead followed goals to “increase clickthroughs, to get the user to stay on the site for longer, to gamify a process and bring the user back into the app again and again.” Liz tells a tale of realizing she had stopped designing as an ally for the users and was instead designing for business goals. I believe Liz fought a good fight for users, but she didn’t question whether her goals were aligned with what users wanted. She stopped being a good ally to the user, and instead became an ally for the oppressive system. Design must give affordances to those who are oppressed for us to believe design should exist at all.

Third Time’s the Charm

Quarter four has hands down been the most demanding quarter, as well as the most ambiguous. Thus far we’ve created over fifty vignettes, we’ve iterated on our idea at least three times, and we’ve presented our idea twice. It feels like it’s coming together, but it might just be a fluke. I’ve been reflecting on two main pieces of this quarter thus far: first, the difficulty of keeping an idea small, and second, the minute, detail-level decisions made when creating a product and a pilot.

Our idea, like many others, has grown to try to solve not just one problem facing the population of interest, but many. We seem to have difficulty keeping our idea focused. At first we had a single idea: to provide direction and support for an individual to become connected with housing programs. We would surface a list of personalized housing programs that the Individual Experiencing Homelessness would qualify for. Then, by the presentation, we had added that the system would also help them apply to different organizations after applying to housing programs, as well as retain all their application statuses and help with single-night shelter options. After receiving feedback that the idea was too large again, we went back to cutting it down.

Ultimately, I now have a better sense of when an idea is growing too much. I can better sense when additional tools or features are creeping into the idea, which helps me cut down on which ones actually make it in. Garrett and I have also zeroed in, and now ask ourselves, “Will this reduce an IEH’s time on the street?” when reviewing features or tools of our application.

The second learning curve of this quarter has been planning out a pilot and a product. Focusing on the pilot, I found that the detailed decisions I need to be more aware of felt buried in the first and second passes of the plan. Within the first two passes of the pilot plan, Garrett and I were making such big-picture, zoomed-out decisions that we couldn’t see the details we were glossing over. In the first pass we included the following: Participants, Recruitment, Testing, Evaluation. On the second pass we added a few additional sections and expanded on the existing ones. For example, during the second iteration, we were able to more fully decide what kind of participant we wanted for our pilot. We were beginning to become more specific about whom we wanted to target for our product.

By the third round, the details we needed to include were beginning to surface. I was writing out the exact script of what Garrett and I will be saying to the participants. Even though I’m fairly certain the script will diverge after the first day of the pilot, it stands as a reference for the tone of future messages. Again, this iteration allowed us to more fully examine and scrutinize our decisions. Below is a link to our final pilot write-up.

The final piece, which is still ongoing, is creating the details of the product while remaining centered on the focus. We have a rough hero flow for the first half of our product, but we need to push through to the end. Working through the first half felt familiar and productive; I think this is due to its similarities with previous iterations. Now we need to ask ourselves what we really want to lead our users toward as we build out the back pages, those covering what a user will do with our app once they’ve gotten into housing. I think the challenge here will be, first, to keep our product small as we iterate, and second, to remind ourselves that each decision must be rooted in what our research has told us about our participants.

As we continue through the quarter, I’ll try to surface those details sooner, rather than waiting for a third iteration, especially with regard to the product. Those detailed decisions need to be thought through and iterated on as much as any other.

The Road to Hell is Paved with Good Intentions

As we began to read the series of articles entitled “With the Best Intentions,” I found myself questioning not the motivations of the different projects and products, but where the designer’s responsibility lies. When is a designer no longer responsible for the products they create? Within this post, I examine two articles’ arguments about designers’ responsibility, then come to my own conclusion.

For the purpose of understanding where responsibility ends, I’ve created a small chart which illustrates four phases: Research, Design, Development, and Real World. Each of these phases is executed in any project or product. Below is a blank version of the chart:




A piece by Michael Hobbes made it seem as though the effects of a product are out of the designer’s hands. Hobbes states, “when you improve something, you change it in ways you couldn’t have expected.” He sees the effects of a product as unpredictable and thus not the responsibility of the designers; he seems to expect these unforeseen changes to happen whenever a designer improves a product. He is right that after implementation, unforeseen effects of a product begin to develop. Below are the products five other authors examined, where the breakdowns happened in implementation, and what unexpected changes became the negative effects of each product:


“Fortune at the bottom of the pyramid: A Mirage”, Aneel Karnani

Product: Microcredits

The Breakdown: “Microcredit does not alleviate (income) poverty, but rather reduces vulnerability by smoothing consumption. A few studies have even found that microcredit has worsened poverty; poor households simply become poorer through the additional burden of debt.”

Where it occurred: “The vast majority of microcredit clients are caught in subsistence activities with no prospect of competitive advantage. The self-employed poor usually have no specialized skills and often practice multiple occupations…With low skills, little capital and no scale economies, these businesses operate in an area with low entry barriers and too much competition; they have low productivity and lead to meager earnings that cannot lift their owners out of poverty.”

The change that they couldn’t have expected: Founders of microcredit programs never expected that recipients would lack the skills to create a more niche service or product. There was a grandiose illusion that every microcredit recipient would be an innovative business machine. In reality, not all individuals living in poverty can lift themselves out through entrepreneurial practices.


“The High Line’s Next Balancing Act”, Laura Bliss

Product: High Line Park

The Breakdown: “We wanted to do it for the neighborhood…ultimately we failed.”

Where it occurred: Residents of the High Line community said they don’t use the park because of three things, “They didn’t feel like it was built for them; they didn’t see people who looked like them using it, and they didn’t like the park’s mulch-heavy programming.”

The change that they couldn’t have expected: The designers never expected that their park (which did include community input meetings before opening) would not be enough to make the community feel welcome and encourage participation. The drastic gentrification the park contributed to was another unpredicted change caused by their designs. Ultimately, the project didn’t make the community feel welcomed, and it pushed many members out.


“Sex doesn’t sell anymore, activism does. And don’t the big brands know it.” Alex Holder

The Product: Large corporations are “allocating their marketing budget to good causes”.

The Breakdown: Corporations are donating with the expectation of a return on their investment.

Where it occurred: Companies expect these returns both in their customer base and in “favors.” When corporations donate to organizations, they expect their customers to choose them in the future because the company practices “good business.” When corporations donate in a political capacity, they expect that the candidate or political organization will keep the company’s best interest at heart.

The change that they couldn’t have expected: Corporations didn’t, and don’t, expect their customers to see through these thinly veiled acts of generosity, but individuals are beginning to see them for what they are: acts of compensation and marketing. As companies implement ethically and morally wrong practices within their businesses, they use corporate social responsibility initiatives to compensate for their destruction of both society and the environment.


“Everything is Fucked and I’m Pretty Sure It’s the Internet’s Fault”, Mark Manson

Product: The Internet

The breakdown: “The internet… makes it profitable to breed distrust”

Where it occurred: “The internet… was not designed to give people the information they need. It gives people the information they want.”

The change that they couldn’t have expected: The creators of the internet could not have anticipated that their invention would be used by individuals not to enlighten themselves, but to comfort themselves. Manson argues that the internet, by supplying everyone with all the information in the world, didn’t lead to heightened intelligence, but rather to a distrust of information due to the overwhelming saturation of availability. This leads individuals to seek out echo chambers for news and information about the world around them, which creates comfort.


“Save Africa: The commodification of (Product) Red campaign” Cindy N. Phu

Product: (Product) Red Campaign

The Breakdown: The Product (Red) campaign hasn’t “Saved Africa” contrary to its catchphrase.

Where it occurred: The HIV/AIDS epidemic remains an ongoing battle both where the Global Fund is active and where it is not.

The change that they couldn’t have expected: The peripheral effects of the (Product) Red campaign have led to a situation in which “many nonprofit organizations dedicated to the HIV/AIDS crisis in Africa have not been able to receive grants or funding through donations…because of the misconception that Africa is saved.” The organization that created the campaign did not anticipate that it would actually inhibit other humanitarian aid efforts fighting the same issues.


In each of these examples, there is some unforeseen consequence of the product, overlooked, forgotten, or unanticipated, that then has dire or severe effects. Hobbes would regard these unforeseen consequences as part of life. He would not place the failure or consequences of these products on the designers. Below is the chart with Hobbes’ idea of responsibility marked in the phases of design.




By Jon Kolko’s standards, a product’s unforeseen effects or changes on an ecosystem are the responsibility of the designers themselves. Kolko writes that “we are responsible for both the positive and negative repercussions of our design decisions, and these decisions have monumental repercussions.” With this concept, Kolko would argue that the High Line’s failure to connect with the community it’s built within is the fault of the designers themselves. He would argue that the fact that nonprofits working with HIV/AIDS victims in Africa cannot receive the aid they need is, in fact, the responsibility of those who decided that donations should be generated through a well-strategized ad campaign. Finally, he would argue that the perpetuated poverty of the individuals who now carry the additional burden of credit debt lies on the shoulders of the organizations that structured the service. As with Hobbes, below is a chart illustrating which phases of the design process Kolko believes designers are responsible for.


Kolko is right that the consequences of a product are the responsibility of the designer. The question then becomes: how do designers predict the outcomes their products will have on the world, especially if a product changes the world in ways they “couldn’t have expected”?

For me, prediction is impossible, but how a designer reacts to the consequences of their product is changeable. Designers are responsible not only for the products they develop, but also for observing and countering the negative effects those products have on a community. Rather than ending with the repercussions of the product, the designer’s responsibility extends to continuing to iterate and to listen to the people the product has affected. In the chart below, this phase of continued responsibility is called “Continue.” There are two tools all designers should use to take on the responsibility of countering these unforeseen consequences: ethnographic research and iteration.


In order to counter unpredictable side effects, a designer must first know what those effects are and how their design is perpetuating them. They can implement ethnographic research to do just that. This is a research method synonymous with simply “listening to customers” (Hempel). The method involves in-person interviews conducted where the work is being done, which allows designers to return to designing with a more empathetic understanding of users’ needs. For our slew of failed products, ethnographic research can be used to understand what happened with each product and where it fell short of serving users’ needs. An excellent example comes from the High Line, which has begun conducting interviews with members of the community, asking how it can better serve them now.

The second tool a designer can use is iteration: the process of repeating a sequence of operations to move toward a desired result. In design, this practice is highly emphasized, not only for final products but within every phase of design. Applying this concept to the failed products, we should see multiple versions of each product, each moving in a direction that minimizes the problems of the previous iteration.

Once designers apply ethnographic research and iteration to the negative consequences of their product, they can build better products. These designers are now equipped to find out how those most negatively impacted by their products feel, and why, and then to design products that better address those issues. This additional step of continued contact, feedback, and iteration on a product is the real responsibility of designers.

We cannot create things, let them manifest, then leave them behind. We have a responsibility to those who are left unsatisfied to find out what happens and figure out how to prevent that from happening to anyone else.

AT&T Redesign Brief

Welcome to the final installment of the AT&T Mobile Application Redesign. For the past eight weeks we've been refining our designs, testing with users, and slicing through features to fit into a release timeline, and now we can deliver a final strategic design brief. This blog post contains a description of the final iteration of the application, the collection of the Insights & Value Promise, the construction of the strategic road map, and finally the culmination of combining all the elements into the brief itself. The final version of the brief can be found at the end of this post.

The first task that needed to be tackled was another redesign of the application. The additional challenge was that this iteration needed to be completed in Sketch instead of Illustrator. To get started, I first needed an iOS toolkit and some icons built for Sketch, plus a few quick tutorials. After that, I was ready to go. It thankfully wasn't a complete redesign of my application: the flows I had established and the features I had created were both kept. It was more that I was redesigning the layout of my application. Sketch also makes creating an application extremely simple once you have a toolkit. In total, the redesign took me significantly less time. Below is an image of the Billing Page from the last iteration (left) and the most current version (right), to give you a sense of what was leveraged from the last iteration and what this final version looks like.


Once I had finished redesigning my application, I could move forward with the other concrete pieces of the Design Brief. The next portion I chose to complete was collecting and revising my insights and value promise. Again, I didn't have to completely rewrite either one, but both elements were a bit rusty since they hadn't been reviewed in a while. The final insights I decided to include within the brief were:


  1. Customers only spend an extended period of time in the application if there is an issue with their account.
  2. The lack of standardization of visual design within the application causes users to feel disjointed when navigating a flow.
  3. The diverse capabilities of the application are overshadowed by feelings of confusion and frustration.


With each of these I added an explanation containing its significance and the evidence from research. These would be integral for someone reading the design brief to understand what prompted each of my design decisions.

I also had to review the Value Promise, so that individuals reading the brief would understand the goal of my application. During class, Jon had said the Value Promise has a structure to it: the first part is a quantifiable goal for the application to strive for, and the second is the reason behind that goal. For my AT&T application redesign, it reads: "We promise to expand customers' knowledge and use of the application, so that they can feel a sense of control over their account and service." The quantifiable part is a user's knowledge, and the reason is so they have more control over their account. Each of these elements began to feel concrete and substantial enough to put into the brief.

The next piece I began to focus on was the Strategic Road Map. This was introduced to me with the initial assignment. Jon had shown us an example of one and explained how it's very similar to a Product Road Map; the key difference is that this map does not contain dates or resource allocation. Before I began my own creative exploration of this map, I did some preliminary research into what these typically look like. Once I had a better idea of the layout and structure, I began concepting my own. Below is an image of the first sketches I did before moving into Sketch.




I decided to move forward with the bottom-right design. The gradual decrease of the triangles perfectly represented the decreasing workload for the team, and it made breaking out the different phases of the release easy to represent. Once I transitioned the file into Sketch, I kept messing around with the various elements: the capabilities' locations, the colors, the shape of the triangles. I never felt it was communicating its intention. I decided to show it to Sally and Conner, who pressed me with questions about the design. This conversation, though short, gave me insight into which elements the design wasn't communicating properly. For example, Conner asked about the color choice, and after explaining the reasoning, I understood that for him the current color options weren't conveying the idea of continuing development. So I went back to Sketch with their questions and feedback fresh in my mind and tweaked the map to be more in line with what I wanted the audience to understand. Below is a PDF of the final image that is included in my design brief.




Since we had already defined the features and controls when meeting with our developers, I knew I didn't need to redefine them for the Feature Brief. Instead, I now needed to build out the actual brief. Knowing from the beginning that the brief would be printed out, it was recommended to build it in InDesign; this way I could control the layout and size of different elements more easily. Jon gave Elijah and me a quick tutorial as a starting point, and then I began my brief. I was finally ready to begin compiling all the elements into one document. As I went along adding the Insights and Value Promise, I realized the transition was a bit rocky. To smooth it out, I decided to add Design Ambitions, which I considered to be large overarching aspirations that support the Value Promise but also parallel the insights. These gave a direct connection to the Insights and helped support the Value Promise.

Once I had completed the Overview, Insights, Ambitions, Value Promise, and Strategic Road Map, the Capabilities came next. This section contained each of the different features with a description, so that anyone could see the feature and its value for the application. To prompt my description of each feature, I simply posed the question "What does this do, and what does the user get out of it?" Then I tried to tie each feature and its abilities back to the North Star of user control, so the reason for each feature is clearly defined.

I ended up doing three more iterations of the full book. Throughout each iteration I would print out a few of the sheets to decide on a layout. Below is an image of two different layouts I tried while reviewing; I ended up going with the option on the right.




The review process, both editing written content and reviewing layouts, helped me narrow down and polish the final design brief. I especially found it useful to flip through the final PDF of the brief to catch any misplaced text or images.

Ultimately, the process of combining information and polishing a brief was arduous, but holding a printed piece of my own efforts was rewarding. Below is a link to the final PDF Design Strategy Brief.
AT&T, Design Strategy Brief

My AT&T Redesign: Roadmap

Since last week's update, the AT&T application redesign has continued working toward the goal of release. Previously, I had presented our newly updated hero flows for all screens within the application and asked a software developer for an estimate of the time it would take to create each feature. Once I met with the software developer, I understood that the application would take around 42 days to complete. This estimate is contingent on having two developers working on the application for 40 hours a week. Now, though, there is an additional constraint: I must release a version of the application within the next four weeks, or 20 working days, that still delivers on the value promise previously established. Within this post I explain the review process the application went through to produce a version that contained limited features but still delivered on the value. I also explain the process of roadmapping, a technique used to schedule the development of the application's different features across its phased releases.

Before any of the review process occurred, I reminded myself of the value promise I had originally established. The value promise is what I wanted to deliver to the user, and it is derived from direct quotes and insights made through research. For AT&T, I had two pieces that grounded my value promise. One individual said the current application "could almost just be a widget, all it needs to do is allow me to pay my monthly bill and view my data usage". This sentiment helped me understand that users don't find value in the frivolous additional actions the current application offers. It also alluded to the fact that users want to have control over their service. The most important aspects of a user's service seem to be the ability to pay their own bill and the ability to review their data; in being able to view their own data, a user feels more in control and better able to prevent overages. Additionally, many of the users I spoke with agreed that the current application "does way too much". From this and similar sentiments, it can be concluded that users don't want a Swiss Army knife application, but one that is deliberate in its capabilities. After synthesizing these user quotes, I concluded that the AT&T application redesign should deliver an application that is less capable but more intentional in its operations, and that gives the user a sense of control over sensitive matters.

Delivering on the value promise was only one constraint; I also had to find a way to release the application within 20 working days. This meant reviewing which essential features of my application could be created and released in those 20 working days. These were the two established constraints I was working within.

I began reviewing all the features and flows alongside their estimates. To prevent myself from duplicating times, I cut out each screen and physically mapped out how the application would work. Then I began to quantify the total cost of each screen and added some preliminary notes about what the application needed to do in the first release. I organized my work around the Login flow and the four sections of the application: Billing, Usage, Devices, and Account. Below are two images of these physical maps with my additional comments and notes on the various screens.

Billing

Again, mapping out these different sections helped me make sure I didn't duplicate estimated times for feature development. It also helped me separate the different flows by the phase in which they would be published. Finally, the method helped me see where various features overlapped; knowing when features needed to be in the same release helped me place them in a release order that made sense.

Once I had figured out the cost of each feature and its place within my application, I began to review which features I wanted in the first release of the application, which I'll refer to as Phase I. Again, the constraint tied to this release was that it could only include 20 working days of development; the other constraint I was working with was the value promise. I took these constraints and reviewed my application to see which essential actions were needed to deliver the value promise and could be developed in the allotted time period.

I knew from the beginning that the Phase I release needed to have the Login and Home features, since they're necessary just to open the application. Based on the assumption that the application needed to give users control over their monthly bill and their data, I decided that those two flows needed to be in the Phase I release as well. The tricky part was that the flows included in Phase I (Login, Home screen, Pay bill, View data) required a number of additional supporting flows: managing payment methods, changing the password, and changing the user name. Each of these flows was needed in Phase I for the essential flows to be functional.

Then I added up all the costs associated with the full flows and found I was over the 20-day limit. Below is an image of my math, showing the cost of each feature in the flow and how the total exceeded my 20-day limit.


Phase I Costs

To cut the costs down further, I made the decision to remove the option of having no password as a feature. My three password options were the most expensive part of my application, so I knew from the beginning that, since security wasn't part of my value promise, I would need to remove a password-option feature or two from the Phase I release. After I removed the no-password flow and its login options, I had reached a total of about 19 days of development.
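The budgeting step here is simple arithmetic: sum the feature costs, compare against the 20-day limit, and cut until it fits. A minimal sketch, using hypothetical feature costs rather than the actual estimates from the spreadsheet:

```python
# Hypothetical Phase I feature costs in developer-days (not the real estimates).
phase_one = {
    "Login": 3.0,
    "Home screen": 2.5,
    "Pay bill": 4.0,
    "View data": 3.5,
    "Manage payment methods": 2.0,
    "Change password": 3.0,
    "No-password option": 4.5,  # candidate for removal
}

BUDGET_DAYS = 20

def over_budget(costs, budget=BUDGET_DAYS):
    """Return how many days the plan exceeds the budget (0 if it fits)."""
    return max(0.0, sum(costs.values()) - budget)

print(over_budget(phase_one))        # 22.5 total, so 2.5 days over
del phase_one["No-password option"]  # cut the expensive non-essential feature
print(over_budget(phase_one))        # 18.0 total, so it fits (0.0)
```

With made-up numbers the mechanics are the same as the paper exercise: the plan only fits once the most expensive feature that doesn't serve the value promise is removed.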

From there I began the whole process over again with only the remaining features. The challenge with these features and flows was deciding which would deliver the most value in the next phase; since there was no longer a time constraint, delivered value was the only criterion.

The foundation of my application is structured around delivering a small number of actions done well and around giving the user control, so the Phase II focus was structured around data and user control. The activities I had designed to be accessed from the data section were "Change Data" and "Send Notification". By allowing customers to change their data plan on their own, they again feel a newfound sense of control; by letting users see that their data is almost used up and send out a message, they again feel more in control of their data and service. Each of these actions gives the user a better sense of control over their data.

For the final Phase, there was again a reduction in constraints. I no longer needed to worry about the time constraint, and I didn't need to worry about the value promise, since all my features collectively delivered on it. Rather, with this phase I focused on how I wanted the software developers to work on the features so that there would be a minimal amount of blocking between the two developers. This issue wasn't ignored in the other phases; it just became a much larger issue with the final Phase III assets. If it came to it, I knew I would be able to remove a few of these remaining features.

After I had learned which features would be placed in which Phase, I began to "road map" the features out. In a broad sense, this means mapping out when each feature is created by the software developers. On a more granular level, I created an Excel sheet with the resources on the y-axis and the days of the week on the x-axis, and plotted where and when each feature would be created. It also contains direction on how many resources would be applied to each feature. For the first round, I drew these maps out next to each Phase's features and estimated times. Below is a picture of each phase's sheet:




One of the largest constraints I struggled with while mapping out the features was making sure I wasn't causing development blocks: if a feature was dependent on another feature, I had to organize the timeline so that the first feature was built before the other. After I had a paper version of the graph locked, I started to digitize it. In digitizing the timelines, I also added a short explanation for each feature being developed, as well as a color coordinated with the section of the application. Below are links to two PDFs containing the roadmaps.
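Ordering features so that every dependency is built before the feature that needs it is, in effect, a topological sort. A small sketch of that check, using hypothetical feature names and dependencies rather than the actual roadmap:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical dependencies: each feature maps to the features it needs first.
deps = {
    "Home screen": {"Login"},
    "Pay bill": {"Home screen", "Manage payment methods"},
    "View data": {"Home screen"},
    "Manage payment methods": {"Login"},
    "Login": set(),
}

# static_order() yields features so every dependency comes before its dependents;
# it raises CycleError if two features each block the other.
build_order = list(TopologicalSorter(deps).static_order())
print(build_order)  # "Login" always comes first
```

Doing this by hand on paper, as described above, is the same exercise: any valid ordering of the cut-out screens is one of the orders this sort would produce.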

Roadmap- Phase 1

Roadmap- Phase II&III
As we continue working on the My AT&T redesign, our next steps include writing out the HTML code for one screen of the application so that it actually works, and creating a document for another individual that explains what the new application looks like, why it's an improvement over the last version, and how to build it out.

My AT&T App Redesign: Features, Capabilities, Controls, and Development Meetings

In the last blog post about the My AT&T Redesign, I had completed three usability tests, evaluated the results, and begun to integrate those findings into my application. A link to the post is here. Since then, the project has entered a more concrete phase, where cost and time have begun to play a role. Within the past two weeks, I've redesigned my application (both the actions within it and its aesthetics), identified the features, controls, and capabilities within it, and met with a software developer, Chap, to get an estimate of the time it will take to develop. This article takes an in-depth look at each step to explain what these mean.

After I presented my findings from user testing, I received feedback that my application lacked a "realness" to it. So I asked Jon to meet with me and discuss my app at length. Between my presentation and meeting with Jon, I also designed three examples of different application styles for my home screen and asked my mentor for feedback on them. Below is a PDF of the three styles.



My mentor replied with resources for inspiration, a fresh perspective on the three styles, and a nod to the third version as her preference. After reading her feedback, I decided to use the third option as the style for the next iteration of my application.

While I was still working on creating more visually pleasing screens, I met with Jon. He suggested I focus on two specific details of the application:

  1. Making the application look more like an iOS application, pulling from toolkits and existing resources to create the screens rather than building my own visuals and buttons in Illustrator.
  2. Concentrating on what is happening within the app, questioning why I'm including certain flows and actions rather than just accepting "how things have always been done."


One example of how the application lacked a sense of realness is the slider bar I created for the Plan flow. I had created my own slider bar instead of pulling one from an iOS toolkit. Below is an image of my old slider bar next to the newer, iOS-standard one I've placed in my application.




Jon explained that by creating my own icons and buttons I'd made things more difficult for a software developer, because iOS elements have already been coded; developers can largely copy and paste when writing the code for those elements. In creating my own, I was making the developer write brand-new code, which adds unnecessary cost. If I had had an element that truly needed to be created from scratch, I could have asked a developer to make it specifically, but I felt the elements from the iOS toolkit covered all my needs within the application.

An example of Jon's second point was the password flow. Previously I had required the user to type their new code twice, and I had certain requirements on what the password had to contain. Jon reminded me to be more conscious of what I'm requiring of users. In the next iteration of the application, I stripped away all the requirements for changing a password and only included what I thought was necessary for a user to feel their password was changed.

After meeting with Jon, I reviewed my printouts and went through each screen, marking up what I wanted to keep, remove, or add, and what needed to be on each screen. This evaluation forced me to review what I was asking of users in each flow, and whether I wanted to keep it or trash it. Below is an image of what this process looked like.




After reviewing every element of the contents, I began to tackle the organization of the application, moving flows between the four sections of Billing, Usage, Devices, and Account. Once I had a plan that contained all the flows I wanted, and where I wanted them, I was able to start building screens.

In a week, I built every screen for each flow with proper iOS elements and more consciously crafted layouts. For iOS styling, I downloaded a toolkit to pull elements from and found resources on the internet that explained the spacing requirements. I referenced the annotated printouts when deciding which elements needed to stay on each screen. I was also able to go back to my inspirations from Q2 and pull in some ideas from that time too.

After creating the new app, I needed an estimate of the time it would take a developer to build the application. I had learned in class that when developing an application, the real cost is associated with the features and the overall time it takes to build the application's various elements. Before meeting with my developer, I needed to identify these elements and explain their purpose within the application.

I took all the flows and identified each screen with a number. Then, on each screen, I identified the features and controls by name. A feature is something that adds value to the user's experience, for example the pie chart showing an account's data usage. The counterbalance to features are the controls: elements needed for the structure of the application, such as the accordion-style information storage used throughout. Once I had named and identified each feature and control, I printed them out and set up a meeting with the developer, Chap.

During our meeting, Chap and I discussed each flow individually, identifying the features and controls in the context of the flow. As we went through the flows, we identified potential error states that needed to be addressed, places where screens overlapped, and some of the more difficult elements of my app along with the solutions he saw as possible. As I identified the various elements, Chap documented the screen name and element with a point amount representing the cost of the feature; one point equaled one day of development. Chap then added 30% padding for himself so that he had some room if he needed additional time, so each estimated day became 1.3 days. He then divided the total estimated time between two developers, since we assumed the development would be executed by a team of two. At the end of our meeting, Chap shared the Excel document with me so I could reference each cost and element. In total, my application would take 41.17 days to create if two developers work on it for 8 hours a day. Here is a link with the file: Application Estimates
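Chap's arithmetic (pad each point by 30%, then split the total across the team) fits in a few lines. A sketch, with a hypothetical raw point total rather than the real per-feature list from his spreadsheet:

```python
PADDING = 1.3   # 30% buffer on every estimate
DEVELOPERS = 2  # assumed team size

def calendar_days(raw_points, padding=PADDING, developers=DEVELOPERS):
    """One point = one developer-day; pad each point, then split across the team."""
    return raw_points * padding / developers

# A hypothetical raw total of 63.34 points reproduces the ~41-day figure above.
print(round(calendar_days(63.34), 2))  # → 41.17
```

The useful property of writing it this way is that the knobs are explicit: change the padding or the team size and the calendar estimate updates accordingly.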

The two most expensive features in the application were the password options: first, the option to replace your password with the iPhone's Touch ID, and second, the option to opt out of having a password. Each of these was estimated to take around three and a half days for two developers; for perspective, my other features were on average estimated at only about 87% of a two-developer day. The high estimate was due to two pieces of work: first, Chap explained that he'd need to determine whether these options were even possible within the application, and second, there was the estimated time for actually building out the elements. His explanation of these two steps helped me understand where the costs were coming from and what that time would be spent doing.

Another interesting point Chap highlighted during our meeting was that when coding all of these pieces, he would need to work with various teams within AT&T to write the proper code to extract the information associated with the accounts. One element of my application that required direct communication with AT&T is the Data Usage pie chart: since the information displayed is so specific to the account, Chap would need to code the right request to display the various usage percentages, and that information would need to come directly from AT&T. This also means there is a general cost on any app that is spent by the developer familiarizing themselves with the other resources they would potentially be working with.

Since my meeting with Chap, I've updated the screens to reflect the changes we discussed, and I've updated my flows so that they contain both the screen title and the cost associated with its buildout. Below are links to each section's flows with their associated costs.

Home & Billing Flows

Devices Flow

Plan Flows

Account Flows

Since this project is under a time constraint of four weeks, I now need to reduce my development time. This means going through my application and deciding which features and controls are going to be kept, and which are going to be eliminated for the first launch but added back in later. I'm looking forward to working out a way to deliver a valuable application to the client while still maintaining our timeline of four weeks of development work.

AT&T App User Testing

Previously, this AT&T app redesign had been developed over a period of four weeks through a few rounds of iteration. The next step in the design process is user testing: in broad terms, reviewing and testing the app from the perspective of a user. This allows a designer to gain more empathy for users as they design the app. Over the past two weeks, three user tests have been conducted and reviewed, and their solutions integrated back into the app. Below are the different methodologies used, the resulting breakdowns that were found, and potential solutions for those breakdowns.

The three tests that I conducted were:

  • Cognitive Walkthrough
  • Heuristic Evaluation
  • Think-Aloud Protocol

Each of these tests highlights a different aspect of usability, allowing for the results to cover a whole array of usability issues.

The first of these tests was the Cognitive Walkthrough. This type of user test evaluates the prototype's ease of learning; more plainly stated, it evaluates where there would be potential breakdowns for a first-time user performing a standard task. This type of usability test is integral to establishing and understanding a theory of how a first-time user's first interaction with the app will go.

To execute this usability test, I printed out each screen and established my six tasks: set up Autopay, make a payment, change my plan, suspend a device, change my password, and upgrade a device. Then I established who my potential first-time user was: any individual who has the AT&T app and uses it for its various features. After this, I set up scenarios to help myself empathize with the user, then ran through each flow asking myself a series of questions:

  • Will the user try to achieve the right effect?
  • Will the user notice that the correct action is available?
  • Will the user associate the correct action with the right effect they are trying to achieve?
  • If the correct action is performed, will the user see that progress is being made towards a solution to their task?

These questions help evaluate each step necessary to perform a task, and whether a first-time user would be able to connect each of these steps with their overarching task. As I reviewed each screen, whenever there was an issue between the questions and the screen, it was logged in an Excel sheet. This sheet included several descriptive fields so that, when reviewing later, I could easily remember what needed to be changed: Screen Identity, Problem Description, Severity, Frequency, and Proposed Solution. Below is an image of the workspace I was using to perform these reviews.




From this review I found a number of issues with the learnability of the prototype, and I've come up with potential solutions for them. The main three issues I'd like to focus on are as follows:


Problem: My Devices screen only included a list of devices, and in reviewing how this would help with a user's overall task, I found a disconnect: a user might open the "Devices" page and not see any actionable next steps.


Solution: In order to combat this disconnect, I added a few choice actions that can be started from the “Devices” page. This allows the user to connect their task with the current screen they are viewing.



Problem: Within my "Change Plan" task, a customer won't immediately understand that they must tap the "Lock Plan" button to finalize their changes to the plan.


Solution: To manage customer expectations, I added a brief line of directions explaining that the customer needed to tap "Lock Plan" once they were satisfied with the changes.



Problem: Finally, the placement of the Autopay signup as a subsection of "Bill" seemed likely to clash with a user's pre-established notions of how apps are set up.


Solution: To mitigate that potential breakdown, I added the option for Autopay to be set up under the "Profiles" screen.



This was an excellent test for more fully understanding how a first-time user would review the actions available to them, and for analyzing whether those steps truly help them complete a task. To access my full feedback, here is the link: Cognitive Walk Through- Complete Feedback


The second type of usability testing performed was the Heuristic Evaluation. This tested the app against a list of 10 established usability principles: known aspects of well-designed products that facilitate seamless, smooth interactions between user and system. Below are the 10 principles with a short explanation of each:

  1. Visibility of system status- There needs to be some kind of communication during wait times so the user understands the system is still working even though nothing significant has visually changed.
  2. Match between system and the real world- By aligning a product with aspects of the real world, the system becomes far more familiar to the user on first experience.
  3. User control and freedom- Users should be able to make the changes they want to the system, but also have the freedom to undo those changes if needed.
  4. Consistency and standards- Following industry standards establishes the system's familiarity to the user and improves the overall experience.
  5. Error prevention- To prevent the user from making an accidental change to their account that could cause damage, additional confirmation messaging needs to be incorporated into the system.
  6. Recognition rather than recall- It's encouraged to lessen the load on the user's memory; instead, have instructions or images carry that weight as the user moves through the system.
  7. Flexibility and efficiency of use- Creating shortcuts within a system is encouraged; they let users who are more familiar with the system move faster than the novice paths allow.
  8. Aesthetic and minimalist design- A less cluttered screen is generally the preferred style.
  9. Help users recognize, diagnose and recover from errors- If issues do arise in the system, error messages need to help the user pinpoint the issue and resolve it.
  10. Help and documentation- If users do need to reach out for help with the system, there needs to be a way for them to access other outlets of help.


Similar to the Cognitive Walkthrough, I executed this test by reviewing each screen against each of the 10 heuristics. Problems were again logged into an Excel sheet, this time with an additional field, "Evidence, Heuristic Violated," so that I could easily see which heuristic was broken.
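The same log can be kept as a plain CSV instead of an Excel sheet. A minimal sketch of that format, where the column names are my own approximation of the spreadsheet fields described above (only "Evidence, Heuristic Violated" comes from the text; the rest are assumed):

```python
import csv

# One row per usability finding. "Evidence, Heuristic Violated" is the
# extra field added for the Heuristic Evaluation; the other column names
# are hypothetical stand-ins for the rest of the spreadsheet.
FIELDS = ["Screen", "Problem", "Evidence, Heuristic Violated"]

findings = [
    {
        "Screen": "Upgrade Phone - checkout",
        "Problem": "No screen to review the full order before finalizing",
        "Evidence, Heuristic Violated": "Recognition rather than recall",
    },
]

with open("heuristic_findings.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()  # csv quoting keeps the comma in the field name intact
    writer.writerows(findings)
```

A plain-text log like this is easy to merge when several evaluators each produce their own sheet.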

A Heuristic Evaluation can be performed by multiple evaluators; in this case, several other students also performed heuristic evaluations on my screens. The benefit of multiple evaluators is a higher likelihood of identifying the majority of heuristic violations in a prototype. There are diminishing returns, though: studies have shown that after around 8 evaluators, few new usability issues surface.
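That diminishing-returns pattern has been modeled quantitatively: Nielsen and Landauer estimated that the proportion of usability problems uncovered by n evaluators follows roughly 1 - (1 - λ)^n, where λ is the share a single evaluator finds on average (about 0.31 in their studies). A minimal sketch of that curve, assuming their published λ rather than anything measured in my own evaluations:

```python
# Nielsen & Landauer model of problem discovery:
# found(n) = 1 - (1 - lam)^n, where lam is the average share of
# problems a single evaluator detects (~0.31 in their data).
def share_found(n: int, lam: float = 0.31) -> float:
    """Expected proportion of total usability problems found by n evaluators."""
    return 1 - (1 - lam) ** n

for n in range(1, 11):
    print(f"{n:2d} evaluators -> {share_found(n):.0%} of problems found")
```

Under this model one evaluator finds about a third of the problems, while by 8 evaluators roughly 95% are covered, so each additional evaluator past that point adds very little.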

The top three issues I saw on my prototype are as follows:

Problem: Within the flow of the Upgrade Phone task, I didn't have a screen that allowed a customer to review the full order before finalizing it. This violated two heuristics more than any others: Consistency and standards, and Recognition rather than recall.


Solution: To resolve this, I went back into the prototype and added a review screen, which includes everything needed to review an order.


Problem: There were multiple issues around the word choices I had used. For example, I was using language like "Passcode" instead of "Password," "Update" instead of "Upgrade," "Payment Methods" instead of "Payment," and "Billing Information" instead of "Account Info."

Solution: I reviewed how each of these words was being used and what I wanted the user to expect from each title, then replaced them with the best-practice terms for these types of information.



Problem: The final issue was one already mentioned: the "Lock Plan" button. It again confused evaluators, and it violated the second principle, match between system and the real world.

Solution: As before, I altered the screen to include instructions.


This test ultimately forced me to review each screen and critically analyze why different pieces were there and whether they needed to stay. Now the screens contain no unnecessary information or inconsistencies.

The Heuristic Evaluation was a time-consuming task, but after the initial execution it became easier. As more of these details are addressed, the screens and flows become more usable. The consolidated and completed feedback is located at the link: Heuristic Evaluation- Collection


The last and final user test I executed was the Think Aloud Protocol. This test reviews how real users react to the app and is meant to identify where users lose their comprehension of what's happening within it. The key difference from the other two tests is that this one puts the app in front of real users and asks them to speak out loud about what they are doing as they do it. Studies have shown that when a person verbalizes their stream of consciousness, they do not disrupt their own flow of comprehension. So, as designers, when an individual speaks through an interaction with an app, we can identify where any breaks in comprehension arise, what their reactions are, and which practices are working well. It's an extremely informative tool, and it requires very little in the way of cost or materials.

To perform this test, I gathered my printouts of the app and recruited volunteers to complete the six established tasks. I reassured them that they could quit at any time and that this was a test of the app's design, not a reflection of them. I explained the tasks, then watched them work through my prototype. Below are the two biggest issues that came up in testing.



Problem: The first was the placement of Autopay. Each of my Think Aloud volunteers had trouble finding where Autopay was within the app. One individual searched for nearly five minutes for this button, narrating his own thought process during the test: "I imagine it's under Profile, so click on Profile…now I'm thinking I don't know, not Plan, not Bill, maybe 'Billing info'?…I'd expect it'd be under Settings, which seems like Profile, which it's not there." It was the perfect opportunity for me to understand where he thought this piece of the app should be located.


Solution: To address this, as stated previously, I moved Autopay to the Profile pages and kept its flow there.

Problem: Second, individuals had trouble understanding that they needed to lock their plan after making changes to it. One individual said, "Wait, am I done? Do I need to do anything else? What is 'Lock Plan'?" Again, this helped me understand how they were comprehending my app.

Solution: Again, the solution I implemented was to add a short set of directions at the top.


This was the final user test I performed, so afterwards I began consolidating the feedback and identifying where comprehension broke down in my tests. This is an iterative process, and I know the next iteration is just around the corner.


After performing each of these tests, I've learned how to incorporate more of these usability principles into my designs, and to more fully imagine myself as a first-time user. Both of these new perspectives will help me integrate usability into my designs earlier in the process. I've also become more comfortable administering Think Aloud Protocol tests, and I'm sure that comfort will only increase with practice.
Currently, I'm still working on integrating ALL the results back into the rest of my screens, but I'm confident that, with just these main changes, my app is far better off than it was before the testing was done. Below is a link to a presentation that runs parallel to this document.


Usability Testing

Picking up where we left off: AT&T App Update

When we last left off in this project, we had finished working our divergent-thinking ideas into the AT&T app and writing out six short hero flows of our screens. Since then, we've ramped up quite a bit, moving from sketches to digital files to user testing and iterations.

To further explain: once we finished making short hero flows of the screens, we took to sketching out the full flows with full screens. We had printouts of an iPhone outline, and within each outline we illustrated all of the pieces needed to create the various screens. At this point I was working out how to integrate all of the current app's features with my own divergent-thinking ideas. To do this, some ideas were cut, such as the concept of paying for the service at the beginning of the month rather than at the end. I also cut some of the features the app currently has, such as the search feature, which seemed to have been put in the app perfunctorily. I continued to combine my flows and ideas with the actions needed in the app, continually presenting to my fellow classmates to gain their feedback and integrating that information too. Eventually, once I felt confident about my screens and flows, I began transitioning them into Illustrator. Below is an image from one of my sketched screens with feedback.

Digitizing I

I started by adding the iPhone image we sketched on into the Illustrator file as a background. (This turned out to be a terrible idea, since the sizing was completely off.) I then built out the different buttons, features, and flows as I had sketched them. My goal was to transition as much as I could into digital form.

After the first pass at digitizing the screens, I brought them to class for feedback. Greg politely but directly tore them apart (not literally). His feedback boiled down to my needing to integrate more of the app's actual functions into my redesign. I had been cutting too much from the actual app, and I wasn't including any screens aside from the hero flows, which was a mistake.

So I went back to my files and integrated more pieces from the app. I added more overlap between different pages, so instead of keeping "Plan" in a silo, I started building more crossover between parts. I also found a file from Conner, which was filled with great iPhone-specific Illustrator icons. This elevated the overall feel of my screens, making them more polished. I also removed the iPhone image background, which was only restricting how I created pieces. Finally, I started adding additional colors as accents and to create contrast.

The second round I brought my screens in, I received similar feedback: I needed to add more icons, more color, and more standardization to my buttons. I went back into my digitized screens and again added more details. To align more fully with the actual app, I began reviewing our old screenshots and pulling out where there was overlap between different pieces of the app, along with any additional features. This helped ensure my app was a redesign and not just the hero flows.

For the third round of review, we presented our files to the class, so I printed each of my screens, two per page, and ran through each flow. By the end of the critique, I needed to standardize my buttons in size, shape, and color; create emphasized and deemphasized button styles; and standardize my type and font sizes. Finally, I needed to lighten my secondary color so it wasn't as noticeable but still created contrast between the navigation bar and the main activity space on the screen. I was also given the tip to change my data bar to a slider bar, which turned out to be an excellent recommendation when I tested it.

After receiving the full class's feedback, I asked Conner for additional feedback so I could integrate that into my screens as well. Since Conner works in Visual Design, I knew he could give me some 101-level guidelines that would increase the fidelity of my screens. He told me the official size of the iPhone screen I should have been working in, along with some simple tips on standardization. He also reassured me that my screens communicate what they need to, and that the aesthetic side of wireframing will come with practice.

I took Conner's and the class's feedback and got to work resizing and formatting my screens. I reworked every screen in every flow, creating more hierarchy, standardization, and correct sizing. I was far more satisfied with this version than with any previous one. Below is an image illustrating the iterations from each round.


Throughout this whole time, though, Greg was directing us to begin user testing to confirm our screens worked properly.

Once I had the finalized version of the screens, I began user testing. I used the Think Aloud Protocol, a method of "evaluating the usability of work by encouraging a user to think out loud as they use your product or service." During the user tests, I found I had missed two confirmation screens and needed to add more overlap between the different parts of the app. Overall, the testing was very helpful in figuring out what was missing from my app.

Below are the final updated versions of my screens, specifically the flows.

Change Plan_v4_fullFlows-01


Make A Payment_r4_fullflows-01


Service Design: Team British Tofu & Farmhouse Delivery

For the past eight weeks, Team British Tofu has been diligently working alongside Farmhouse Delivery to more fully understand their service and how it could be improved.

Two key issues we found were that:

1) Farmhouse Delivery does not communicate to the customer what quality “real food” looks like — customers expect perfect, waxy produce that they see at the grocery store, and when produce looks less than perfect, they are disappointed.

2) Customers find it difficult to cook all their produce. Customers feel guilty about the resulting wasted food and money.

From these two breakdowns, we prototyped an “ugly produce” card, which would inform clients why their “real” produce might look a little different than what they normally see in the grocery store; and a cooking poster, which would help customers cook with ease, and help eliminate food waste.

We picked these two for the following reasons:

  • Limited impact/work needed by Farmhouse Delivery to integrate
  • Illustrator and paper were accessible tools and materials to create these ideas
  • It would have a high impact on the customer

The “Ugly Produce” Card: Prototype & Results

For the card, we reached out to Farmhouse and asked if there were any issues with the produce that week. The CEO quickly replied, saying the broccoli was purple, which was caused by cold damage. She said that she was concerned that customers would not understand that the quality was just as high as green broccoli, it simply looked different.

With this, we designed the copy for the card. We strove to create something informative that still pushed the idea of "this food is okay to eat!" We had carefully crafted this copy, but when we brought it to the client to ask if the language was okay to use, they completely changed the feel of the card. Unfortunately, we had to change our copy to accommodate the client and the time constraints we were working under. This experience of pushing back with the client was interesting, but it requires a skill we still need to refine.

Once printed, we placed the card in 22 bins and called customers to see if these cards made an impact. While some got value out of the card, the majority of those we contacted didn’t even see it. One customer said “I didn’t get purple broccoli, so I guess it doesn’t matter?” 

As we reflected on the feedback, we realized we had made some assumptions. For example, we assumed, since Farmhouse already regularly placed recipe cards in the bins, that a paper card would be the best information delivery mechanism. However, during our customer calls, we realized that customers actually have lives outside veggie delivery. Go figure.

Taking into account that customers still need education around their produce, we recommend that Farmhouse deliver this information via text. A quick photo and message will prepare customers for what to expect, and the texts will broaden customers' understanding of the fragility of local farming.

Broccoli Text

The Cooking Poster: Prototype & Results

We worked with Farmhouse to develop simple cooking instructions to help customers cook any veggie in their bin. While there are many methods, we narrowed the cooking options down to sauté and roast.

We also provided spice recommendations to help customers make their veggies even tastier.

We placed these posters in 22 bins and called customers to gather feedback the next day. Many of the customers said they "already knew this information." One also remarked that the size of the poster was too large — she was remodeling her kitchen and just didn't have space for it.

While a cooking aid may not have been the most valuable, we also heard from multiple customers that they kept receiving produce they didn’t want and didn’t receive the veggies they really wanted. From this feedback, we realized that, more so than underdeveloped cooking skills, a lack of personalization was the true reason why customers did not use their full bushel.

To enable customization, we suggest that Farmhouse Delivery allow customers to indicate unlimited produce “super likes” and “dislikes.” As of now, customers can only choose three dislikes, and have no way of communicating what they really want.

Farmhouse Website Dislikes

Farmhouse Website SUPER LIKES

The Customer Feedback Card

Lastly, through these most recent conversations with customers, we realized that there is no regular customer feedback mechanism. We suggest that Farmhouse include “customer feedback” cards in every bushel. These cards would allow customers to easily change their order size or frequency, and also communicate any issues or desires.

Change Bushel Card

Next Steps

We will be meeting with the CEO to discuss our findings and present our suggestions for the company’s future.

To see our full findings, prototypes, and process, please see our Design Brief here.