What is Design Fiction: Constraints, Technology & Culture

All of the articles we read for the “Technology-Strange and Familiar” section of our theory class address our ability to understand the role of technology in society. Specifically, each author looks at the way our understanding of technology’s role is limited either by being too close to see clearly or too far away. Similarly, each author’s position can be understood in terms of the relative import it gives to cultural vs. technical constraints in shaping the future role of technology. Here I am using culture to refer to the social, political and relational assumptions and constraints that shape civilization, as distinct from the capabilities and limitations of our technology.

In the attached document I briefly summarize each reading and position the readings relative to each other: whether they are too close or too far away to perceive the role of technology clearly, and whether they focus on technical or cultural constraints. At first blush it seems that Sterling is critiquing science fiction for being too far removed from our culture to provide insight into the role of technology. On closer analysis, his position is more closely aligned with Bell’s: science fiction propagates, unexamined, cultural assumptions about technology, and therefore is actually too close to provide meaningful insight. Based on this understanding, I propose a definition of Design Fiction.

What is Design Fiction Diagram

Design Ethic: Connecting Intent, Method & Result

All of the authors we read for the section of our theory course entitled “Power” agree that design is a powerful social force. They vary, however, in how they view design: is it simply a collection of methods that can be applied regardless of intent, as Martin suggests, or is it fundamentally defined by its intent, as Kolko argues? As is often the case, I argued vehemently that design is defined by its methods, in the process of arriving at the conclusion that design is, at its core, about the intention to humanize, support and empower. Unfortunately, that does not mean that the methods of design are not very powerful in service of less noble intentions. For this reason it is crucial to develop a way to connect intention and method, which, borrowing from Kolko, I am calling an “ethic.”

In the attached document, I briefly summarize each article and diagram each author’s position with respect to intent, method and result. Additionally, I present a diagram that synthesizes the various arguments and puts forward my own interpretation of how to align the intentions, methods and results of my design practice.

Intent, Method & Result

The Best Intentions: above average results

For the “With best intentions” section of our final theory course we are reading an article on international development by Michael Hobbes, an article on the role of private corporations in alleviating poverty by Aneel Karnani, and an article about the focus of creative energy in corporations by Jon Kolko. Each author uses the same basic structure for his argument: here is what is being done now; here is what we should be doing instead. The issues of scale vs. measurability of results and of the amount of choice given to the recipient (whether of design or aid) also surface in each argument. But although all of the authors seem to come from a similar worldview, as they move from problem to solution they do not move in the same direction along these axes. In fact, the points of disagreement, where the vectors from each author’s problem to his solution cross, mark key tensions. These are tensions, I believe, we must keep in mind and use as a method of course correction if we want not just to be the people with the best intentions, but also to achieve above-average results.

These tensions, plus each author’s position relative to measurability vs. scale and the amount of choice given to the user, are summarized and diagrammed in the attached document.

Best Intentions Position Diagram

Tipping point: Established testing protocol and making it up as we go along

Part 1: How do you Test for that?

Way back in Quarter 1 of our theory course we talked about lateral thinking. Lateral thinking describes the part of the creative process that allows us to jump sideways (or laterally) from a logical progression of thought to a parallel path. In retrospect, the connection with the parallel line of thought makes sense, but it can’t be reached by logic. You have to allow your mind to “jump the rails” to another track. This week, I have been thinking about the role of lateral thinking in the testing process.

We are creating a service, working name Tipping Point, that is designed to interact with users over a period of months or years, helping them get out of debt or save by enabling them to assign small amounts of money to a financial goal while they are out having fun. In essence, “tipping” themselves.
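The core mechanic is simple enough to sketch in a few lines. The Python below is a hypothetical illustration only, not our actual implementation; the 20% tip rate, the function name, and the example amounts are all assumptions made up for the sake of the sketch:

```python
TIP_RATE = 0.20  # hypothetical: tip yourself 20% of each "fun" purchase

def tip_yourself(purchase_amount, goal_balance, tip_rate=TIP_RATE):
    """When the user logs a fun purchase, move a small matching
    "tip" toward their debt or savings goal.

    Returns the tip amount and the updated goal balance."""
    tip = round(purchase_amount * tip_rate, 2)
    return tip, round(goal_balance + tip, 2)

# A $28.50 bar tab generates a $5.70 tip toward the goal.
tip, new_balance = tip_yourself(28.50, 150.00)
```

The open questions for us are less about this arithmetic and more about timing: when and how often the service should prompt the user to tip.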

So, naturally we have a lot of questions around timing and frequency:

  • How frequently should we contact the user, and how should this frequency change over the course of use?
  • How frequently should we provide positive reinforcement, and how should this change over time?
  • Are the personas, or voices, we are testing for the product too much? Will the user get sick of them over time?
  • What is the right frequency of reminders to make people feel empowered that they are taking action on their financial goals, rather than just daunted and guilty about their present circumstances?

The logical way to answer these would be to run a test over the course of many months, gathering data about users’ experience interacting with our product and how it changes over time. But, ain’t nobody got time for that, at least not this quarter. That’s where the lateral thinking comes in.

We created what we are calling a “frequency test,” designed to get at people’s emotional response to ongoing reminders or check-ins (like those they would receive from Tipping Point) on a compressed time scale. The test protocol involves sending SMS messages with suggestions of very simple actions the user can take to improve wellness. One group receives 10 messages the first day, 4 the second and 2 the third; a second group receives the opposite: 2 the first day, 4 the second and 10 the third. Every night users receive a short survey via email, asking how many of the wellness suggestions they did, how they felt about their self-care today compared to an average day, and how the frequency felt. This test will continue into next week, with the focus shifted from wellness to finance. From the first week we’ve gained a better understanding of how frequently people wish to be contacted and of the importance of varying that frequency from day to day. We have also had the chance to sort out a number of glitches in the implementation of our test.
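For concreteness, here is a small Python sketch of how the two groups’ daily message schedules could be generated. The waking-hours window and the helper names are assumptions for illustration, not a description of our actual tooling:

```python
import random

# Messages per day for each group over the three-day test.
SCHEDULES = {
    "descending": [10, 4, 2],  # group 1: frequent at first, tapering off
    "ascending": [2, 4, 10],   # group 2: the reverse
}

WAKING_HOURS = list(range(9, 21))  # assumed: send only between 9:00 and 20:59

def build_schedule(group):
    """Return {day: sorted list of hours at which to send an SMS}."""
    return {
        day: sorted(random.sample(WAKING_HOURS, count))
        for day, count in enumerate(SCHEDULES[group], start=1)
    }

schedule = build_schedule("descending")
```

Spacing the sends randomly within waking hours, rather than on a fixed cadence, keeps the compressed test from feeling mechanical.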

Jeff reviewing the data from the first part of the Frequency Test.

Stay tuned for further insights from the next round of the frequency test.

Part 2: Scenario Validation Testing

This week we also ran a scenario validation test. We have been studying this method in our Evaluation of Design Solutions class. In essence, it is a more formalized version of the testing we were doing last week with index cards, beer and donuts. We met with small groups of people in our target user demographic and walked them through two scenarios about a user setting up and using Tipping Point, each highlighting a different variation of the product voice.

Screens used as part of Scenario Validation.

We solicited individual feedback via short questionnaires during the testing and then concluded with a group discussion.

Participants writing responses during Scenario Validation

The major insights that emerged from this testing are:

  1. There is a sweet spot for the tone of the app, financial enough to be credible, but also irreverent enough to be friendly and refreshing. We’ve been saying if our app got dressed, it would wear hip business casual.
  2. We tested the idea of creating a lighthearted tone by asking the user to create a character to take care of in place of taking care of debt or savings directly. That didn’t particularly resonate with users; what was more compelling was the idea of doing things for your “Future Self.”
  3. Users want some kind of progress report. The most compelling way we found to frame this is in terms of how much faster they are paying off their debt compared to if they just paid their minimum monthly payment and how much this will save them in interest.
  4. Users appreciate a quick and lightweight set-up process, but that needs to be balanced with giving the user enough visibility into how the product works so they can form a mental model.
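The progress-report framing in insight 3 is easy to compute. The sketch below is a simplified, hypothetical amortization comparison (fixed payment, monthly compounding, made-up balance, APR and tip amount), not a feature of the product:

```python
def payoff(balance, apr, payment):
    """Simulate month-by-month payoff; return (months, total interest paid)."""
    months, total_interest = 0, 0.0
    while balance > 0:
        interest = balance * apr / 12
        total_interest += interest
        balance += interest - payment
        months += 1
    return months, total_interest

# Hypothetical example: $3,000 of credit card debt at 20% APR.
min_months, min_interest = payoff(3000, 0.20, 75)       # minimum payment only
tip_months, tip_interest = payoff(3000, 0.20, 75 + 40)  # plus ~$40/month in tips

print(f"Debt-free {min_months - tip_months} months sooner, "
      f"saving ${min_interest - tip_interest:,.2f} in interest.")
```

Framing the tips as “months sooner” and “interest saved,” rather than as a raw balance, is what resonated in testing.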

Recruiting Participants for User Testing

We are three students in the Interaction Design and Social Entrepreneurship program. We are developing a service to help people balance day-to-day enjoyment of their lives with their financial goals of paying down debt or saving. (Learn more about our project).

We have the idea, now we need your help to test it!

We are looking for people to participate in user testing who are:

  • In the Austin area
  • Between 22 and 30 years old
  • In their first couple of years of post-school employment
  • Making at least $40k
  • Carrying personal debt or actively trying to save

We will provide food and/or a small token of thanks. We will also share results of our study. If you are interested in participating please contact us.

Thank you,

Jeff, Lauren and Samara

In the end there can be only one…

Designers are always talking about killing babies. Don’t worry, it’s a metaphor. In this case, the babies are ideas, our ideas. And like parents, we can get pretty attached, pretty quickly. Nonetheless, we’ve had to kill a lot of ‘babies’ over the past few days.

The dream team (Lauren Segapeli, Jeff Patton and myself) spent Q2 researching how personal and familial narratives affect debt, financial literacy and the economic choices of 18-to-30-year-olds. Starting at the end of Q2 and continuing for several days before the beginning of Q3, we generated many, many ideas for possible interventions to improve the situation we saw in our research. We call these design ideas.

A selection of the ideas we generated based on our research.

Since then we have sorted through each idea (captured on a green sticky note) and, by plotting the ideas against criteria like feasibility, ability to monetize, social impact and personal excitement, diagramming the parts of each concept, and listening to our collective gut, narrowed the field to the three ideas about which we are most excited.

Diagrams to flesh out details for the final 9 design ideas.
Lauren and Jeff at work, surrounded by our externalization process.

Using storytelling, storyboarding and diagramming, we have fleshed out the three remaining possibilities, both to help us think through the details and to solicit feedback.

Storyboards for the final three ideas with our notes plus feedback from faculty and fellow students.

Sadly, by Saturday two more babies must die. Stay tuned to find out which idea we will be working on for the rest of the year!

Disclaimer: No real babies were harmed in the making of this blog post, or in the activities it describes.

High Fidelity – How to ask the right question

At Austin Center for Design, one of the nicest things you can hear is Jon Kolko saying, “You made a thing. I’m proud of you.” You made a thing? A major tenet of the curriculum is learning to make artifacts as tools to ask questions and make arguments. Every step of our seven-week-long, iterative assignment to redesign the smart phone app for Austin’s public transit system, CapMetro, has been about exploring how to create artifacts that ask a question and get the information needed for that stage of the process.

The first week we created a system flow with screen shots from the existing CapMetro app and then created a concept map of the existing app and a concept model of our proposed redesign. A concept map is a visualization of the structure of a system to understand complex interactions that cannot all be viewed simultaneously in the actual product. We started here because this is the right thing to make for the questions we were trying to ask, namely: How does the existing app work and what is wrong with it? What should we do instead? If we had started by recreating and then improving upon a single screen from the CapMetro app, we would have created an artifact that would be excellent for asking questions about CapMetro’s typographic and layout choices, but would not provide useful information about the overall structure of the system.

The second week we started creating wireframes for our proposed new system. Wireframes represent the layout of a screen, focusing on functionality and interactions rather than visual design. We have iterated on our wireframes each week for the last six weeks, as well as updating our concept model to reflect those iterations.

By week three we had iterated on the wireframes once, and it was time to go out and get feedback from real people. To do this we used a method of user testing called the think-out-loud protocol with paper printouts of our wireframes. The user is given a task to perform using the wireframes. While completing the task, the user is instructed to “think out loud,” verbalizing each action as he does it. During the test the researcher acts as the “computer,” bringing up the appropriate screens in response to the user’s actions.




Each subsequent week of iterating and testing, I have had the opportunity to see how the level of fidelity of my wireframes, that is, their closeness to the actual thing being represented, affects the type of feedback I receive. In addition to documenting at least three critical breakdowns to address each week, I have found a high-level takeaway from each round of testing, and many of these takeaways have involved fidelity. A few weeks ago my takeaway was about the need for better visual consistency and hierarchy in my typography and graphic elements throughout the system, a fidelity issue. Last week, I realized that I was trying to test navigation at a level that depended on the user having a spatial understanding of how on-screen and off-screen elements related to one another. That understanding is largely communicated through animation, which was not coming through on paper screens. For my final user tests I created an on-screen, clickable prototype so I could model some of those animations.


[Link to clickable prototype here: http://invis.io/9A1W9OYYJ ]


What I found from my user testing is that the fidelity of the clickable prototype created a number of issues of its own. Because the prototype looked very real, and some of the gestures felt like a real application, users were derailed when it did not behave the way a real application should. This was especially true for the keyboard, which was just an image with certain sections mapped as hot spots linking to other screens, and for the map, which was also a static image and so did not allow zooming in and out.

“Wait, the keyboard is broken…” – User 2


Despite these issues, I got useful validation on my updates to the real-time bus arrival feature, called Next Bus (see the Next Bus flow below), and feedback that led to a final tweak in my ongoing attempt to integrate step-by-step written directions to a destination with the map of the route (see screens 27 and 29 in the Search flow).



“I don’t expect to swipe [between things] on a map.” – User 4







So, that’s it for this quarter. I made a thing, and I’m proud of me. I have learned about the specifics of designing a transportation application and about the tools and techniques available for wireframing, and I have continued to hone my ability to craft artifacts that elicit useful feedback.

A moving experience: Nearing the end of the CapMetro ReDesign

It’s week 7. That might not mean much to you, but to us here at Austin Center for Design it means we have 10 days left to finish out the second quarter of the Interaction Design for Social Entrepreneurship program.

It also means that after spending the last six weeks redesigning the smart phone application for Austin’s public transportation system, CapMetro, we have only one more week to update wireframes, user test them and present them for critique. We are using a method of user testing called the think-out-loud protocol: a user is given a written task to complete using the wireframes (for example, find a route between your current location and this address); the designer performing the test acts as the “computer,” bringing up the appropriate screen or component in response to the user’s actions; and the user is instructed to “think out loud,” saying what he or she is doing and why, while performing the task. With the end of the quarter in mind, I planned my user testing this week to focus specifically on a couple of problem areas in my design.

Me conducting a user test at a favorite dive bar. Image courtesy of Lauren Segapeli.

By this point my flow for searching for routes and narrowing down options based on departure time is going pretty smoothly. So, rather than run users through that again, I focused on how the user drills down into a particular route to a destination, looking at the interaction between the information presented on the map and the information presented via text. I also looked at the flow for the Next Bus feature, which allows the user to find out when a bus next departs from a particular stop based on real-time data, rather than just the set schedule.

My high-level takeaway is both encouraging and daunting: my design has reached a high enough level of fidelity that the animation between states is increasingly crucial for the user to understand how to navigate the application. So, for my final round of testing and critique I need to explore better ways to describe this animation. In the meantime, below are some of the specific problems I encountered in my testing, along with the complete flows I tested. I’m also including a high-level system diagram that I created last week to explain a problem I was running into, and how I have updated it in this week’s iteration.



Above is a search flow as the user finds a route, departing after 5:30, from his or her current location on Chestnut Street to my favorite Mexican restaurant in Austin, Polvos, on South First Street.

This screen appears after the user has selected one of the possible routes to Polvos.

“So I guess I walk from point 1 to point 2…” -User 1

The user I was testing with understood the numbered circles on the map to be points to travel between instead of steps on the trip, which caused confusion.



The user can access step-by-step instructions by swiping up the panel on the bottom. To return to the other options, the user taps the section at the top where the options have stacked on top of each other. Most users understood this, but it was somewhat awkward. This is where I realized I need to focus on the animations and transitions: it is strange for the options to stack in front of the to/from bar.

The other flow I focused on is for the Next Bus feature below:


I realized while I was testing that I hadn’t included a way to get to screen 25, the list of all bus stops on the route. I have since added the VIEW ALL STOPS button, which you can see in the flow above, to address this problem.


This shows the screen before the button was added.

Finally, I need to make more apparent the connection between the scheduled information used to plan trips in the future and the ability to use real-time data to check the projected arrival time of in-transit buses.

“I’ll reopen it to check for estimated time of arrival when I’m leaving.” -User 3

I want the user to know that when he or she looks at a route to a destination within a certain period before departure, the app will automatically query the Next Bus data to update the arrival times for buses and modify the route if necessary. This issue, plus some improvements since last week, are diagrammed below.


Everything to everyone: CapMetro redesign iteration 4

Last night I did my third round of user testing at one of my favorite bars in Austin. For those of you who are just tuning in, the quarter-long assignment for our methods class is to redesign the smart phone app for Austin’s public transportation system. Each week we create updated wireframes (layout sketches for a digital interface) based on the results of user testing. We are using the think-out-loud protocol for user testing, which involves asking participants to perform a task using the wireframes, saying out loud what they are thinking and doing as they go.

I made some pretty big changes based on testing and feedback from last week’s iteration. Not surprisingly, I found some pretty big holes in my design this week. I have identified two high-level problems and three specific system failures from my testing.

The first problem is referenced in the title of this post. I’m still wavering on whom I am designing this app for. Is it the first-time or occasional user, or is it someone who rides public transit multiple times each week? Settling this will clarify how to prioritize features and how much explanation and hand-holding is required. In particular, the Next Bus feature has a very different value for a frequent rider who knows the bus route she takes all the time and just wants to see if the bus is on time, versus someone who isn’t even sure there is a bus that goes to the right place. We started the redesign process by writing a brief vignette about our user. I am going to revisit and update that vignette before I begin tweaking the wireframes for the next iteration. (From a testing perspective it is probably easier to find people who fit the first-time/occasional user persona, but there are some potentially more interesting features for the frequent user. I’ll let you know what I settle on next week.)

The second high-level problem is a visual design problem: it is not clear what is clickable, tappable, swipeable or otherwise actionable, and what is just informative. As I work through the upcoming iteration I’m going to be very mindful of visual consistency and of which visual cues have been successful in past iterations.

These high-level problems manifested in the following three system failures, or critical incidents:

1) The nearby bus stops on the home screen (red shapes) are not obviously clickable, nor is it clear to users why they would want to click on them. (Image below is #1, #40, #20 and #30 in the flows at the bottom of the post.)

“Those red things are super annoying.” – Test Participant #3

“They’re just there, they don’t tell me anything.” – Test Participant #4


2) The system doesn’t differentiate between searching for a route to find Next-Bus information and searching for a route to find the complete schedule. (Image below is #21 in the flows at the bottom of the post.)


I created the diagram below to visualize this problem and to understand how the navigation is working (and not working) in general. You can see that there are a number of breakdowns, indicated by the red lightning bolts.



3) Placing the step-by-step instructions as pop-ups on the map, rather than as a list, is still confusing. Users did not realize the numbers were clickable to reveal the next set of directions. (Image below is #8 in the flows at the bottom of the post.)


Below are the complete sequences of screens that I used for testing, with orange dots to indicate user action.


Innovation, Provocation and Service Blueprints (plus a short rant)

Over the course of the last four weeks I have been learning about the principles and tools of service design in our Q2 theory course. Service design is a subset of interaction design that involves the coordination of actions and artifacts to provide value for another person. This past week I was assigned to read Service Blueprinting: A Practical Technique for Service Innovation by Mary Jo Bitner, et al. In this article Bitner introduces a technique called service blueprinting and argues for its use to drive innovation at both the tactical and strategic levels. According to Bitner, service blueprinting is “a customer-focused approach for service innovation and service improvement.” It is a method of diagramming what the customer sees, does and interacts with as part of a service experience, as well as what the customer does not see but is necessary to support that experience. Below is an example service blueprint for a generic hotel stay from Bitner’s article.

Example Service Blueprint

The five elements that Bitner calls out to include in a service blueprint are Customer Actions, Onstage/Visible Contact Employee Actions, Backstage/Invisible Contact Employee Actions, Support Processes and Physical Evidence. These elements run down the left hand column and provide the structure for the blueprint.

I have two thoughts about this article I’d like to share with you. First is a criticism. Bitner emphasizes again and again that much of the value of service blueprints comes from the fact that they communicate visually. She says that service blueprints “allow all members of the organization to visualize an entire service and its underlying support processes,” that they are “more precise than verbal definitions,” that service blueprints are an example of the old adage that “a picture is worth a 1000 words,” and that “service blueprinting can help […] overcome the limitations inherent in asking customers to describe such a service by using words alone.” Why belabor this point? Because throughout the article Bitner congratulates herself and her team for using this wonderful visual method, yet produces a 25-page article that contains two images, and really one of those is a detail of the other. She provides the example service blueprint for the generic hotel stay (above), then goes through five case studies of companies that supposedly transformed their businesses using service blueprints and doesn’t show any of their blueprints. (End rant.)

Second, I’d like to explore how service blueprinting might be a hindrance or an aid to innovation. I can imagine how service blueprints are valuable for identifying breakdowns in existing service structures and for planning and managing the complex interactions involved in a new service. But I wonder if they also become self-fulfilling prophecies once the basic armature of customer actions is created. Do people feel constrained to imagine solutions that fit neatly into one of the cells created by the blueprint structure? Seeing the service blueprints from Bitner’s case studies would be helpful in addressing these questions.

Throughout this year we have talked a lot about provocation as a tool for focusing creativity around a particular problem without being weighed down by existing solutions. Provocations usually involve forced mash-ups of seemingly unlike things, or perspective shifts. The ideas generated from these provocations can lead to innovation by breaking free from the expected. How could a service blueprint be a tool of provocation? Using the generic hotel example provided by Bitner, I have rearranged the data to provoke new design ideas.

I reordered the customer actions at random. Much of what results is absurd. But think of a disruptive service like Uber: the idea that you would pay for a cab ride and give the driver your destination before getting into the car was absurd, until it wasn’t. I have not tried to rationalize a complete system around any of these ideas. I have simply used the new relationships created by pairing customer actions with seemingly mismatched employee actions and artifacts as a jumping-off point for new ideas.
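The reordering itself can be mechanized. The sketch below is one possible way to generate these provocations, not a prescribed method; the action lists loosely paraphrase the generic hotel-stay blueprint rather than quoting it:

```python
import random

# Customer actions, loosely paraphrased from a generic hotel-stay blueprint.
customer_actions = [
    "arrive at hotel", "give bags to bellperson", "check in", "go to room",
    "receive bags", "sleep and shower", "order food", "receive food",
    "eat", "check out and leave",
]

# A few onstage employee actions to pair against the shuffled customer actions.
employee_actions = [
    "greet and take bags", "process registration", "deliver bags",
    "deliver food", "process checkout",
]

random.shuffle(customer_actions)  # the provocation: break the expected order

# Absurd juxtapositions become prompts for new service ideas.
prompts = [
    f"What if '{emp}' happened while the guest is about to '{cust}'?"
    for cust, emp in zip(customer_actions, employee_actions)
]
for p in prompts:
    print(p)
```

Most of the generated prompts will be nonsense; the point is that one or two will jump the rails into a genuinely new idea, like the ones listed below.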

Here is my modified version of Bitner’s blueprint, in which the yellow boxes containing customer actions have been reordered.


Resulting Design Ideas:

  • Customer includes flight information for arrival on website and bellhop collects the guest’s bags at the airport.
  • Guest orders dinner before traveling to hotel. Hotel monitors travel delays and makes sure dinner is hot and ready when guest arrives.
  • Hotel parking lot becomes like a Sonic Drive-In and food is delivered to guests while still in their cars.
  • Hotel rooms are like parking spots in a lot. Signage indicates which floors have available rooms, and guests wander around and look at open rooms; when a suitable room is found, the guest checks in from the room.
  • Distribute a printout with room service specials for that day/night attached to guests’ bags.
  • Make it easier for businesses to control employee travel expenses. Reserve the room and check out and pay at the same time. Hotel amenities could be like minutes on a prepaid phone card: you only get as much as you initially paid for.
  • Store guests’ luggage outside of room, deliver just what the guest needs at the appropriate moment. Toiletries at night, exercise clothing before guest goes to gym, etc.
  • Room service brings up a mobile buffet table or Benihana style table. Guest looks at what is available and then orders.

Other methods of provocation would be to reorder the physical evidence, switch what is visible and what is invisible to the customer, or overlay the service blueprint of a totally different type of service. In the case of both incremental improvements and wide-ranging provocations, what is valuable about the service blueprint is that it brings together many disparate and complex components so they can be understood and manipulated.