News and blog posts from our students and faculty

Monthly Archives: July 2012

Iteration and Variation

Two of the most basic principles of design process are iteration and variation. They aren’t the same, but they are related. Iteration is making informed changes to an existing design. These changes may be provoked by user testing or critique. Commonly, these changes are provoked simply through the act of making the previous iteration; this is the pursuit of perfection, and can be endless (which drives some project managers crazy). When designing software or services, I’ve found that I gain a “sweeping sense” of design ideas, but I can’t keep all of the details of that sense in my head at once. Iteration is the process that allows me to infuse this sense into the work and overcome the limitations of my own memory. The first pass is a “broad stroke”, intended to get the essence of the idea out. For a service, this typically includes a view of the touchpoints, the people involved, the handoffs, and a few key details. In software, this is typically the “hero path”: the main push through the interface, helping a user achieve a single primary goal. The input for this broad stroke is imagination, and the bottleneck is the ability to remember.

Once this broad stroke has been created (drawn, wireframed, coded, etc), further iterations assume the basic framework as fact. The initial iteration acts as a constraint, and becomes rigid: I’ll refine details and extremities, review and change aspects of the idea, but the idea itself has come to life. That’s good, because it serves as a creative anchor. It’s bad, because I now have a subjective sense of ownership over it, and I’ll become unwilling to let it go even when a better idea presents itself. Each further iteration serves to solidify details, and they become taken for granted: they become facts.

Variation is a way of adding a sense of objectivity to design exploration: it is an exploration of alternatives. In my own design process, I’ve found that I’ll avoid variations unless explicitly prompted to make them, and sometimes I need to prompt myself. Where an iteration moves an idea forward (or backward), a variation moves it left or right, and is not productive in a typical engineering sense: the expectation is that all of the variations except one will be rejected. But variations act as provocations for what-if scenarios. When I’m urging other designers to produce variations, I advocate what my friend Bob Fee calls the A-B-C-Q approach to variation: create several expected variants that change minor details (A leads to B, B leads to C), and then make a wild or surprising jump (Q). These “Q” jumps ignore or purposefully reject constraints, established precedent, or social norms, and it is from them that riskier but more exciting innovations emerge.

F. Scott Fitzgerald wrote that “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” I usually can’t hold opposed ideas in my mind at once, but I can visualize them one at a time on paper or in code, and then compare them after the fact. Developers are starting to embrace iteration through methods like “agile.” This is useful, because it treats designed (and developed) artifacts as less precious and more malleable. It is useful to also introduce variation into a development process, in order to explore multiple ways of solving a given problem, and it is extremely beneficial to allocate development cycles to the exploration of both iteration and variation. It’s not a “waste of time”; it’s how design works.

Posted in Reflection, Theory

Scale and Social Entrepreneurship: Is Bigger Better?

Kriss Deiglmeier, Executive Director of Stanford’s Center for Social Innovation, recently wrote a blog post on the nature of scale. She’s pondering the urgency of growth, as described in nearly every entrepreneurship competition, pitch-fest, or best practice. She specifically homes in on the role of locally-specific, effective, but un-scalable solutions. She asks, “It is well known that social issues are interconnected; health, education, environment, and economic development are all intertwined. This is particularly evident across the world in low-income communities. Challenges such as hunger, poor health and poverty impact a child’s ability to learn or engage in education. Thus, there is evidence and research that supports the need for comprehensive solutions –– which are often too complex to be “cookie cutter” scalable. Yet are they not also worthy of funding?”

I wrote a little more about this in Wicked Problems – where my emphasis was on the generalizability of the solution itself. This is a particularly relevant issue when scaling is attempted across cultures, as if proving the efficacy of a solution in Vietnam implied that it would work successfully in Cambodia.

A design solution always begins with a local insight. For many designers, this is taken to an extreme, as it’s an insight about themselves or their personal surroundings. It’s a personal process, one made only slightly more sociable by participatory design or other forms of co-design. Scale is an external force that’s applied or encouraged, often through manufacturing, operationalization, or the amplification effect of digital technology. The externality of scale is artificial: the design solution works and exists independently of the number of people served.

So why an emphasis on serving a large number of people?

For many, it’s a question of ethics, or “goodness”. I once encountered this argument from an extremely wealthy woman, one who gives a great deal of her money to important social causes. She viewed her giving as a selfish act: she was working to improve the quality of the world around her so that she and her children would have a better place to live. As such, she took great pains to research and understand the recipients of any money she provided, and followed a self-declared “rational approach to giving” so that her money would “benefit the most people possible.” She thought about it like this: if I’m going to act to help people, I need to be aware of the cost/benefit tradeoff of my actions. Most of us have a practical limit on resources like money; in her case, the scarce resource was her time. And so she based her philanthropic giving on the rationality of maximizing her scarce resource. If it takes 100 hours to evaluate a potential grant recipient, she wanted the most social return on her investment – the most people helped per hour invested.

The ethical question can be turned around by examining breadth of impact in respect to depth of impact. Pretend we have $100,000 to give to the broad cause of “literacy in the developing world,” and consider this simplistic argument.

Vietnam has approximately 7 million students enrolled in primary and lower secondary schools [pdf link], and government expenditures per primary school student are an abysmal $23 [pdf link - worth reading in its entirety]. We could take our $100,000 and spend $1 per student to provide some basic materials to 100,000 students, thus increasing the expenditure per student by a small amount, for roughly 1.4% of all students. They may purchase books, pencils, or other basic supplies with this money.

Or we could spend $20 per student to provide more materials, or materials of a higher quality, to 5,000 students, having a more powerful impact but on a limited scale. $20 will appear significant to the students, parents, and teachers, as it nearly doubles their current resources. This might buy chairs, desks, chalkboards, textbooks, basic electronics, benches, teacher training, or more teachers (and therefore more classes – consider that “In Vietnam, more than 90 percent of children in rural areas attend schools with two or more shifts, resulting in an average class time of only 3 hours and 10 minutes per day” [pdf link]).
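
For concreteness, here is a minimal back-of-envelope sketch of the two allocations above. The grant size, the roughly 7 million enrolled students, and the $23-per-student baseline are the post’s own approximate figures; the rest is simple arithmetic, not additional data.

```python
# Back-of-envelope comparison of the two funding scenarios described above.
# The inputs are the post's approximate figures; nothing here is new data.

budget = 100_000        # total grant, USD
enrolled = 7_000_000    # approximate primary + lower secondary enrollment in Vietnam
baseline = 23           # approximate government expenditure per primary student, USD

for per_student in (1, 20):
    students_reached = budget // per_student
    share_of_enrollment = students_reached / enrolled
    new_spend = baseline + per_student
    print(
        f"${per_student}/student: {students_reached:,} students reached "
        f"({share_of_enrollment:.1%} of enrollment), "
        f"per-student spend rises from ${baseline} to ${new_spend}"
    )
```

The tradeoff is stark: the first option reaches twenty times as many students, while the second nearly doubles the resources available to each student it touches.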

An ethical argument can be made, successfully, for either of these investments. But I don’t think the ethical conversation happens at the funding level (although I know it happens in excruciating detail at the execution/program level). I think the first investment is broadly assumed to be the best because it touches more lives. It is probably the best in the case of commodity solutions, such as medicine. I don’t think that’s true for most conceptual, qualitative, subjective issues, such as education.

I think scale is also a question of fame and positive press. While the internet has made it possible to generate massive awareness of an extremely narrowly focused campaign (Kony, for better or worse, provides an example), typical foundation thinking around PR seems to be conservative: a press release describing the massive financial scale of a grant, with the number of people helped as a byline. I’m not smart enough to do the mental arithmetic to figure out what $10B to save more than 8 million children by 2020 means; I also would typically not research child deaths in developing countries to see if 8 million is a lot or a little (it turns out that’s about how many children die each year in the world). Instead, I would marvel at the large numbers, because millions and billions are indeed large numbers, and that would shape my casual view of the effort as extremely positive.

Deiglmeier points out that she hears “… over and over again the frustrations of community driven organizations because funders immediately want to know the ‘scaling’ model of such organizations –– and that funders dismiss them if they cannot provide it.” I hear that, too, and I’ve seen extremely impressive solutions ignored because of their perceived lack of scalability. I would like to see more of a conversation around the need to scale – particularly from the big name funding agencies and foundations – and more questioning of the assumptions around bigger, broader, and more. Design-driven social entrepreneurship can push deep impact, and can provoke meaningful change, without necessarily touching thousands or millions of lives.

Posted in Reflection, Scale, Social Innovation, Theory

Alumna Christina Tran wins Core77 Service Design Award!

Congratulations to Austin Center for Design alumna Christina Tran, who won the Core77 Service Design Professional Notable award for her service design work with HourSchool. Her entry, Building Peer Education Programs, One Hour at a Time, was described by the jury as demonstrating “a strong understanding and application of the service design process.” She joins other winners from Lego and the Welsh Government.

You can view the work, and award information, here. Congratulations to Christina and the HourSchool team!

Posted in AC4D In The News

How to Use a Concept Map to Get Your Way

Most professionals will, at some point, find themselves in positions of selling: of persuading a skeptical audience that their vision of the future is a good one, and is worth pursuing. Most professionals do this poorly, attempting to use words to appeal to logic, as if the best argument is the most rational. Whether that should be the case is debatable; it certainly isn’t the case in most organizations. Instead, a successful argument for a future state is usually made through a combination of emotion and narrative, and appeals to the heart and soul. I’ve previously talked about one way of making this case, by connecting design research to value. And I’ve also talked a great deal about sensemaking, as the way people understand and form relationships with new ideas. Concept maps are a way of persuading an audience, while at the same time, educating them: helping them to see the world from a new perspective (yours!), and giving them a mechanism through which they can make sense of the new future you are proposing.

My point here is not to show how to make a concept map (there are step-by-step guides for that), but how to use one. A concept map is overwhelming when it is first presented. It’s an extremely effective tool for the person who made it, but it becomes a point of confusion for the person who has to read it, and this is where a concept map crashes and burns: it’s a manifestation of the expert blind spot. In creating a concept map, you’ve learned new things and you see the world in a new way. It’s tempting to present the finished artifact as proof of this new vision. But the audience wasn’t along for the ride. They didn’t learn what you learned, they don’t see the world the way you do, and because the map is visually complex (as was your learning), they’ll be intimidated.

And so I recommend that you use a concept map for organizational change over a period of months, strategically, socially, and with a goal of manipulating the trajectory of your company by helping colleagues view the world as you do.

First, you’ll need to have an end-vision, a target for your organization. This may be a new role for your product, a new delivery mechanism, a platform change, a strategic acquisition, a re-org, and so on. Over time, as you figure out what this end-game is, a concept map becomes a useful tool to represent the vision – to you. Use it as a selfish tool. For example, if I were pushing a healthy-eating rock uphill at McDonald’s, I might arrive at a big concept map, with a small portion that looks like this:

My intent is to show that the only way healthy eating will survive in a fast-food setting is if it is introduced by corporate, pushed to suppliers, and manifested through massive large-scale discounts, negotiated at the same level as corn. It’s complicated, hard to understand, and while it makes perfect sense to me, it’s of no use to anyone else. There’s no story being told; I literally need to explain it with words, or people won’t understand it or use it.
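
For readers who haven’t built one: structurally, a concept map is just a set of propositions – two concepts joined by a linking phrase. As a hypothetical sketch (the wording below is illustrative only, not the actual map from the post), the fragment I describe might reduce to a handful of propositions like these:

```python
# A hypothetical reduction of the McDonald's fragment described above into
# concept-map propositions: (concept, linking phrase, concept).
# The specific wording is illustrative, not the actual map from the post.
fragment = [
    ("Corporate",             "introduces",             "healthy menu items"),
    ("Corporate",             "pushes requirements to", "suppliers"),
    ("Suppliers",             "agree to",               "large-scale discounts"),
    ("Large-scale discounts", "are negotiated like",    "corn contracts"),
    ("Large-scale discounts", "make viable",            "healthy eating in fast food"),
]

for concept_a, link, concept_b in fragment:
    print(f"{concept_a} --[{link}]--> {concept_b}")
```

Even a dozen propositions like these, drawn as a diagram, are enough to overwhelm someone who wasn’t part of making them – which is exactly why the map needs to be introduced in pieces.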

So, I might start by introducing this:

It seems dumb: it’s so simple. It doesn’t really say much. But it stakes out a view of the world with three major constituents. This is, potentially, not how people in the organization currently view the world – commonly, people in big organizations view the world through the lens of an org chart. This challenges that, albeit extremely subtly.

I would put this in my presentations, email it to a few people, print it out and hang it on my cube.

And then, over time, I would start to replace it with this one:

I would start to describe how our franchise owners are fairly apathetic about what products are actually served in the stores, and instead, care about minimizing change, conflict, and cost.

After a few weeks, a version like this would start to show up:

Over time, the map is introduced into the organization, and at each step, there’s no announcement, unveiling, or massive production associated with it; it’s not a design artifact in a finished sense. Instead, it’s “released” through one-on-one meetings, in presentations, in conversations. And over time, it gets traction, because it begins to stand for things. The diagram itself starts to act as a placeholder for the conversations you’ve had, the vision of the future you see, and even the roles and responsibilities of individual people or entire business units. One way of thinking about a concept map, released in this style, is that it challenges the org chart: it’s a way of effecting change in a bottom-up fashion, rather than in an autocratic manner.

And one day, you’ll be sitting in a meeting, and someone you don’t know, from an area of the business you’ve never had influence in, will present your diagram back to you, as if they made it. That’s an amazing feeling: your design work has shaped the tenor of the organizational dialogue.

Sometimes, you’ll need to introduce the map slowly – taking weeks or even months – until all of the various constituents have accepted it as a common language. We did just this at frog, on a project for an extraordinarily large client (hundreds of thousands of employees). We offered an initial view of a service landscape through a concept map. It was simple, just a few circles and some words – so simple that, at first glance, someone might say “that’s it?” And the map began showing up in various presentations, and posters, and emails. Over time, the map became more complicated – and was still “infused” into decks and emails. And it became part of the organizational language; it became the way people talked about the future. This type of strategic introduction of new design language is extremely powerful. Organizational change can occur through design proxy – slowly, methodically, and purposefully.

-

If you like thinking about concept maps, check out Hugh Dubberly’s great archive at www.dubberly.com/concept-maps.

Posted in Reflection, Strategy

AC4D in the Statesman: Austin design school instills humanitarian concerns

Check out the great article on Austin Center for Design in the Statesman!

Posted in AC4D In The News

The Long Haul – Thoughts on Acceleration

There’s a lot of startup hype about speed. Go faster! Accelerate your product! Always releasing, always testing, always honing, always in beta! There’s an implicit idea that for an entrepreneur, faster is better.

When a company is driven by venture capital, there’s an expectation of a massive exit within approximately five years (so investors in the fund can enjoy large returns and liquidate some of their gains). Entrepreneurs who have taken VC money commonly describe a huge sense of urgency for the company, the drive to launch and scale quickly. I know I felt it at the startups I worked at; there was some large and mythical scary force looming right behind us, and we had to hurry, lest it “win”. Through all of this, tech bloggers and breaking-news announcements have created a sense of overnight success and failure. Instagram was just bought for a billion dollars! Just like that – it happened so fast!

The thing is, when you talk to the entrepreneurs who have the amazing success stories of wealth through buyout or IPO, they constantly talk about a long, slow slog through multiple unsuccessful versions, flavors, and iterations. While there’s usually a single “moment of success”, most describe the process as more of a constant, ongoing climb uphill.

In 2011, Foursquare created (and TechCrunch published) an infographic showing the massive growth of the service in a short time – from 0 to 100,000 users in six months, and then to 10,000,000 members in a little over two years. But Dennis Crowley created a service called Dodgeball years before founding Foursquare, and “began toying more than five years ago with ideas about connecting cell-phone users through social networking software” – a quote from 2006, indicating that he had been pondering location-based check-in services since 2001. Toying and pondering are strange words, words I don’t think we value enough. They describe a head-in-the-clouds awareness of how things are progressing and how the world is changing, and this feeling becomes tacit. It becomes a frame, a way of viewing the world. It becomes a form of expertise.

The story of Instagram is similarly both long and short. The short story goes like this: a billion dollars, after two days of meetings with Facebook. But the reality is harder and longer, as Kevin Systrom describes working on Instagram back when it was Burbn, and his peers in college recall that “As early as 2005… Mr. Systrom had his eyes on mobile phones as the wave of the future.” The quote seems so cliché – the wave of the future – but I think it gestures at the same sort of reflective incubation I see in entrepreneurs. In the Valley, it seems to happen purely around technology, but it happens around other things too, like health and wellness, education, politics, and so on. It’s the constant, low-level buzz of viewing the world through a lens and forming a long-term opinion about how things should be.

Facebook didn’t become a giant brand overnight. Zuckerberg started Facemash in 2003, Thefacebook in 2004, and a private, closed network called Facebook in 2005. Google has a similarly long and strange history. The story of Netscape founder James H. Clark, told by Michael Lewis in The New New Thing, is a similar tale of long, arduous, stay-the-course, subtle and reflective thinking.

The drive for speed, the need to always be launching and pivoting and hurrying and moving, just doesn’t have a historical grounding in reality. It seems to be a distraction, and I think it’s harmful. I think it encourages a half-cocked form of product thinking, a defeatist attitude towards design and planning, and a rejection of quality. You learn from failure, no doubt, but the idea of pursuing failure as a mechanism to short-circuit product incubation just doesn’t work.

Many people feel that there’s a certain window of opportunity for “product/market fit”, during which a product will achieve the most traction and success. This window of opportunity is theoretically shaped by platform adoption, by familiarity with technology, and by the perceived wants and needs of consumers. It’s also shaped by the competitive landscape. As competitors make strategic investments and react to market changes, there’s a sense of looming takeover: a feeling that the next big thing is always steps away from your innovation, ready to grab your market share. But this window of opportunity is simply out of your control: it’s the result of all of culture coming together around technological adoption. The serendipity of product/market fit comes from being in the right place at the right time; it’s luck. But it’s not entirely luck, because you can hedge: you can increase your odds of success through long-term, nose-to-the-grindstone refinement of a design, and a careful eye to trends and culture.

Design is not about speed, and design-led entrepreneurship isn’t about speed, either. There is no day when suddenly your product is irrelevant, or your competition has “won”. It happens gradually and amorphously. There are mechanistic (and real) reasons to hurry (your investors will bail on you; you need to pay your bills). But those are external to the idea of product/market fit. And if you don’t have those issues, there’s no real need to force such an extreme hurry.

Posted in Reflection