News and blog posts from our students and faculty

Monthly Archives: April 2012

Do you want critique, or a hug? How to gain valuable criticism on your design

One of the most fundamental parts of the process of design is the critique, a formal opportunity for the designer to receive feedback from a group of people. There’s a lack of good literature on critique, although there are a few notable exceptions, and so for most, critique remains a mysterious tool. Those of us who went to art or design school learned how to do it, but likely never learned explicitly; instead, it was much more of an experiential process. I remember showing up at my first critique at CMU and being completely mortified by the thought of putting our assignment (a self-portrait) on the wall in front of everyone else, and then talking about it.

I wrote about critique in a broad manner for interactions a year ago, but I’ve been thinking about more specific ways to introduce students to the idea of being critiqued. Here are some thoughts about how to receive criticism; I’ll assume that the critique session is actually well organized and not just people sitting around talking, although that might be a poor assumption.

Be quiet.
When you are receiving a critique, it’s extremely tempting to rationalize your design decisions – to explain why you did the things you did. This will always come across as defensive, because it is: your rationalization is actually a way of showing that you’ve thought through the trajectory of the conversation and already considered (and judged) the end state. The defensive quality of a rationalization changes the conversation from a way to produce new knowledge to a verbal debate. But you’ve already chosen the medium to make your argument, and it’s your actual design work. By moving from the artifact to words, you game the system: your users won’t have access to your words when they receive your argument in the form of the final design. All they have is the thing you’ve made, and so it needs to offer the complete argument on its own.

Additionally, when you rationalize and describe design decisions prior to critique, you steer the conversation. For example, if you begin by explaining your color choices, you’ve done two things: called attention to a particular design element, at the expense of the whole (and primed the group to be thinking mostly of color and aesthetics), and set up a boundary around your design choice. Some people refuse to cross these boundaries once they’ve been publicly established, because you’ve implicitly claimed ownership over a design detail: you’ve signaled to the group “I care about this, and if you poke at it, you’ll hurt my feelings.” Ironically, you may have called attention to it because it’s the element you are most concerned about!

Write it all down.
Some of the best parts of a critique come from the small, nuanced details of conversation, and the ideas sparked by the conversation. A participant might say something like “When the user clicks here, instead of going to that other page, it seems like we could do a mini-modal, eliminate a step, and provide a way for them to maintain an understanding of context.” There are at least three points that are important here (a stylistic decision of using a small, in-line modal; an implicit recommendation that the flow is too long; and an observation that context is important to make decisions). It’s unlikely that you’ll remember all of that when the critique is over, and if it’s a good critique, it’s unlikely you’ll remember anything, because you’ll be actively considering so many new and different ideas. It’s critical that you write it all down.

When my own work is being critiqued, I number each individual item, component, or artifact of the design with a unique identifier. Then, as a person is speaking, I try to type exactly what they say into a document, and link their comment to the design prompt with the unique identifier. I also try to log who said what, so I can follow up later if I need to. In a few instances, I’ve found written feedback to be politically useful, too – when teams wonder where seemingly irrelevant design decisions came from, it’s effective to be able to point back to the origin of the idea as coming from within the team.
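
If it helps to picture that log, here is a minimal sketch of the structure I’m describing – the code, the identifiers, and the names are purely illustrative, not a prescribed tool. Every comment is captured verbatim and tied to both the numbered design item and the person who said it, so a decision can be traced back later.

    # A hypothetical sketch of the critique log described above. Each design item gets a
    # unique identifier; every comment is logged verbatim, linked to that identifier and
    # to the speaker, so design decisions can be traced back to their origin later.
    from dataclasses import dataclass, field

    @dataclass
    class CritiqueNote:
        item_id: str   # unique identifier for the screen, component, or artifact
        speaker: str   # who said it, for follow-up later
        comment: str   # as close to verbatim as you can type

    @dataclass
    class CritiqueLog:
        notes: list = field(default_factory=list)

        def add(self, item_id, speaker, comment):
            self.notes.append(CritiqueNote(item_id, speaker, comment))

        def for_item(self, item_id):
            return [n for n in self.notes if n.item_id == item_id]

    log = CritiqueLog()
    log.add("A3", "Jamie", "Do a mini-modal instead of a new page; it eliminates a step.")
    print(log.for_item("A3"))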

Extract more details.
Talking about interaction design is hard, because it’s multi-layered, requires an understanding of the system, and is highly contextual. It almost always requires a dialogue to really understand any criticism that’s offered. The dialogue, however, is a chance for you to ignore the first suggestion (Be quiet), and so while it’s necessary to ask for clarification, it’s important to do it in a neutral and open-ended fashion.

Consider two different approaches:

  1. Other Designer: “When the user clicks here, instead of going to that other page, it seems like we could do a mini-modal, eliminate a step, and provide a way for them to maintain an understanding of context.”

     You: “Why do you want to eliminate a step?”

  2. Other Designer: “When the user clicks here, instead of going to that other page, it seems like we could do a mini-modal, eliminate a step, and provide a way for them to maintain an understanding of context.”

     You: “Can you tell me more?”

The second choice seems light and almost therapeutic; it’s entirely non-confrontational, and acts in a way similar to the “five whys” of identifying root causes. While the first approach – “Why do you want to eliminate a step?” – is objective, it won’t be perceived that way.  Most people will hear you say “I don’t want to eliminate a step. Why do you want to eliminate a step?” and you’ll be herding them into a defensive corner and changing the tenor of the conversation.

Reserve time for conflict, and realize that you don’t have to agree.
Just because someone said something, and you wrote it down, doesn’t mean that you have to act on it. A critique is not a mandate. But be warned that there’s something strange that happens in meetings: people leave with very different views about what happened. If you are quiet during the critique, scribbling notes, people will leave the session feeling validated – that you heard them – and expect that their comments will be illustrated in the next round of revisions you do. And they’ll be personally frustrated when they don’t see the changes they described, because they’ll feel like you ignored them and they wasted their time. At the end of the critique, it’s critical that you set expectations about what you intend to change, and why you intend to change it. This is hard, though, because you might not know at that point, and your comments will likely open the door for further discussion (which takes time). It can be effective to end with a simple phrase like “I heard all of what you said, and wrote it all down. You’ve given me a lot to think about. I don’t agree with everything that was said, and so you may not see your comments visualized in the next iteration. If you feel strongly about what you said today, let’s try to talk about it in a one-on-one setting.” In large and politically volatile groups, I would recommend actually emailing both your notes and this disclaimer to everyone that was in attendance, and be prepared to explain – in a presentation or work session, but not in a future critique – why you made choices to ignore design suggestions.

Don’t ask for critique if you only want validation. If you want a hug, just ask.
A “bad critique” is one of the most valuable things a designer can receive, because it short-circuits the expert blindspot and helps you see things in new and unique ways, and it does it quickly. But sometimes in the design process, you don’t actually want feedback at all: you want affirmation, and you want someone to celebrate your work so you feel good. Learning to understand the difference is critical, because if you ask for critique, people will give you critique. But if you ask people to tell you the three best parts of your design, they’ll probably do it. As Adam Connor offered in his IA Summit talk, “Don’t ask for critique if you only want validation. If you want a hug, just ask.”

Posted in Design Education, Reflection | 2 Comments

Challenging The Financial Assumptions That Run The World

New York Times Columnist Joe Nocera can’t retire, because he doesn’t have any money. But, like a lot of people who are probably in the same position, he did mostly the right things, like putting money in a 401(k).

As I was growing up, I was taught the same “right things”, and I’ve taken the majority of these for granted as being appropriate things to do in a modern society. These include:

  • Put your money in a bank, because a bank is safe.
  • Max out your 401(k) contributions.
  • Save 25% of your income.
  • Consider large purchases through careful research.

Underlying all of these are assumptions about how capitalism works, based on ideas like rational actors working methodically to maximize personal value and support their self-interests. This walks alongside assumptions about work ethic: working hard leads to long-term financial success, and the harder you work, the more you’ll succeed. For many of us, these create our base understanding of the world: they are the scaffolds upon which major assumptions are built, and these assumptions color the way we think of democracy and equitable exchanges and fairness.

As these assumptions were being ingrained during my childhood, I remember having a perpetual sense of awe as I discovered various technological advancements. I remember space shuttle launches, and the Lego Technic sets, and learning how modems work. These engineering and technological feats build upon one another, and have always left me with the notion that humans can do anything.

I think, for myself and for a lot of other people, our technical abilities have become conflated with civic abilities, and we’ve made the incorrect leap that because we, as a society, are capable of building fantastic engineering marvels, we are somehow equally as capable of building societal marvels. But the more I understand the extremely short history of our financial system, the more I become convinced that we have no idea what we’re doing. Quite literally, everything we’ve been taught to accept about economics is a crap shoot, and we should all probably start challenging the most basic of financial assumptions. I realize observations like this are challenging, because they make us feel uncomfortable, but let yourself absorb these provocations, and see where your brain heads:

  • The cost of an item should reflect all of its externalities. As you walk through the aisles at Whole Foods, you put some locally produced items in your cart. They cost next to nothing, because they’ve been produced at a facility around the corner from the store. You decide to treat yourself. You select a green pepper. It has a label, which lists the pesticide tax and the VAT, as well as the shipping and freight fees. The pepper costs $84. Later that night, when you eat it, you take your time, prepare it simply with salt and pepper, and savor each bite.
  • A bank that stores capital should not also invest capital. When you go visit your bank, you can enter a room that has your money in it. It’s all there, the $65,000 you’ve saved, in various forms of precious metal. You pay a series of mandatory fees to have your money stored at the bank, but it’s worth it, because you can see your wealth expanding and depleting. When you make a purchase with your bank card, a little robotic arm pushes coins around. It looks like a video game.
  • People shouldn’t retire. As you reach 40, you’ve decided to cut your hours back to four days a week. Then, at 50, you switch to three days of working – you pick Tue/Wed/Thu, so you get a nice solid break for travel. By the time you are 70, you go in to work just one day a week. It’s an accepted norm to scale back to one or two days by the time you are 80.

I’m not naïve enough to think that these ideas should or will happen, or even that people will think them good. But the article cited above ends with a quote from Teresa Ghilarducci, a behavioral economist at The New School who studies retirement and investor behavior. “’The 401(k),’ she concluded, ‘is a failed experiment. It is time to rethink it.’” I think the entire system is an experiment, and many parts of it are failing. Innovation and creativity require a runway for exploration; as we develop new products and services, we’re realizing that comments like “that will never work” are self-defeating and unproductive. I think the same is true in areas of economics and policy. Tom Peters describes an idea as “a fragile thing.” Jonathan Ive explained that ideas “begin as fragile, barely formed thoughts, so easily missed, so easily compromised, so easily just squished.” I can tell you 50 reasons why any of the above scenarios are bad, but I also realize that good ideas come from unexpected places.

Derivatives were awful and didn’t work. They were a financial innovation, and part of innovation is accepting the risk that things will fail. I don’t fault those who invented these financial instruments for trying innovative things. I do fault them for doing it with the resources of those who had no say or ability to weigh the risk. But we need more new thinking around economics and the core assumptions of capitalism, and we need to first realize that new thinking will always come with associated risk, and we need to approach the risk responsibly. This will require that we give more time to ideas before “logically” explaining them away based on our assumptions of economics and policy. We need to start challenging basic assumptions about how financial and policy decisions work, because frankly, they haven’t been working at all.

Posted in Reflection, Regulation | Leave a comment

Thoughts on Risk Diversification in Innovation

The biggest challenge to innovation is not “having a good idea.” It’s the way people in an organization understand, and respond to, the risk of innovation failing. This is commonly called culture: the organizational precedents and attitudes that have been established related to taking creative chances. I’ve been intrigued with the culture of risk since observing good ideas shelved during quarterly reorgs.  There’s a pattern to the death of good ideas, and it looks something like this.

  1. A new product, system, or service is in development. It may or may not be on time or on budget, but it’s aligned, at least superficially, with the larger strategy put forth by senior management.
  2. Mid-way through the development process, one of two things happens:
    1. The management team has rallied around a new series of four- or five-year “imperatives”, and this has forced a reconsideration (and reorganization) of people, business units, and roles.
    2. Earnings are reported, and they aren’t where they need to be. A message is passed down to various directors: find ways to cut the fat by eliminating unnecessarily risky projects. Risk, in this context, usually means something that will have an unknown market effect.
  3. The head of each business unit evaluates the projects under their control, based on the new priorities, and eliminates several of them based on how far along they are, how much money has been spent, how risky the project is, and how aligned it is (or isn’t) to the new priorities.
  4. Projects are killed, and team members are reassigned to accommodate new projects, or laid off entirely.

By way of example, one of the projects I worked on was for a major consumer electronics manufacturer. Like everyone else, they wanted to “own the home”, and figured that a router and device-management software would help them establish a walled garden around the various electronics a family had, providing more seamless connectivity between devices and locking the family into a particular brand. The software would let all of the devices speak to one another elegantly, acting as a less awful version of iTunes. Analytics software on the device could phone home to the company to indicate usage patterns, providing lots of back-haul, big data opportunities. The company had begun tooling for the physical production of the device, had organized the supply and distribution channels and plans, and the software was halfway done. In retrospect, it would have been a great way for the company to position themselves against Apple’s domination. And then, they killed the project, through a process as described above. A strategic re-alignment, based on poor earnings, sent the various teams scrambling to identify which of their projects were the most risky, and those were the ones that were eliminated.

But to examine the “risk” of a project means more than considering the discrete nature of the market effect for that one project. Risk should be evaluated in the context of the project’s broader strategic importance to the company. This is based on a holistic understanding of how all projects work together to best support a strategy, and requires tempering risk decisions based on, as economist Daniel Kahneman describes, a broad policy-based frame and a lack of individual-project regret. Of course, this means that the individual in charge of making any decisions needs to have visibility of the various other changes being considered, so as to weigh risk across all projects. Kahneman relates an anecdote of a decision made by the top managers of 25 divisions of a large company. Each was asked to make an extremely risky decision, and each elected not to. And then, the CEO of the same company was asked his opinion, and he said “‘I would like all of them to accept their risks.’ In the context of that conversation, it was natural for the CEO to adopt a broad frame that encompassed all 25 bets.”
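
To make the arithmetic of the broad frame concrete, here is a toy sketch; the odds and payoffs are assumptions chosen for illustration, not figures from Kahneman’s anecdote. A single bet that loses its stake half the time looks frightening in isolation, but a portfolio of 25 independent bets with the same odds almost never loses money overall.

    # A toy illustration (hypothetical numbers) of broad-frame risk aggregation:
    # one bet vs. a portfolio of 25 independent bets with the same odds.
    from math import comb

    p_win, gain, loss = 0.5, 2.0, 1.0   # assumed odds and payoff per unit staked
    n = 25                              # number of independent bets (projects/divisions)

    # Single bet: positive expected value, but a 50% chance of losing the stake.
    ev_single = p_win * gain - (1 - p_win) * loss
    print(f"1 bet:  EV = {ev_single:+.2f}, P(lose money) = {1 - p_win:.0%}")

    # Portfolio: total return with k wins is k*gain - (n-k)*loss.
    # Sum the binomial probabilities of every outcome that loses money overall.
    p_portfolio_loss = sum(
        comb(n, k) * p_win**k * (1 - p_win)**(n - k)
        for k in range(n + 1)
        if k * gain - (n - k) * loss < 0
    )
    print(f"{n} bets: EV = {n * ev_single:+.2f}, P(lose money) = {p_portfolio_loss:.1%}")
    # Prints roughly: 1 bet:  EV = +0.50,  P(lose money) = 50%
    #                 25 bets: EV = +12.50, P(lose money) = 5.4%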

In my personal example above, an executive decision to cut spending rolled downhill, and at a middle management level, a decision was made most likely without the benefit of a broad frame of reference. I wonder what the CEO would have thought of the decision to kill the router project, if he approached it from the perspective of an innovation portfolio of risk, rather than from a purely economic standpoint. I suppose if there’s a lesson here, it’s to avoid making go/no-go decisions based on a limited purview, and a decision maker should always attempt to gain the benefit of a broader frame of reference. Talking to people is likely the most immediate way to gain that broader frame.

The broad frame approach manifests in a strange way for startups, at least for those who have taken investment capital. A good VC, taking a broad frame approach, will broadly guide their entire portfolio to make riskier decisions. But the founders in any of the individual portfolio companies will likely steer the ship towards a sure thing. Having never been in the VC position, I can’t imagine how that plays out, but I would guess that it’s conflicting and stressful for everyone involved.

I wonder what the parallel is for startups focused on social entrepreneurship. Organizations in the social sector are hardly immune to misguided risk assessment, although they are less likely to be engaged in innovative development work in the first place. Traditional non-profits are extremely conservative, and seem to be in a perpetual state of realignment around a mission statement. In a startup context, there’s a “risk” to doing anything, and that sometimes causes people to become paralyzed: what if I make the wrong choice? It seems like one approach to utilizing the implicit lesson of the 25-bet broad frame would be to do more things, spreading risk across the various actions and activities. This can be challenging, because most people at a startup already feel like they are running as fast as they can. I also wonder if the broad frame approach plays well with the idea of doing just one thing in an agile and lean way, in order to learn quickly and adapt.

If innovation is desired, and innovation requires risk, and risk is best managed by a broad-frame approach, it would appear that we should all be a) attempting to elevate our perspective by constantly looking for a broader frame, and b) taking more individual little chances, knowing that most will fail in order for a few to succeed.

Posted in Reflection, Startups | Leave a comment

Narrow Focus, Broad Applicability: How a focus on wicked problems can benefit everyone

I’ve noticed an interesting pattern in my students’ work, and it’s one that I wasn’t expecting. Because of the process we’re using – a process where we identify insights related to a wicked problem and use them as scaffolds for a new business – we’re aiming design at problems that matter, which helps to serve an underserved population. That’s by design. But what’s unexpected is that each business that has been created has resonance for “regular people”, too: in each case, students have developed solutions that support their target audience, but have a much broader appeal. The writings on Universal Design describe the same principle: by designing for those with special needs, your solutions will be usable by those with average needs, too. Oxo is the quintessential example: while the original handles were designed for those with arthritis, it turns out that peeling potatoes is uncomfortable for anyone, and so the narrow focus actually has broad resonance.

I’ll explain how this is working at AC4D by way of three example companies we’ve launched or are in the process of launching.

Pocket Hotline creates a way for volunteers to support non-profits by answering calls from those who are in need of a direct, personal and human interaction. The main social goal is to empower a community to support at-risk individuals.

The Pocket Hotline idea was developed after the founders, Scott and Chap, observed a general state of anxiety and chaos occurring at a local homeless shelter. There was a need for more personal interactions between the staff and the homeless clients, but there was an obvious lack of human resources available and a disproportionate ratio of clients to staff. Additionally, while the case managers could schedule meetings with the clients, these acted as “strategic” interactions – intended to plan for the future – rather than “tactical” interactions, which would be useful in the moment of a crisis or a question.

But it turns out that there’s broader appeal for voice conversations and one-on-one interactions, and one of the most successful applications of the hotline has been in Ruby on Rails Software Development (the Rails Hotline). Although the subject matter is drastically different, the main premise is the same: “I need to speak to a person that can help, right now.”

Feast For Days creates a way for low-income families to cook in a communal setting, to eat home-cooked meals, and to increase their knowledge of the importance of healthy and natural ingredients.

Jonathan and Ben, the founders of Feast For Days, observed low-income families at food banks passing over healthy vegetables because they didn’t understand how to prepare the food. During shop-alongs, they learned that many of the people they were trying to help had never learned to cook, and few actually had the required infrastructure to legitimately cook a meal. As a result, most turned to prepared meals or fast food, which are notoriously high in sodium and low in food value. There was a need for a low-stress way to gain access to home-cooked meals in order to introduce new behavior and norms around preparing meals from scratch.

The main social goal is to lower the economic and emotional barriers to healthy eating while introducing new knowledge and techniques. It turns out, those same emotional barriers exist in higher socio-economic contexts, too, and communal cooking has a large social appeal irrespective of demographics. While those in more affluent social contexts may have once learned how to cook from their parents, they find themselves too busy or without the emotional drive to prepare a home-cooked meal.

HourSchool creates a way for the homeless to act as teachers, earning money and gaining self-worth in the process. The main social goal is to establish a more democratized view of who can and cannot teach. By immersing themselves in the culture of homelessness, founders Ruby and Alex identified that many who are homeless have skills or knowledge that they can provide to other people, but lack the setting in which to begin teaching. Additionally, many of the homeless described feelings of self-worth when they taught things to other people and helped other people; it was, for many of them, the most empowering feeling they had experienced. HourSchool gives them a platform upon which to teach, and a way to experience this feeling of empowerment in a more regular and predictable way.

It turns out that there’s a large appeal to teaching and learning in informal situations, as evidenced by the variety of subject matter offered via HourSchool; based on the breadth of topics on the site, it’s unlikely that you’ll be able to identify classes taught by the homeless. And, many of those who teach their first class describe the same feeling of personal empowerment and value that comes from helping someone: it’s an extremely positive moment of growth.

In all three examples, students started by focusing on an at-risk population and targeting their solutions just for them. But because our focus is on a mix of psychology and emotion, rather than technology or utility, we find ourselves playing directly with the material of behavior. And it’s starting to seem like we’re all a lot more similar than we are different, at least when it comes to our influences and aspirations. There’s a pattern here, although I don’t really like implying that this is at all formulaic:

  1. Start with deep immersion in an at-risk population. By literally embedding yourself in the population, you begin to better realize the nuances, needs, and culture.
  2. Identify several insights. These insights are provocative statements of truth related to human behavior, and they act as core assumptions. This is the stuff of abductive reasoning: the inferences that create the initial design scaffolding.
  3. Build to support the target population, based on the core assumptions. Consider the unique emotional and incentive-based qualities of the population.
  4. Generalize the design language to support a broader audience. This might mean changing the literal words and images used, or it might have to do with the core product offering and feature-set.

*

I like that this works, mostly because it implies that the barriers of “us” and “them” are pretty artificial. We’re all just people, and once you start poking at culture and behavior, you get to some pretty poetic places. These are based on big words like identity, community, self-worth, and meaning, and these big words are relevant for any population, irrespective of socioeconomic standing.

Posted in Design Education, Reflection, Startups, Theory | Leave a comment

A/B Testing Ourselves To Death

There’s a great article on A/B testing in Wired today; if you haven’t yet read it, you might read it now and then come back. I feel like somehow, I keep finding myself in a contrarian position related to Things That Are Going To Change Business, and I don’t do it on purpose, honest. But I’m skeptical of A/B testing, just as I’m skeptical of most experiment-driven behavioral economics research, just as I’m skeptical of the use of surveys to prove anything. And in all three cases, the reasons are the same: behavior is complicated, the method is overly reductive, and the approach ignores the magic and the soul.

Behavior is complicated.
I consume every book on behavioral economics and decision making that I can get my hands on, and while I can’t claim to understand all of what I read, I can make a few generalizations.

First, we have two main systems of decision making, one that’s historically and impulsively driven by an urge to stay alive, and one that’s reflective and considered. They both operate, all of the time, and they often contradict each other. That means that, depending on the broader circumstances of use, the same person will respond differently to a stimulus, and so attempts to consider causality related to A/B testing need to correct for things like the ambient environment in which the user is using the system.

Additionally, discrete behavioral rules are compounded by the world around you.  For example, there’s something called the mere exposure effect, where, as Daniel Kahneman explains, “repetition induces cognitive ease and a comforting feeling of familiarity.” Seeing a word, face, shape, or other design pattern over and over increases the likelihood that a person will view that word, face, shape or design pattern as “good.” You have control over your own web property, but none over the rest of the internet, and that’s where this exposure is going to happen. In other words, it’s likely that visual precedent set by other sites will change the way a user feels about your site. That’s just one of hundreds of discrete psychological effects that exist – discrete in how it was tested and observed, but when played out in real life, there’s nothing discrete about it.

Additionally, the way people act on the internet is highly irrational, and anyone who has ever observed a usability test realizes that many people seem to be in a state of chaos when using technology, clicking, quite literally, everywhere. A/B testing almost implicitly assumes a rational agent, one who is taking actions based on a logical assessment of what they see in front of them. My experience tells me that simply isn’t a good assumption, and so the results of your test are likely to be inconclusive (even when the data tells you otherwise).

The method is overly reductive, and we never learn why.
A scientific approach attempts to isolate one thing in order to predict causality. That’s the basis of A/B testing. The problem is, one thing isn’t being “isolated”: the human using the system. Statistical models can start to make predictive assumptions about the likelihood of the human using the system fitting into various profile types, but it’s going to take someone a lot smarter than your average bear to produce these models. A well-respected startup in Austin, Vast, employs David Franke, a brilliant mathematician, as Chief Scientist. A big company like Google has hundreds of people to do this work. But I’ve found it rare that the small companies most likely to engage in A/B testing think about this at all, much less employ someone with a background in statistics who is qualified to model it correctly.
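
For a sense of what “modeling it correctly” means at even its most basic, here is a minimal sketch; the conversion numbers are hypothetical, and this ignores every contextual effect described above. It’s the statistical floor, not a model of the human.

    # A minimal two-proportion z-test on hypothetical A/B conversion counts.
    # This is the bare minimum of statistical rigor -- it says nothing about why
    # a variant won, and nothing about the context the user was in.
    from math import sqrt, erf

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """Return the z statistic and two-sided p-value for conversion rates A vs. B."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
        return z, p_value

    # Hypothetical result: variant B "wins" 110 conversions to 100 on 2,000 visitors each.
    z, p = two_proportion_z_test(conv_a=100, n_a=2000, conv_b=110, n_b=2000)
    print(f"z = {z:.2f}, p = {p:.2f}")  # p is roughly 0.48 -- nowhere near significant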

There’s a great anecdote that I heard from Ron Kurti, also at Vast, and repeated at Luke Wroblewski’s site: putting forms in a mad libs style increases conversion by 25-40%. It’s safe to say that, immediately following this observation, mad libs style forms started appearing all over the internet (if you haven’t heard this yet, you are probably thinking the same thing: how can I change my site to have mad libs forms?). But we don’t know why this works, and because behavior is complicated, we have no way of creating generalized rules for where it works best. And yes, we can A/B test it on our own sites to know if it works for us, but again, we won’t learn why. I don’t want my products, systems, or services to be black boxes; I want to understand how they work, why they work, and I want to have some degree of control over the things I’m introducing into the world.

The approach abdicates responsibility.
The same problem I have with “Lean UX” is evident here: we’re throwing things out in the world without really thinking about the implications these have on real people. As Wired describes, “But with A/B testing, WePay didn’t have to make a decision. After all, if you can test everything, then simply choose all of the above and let the customers sort it out.” Your customers aren’t there to sort it out. They’re real people, with real emotions, and your test is having real implications on their real lives. This may not matter, depending on what it is your company does. It’s hard to argue that, on a site where people rate restaurants, it’s ethically irresponsible to change the color of buttons to determine which has a higher transaction rate. But I would make a much more adamant case that, in a system used on a daily basis by an at-risk population, your customers can’t be your guinea pigs.

The approach ignores the magic and the soul.
I understand the value of data and a rational approach to things like engineering. I would like someone who is designing an airplane to use a rational, data-driven, scientific, rigorous approach to understand how much weight that plane can hold. But in the same example, we find an obvious illustration of what happens when we only use an analytical approach. Flying sucks, and it sucks because it’s been engineered to death. Using Google is starting to be a lot like flying, probably because it’s being engineered to death. An emotional approach has value, because it provides things that are unexpected, sensual, poetic, and things that feel magical.

*

Good design crafts a story, and I can’t think of anything more powerful than a good story. Brian Christian wrote a great piece for Wired, and I’ll be damned if he A/B tested multiple versions of it to find the one with just the right level of engagement. I don’t want to live in a world where things are optimized, much less optimized for transactions and consumption. I want up and down, and high and low, and things that are absurd, and things that have personality, and things that react in unexpected ways.

*

Related:

Designers and A/B Testing

Why A/B testing of web design fails


Posted in Methods, Reflection | Leave a comment

Leveraging Analogous Situations: Looking for Precedent In The World Around Us

For some startups, identifying what to build can be emotionally difficult. You may have identified an initial topic area, conducted research, and even synthesized the research into meaningful insights, but may be having trouble identifying a preliminary set of features or capabilities. I’ve seen technologists wary of building the wrong thing and having to rework code, and product managers who aren’t yet confident in their product decisions, and designers who aren’t done synthesizing research, and all of this can culminate in a culture of inaction. I suppose, in many ways, various things like “Lean” and “Agile” are a way of getting over this hump, by trying to build something super-small and super-fast and “fail early and often.” There are other ways to identify a set of features with resonance, though, and I think these are sometimes less troublesome and more effective. These include identifying an analogous emotional experience, and mapping the interactions over time.

By identifying an analogous emotional experience, you can understand and leverage emotional inflection points.

First, think about the insights and goals you’ve identified through your research. If you are working in the space of medicine, you might have described insights like “People want to stay healthy with minimal effort” or “People don’t understand or trust scientific terms for medical conditions”. You might have identified goals like “Safely treat a disease” or “Understand treatment plans.” As Alan Cooper describes, “when technology changes, tasks usually change, but goals remain constant”, and so these goals will be true irrespective of the medium of your solution.

Now, based on the goals and insights, describe the uniquely human interactions and emotions that are typical when people try to achieve their goals. Some interactions and emotions related to “safely treat a disease” include “Remember to take a pill each day”, “Feel confident of progress being made”, and “Check in with a professional once a month”. Some interactions and emotions related to “understand treatment plans” include “Read about the treatment plan in plain language”, “Discuss complexities with other people”, and “Feel in control”.

Now, think about a comparable and analogous situation that has nothing to do with health care. What are other situations where all of these qualities are true?

  • Remember to do something each day
  • Feel confident of progress being made
  • Check in with a professional once a month
  • Read about the situation in plain language
  • Discuss complexities with other people
  • Feel in control

I see an analog in things like gardening, doing an executive MBA, and training for a marathon. All of these situations require daily interactions, have a long and slow sense of progress, require infrequent but regular professional interactions, have lots of jargon that can be described in plain language, and require a feeling of control.

Take one of them – say, training for a marathon – and begin to describe how the process happens, over time. Sketch a timeline of it, and describe the main artifacts that are used to support people as they train. For example, there are devices people wear to track their progress through the day. There are calendars that coaches prepare, to remind people of their training regimen. There are groups people attend, in order to receive encouragement and help. And there are magazines people read with inspirational stories of people just like them, succeeding.

All of these artifacts become prompts for your brand new product in healthcare, offering initial touchpoints and pointing at potential features. Ponder the calendar idea, the group idea, the magazines, and the devices, and think about why these are so effective in the analogous situation. And then, steal the ideas liberally, and re-appropriate them in the new context.
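
To make the hand-off from analog to feature explicit, here is a small sketch; the artifact and feature names are hypothetical, carried over from the marathon-training example above rather than drawn from a real product plan.

    # A hypothetical mapping from artifacts in the analogous situation (marathon
    # training) to candidate features for the new healthcare product. The names are
    # illustrative prompts, not a real feature set.
    analog_to_feature = {
        "wearable progress tracker":        "daily log of symptoms and doses taken",
        "coach-prepared training calendar": "treatment calendar prepared with the care team",
        "group runs for encouragement":     "peer group for people on the same treatment plan",
        "magazines with success stories":   "plain-language stories from patients further along",
    }

    for analog, feature in analog_to_feature.items():
        print(f"{analog:34} -> {feature}")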

This method of looking at analogous situations works, but requires a rich view of the world, one where it occurs to you to think of marathon training or gardening. And so in addition to this technique as a prompt for sparking momentum, consider how you can more generally broaden your view of culture and society. That might mean reading new blogs that have nothing to do with software or startups, and going to conferences that are two or three times removed from your comfort zone.

Posted in Methods, Reflection, Startups | 1 Comment

Big Education Is Not Better Education

An article in Forbes, from a few days back:

“The University of Florida announced this past week that it was dropping its computer science department… The school is eliminating all funding for teaching assistants in computer science, cutting the graduate and research programs entirely, and moving the tattered remnants into other departments. Let’s get this straight: in the midst of a technology revolution, with a shortage of engineers and computer scientists, UF decides to cut computer science completely?”

All jokes about Florida aside, it does seem strange, and the response from the University doesn’t really make it any clearer. I don’t think the issue has anything to do with computer science. Instead, it seems like a typical, and poor, financial decision made by administrators of a public university who have the operational luxury (in most cases, mandate) to extract themselves from the reality of those their decisions affect. It’s actually quite similar to the recent move by California State University to completely eradicate financial aid that was already promised to close to 20,000 students: a decision that seems made with a cold focus on numbers and budgets, rather than “the right thing to do.”

It’s strange that we’ve allowed ourselves into a situation where “cold logic” and “the right thing to do” don’t align. But I see this sort of thing – seemingly arbitrary decisions based on an unsustainable financial model – happening on nearly every public campus across the country. It’s all a flavor of the same unsustainable economic model borrowed from the Fortune 500, a byproduct of a goal to scale, where somehow “bigger” has been equated with “better”. There’s been no meaningful reflection on the repercussions of growth for growth’s sake. I get that, at a mission level, public universities may feel the need to offer their services to as many as possible, but a thin education for many at the expense of depth for a few seems a particularly bad decision in an economic environment that’s drifting towards a service economy and trying desperately to find broad sources of innovation. Not everyone needs to go to college right after high school, and it’s time we acknowledged that skipping college does not equate to failure.

I’m skeptical of the amount of buzz around “disrupting higher education”, because I haven’t actually seen any examples of true disruption occurring. Moving your existing curriculum online, providing videos of fact-based learning, or making education free are all nice, but are only tiny examples of change. We’re still stuck with a factory system that has thousands of students sitting through introductory classes in chemistry or statistics, learning little from a professor who doesn’t want to be teaching in the first place.

And, I’m not entirely convinced that the academic focus of higher education needs to be disrupted, because there are many, many things that work well about Universities. At its best, attending a University is a time for boundless cross-disciplinary exploration, personal reflection, and, perhaps most importantly, mentorship. This isn’t some fake nostalgia; there are many of us that were truly changed by our experiences at school. I had the opportunity to learn from both Herb Simon and Richard Buchanan in one place, and that’s something that, without the existence of a formalized research institution, would never have been possible.

But I have a feeling that what attracted them to CMU, and what continues to attract people like them to great schools all over the world, is not the size or scale of the institution. Instead, it’s the promise of doing meaningful work with brilliant people. It isn’t the content of the University that needs to be disrupted, or even the delivery mechanism, at least in the upper levels of specialization (where class sizes are small and the focus is on depth). Instead, disruption needs to occur in the operational areas of higher education. Reconsider the core assumption of growth, and you begin to question whether freshmen should act as a subsidy for the upper levels, or if massive operational budgets, coordinated by faceless administrators, are needed, or if massive high-school recruitment efforts are worthwhile, or if standardized testing is actually necessary. Instead, offer a commitment towards affordable education, and a commitment towards small and focused schools. A school that is small and 100% operationalized around tuition can eliminate overhead, lower the cost of attendance, allow teachers to teach, allow researchers to research, change quickly to adapt to changes in the world, be selective in admissions, and act as a self-sufficient entity. It’s not nirvana; it’s just a simple model that makes sense.

Posted in Reflection | Leave a comment

Unicorns Exist: A Good Designer Can Do A Lot Of Things.

To be a good designer, you need to be able to design things. That wouldn’t seem controversial, except when you start to poke at “able”, “design”, and “things”, you encounter the unicorn problem. A unicorn is, of course, a magical and non-existent creature, and the metaphor implies that a designer who can research, sketch, code, crank out wireframes, put on a public song and dance, take out global executives to delicious sushi without saying something stupid, and wear a black turtleneck without looking absurd is also a magical and non-existent creature.

I think the unicorn problem is actually intertwined with the problem of failure, which – in order to continue the mystical animal metaphor – I will call and hereby claim copyright over as the Duckbill Platypus Problem. Design is iterative, and we need to continually learn in order to improve a design. Most of us have realized that one of the richest forms of learning comes from experiencing and reflecting on personal failure, and since we’ve all seen that silly shopping cart video a hundred times, the idea of “fail early and often” has become embedded in our brain as a Good Thing To Do. We should all be the Duckbill Platypus, a failure of an animal if ever there was one. I could have selected Camel, but I feel like the Camel already realizes it’s a bit obtuse, while the Platypus is hopefully naïve.

*

Not everyone believes that unicorns are fake, and not everyone believes that failure should be our goal. Andy Budd, who ran the recent conference I attended called UX London, set off a small Twitter firestorm by first stating that “I’m starting to get annoyed by all the design pundits championing failure. There’s something to be said for doing it right first time round” and then qualifying this with “Failure is a form of waste, so Lean start-ups should really try to minimize failure if possible rather than use learning as a handy excuse.”

And about a month ago, Cennydd Bowles described that the Unicorn label “reinforces silos, and gives designers an excuse to abdicate responsibility for issues that nevertheless have a hefty impact on user experience.”

*

Design ability means something, and increasingly, it means a broad something, because we’re both codifying existing practices and constantly identifying new mediums in which to manifest this process. The rough process looks like this:

I realize it’s overly reductive, because there’s really no clear delineation between phases. But I would expect anyone calling themselves a designer to have competency in this process.

When you compare the process with the medium, you begin to see a lot of complexity:

This is where our unicorn shows up: should we have expertise in all mediums? That would be hard, but “hard” isn’t necessarily a good reason not to do it. We could look at how science is applied in medicine as a precedent, but I’m not sure I like looking for precedent from areas that are so completely broken. If all you have is a hammer, everything is a nail, and if all you have is competency in print design, everything becomes a brochure and a campaign.

As an aside, I think the most exciting part of design is not in gaining expertise in these axes, but instead, realizing what happens when this is applied against the external subject matter:

One of the things that, I think, intimidates new designers is that they feel an expectation to know all of these things, too. Which is, of course, ridiculous, because “all of these things” means all technology humans have ever created, as the discipline of design is about humanizing technological culture.

And so back to Andy’s post about failure. It seems to me that a failure from which we can learn is when we apply a competency in process [x axis], and a competency in medium [y-axis], to a new and novel context [z-axis]. If we fail, it should be because of our inability to fully understand the new context, but not because of our inability or inexperience with our process or medium. When you have people in senior and director roles with four and five years’ experience, there’s no doubt that you’ll end up with failure in these areas, and that’s unfortunate, because that really does indicate the “bad kind of failing.”

Ultimately, you can have ability as a designer, in sort of a flat, broad, and generic sense, and you can have ability in a medium, in a rich and deep sense, and to be able to design things, you need both. You need to be a unicorn, and the bigger your horn, the better your work will become.

Incidentally, as I drew this, I couldn’t help but feel like Robin Williams in Dead Poets Society, when he encounters his students plotting the “greatness” of a poem on an x and y axis. Seriously? A chart of design? Maybe I’m the Camel…

Posted in Reflection | 1 Comment

The Ethics of Disruptive Innovation in Wicked Problems

Academics frequently conduct research as an end in itself. Practicing designers (sometimes known as “design researchers”) attempt to “use” the research to provoke new ideas. These practitioners have formalized a process of design-led innovation, where this applied ethnography is followed by reframing (looking at a situation in new or unexpected ways) and iterative ideation (trying things with real people in an effort to see how well these new and unexpected ideas work) as a way of driving disruption in tired or conservative industries. A formal or traditional approach to ethnography requires a researcher to remain impartial and attempt to intervene as little as possible, but the observations extracted by a design researcher are frequently driven by direct participation and active intervention. A design researcher may stop a participant and ask them to explain what they are doing, why they are doing it, or if they always do it that way. One of the “best practices” of a form of design research called Contextual Inquiry is to establish a master and apprentice relationship with a participant, where the design researcher literally learns from the participant by trying things, much like a master craftsman shows an apprentice how to use a tool. The design researcher’s intent is to engage in rapid and active learning, and to gain empathy. This is sometimes called (affectionately by designers, and with disdain by academics) ethnography light or guerrilla ethnography, and is used in contexts ranging from understanding how people purchase perfume to streamlining the workflow of vehicle assembly.

In academia, the phenomenon of “informed consent” plays a major role in determining the scope, scale, and approach of research that is conducted with at-risk populations. The use of human subjects in experiments has a tainted history, and so governance boards (called IRBs, or Institutional Review Boards) have been established to ensure these populations are not targeted in unethical or problematic ways. In the United States, researchers who receive funding from government agencies (such as the National Science Foundation) are required to have their research reviewed by an IRB, which is regulated by the Department of Health and Human Services. The Belmont Report – the document that established a majority of these research rules – was adopted in 1978. This is one of the first attempts at defining informed consent. A critical component of the report is duplicated below, in full:

Respect for persons requires that subjects, to the degree that they are capable, be given the opportunity to choose what shall or shall not happen to them…

Injustice may appear in the selection of subjects, even if individual subjects are selected fairly by investigators and treated fairly in the course of research. Thus injustice arises from social, racial, sexual and cultural biases institutionalized in society. Thus, even if individual researchers are treating their research subjects fairly, and even if IRBs are taking care to assure that subjects are selected fairly within a particular institution, unjust social patterns may nevertheless appear in the overall distribution of the burdens and benefits of research. Although individual institutions or investigators may not be able to resolve a problem that is pervasive in their social setting, they can consider distributive justice in selecting research subjects.

Some populations, especially institutionalized ones, are already burdened in many ways by their infirmities and environments. When research is proposed that involves risks and does not include a therapeutic component, other less burdened classes of persons should be called upon first to accept these risks of research, except where the research is directly related to the specific conditions of the class involved. Also, even though public funds for research may often flow in the same directions as public funds for health care, it seems unfair that populations dependent on public health care constitute a pool of preferred research subjects if more advantaged populations are likely to be the recipients of the benefits.

One special instance of injustice results from the involvement of vulnerable subjects. Certain groups, such as racial minorities, the economically disadvantaged, the very sick, and the institutionalized may continually be sought as research subjects, owing to their ready availability in settings where research is conducted. Given their dependent status and their frequently compromised capacity for free consent, they should be protected against the danger of being involved in research solely for administrative convenience, or because they are easy to manipulate as a result of their illness or socioeconomic condition.

There’s very little in the report that’s controversial, because the report takes a very common-sense, humanitarian approach to research. The majority of researchers doing work in academia are already well aware of this ethical conversation; it’s a standard consideration in formulating a research approach, and it’s part of the culture of the academy.

But this same process of design research is used outside of academia, by practitioners: design research is considered one of the keys to disruptive innovation. Design research, followed by reframing and ideation, is increasingly being adopted by practicing designers at companies like Nike, Starbucks, and Procter & Gamble. In these contexts, this design research is positioned as a form of market research, aimed at identifying latent needs and provoking new product and service ideas. And, the same process is used by social entrepreneurs in the context of humanitarian problems, known in circles of design as “Wicked Problems”. Broadly, these problems are the systemic issues of poverty, hunger, education, drug abuse, and so on – the large, interlinked, and societal issues that stem from our public policies, our use of technology, and financial inequality. Designers who engage in tackling these problems realize the potential of design as a tool for effecting positive change, and so they immerse themselves in the cultures they are hoping to affect. They utilize a number of different design research methods, such as Contextual Inquiry, Participatory Design, or Bodystorming, all in an effort to gain empathy and understanding with a target audience. Design researchers, coming from academia or professional practice, or acting as social entrepreneurs, may live on the streets with the homeless, volunteer at shelters, engage with case workers, and otherwise explore the phenomenon of homelessness.

The same process of looking at behavior is used in academic research, in for-profit commercial research, and in both for- and non-profit contexts of social entrepreneurship. This presents a problem, because the safeguards put in place by law to protect at-risk populations are largely ignored by those doing non-academic research. In my experience, I’ve found that design research, applied outside of academia, is nearly devoid of a formal ethical process. And there is a cruel irony in this, because these are the same innovators who are likely to actually produce new products and services. The results of their work will be more prevalent and impactful, and the positive and negative repercussions felt more broadly than academic research stuck in the confines of an academic journal. It is in a commercial or entrepreneurial setting that ethics are more important, as the potential for manipulative practice is greater.

Some (very few) design researchers may make rudimentary efforts to simulate the intent of the IRB. They may have their participants sign consent forms, or the researcher may go out of their way to articulate the research process and the compensation a participant may receive for their participation. They typically describe to the participant that they can quit the research process at any time and will still receive the compensation offered to them. But these efforts are minimal and inconsistent. And the lack of an informed-consent form is just one of the problems we encounter when we apply design research in commercial contexts.

Some of these problems are listed below; these are all problems I’ve actually observed, and I’m sure there are many more.

Forming a Non-Sustainable Relationship. Designers, intent on learning about a particular situation, form a relationship with a member of an at-risk population, such as a homeless person. They learn about this person, understand their wants and needs, and learn to empathize with them. In doing so, the participant becomes either emotionally, physically, or socially dependent on the researcher. When the research phase of the project is over, the designer leaves.

Safety. Designers (and particularly, design students) find themselves in unsafe situations, such as sleeping on the streets or participating in drug purchases, in an effort to learn about a particular culture or empathize with a specific audience. The richness of these experiences is alluring, and it’s difficult for the student (and the professor) to identify appropriate boundaries. This is compounded by popular celebration of this behavior (for example, Sudhir Venkatesh’s research work with inner-city gangs, popularized in Freakonomics).

Broad, Impromptu Research Activities. Designers depend on a fluidity of action in the field, where they observe actual behavior and can respond to the activities they observe. This is hard to plan – the entire benefit of the research lies in its fluidity and its reliance on actual behavior as a prompt – and so the research plan that is produced is broad and vague. An IRB may be unwilling or unable to approve such a broad set of activities. This is echoed by academic Michael Schmidt in a thread on the PhD Design mailing list: “…  the review boards are often comprised of people who know very little about qualitative research and who in some cases even hold a bias against anything outside a conventional quantitative study, randomized trial, or a rigorous mixed methods approach. Ironically, low-impact, non-invasive studies like carefully constructed interview protocols can be the hardest for which to receive approval.”

Equitable Compensation. In order to engage with a population, a design researcher typically offers compensation in exchange for a particular set of actions. One of my former colleagues at frog design, Jan Chipchase, notes that “Defining ‘equitable compensation’ can sometimes be tricky for the simplest of design research activities (e.g. a home interview), but is especially problematic when researching highly financially constrained communities where the gulf between the wealth/power of the participants and the researchers can be considerable.” He’s exactly right. Unfortunately, I’m not convinced most teams have the experience Chipchase has to make this assessment, and what’s worse, only very rarely do teams even have this conversation. For those in at-risk populations, inequitable compensation may provoke negative consequences, such as the purchase of drugs, a competitive or violent reaction from peers, or the inability of a participant to end a research engagement when they feel uncomfortable.

Use of Research For Questionable Means. Research conducted outside of academia is used to provoke new products and services. There is extraordinarily little conversation in industry about the responsibility a design researcher has in translating observations into product insights. In the EPIC 2006 conference proceedings, Stokes Jones describes a fascinating body of work related to home remedies in South Africa. As an anthropological study, it sheds light on the unique bottom-up approaches to innovation in developing countries. But it’s not just an anthropological study: the research was funded by Procter & Gamble with an explicit ambition, to “design new preparations specifically for Southern Africa (to fit Africans’ tastes and habits) as well as to target ‘lower income consumers’ (low for P&G’s targeting but average for South Africa).” I don’t fault Jones at all, as his presentation and description of the research indicate a thorough reflection on its ethical complexities. But I also wonder what happened at P&G after this research was presented, and based on my experiences with big-brand consumer-insights teams, I can only assume the news of South Africans putting Vaporub in their hot drinks was met with giddiness at the new financial potential. This is the “design imperialism” argument, which I frankly view as less critical, and more of a red herring, than the other four points above.

The summary of these points is that, first, there is no IRB for professionals or for social entrepreneurs, and there is no shared understanding of the role such a board would play. While independent review boards do exist, it is safe to assume that the vast majority of practicing design researchers are not aware of them and do not engage with them. Next, there is little shared understanding of the ethics of design research among professionals or social entrepreneurs, and the degree to which design research activities are examined in a particular context is extremely inconsistent. And finally, the larger conversation around ethics in design research is happening only at the periphery (a few small journals and respected individuals are talking about this, but only a few).

*

And so we are presented with a strange, complicated, and extremely textured problem. Well-intentioned designers hoping to learn about an at-risk population, with the intent of helping that population, must become a part of that very population: learning the language and the culture, understanding the workflows and broken policies and procedures, and trying their best to feel the emotions of that population. And in doing so, these well-intentioned designers may be forming important relationships, acting in life-saving capacities, learning the private and intimate details of people’s lives, and otherwise disrupting the status quo. They are frequently performing these activities on behalf of for-profit companies, and in the context of finite projects. There is a tension between the selfish and the responsible. Research in creativity and innovation increasingly describes the need for iterative design, the ability to fail and learn from failure, and the importance of playful, divergent thinking as a way of sparking new and unexpected ideas. This presents a problem for those engaged with an at-risk population, because these qualities are at odds with accepted norms for interventions with at-risk populations.

This tension is exemplified by a recent experiment at South by Southwest, an extremely large technology conference in Austin, Texas. At the 2012 conference, one of the strangest stories to emerge was that of the Homeless Hotspots – a project coordinated by the non-profit Frontsteps and the for-profit advertising agency BBH Labs. The premise of the project is simple, but the implications are extraordinarily complex. Homeless individuals in Austin were given technology that allowed their physical presence to act as a 4G hotspot, and they wore t-shirts announcing the presence of the hotspot. Nearby individuals could use the free bandwidth and, if they wanted, could donate money to the homeless individual for providing the service. The project received the following feedback:

It sounds like something out of a darkly satirical science-fiction dystopia

The digital divide has never hit us over the head with a more blunt display of unselfconscious gall

It has to do with digital divides, haves and have nots, and the idea that a fellow human being is of no more use to you than as an Internet jack

One way of viewing the Homeless Hotspots project is through a lens of disruptive design, or design thinking. This is the process by which a designer examines a situation and then attempts to reframe it by challenging, reconsidering, or outright rejecting existing norms. A traditional way of thinking about helping the homeless is by fulfilling their basic needs, such as food, water, and shelter, and then providing case management to help them find a job. An innovative way of thinking about helping the homeless is by combining their geographic independence with technology, giving them a service to offer those around them. This argument describes the Homeless Hotspots as a successful design approach, because the designers were able to learn from the activities of a prototypical situation and can now improve subsequent ideas based on these findings. What’s more, this argument paints the process – “try crazy things that question and disrupt our standard way of viewing a situation” – as fundamental for effecting innovative change.

Another way of viewing the Homeless Hotspots project is through a lens of ethics and empowerment. From this stance, the homeless are unable to adequately assess the financial and social implications and repercussions of acting as a mobile hotspot, and by definition, their socio-economic status precludes them from objectively and knowledgeably consenting to such an intervention. The work is dehumanizing because it leverages a group that is in no position to appropriately assess the mental or social harm that might come from such an intervention. This argument describes the Homeless Hotspots as a harmful failure, because the designers took advantage of a population that was unable to properly assess the tradeoffs of a decision to participate. What’s more, this argument paints the process of iterative reframing and disruptive design as harmful and exploitative.

The situation is not a simple one, and there is no easily supported, one-sided judgment of the project. What is clear is that there exists no real depth to the conversation of design ethics in the context of wicked problems. The population of design researchers is small, disparate, and without a shared language or set of ethics to ground their important activities.

There are academic design researchers, like Chris Le Dantec, who study at-risk populations, design interactions and interventions, and then describe these in written, peer-reviewed journals. Typically, these design researchers are constrained by the rules of their university, which require strict compliance with the IRB. Graduate students and faculty who conduct research in these contexts submit their research plan to the IRB well in advance of the actual research, and revise the plan according to feedback from the IRB committee in order to ensure an ethical and responsible approach.

There are practicing design researchers at for-profit companies, who study an at-risk population for a paying client, design interactions and interventions, and then monetize these. Typically, these design researchers have no professional constraints on their behavior, and so they proceed as they have been trained through prior experience. If they experienced ethical oversight of their research in a university or educational setting, they may carry those ethics into their work. If they work in a large corporation, they may have a corporate policy they must adhere to. Or, most commonly, if they work in an agency or design consultancy and developed their skills without formal training, there is no oversight of, or formal consideration given to, the ethical implications of their work. The question “What are the implications of this research for our target population?” is never asked. For all of its positive qualities, IDEO’s Human Centered Design Toolkit – funded by the Bill and Melinda Gates Foundation – fails to mention ethics at all.

And there are design-led social entrepreneurs, who study an at-risk population and attempt to build double-bottom-line services that support that population while simultaneously generating profit. Because of the bootstrapped, rapid style of entrepreneurship, research in these contexts is usually conducted in a “quick and dirty” fashion. In some evolving methodologies, the entire point of the research is to test a small, ill-thought-out idea before investing a great deal of time in planning or reflecting.

Ultimately, I agree with Chipchase that “The real design imperialism comes from those people who assume that the world’s poor are not worthy of the attention.” But I think the ethical considerations of design are being largely ignored by many practicing designers, most of whom fully intend to do good and have only the best of intentions. And so I encourage these folks to engage in the conversation, to evaluate their own work, and to further examine not the intent but the actual mechanisms and potential repercussions of their actions – and to realize that the power of design demands an ethic of design, as well.


UX London

In reflecting on the last three days of UX London, I can identify at least one core theme. Our discipline is growing up, and we’re starting to have a more refined and advanced conversation around topics like synthesis, meaning, and impact.

Bill Buxton began with a conversation around ideas. We should stop thinking of design as a “flash of insight.” Instead, it’s a process that’s about continuous, incremental change based on a creative recombination of the new and the old in unexpected ways. He’s right; it’s a constant integration process, as we make new knowledge and try to appropriate it into our work. A wider set of raw material (“knowing more things”) can lead to a wider array of new ideas.

Anders Ramsey described how, when design is involved in an Agile process, it should happen continuously, in short, collaborative bursts (like rugby) rather than in long, individual runs (like a relay race). I don’t agree, and I’m tremendously skeptical that the output of a process like this can be successful, if success means anything other than “shipping a product.” Still, the fact that Anders is proposing a way for design to be more effective in an interdisciplinary setting is refreshing.

Luke Wroblewski described how our design paradigms for mobile need to adapt to the qualities of context, related both to mobility and to a small-form-factor device. This means being more selective about the content we choose to show in mobile software, which implies a more judgment-driven role for design.

Kristina Halvorson gave a series of pragmatic tools for beginning to consider content as part of the larger process of design, and poked at the organizational issues that may creep up when trying to manage content as a living, breathing thing. I’m a big fan of content-driven design, and I’ve always had trouble with the development best practice of separating form, function, and content; it seems like an efficiency gained for our production teams at the expense of “the whole is greater than the sum of the parts.”

Jared Spool described how four forces are converging to create a perfect storm. Market maturity, the “emergence of experience,” Kano’s model, and Sturgeon’s Law (90% of anything is crap) are colliding to raise awareness of and respect for our field. I agree, and although I’m not sure I would have picked these four forces as the major scaffolds for the argument, his larger point is correct: there’s a lot happening, and it’s hard to stay aware of it all and understand how it all connects.

For me, more than ever, a form of constant synthesis through writing and sketching is required to integrate the seemingly absurd, extreme events of the world into a framework for understanding design.
