How do you cook?


Here is the final AC4D presentation of Feast for Days, a collaborative cooking business co-founded by Jonathan Lewis and me. Enjoy!

Disrupting Higher Education: Some Observations On What To Fix

I hear a lot that higher education is ready for disruption. But it’s overly simplistic to say “higher education is broken,” and the entire focus of most education startups seems to be on delivery platforms: ways to bring content to students. Online learning is, indeed, flawed. But there are a lot of other aspects of higher education that could benefit from disruption, including academic research, the costs of education, and the basic assumption that everyone needs to attend college at all. These are some of my thoughts on the state of higher education today.

Academic Research
Academic research is strange, and I think it’s difficult for people to really understand why it’s necessary or how it works. What a researcher does all day is not that different from what anyone else does. If you don’t know any researchers, you probably imagine them in libraries, poring over manuscripts, all day, every day. Most researchers I know sit at a desk, answer emails, have meetings, strategize on whiteboards, and waste time on Facebook. The difference, though, is that their work is directed towards a pursuit of knowledge. They run projects and programs, usually with the intent of learning a lot about a little: it’s common for research to move the needle only an extremely small amount by building on previous ideas and theories. I think that, broadly speaking, academic research works. The incentive to enter research self-selects individuals who are motivated by the production of knowledge, because the pay isn’t particularly great and the working conditions can be fairly bland. And academic research continues to produce pretty amazing results. But the systems that dictate how this research happens need a lot of change.

Peer review
When an academic researcher publishes their work, it needs to be peer reviewed. That means that other researchers are invited to read it (they typically volunteer – peer review isn’t a paid activity) and determine its worth. This is a fundamental principle that’s rarely challenged. But academic research on academic research (holy meta!) indicates that it might not actually work. Consider, for example, the paper “Nepotism and sexism in peer-review,” which found that “the system is revealed as being riddled with prejudice” (.pdf link). More qualitative evidence shows that peer review can be extraordinarily hit or miss. And my own experience has given me evidence of just how broken it is.

When I volunteered to peer review papers for the Computer Human Interaction conference (CHI is the largest academic conference for computer scientists who work on issues of interaction design), I received no training, no instructions, and only the most basic of guidelines on what to do. The system works like this.

First, you sign up to be a reviewer. Anyone can sign up to be a reviewer. You could sign up right now.

Next, papers are assigned to you. You might think that there is a vetting process to judge if you are actually qualified to read and review the paper, but you would be wrong. I’m aware of how difficult it is to find people to actually complete reviews, because it’s unpaid work. Attempts are usually made to assign papers based on some expertise in the field, but that expertise is self-declared and based on a fairly superficial quantitative scale. Like other online systems, you build up a track record of having completed reviews, but there’s no track record of how good your reviews are, and there’s not really any indication of what a “good” review means. And even if there were, it might be irrelevant, because there’s a shortage of reviewers. So when the review deadline is approaching and 40% of papers still don’t have a single review, it’s tempting to reach out to a more generic set of reviewers.

Then, you read the paper, indicate your score, and respond to a series of questions. There are some guidelines written on what a “good response” might look like (this guide, for example, which is buried in a 2005 conference site; I wonder if anyone reads these guidelines). I’ve chaired a few tracks at conferences and been completely astounded by the shallow quality of reviews. I’m not alone here – read the comments, which are pretty telling.

Finally, there’s a back and forth with the authors, giving them a chance to respond to any criticism, and then, based on the average of the scores, papers are accepted or rejected. Consider that acceptance means the community thinks this paper exemplifies good academic research, and that it also means points for tenure.

And so, three reviewers who may or may not be qualified to read a paper, and may or may not understand a paper, and may or may not write a thorough review of the paper, and may or may not calibrate their numeric scores in the same way, decide on the publishability of academic research. Which is why, when you attend a conference like CHI, you see truly great things, and truly awful things, and pretty much everything in between. There is a design opportunity to create a better system for vetting academic research.

Walled Garden Output
Even if the review process worked, the output is almost always hidden away in a proprietary, locked system like the ACM Digital Library or JSTOR, ensuring that no one outside of academia will read it. There’s been a pretty strong backlash against this recently. For example, as of this writing, over 11,000 researchers have said they won’t publish in journals organized by Elsevier – one of the largest academic publishers. Harvard recently sent a memo to its faculty, urging them to publish only in open journals. Yet the lobbying power of organizations like Elsevier may prove to be as strong as that of the big content producers, and we’re starting to see more SOPA-like bills reach Congress with the overt intent of limiting public access to publicly funded research. There exists a design opportunity to create a better incentive structure for people to publish academic research in a public manner.

Funding
Academic research requires funding. But why – what does the money actually fund? While some of it goes to equipment, the majority of it covers salaries. But this is where the system gets strange. Let’s say you get a grant for $250,000 from an organization like the NSF. Before you do anything, the university takes a cut – in some cases, as much as 20%. You might use the rest of the money to fund some PhD students, who can act as research assistants. At CMU, a student who is studying to receive their PhD costs a research lab about $100,000 a year. But that doesn’t mean the PhD student gets all of that money. Instead, the money covers their tuition (which goes to the university), and then provides a small stipend of a few thousand dollars a month directly to the student. You’ll need to cover a portion of your own salary, too.

So when you play the numbers out, from a quarter million dollar grant,

  • The university gets close to $100,000, in the form of the off-the-top tax, the PhD student’s tuition for two semesters, and a portion of your salary.
  • You can fund about 1.5 PhD students, for one year.

And if you further play these numbers out, funding a doctoral student for a four-year commitment will cost you $400,000 in research funds, and to pay for this, you’ll need to bring in over half a million dollars.
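
To make the arithmetic concrete, here is a rough sketch in Python. The 20% overhead rate and the $100,000 per-student figure come from above; the $50,000 salary share is purely an illustrative assumption, and the real split varies by university and by grant.

```python
# Rough sketch of how a $250,000 grant gets divided, using the assumed
# figures above: a 20% off-the-top university cut, roughly $100,000 per
# PhD student per year, and an illustrative $50,000 share of your salary.

GRANT = 250_000
OVERHEAD_RATE = 0.20           # university's off-the-top cut (varies)
COST_PER_PHD_YEAR = 100_000    # tuition plus stipend, per student, per year
YOUR_SALARY_SHARE = 50_000     # portion of your own salary (illustrative)

overhead = GRANT * OVERHEAD_RATE                           # $50,000 to the university
left_for_students = GRANT - overhead - YOUR_SALARY_SHARE   # $150,000
student_years = left_for_students / COST_PER_PHD_YEAR

print(f"University overhead:      ${overhead:,.0f}")
print(f"Left for PhD students:    ${left_for_students:,.0f}")
print(f"PhD student-years funded: {student_years:.1f}")    # about 1.5
```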

It seems like a bit of a racket. There exists a design opportunity to fund academic research in a more practical and cost-effective manner.

Online Learning
We’ve heard over and over again that online learning is going to completely overhaul our educational system. The amount of money that has been, and continues to be, invested in online course delivery is amazing. Yet I’m extremely skeptical of the efficacy of online learning for advanced education in fields that are creative and collaborative. My problem with the focus on the platform of delivery (“online”, “blended”) is that it ignores the quality of the educational experience. As John Dewey described, “The belief that a genuine education comes about through experience does not mean that all experiences are genuinely or equally educative.” Simply providing people with “an educational experience” makes no guarantee that they will, in fact, learn anything.

In my experience working with online course delivery, there are some major problems, and nearly all have to do with experiential qualities: the environment, context, emotion, and human to human interaction that shape the experiences learners have.

Structured educational software encourages rote learning without controversy. Perhaps one of the most fundamental qualities of an educational experience is that of dialogue, discourse and debate: the challenging of norms, active experimentation, public failure, and the serendipitous interplay of human interactions. You can learn by passively watching a video, but the learning is shallow because the experience is shallow. You are left on your own to form the connections between the material you watch and your existing knowledge. Some can make these connections; I fear that most can’t or won’t. Forums and message boards are used in an attempt to increase collaboration and communication. This is naively optimistic, and most students I know who have experienced it say it doesn’t work. I’ve seen classes where students need to post “one forum post per week,” as if forcing someone to have a question will actually ignite real curiosity about the subject. I do think online learning works tremendously well for extremely motivated and self-directed students (who are rare, but who do exist), and for extremely objective, fact-based learning. There’s no reason the majority of freshmen who have to take an introduction to calculus class can’t try it online before engaging with a human directly. But before we rally around fact-based learning, we might question why so many freshmen need to take an introduction to calculus class in the first place.

There exists an opportunity to design a better delivery mechanism, to judge the quality of educational experiences, and to encourage experiences that are grounded in the science of learning.

Costs
Going to a four-year university is absurdly expensive. The cost of attending a university has grown at a disproportionate rate compared to any other accepted benchmark. From 1985 to 2011, overall inflation totaled 115%, while the cost of higher education grew 500%. The same $10,000 college education from 1985 should cost $21,500; instead, it costs close to $60,000. Why the dramatic difference?
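
For what it’s worth, here is a quick back-of-the-envelope check of those figures in Python; the growth rates and the $10,000 baseline are simply the numbers cited above.

```python
# Back-of-the-envelope check of the 1985-to-2011 figures cited above.
base_cost_1985 = 10_000
inflation_growth = 1.15    # overall inflation: 115%
tuition_growth = 5.00      # higher-education cost growth: 500%

inflation_adjusted = base_cost_1985 * (1 + inflation_growth)   # $21,500
actual_cost = base_cost_1985 * (1 + tuition_growth)            # $60,000

print(f"What inflation alone predicts: ${inflation_adjusted:,.0f}")
print(f"What it actually costs:        ${actual_cost:,.0f}")
```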

Some argue that programs like the Stafford Loan Program and Pell Grants are actually the main cause of the increased tuition. This article in the Atlantic describes how this happens:

In the past, college degrees conferred higher incomes on those who earned them.  But almost all of that surplus went to the student rather than the college, because aside from a small number of extremely affluent families, the students were young and did not have that much cash.  If colleges wanted to expand their market, college tuition was constrained to what an average student, or their family, could pay. Introducing subsidized loans into the picture allowed students to monetize that future income now.  It’s hardly surprising that colleges began to claim more and more of the surplus created by their college degree.

And if that weren’t troubling enough, the Stafford Loan interest rate is set to double imminently, which will trigger defaults and late payments.

The increased money you pay frequently doesn’t improve the quality of your education. According to this article in the LA Times, increased tuition funds athletic teams, administration, and presidents’ pay. Paula Wallace, the president of the non-profit art and design college Savannah College of Art and Design, paid herself a $2 million salary package, while the cost of a four-year degree in furniture design will run a student $127,620 in 2012.

There exists an opportunity to design a cheaper educational offering.

The Major Assumption: Everyone Needs to Go To College
Ultimately, the biggest disruption that I see and hope for is the disruption of the social belief that a high school senior’s next step is a four-year college, and that if they don’t go, they won’t have a good life. The assumptions baked into this statement include:

  • Everyone will learn something at any college that offers four-year degrees
  • A high school senior is emotionally ready and personally interested in spending four more years learning
  • A good life requires a high paying job
  • A high paying job can only be secured with a college degree

We can challenge all of these assumptions logically, and we should challenge them all. I’ll offer an anecdote rather than an argument. When I taught Industrial Design at the aforementioned Savannah College of Art and Design, I constantly had freshmen show up in introductory classes who weren’t interested or engaged, and who clearly weren’t trying. I remember having a conversation with one of them – call him Tony – after he turned in a project that he clearly hadn’t spent much time on. He admitted to throwing the work together at the last minute, and so rather than discuss the project, we elevated the discussion to the major. “Why are you pursuing a degree in industrial design?” I asked him. “I just want to work on motorcycles. This seemed like the closest major to working on bikes.” I’ve heard this time and time again, and so I elevated the discussion once more. “And why are you pursuing a degree at all, if you are so passionate about working on bikes?” Tony replied: “Because my dad made me go to college, and art school was the only place I could get in.”

Like everything else in popular culture, there are a set of norms that we’re taught to follow, truths that are constantly reified. If you work hard, you can accomplish anything. Invest your money, because over time, the stock market always goes up. Trust your employer to reward you for your loyalty. And, go to college, because college opens doors.  College does open doors; it certainly has for me. And Tony’s dad probably thought an awful lot about his hundred-thousand dollar investment in Tony’s future. And Tony, for all of his apathy, might turn it around, if he just sticks with it. I’ve certainly seen students do a 180 after a broken freshman and sophomore year, and I’ve even gone on to hire some of these students who were grasping at straws early in their academic studies.

But the likelihood that he’ll stick with it is slim: “40-50 percent of those who matriculate in colleges and universities do not obtain a degree within six years of entering college.”  There exists a design opportunity to change the perception that college is the only appropriate next-step after high school. Sometimes, it’s one of many appropriate next-steps. Often, it’s an inappropriate next step.

Summary
As the various “disruption” attempts in higher education play out, one thing that’s certain is that there will be more choices and more “appropriate norms” for post-high-school education. We’re already seeing more examples of, and acceptance of, community colleges, trade schools, apprenticeships, self-curated learning, and hybrid learning; and these are all still fairly predictable! I hope that startups focusing on “blowing up learning” focus not only on new delivery mechanisms, although those are certainly in need of overhaul. In my opinion, online delivery platforms are the least critical part of the entire system. There are so many other areas that can be fixed, with massive social and financial return.

Presenting AC4D's Graduating Class of 2011-2012


It’s with great pride that I present AC4D’s graduating class of 2012. Congratulations to Ben Franck, Jonathan Lewis, Cheyenne Weaver, Diana Griffin, and Jaime Krakowiak for their great work throughout the year, and their excellent final presentations at our graduation event.

Feast For Days
Ben and Jonathan are the founders of Feast For Days, a service that helps people cook together in order to encourage healthier eating. You can hear audio of their final presentation here.

Clean Collective
Jaime is the founder of Clean Collective, a service that helps small farms lease their land to natural energy providers. You can hear audio of her final presentation here.

Girls Guild
Cheyenne and Diana are the founders of Girls Guild, a service that helps girls gain agency and self-awareness through a collaborative apprenticeship with an artist. You can hear audio of their final presentation here.

To our five graduates: I’m extremely proud of you. Congratulations, and good luck!

AC4D's Alumni: Where Are They Now (1 year out)

Austin Center for Design’s final presentations are tonight, and while our second cohort of students is hard at work, we checked in with our alumni from our first graduating class.


Alex Pappas and Ruby Ku continue to drive peer to peer teaching at HourSchool, the company they founded during their time at AC4D. Through their main product offering at http://www.hourschool.com, anyone can offer to teach a class in an informal setting. Additionally, HourSchool offers peer-education focused consulting services to impact organizations looking to enhance their internal knowledge sharing, and provides a unique enterprise platform for the dissemination of proprietary and tacit knowledge.

 

Christina Tran has been freelancing in LA, and has recently started working with the HourSchool team, helping organize the services side of the business.

 


Scott Magee and Chap Ambrose continue to build and expand their core product offering, Pocket Hotline. Started at AC4D, Pocket Hotline is a platform for real-time community-driven support. Anyone can start a hotline at http://www.pockethotline.com and invite volunteers to take support calls. Since starting, Pocket Hotline has successfully powered a Ruby on Rails hotline, a nutrition hotline, and hotlines for various companies and organizations. And, Scott and his wife are expecting their first child.

 

Kristine Mudd is freelancing in New York City, where she’s working on a variety of interaction design, visual design, and social impact projects.

 

Kat Davis is an interaction designer at frog design, based out of Seattle, and works with the local improv community in her spare time.

 

Saranyan Vigraham is a developer evangelist with Paypal’s X.commerce. Between traveling to conferences, hackathons and other developer facing events, he also takes time to hack on new technologies and learn new development tools. Saranyan and his wife are expecting their first child.

 

Ryan Hubbard is a team member at the Australian Centre for Social Innovation, where he works on various service design and design research programs. In his off-hours, he continues to work on Patient Nudge, the startup he and Christina formed at AC4D.

 

It’s amazing how much our alumni have accomplished, and in such a short time. Keep up the great work: I’m extremely proud of each of you!

The Difference Between Understanding and Empathy: How To Communicate Design Research

A design researcher studies people in a particular context. The context can be physical, geographic, or conceptual. For example, a researcher may strive to learn about how work is done in a particular business, in order to optimize the process. Or, a researcher may seek to learn about the way a different culture engages with a particular technological advancement, like mobile phone use in developing countries. The research goal may be to understand, or it may be to empathize, and the two aren’t the same.

Understanding is about gaining knowledge. I may have no knowledge of a particular context – say, micro-finance in South Africa – because I’ve never encountered that in my daily life. If I’ve never read about it, experienced it, or discussed it, there’s no reason to think I can design to support it, and so the role of design research in this case is to learn. When the goal is to learn, the design research output will typically be factual statements. This is how the system works today. These are the people that make up the system. These are the tools and artifacts being used.

Empathy is about gaining a set of feelings. The goal is to feel what it’s like to be another person. That’s kind of a strange goal, because it’s impossible to achieve – you can never really be another person, which is what it would take to truly achieve this. But you can get close, and so design research intended to build empathy is really about feeling what other people feel. Assuming you aren’t an 85-year-old woman, consider for a second what it feels like to be an 85-year-old woman. This consideration is still analytical: it’s about understanding. You need to get closer to experiencing the same emotions that an 85-year-old woman experiences, and so you need to put yourself into the types of situations she’ll encounter. What does it feel like to drive, given the various physical changes that the human body encounters when it gets old? What does it feel like to read the paper? What does it feel like to use email? The output will be hard to explain to someone else, because feelings are personal. While you can write detailed requirements and use cases about things you understand, it’s particularly difficult to tell someone else about things you feel.

The split between understanding and empathy is overly reductive, made only to illustrate the distinctions. In reality, most design research is about both understanding and empathizing at once, and in the context of learning, experience contributes to both.

*

There are a few reasons you might want to communicate your understanding and empathizing to someone else. You might be on a team, and your goal is to create alignment around the findings. You might be a consultant, and your goal is to create a sea-change for your client.  Or, you might be hunting for a job, and you want to show that you are, in fact, a qualified design researcher.

You might want to communicate things that actually happened or that you actually saw. These are discrete data points acting as facts, and because they actually happened, you can communicate them in a spreadsheet or bullet points on a slide. And that’s typically how these things are communicated. But a better way to communicate them is through pictures with quotes from real people, because this provides both the context of an action or activity as well as a view of intent. What people say, and what people do, provide clues for what people want to do and how people want to be.

Because it’s not practical to communicate everything you saw – it would take too long – there’s a form of selection that occurs. When you make selections, and choose this over that, it’s important to illustrate your selection intent. If you spent two hours in the field, you saw two hours of data. Why did you select five pictures to display? Was there something particularly interesting about them? Do they provide evidence of inefficiency? Evidence of a cultural norm that you want to highlight?

You also might want to communicate things you actually felt. These are hard, but not impossible, to describe – these are emotions. But description is not going to be enough to really get other team members to feel what you felt. “I felt sad” doesn’t capture the type of sadness, and won’t be enough of a call to action. Typically, it requires data over time to illustrate emotions, because this provides a baseline and a point of reference. This might take the form of a video clip, a timeline, a series of photos, or some other way of showing a time-based narrative.

You might want to communicate your interpretations of what happened and your interpretations of how you felt. This is the assignment of meaning to the data: it’s your introspection on why things happened. When you interpret, you’ll begin to combine data in new ways, bring in external sources of data, compare, contrast, and judge the data you gathered. Typically, this interpretation requires some form of visual diagramming – a map, or a chart – to illustrate these forced and provocative connections.

Or, you might want to communicate the implications of your interpretations of what happened and how you felt. The implications act as design constraints, and point towards new design ideas. I’ve found it most useful to illustrate these implications through sketches: this is the translation of data, gathered in the field, to actionable design stuff. Even if you are a design researcher and not an “actual designer”, this is still in your realm of job responsibility. A deck of slides can be ignored (and typically is). A sketch will be much more likely to be used.

How To Start A School In 10 "Easy" Steps

When I describe what I’m doing at Austin Center for Design, the common reaction is, “You started a school? How do you do that?” I’m not exactly sure, honestly, because the process only seems logical in retrospect: I sort of made it up as I went along. However, I thought it might be informative to describe the steps I went through in a purely mechanical manner, as an example of the process one might encounter in trying to bring change to a regulated industry like education. Some of these steps are unique to the state of Texas, and others are general to forming any new company.

1. Forming the organization. I created a new organization, and had a lawyer craft bylaws and articles of incorporation for the non-profit corporation. The decision to act as a non-profit was important for a few reasons. First, as a non-profit, AC4D can receive grants and donations. Next, non-profit status for a school offers an indication to the world that our goal is the production and distribution of knowledge. What I didn’t realize, however, is that a non-profit doesn’t actually have an owner. Once you create a non-profit, the entity exists, but no human actually has control of it; instead, a legally appointed board of directors assumes control of the organization. So, when people say things like “Wow, you own a school,” that’s technically inaccurate. For what it’s worth, this cost $300.

2. Getting an EIN. I visited the IRS site and registered for an EIN. This is the corporate equivalent of a social security number. It’s the easiest interaction with the IRS I’ve ever had.

3. Applying for tax-exempt status. While you can declare yourself a non-profit in Texas (and, I would assume, in other states), that doesn’t get you the benefits of non-profit status, which include a tax exemption from the IRS and the ability for your supporters to receive tax deductions for any donations they make. Achieving tax-exempt status is straightforward, but extraordinarily time consuming. I worked on the paperwork (a form called the 1023, which is 29 pages long) for close to two weeks. A few examples of the things I wasn’t expecting:

  • I needed to run a print advertisement in a local paper, articulating our non-discrimination policy, and include a physical clipped copy of it with my form
  • I needed to predict revenue and expense for two succeeding tax years, which I suppose is easy if you have an existing company, but extraordinarily difficult for a brand new entity
  • I needed to include a copy of the Secretary of State Filing Certificate
  • I needed to pay $750 to file the form.

I sent the package to the IRS, and received a response in about five months. A unique representative was assigned to review my proposal, and she requested a few corrections, changes, and clarifications. I received my exempt form (a simple one page letter) approximately six months after I started this process.

4. Developing curriculum. I developed the courses, outcomes, pedagogy, and other academic materials over approximately three months. This included writing a comprehensive course plan, individual course descriptions and sequencing, the various outcomes and assessment criteria for each course, and then beginning to develop the content in each actual course. A course is 8 weeks long and has either 8 or 16 sessions; each of these requires planning and attention, and so once the entire structure was developed, I methodically created a framework for each course. When I create curriculum, I treat it like a design problem. I use big brown butcher paper and a sharpie, and I start by considering my audience. I try to map out their wants, needs, and desires, and visualize some of the opportunities for design-led change. Once I have a sketch of the curriculum structure, I use a tool like Excel to create a more formal illustration of how quarters, classes, skills, and outcomes align.

5. Getting certified. AC4D is approved and certified by the TWC, a state organization in Texas. They review our curricula, our faculty, our policies, and so on. The process to receive certification is straightforward, but time consuming; the initial package I provided to them had 50 individual Word and Excel documents, ranging from a description of activities in each class, to profiles for each faculty member (including their transcripts, resumes, etc.), to the expected and proposed budget for the school. This certification process (from completing the forms, having our finances audited, having various forms notarized, and having the program approved) took approximately five months. It required me to have a formal audit conducted by a CPA; this cost $1500. The filing fees for the forms cost $1170. We’re up to $3720, and I haven’t actually done anything related to education yet. Yikes.

6. Creating the website. I developed the initial site and all of the content.

7. Recruiting students. Legally, I couldn’t advertise for the program until all of the above had been completed. In reviewing the timeline of creation, it took approximately a year from me telling my wife “Hey, I think I’m going to start a school” to the point where I actually began telling people that we existed. I then started recruiting, using my academic writing and public speaking as a platform to spread the word about the school. The very first time I spoke publicly about AC4D was at interaction’10 in Savannah, in February of 2010.

8. Receiving my first applicant! I received my first application for enrollment on April 1st, 2010 (about two months after I started advertising). 78% of my applicants waited until the last 48 hours before our application due date, giving me a mild aneurysm.

9. Filling out enrollment paperwork. There are a number of records that are required by various agencies to substantiate our operations. This requires a careful catalog of things like student application forms, payment records, evidence that they toured the school and spoke with a registered representative, and so on.

10. Celebrating the first day of school. We started on August 30th, 2010, with an orientation session on August 28th, 2010. The first day of class was one of the proudest moments of my professional career.

As I reflect on this process and experience, the “hardest” part was also the most fun: writing the curriculum, structuring the academic program, crafting a series of learning interactions that result in a competent interaction designer and a visionary entrepreneur. The majority of the startup process was not hard, but rather, long: it required paperwork, and communications, and conversations, and meetings, and organization. Through this process, I recall countless conversations with my wife, questioning the intent. Would anyone apply? If they did, is the curriculum any good? Would the program be successful?

I observe the same dialogue occurring in my students, as they start their own companies. Will people use the service? Will it be successful? Am I doing it right? How can I do it better? These questions are important, because they indicate a reflective entrepreneur: they show that the students have the ability to observe themselves from afar, rising above the tedium in order to have a “meta moment” of self-evaluation. But these questions can also be poisonous, because they can balloon into a never-ending spiral of self-doubt, which can be crippling for forward momentum. I found the most success when I continually focused on actions, the methodical steps above: what can I do, and what can I control?

I hear that education is ready for “disruption” – for the conservative, traditional structure to come crashing down. As entrepreneurs tackle new educational paradigms, I hope the pragmatics of my own experience serves as a useful point of reference.

Fire Starter: Leveraging Collaboration To Jump-Start Your Startup

About six months ago, we started doing something called Fire Starter. It’s a collaborative way for startups to make substantial traction in a small amount of time. The idea is simple, and it’s been working really well, so I thought I would share how it works.

We hold a Fire Starter session about once a month. Each session has between six and eight people in it, all of whom have startups in various stages of development. Our group has been pretty interdisciplinary; we’ve had developers, designers, writers, and even a nutritionist join us. The specific roles aren’t important. What’s important is that everyone has their own project that they are passionate about and diligently working on.

One of the startups is selected as being the focus of the session. They present a problem they are having to the group. There are no constraints on the problem – it’s whatever is top of mind. Some problems we’ve tackled have included:

  • I’m having trouble promoting my product
  • I’m having difficulty getting visitors to convert
  • I’m having trouble designing a new feature or capability
  • I’m having trouble planning all of the things that have to happen when I launch a new substantial feature

Once the problem is articulated, we spend an hour discussing it. This discussion is visualized in real time; someone takes visual notes (on a big sheet of paper) of everything that’s said. The goal is to ensure everyone at the table understands the problem. It’s inevitable that people come up with potential design ideas or solutions to the problems – “what if you did this?” – and those are written on post-it notes.

When an hour is up, the real fun begins. Everyone in the group identifies what they intend to do for the remainder of the day. There are only two rules:

  1. Whatever you do has to directly address the problem that’s been articulated
  2. Whatever you produce has to be completed by the end of the day

And then, we work, for about seven hours. We don’t break for lunch; whichever startup has the focus will also pay for lunch, delivered.

People who know how to code typically decide to code something – a new feature, a landing page, a proof of concept. People who know how to write typically write something – a press release, a blog post, a series of promotional emails. People who design, design; and so on. It’s natural that people fall back on their core competency, and because the group is interdisciplinary, the output is extremely varied.

After seven hours, we call time, and we discuss what’s been produced. Because of Rule 2, the focus is on artifact production, and because of the time constraint and the emphasis on completion, these artifacts are immediately actionable.

Some of the things that have been produced in our Fire Starter sessions have included:

  • A functional prototype of a Twitter scraper and visualization for a particular set of hashtags and regular expressions
  • A press release, a cover letter template, and a list of fifty appropriate media contacts and email addresses
  • A landing page, optimized for search and analytics
  • A series of blog posts on a number of different topics, queued up and ready to post over a few weeks
  • The creation of an .epub and .mobi, based on an original print book

We make the session fun. We play music, and drink beer, and joke a lot. And so it becomes a social event as much as a work activity. Because of the public nature of the session, there’s a positive peer pressure quality to the work, too: no one wants to be empty handed when the day is over.

If you do the math (as a few people have done when they hear about this), it may not be worth your time. You’ll volunteer to work on four projects – that’s 32 hours of your time – before you get your day in the spotlight. If you think of yourself like a machine, and calculate 32 hours of lost productivity at $100/hour, that’s $3200 you could just go spend on a PR campaign. I suppose that’s objectively true.

But you aren’t a machine, and there are a few unique benefits to the Fire Starter approach. First, running a startup can be really, really lonely. It’s extremely exciting to have eight people focused on your problem, to be the center of attention, and to have substantial artifacts produced at the end of the day. It makes you feel productive, and that feeling can literally fuel weeks of additional personal productivity.

Next, it’s a way to tap into insights and ideas that you and your team might not have or even fully appreciate. As an example, most of the startups I mentor don’t have a team member with a marketing background, and so they don’t do much marketing – and they don’t think about things in terms of marketing. A Fire Starter session with a few marketing participants will provide a way to augment your skillset, if even for just a day, and to try things out without a large financial investment. More importantly, it will provide a new intellectual frame for the work you are doing. It’s a way of looking at your startup in a new way.

Finally, and I wasn’t exactly expecting this, I’ve found a huge amount of personal satisfaction in putting aside my own problems for a day and focusing on a brand new topic. I thought it would feel like a waste of time; it’s actually been personally gratifying to work on new ideas in a time-boxed, creative, and fun environment. It’s refreshing; even though I’m brain-dead when I’m done with a Fire Starter session, I find myself wanting to go work another eight hours on my own projects.

Give it a shot, and let me know how it goes.

There’s Too Much To Do, And So I Do Nothing: Regaining Momentum In Your Startup

Yesterday, I had a long and productive conversation with some of my friends about their startup. The conversation started when we were discussing roles, and they were noting how – in a startup – everyone is basically doing everything. They weren’t feeling a lot of forward momentum, and when we poked at it and actually listed who was doing what, it turned out that when everyone is doing everything, no one is doing anything. The reason became clear: there was simply too much to do. With no clear delineation of roles, expectations, or deadlines, lots of things were being started and few things were being finished.

I think this is a common occurrence. I don’t think it’s unique to startups, and I think it’s prevalent because we’ve never formally learned strategies for managing the overwhelming amount of communication in our lives. Specifically, we’ve never learned how to manage context switching, how to establish effective collaboration, and how to prioritize our actions. I’m pretty sure these are all teachable, but I was forced to learn them through experience, and so I failed at most of them when I started working. Here are some of the strategies I use, and some of the challenges I run into when I use them.

Establishing Effective Collaboration
When we were speaking yesterday, it became clear that different working styles were creating tension. One of my friends likes to dream – to think of the future, and to paint a romanticized vision of it. Another likes to wrangle complexity, by reducing it to actionable parts. And a third thinks in systems, connecting ideas through narrative. This makes for a phenomenal team and for fantastic output. But the mechanics of how work gets done are really different for each person, and this creates conflict. I’ve found that the conflict typically falls around expectations of what it means to be done, and so here are some suggestions to help ease the pain around these misaligned expectations.

Identify that “done” means different things to different people. For someone who thinks in big and broad strokes, “done” means when they’ve solved the problem in their head. They’ve moved on to other things, and they no longer feel a nagging sense of anxiety around a particular issue because they’ve resolved it. They have new knowledge, and so they feel good about it. The trouble is, no one else has that knowledge, and they don’t feel good about it. Simply recognizing this is important, and recognizing if you are this type of person, or if you work with this type of person, is critical. I do this, and I never realized how much it bothered people; I could see an end-state, and so I would build things as if that end-state was achieved. But it wasn’t, and people literally thought I missed steps.

Make ownership handoffs explicit and public. Since all of our work is digital, “handoff” is really misleading. There’s no demarcation of a change in ownership, no object to be literally handed to someone else. And so, when you’ve stopped working on something and think your role is over, think about how you can take some action that illustrates a change in expectations. This might mean saying to someone else, “I’m done working on this. I’m handing it off to you. My expectation is that you will continue working on it.” This is truly weird, and socially awkward, and so no one does it, and so the digital handoff wafts away and is lost.

Empower everyone to make decisions, and be prepared for the resulting conflict. When there are expectations of hierarchy, things don’t get done until the person at the top does them. If you empower everyone to make decisions, a lot more will get done. To empower people to make decisions, you need to continually tell people that “I would like you to make decisions on your own. I expect you to make decisions on your own.” This, like the above, is socially strange, because most of us don’t talk that bluntly about work expectations (or anything else). But I’ve encountered a lot of people that simply don’t feel comfortable making decisions without hearing someone say something like this to them. The flipside of this, of course, is that when people make decisions autonomously, they won’t make the same decisions you will, and that will produce conflict. I think the benefits outweigh the potential problems created.

Context Switching
When you start working in a consultancy, you are normally resourced to a single project, and you have a single set of tasks, and a single set of expectations. And then, when you become a creative director (or, when you join a startup), you suddenly have multiple projects, with multiple tasks, and multiple expectations. At one point, I remember my colleague and I were running about twelve different projects for a single account, and our calendars were literally walls of thirty minute meetings, usually three and four deep, from eight till six. A big part of being a creative director is not directing creative, but instead, playing interference for your team so they can actually get work done, and so a lot of these meetings were conference calls, check-ins, and client-reviews. And because meetings usually start late and run long, we would literally be running from conference room to conference room, arriving just as the PM had started the call, and with no real understanding of what was happening and to whom. It was not rare for me to arrive and mute the phone, ask “What meeting is this?”, unmute the phone, and begin speaking.

This is an absurd illustration of the context switching problem: your brain and entire soul wants to stay with a single problem, exploring the details, talking through the implications, critiquing design work, and forming relationships with the client. Yet now, for a completely arbitrary reason, you need to move on to a different problem, with different details, different implications, and different relationships. I’m not proud of this, because it’s a silly way to work, but it’s a reality, and so here are ways I support context switching.

Think of work in discrete buckets. I consider all of the different projects that I have going on as unique containers. The containers have names: right now, I’m tracking buckets like “AC4D Graduation” and “Trip to Mexico”, because these are projects I’m working on. If I formally start to associate email, communication, projects, activities, meetings, and “design stuff” with a bucket, then the name of the bucket becomes a recall proxy for me. When I had multiple projects for a single client, each project got its own bucket. I could never get the “just imagine it in a palace” memory trick to work. Putting things in buckets seems to function in a similar way: as a single proxy for a larger set of data.

Do a bucket walkthrough once a day. On paper, or in a plain-text notepad, I’ll walk through each bucket, listing the name, and identifying the things that happen, have to happen, or might not happen today. I can’t keep track of all of the buckets in memory, because I have a lot going on, and so writing them down helps to “get it all out.” These are simple lists, but begin to outline expectations for the day. (There’s a pretty miserable feeling that occurs when you walk through the buckets, identify things to do for the day, look at your calendar, and realize there’s no way it’s all getting done. See below, “Prioritizing Action.”)

Be extraordinarily diligent about file management. I maintain a rigid file structure of every document, artifact, presentation, and design deliverable, organized by client, project, project phase, and so on. When I enter a new context, I can immediately change my work space to ensure I’m ready to work; there’s no searching for files, or locating old emails. It’s just ready to go.

Scan every email as it comes in, but don’t reply. As new data and communication arrives, I can let it “marinate” and therefore be aware of the often last-minute changes in strategy and plans by a team. I won’t know the details, but I’ll know that there were last minute changes, and so I’ll be aware of my own knowledge limitations.

Prioritizing Action
With so much to do and so little time, it’s important to pick the right things to do. That can feel like a crap-shoot, and this is where the paralysis occurs: I don’t want to pick the wrong thing, and so I pick nothing. My strategy for handling this was entirely emotional, and had three components.

First, put the bucket list items in order of deadline. What has to happen first, second, third, and so-on? This is a forced stack rank, and that means things can’t tie; only one thing can be first.

Next, examine the list in order, and ask the question, “Is this the most important thing I could be working on right now, OR, will this take an extremely short amount of time?” If the answer is Yes, work on that immediately. If the answer is No, move on to the next item.

Finally, be OK with the fact that you aren’t doing all of the things you said you wanted to do; you are consciously skipping things that have looming deadlines. This is attitudinal; it’s about letting go. I’m truly bad at this. Once I started trying it, I realized that things I had deemed “not the most important thing I could be working on right now” were, in fact, not that important at all. And no one seemed to mind that they weren’t done when they were supposed to be done.
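
If it helps to see that heuristic written out, here is a minimal sketch in Python; the task names, deadlines, and “quick” flags are hypothetical placeholders, not a prescription.

```python
# A minimal sketch of the prioritization heuristic above; the tasks,
# deadlines, and "quick" flags are hypothetical examples.
from datetime import date

buckets = [
    {"task": "Draft graduation remarks", "deadline": date(2012, 5, 18), "quick": False},
    {"task": "Reply to venue email",      "deadline": date(2012, 5, 20), "quick": True},
    {"task": "Outline fall curriculum",   "deadline": date(2012, 6, 1),  "quick": False},
]

# 1. Forced stack rank by deadline: no ties, only one thing can be first.
buckets.sort(key=lambda item: item["deadline"])

# 2. Walk the list in order; work on an item only if it is the most
#    important thing right now OR it will take an extremely short time.
for item in buckets:
    most_important = item is buckets[0]
    if most_important or item["quick"]:
        print(f"Work on now: {item['task']}")
    else:
        print(f"Consciously skip (for now): {item['task']}")
```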

*

When we are in school, we typically don’t learn how to work. It’s strange that we expect people to know these skills when they start their careers, because most of these skills are unnatural and built on truly bizarre social norms. I hope that, by establishing effective collaboration, learning to switch contexts seamlessly, and finding ways to prioritize action, you can start to gain more control over your work.

Do you want critique, or a hug? How to gain valuable criticism on your design

One of the most fundamental parts of the process of design is the critique, a formal opportunity for the designer to receive feedback from a group of people. There’s a lack of good literature on critique, although there are a few notable exceptions, and so for most, critique remains a mysterious tool. Those of us who went to art or design school learned how to do it, but likely never learned explicitly; instead, it was much more of an experiential process. I remember showing up at my first critique at CMU and being completely mortified by the thought of putting our assignment (a self-portrait) on the wall in front of everyone else, and then talking about it.

I wrote about critique in a broad manner for interactions a year ago, but I’ve been thinking about more specific ways to introduce students to the idea of being critiqued. Here are some thoughts about how to receive criticism; I’ll assume that the critique session is actually well organized and not just people sitting around talking, although that might be a poor assumption.

Be quiet.
When you are receiving a critique, it’s extremely tempting to rationalize your design decisions – to explain why you did the things you did. This will always come across as defensive, because it is: your rationalization is actually a way of showing that you’ve thought through the trajectory of the conversation and already considered (and judged) the end state. The defensive quality of a rationalization changes the conversation from a way to produce new knowledge to a verbal debate. But you’ve already chosen the medium to make your argument, and it’s your actual design work. By moving from the artifact to words, you game the system: your users won’t have access to your words when they receive your argument in the form of the final design. All they have is the thing you’ve made, and so it needs to offer the complete argument on its own.

Additionally, when you rationalize and describe design decisions prior to critique, you steer the conversation. For example, if you begin by explaining your color choices, you’ve done two things: called attention to a particular design element, at the expense of the whole (and primed the group to be thinking mostly of color and aesthetics), and set up a boundary around your design choice. Some people refuse to cross these boundaries once they’ve been publicly established, because you’ve implicitly claimed ownership over a design detail: you’ve signaled to the group “I care about this, and if you poke at it, you’ll hurt my feelings.” Ironically, you may have called attention to it because it’s the element you are most concerned about!

Write it all down.
Some of the best parts of a critique come from the small, nuanced details of conversation, and the ideas sparked by the conversation. A participant might say something like “When the user clicks here, instead of going to that other page, it seems like we could do a mini-modal, eliminate a step, and provide a way for them to maintain an understanding of context.” There are at least three points that are important here (a stylistic decision of using a small, in-line modal; an implicit recommendation that the flow is too long; and an observation that context is important for making decisions). It’s unlikely that you’ll remember all of that when the critique is over, and if it’s a good critique, it’s unlikely you’ll remember anything, because you’ll be actively considering so many new and different ideas. It’s critical that you write it all down.

When my own work is being critiqued, I number each individual item, component, or artifact of the design with a unique identifier. Then, as a person is speaking, I try to type exactly what they say into a document, and link their comment to the design prompt with the unique identifier. I also try to log who said what, so I can follow-up later if I need to. In a few instances, I’ve found written feedback to be politically useful, too – when teams wonder where seemingly irrelevant design decisions came from, it’s effective to be able to point back to the origin of the idea as coming from within the team.
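
As a sketch of what that log can look like (the element identifiers, names, and comments here are hypothetical):

```python
# Hypothetical sketch of a critique log: each item of the design gets a
# unique identifier, and every comment is tied to an identifier and a speaker.
from collections import defaultdict

critique_log = [
    {"element": "A3", "speaker": "Sam",
     "comment": "Use a mini-modal instead of a new page; it keeps context."},
    {"element": "A3", "speaker": "Priya",
     "comment": "The flow feels one step too long."},
    {"element": "B1", "speaker": "Sam",
     "comment": "The color choice pulls attention away from the primary action."},
]

# After the session, group feedback by design element so you can plan the
# next iteration and trace a decision back to the person who suggested it.
by_element = defaultdict(list)
for entry in critique_log:
    by_element[entry["element"]].append(f"{entry['speaker']}: {entry['comment']}")

for element, comments in sorted(by_element.items()):
    print(element, comments)
```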

Extract more details.
Talking about interaction design is hard, because it’s multi-layered, requires an understanding of the system, and is highly contextual. It almost always requires a dialogue to really understand any criticism that’s offered. The dialogue, however, is a chance for you to ignore the first suggestion (Be quiet), and so while it’s necessary to ask for clarification, it’s important to do it in a neutral and open-ended fashion.

Consider two different approaches:

*

Other Designer: “When the user clicks here, instead of going to that other page, it seems like we could do a mini-modal, eliminate a step, and provide a way for them to maintain an understanding of context.”

You: “Why do you want to eliminate a step?”

*

Other Designer: “When the user clicks here, instead of going to that other page, it seems like we could do a mini-modal, eliminate a step, and provide a way for them to maintain an understanding of context.”

You: “Can you tell me more?”

*

The second choice seems light and almost therapeutic; it’s entirely non-confrontational, and acts in a way similar to the “five whys” of identifying root causes. While the first approach – “Why do you want to eliminate a step?” – is objective, it won’t be perceived that way.  Most people will hear you say “I don’t want to eliminate a step. Why do you want to eliminate a step?” and you’ll be herding them into a defensive corner and changing the tenor of the conversation.

Reserve time for conflict, and realize that you don’t have to agree.
Just because someone said something, and you wrote it down, doesn’t mean that you have to act on it. A critique is not a mandate. But be warned that there’s something strange that happens in meetings: people leave with very different views about what happened. If you are quiet during the critique, scribbling notes, people will leave the session feeling validated – that you heard them – and expect that their comments will be illustrated in the next round of revisions you do. And they’ll be personally frustrated when they don’t see the changes they described, because they’ll feel like you ignored them and they wasted their time. At the end of the critique, it’s critical that you set expectations about what you intend to change, and why you intend to change it. This is hard, though, because you might not know at that point, and your comments will likely open the door for further discussion (which takes time). It can be effective to end with a simple phrase like “I heard all of what you said, and wrote it all down. You’ve given me a lot to think about. I don’t agree with everything that was said, and so you may not see your comments visualized in the next iteration. If you feel strongly about what you said today, let’s try to talk about it in a one-on-one setting.” In large and politically volatile groups, I would recommend actually emailing both your notes and this disclaimer to everyone that was in attendance, and be prepared to explain – in a presentation or work session, but not in a future critique – why you made choices to ignore design suggestions.

Don’t ask for critique if you only want validation. If you want a hug, just ask.
A “bad critique” is one of the most valuable things a designer can receive, because it short-circuits the expert blindspot and helps you see things in new and unique ways, and it does it quickly. But sometimes in the design process, you don’t actually want feedback at all: you want affirmation, and you want someone to celebrate your work so you feel good. Learning to understand the difference is critical, because if you ask for critique, people will give you critique. But if you ask people to tell you the three best parts of your design, they’ll probably do it. As Adam Connor offered in his IA Summit talk, “Don’t ask for critique if you only want validation. If you want a hug, just ask.”

Challenging The Financial Assumptions That Run The World

New York Times Columnist Joe Nocera can’t retire, because he doesn’t have any money. But, like a lot of people who are probably in the same position, he did mostly the right things, like putting money in a 401(k).

As I was growing up, I was taught the same “right things”, and I’ve taken the majority of these for granted as being appropriate things to do in a modern society. These include:

  • Put your money in a bank, because a bank is safe.
  • Max out your 401(k) contributions.
  • Save 25% of your income.
  • Consider large purchases through careful research.

Underlying all of these are assumptions about how capitalism works, based on ideas like rational actors working methodically to maximize personal value and support their self-interests. This walks alongside assumptions about work ethic: working hard leads to long-term financial success, and the harder you work, the more you’ll succeed. For many of us, these create our base understanding of the world: they are the scaffolds upon which major assumptions are built, and these assumptions color the way we think of democracy and equitable exchanges and fairness.

As these assumptions were being ingrained during my childhood, I remember having a perpetual sense of awe as I discovered various technological advancements. I remember space shuttle launches, and the Lego Technic sets, and learning how modems work. These engineering and technological feats build upon one another, and have always left me with the notion that humans can do anything.

I think, for myself and for a lot of other people, our technical abilities have become conflated with civic abilities, and we’ve made the incorrect leap that because we, as a society, are capable of building fantastic engineering marvels, we are somehow equally as capable of building societal marvels. But the more I understand the extremely short history of our financial system, the more I become convinced that we have no idea what we’re doing. Quite literally, everything we’ve been taught to accept about economics is a crap shoot, and we should all probably start challenging the most basic of financial assumptions. I realize observations like this are challenging, because they make us feel uncomfortable, but let yourself absorb these provocations, and see where your brain heads:

  • The cost of an item should reflect all of its externalities. As you walk through the aisles at Whole Foods, you put some locally produced items in your cart. They cost next to nothing, because they’ve been produced at a facility around the corner from the store. You decide to treat yourself. You select a green pepper. It has a label, which lists the pesticide tax and the VAT, as well as the shipping and freight fees. The pepper costs $84. Later that night, when you eat it, you take your time, prepare it simply with salt and pepper, and savor each bite.
  • A bank that stores capital should not also invest capital. When you go visit your bank, you can enter a room that has your money in it. It’s all there, the $65,000 you’ve saved, in various forms of precious metal. You pay a series of mandatory fees to have your money stored at the bank, but it’s worth it, because you can see your wealth expanding and depleting. When you make a purchase with your bank card, a little robotic arm pushes coins around. It looks like a video game.
  • People shouldn’t retire. As you reach 40, you’ve decided to cut your hours back to four days a week. Then, at 50, you switch to three days of working – you pick Tue/Wed/Thu, so you get a nice solid break for travel. By the time you are 70, you go in to work just one day a week. It’s an accepted norm to scale back to one or two days by the time you are 80.

I’m not naïve enough to think that these ideas should or will happen, or even that people will think them good. But the article cited above ends with a quote from Teresa Ghilarducci, a behavioral economist at The New School who studies retirement and investor behavior. “’The 401(k),’ she concluded, ‘is a failed experiment. It is time to rethink it.’” I think the entire system is an experiment, and many parts of it are failing. Innovation and creativity require a runway for exploration; as we develop new products and services, we’re realizing that comments like “that will never work” are self-defeating and unproductive. I think the same is true in areas of economics and policy. Tom Peters describes an idea as “a fragile thing.” Jonathan Ive explained that ideas “begin as fragile, barely formed thoughts, so easily missed, so easily compromised, so easily just squished.” I can tell you 50 reasons why any of the above scenarios are bad, but I also realize that good ideas come from unexpected places.

Derivatives were awful and didn’t work. They were a financial innovation, and part of innovation is accepting the risk that things will fail. I don’t fault those who invented these financial instruments for trying innovative things. I do fault them for doing it with the resources of those who had no say or ability to weigh the risk. But we need more new thinking around economics and the core assumptions of capitalism, and we need to first realize that new thinking will always come with associated risk, and we need to approach the risk responsibly. This will require that we give more time to ideas before “logically” explaining them away based on our assumptions of economics and policy. We need to start challenging basic assumptions about how financial and policy decisions work, because frankly, they haven’t been working at all.