Driving toward the Minimum Viable Product without breaking down

So far, our orientation week has been about learning process, listening to users, and mustering “design ideas” from the data. A good deal of time and emotional effort went into those exercises, but we never had to lean far from the source material to create something and then justify it. Today we were tasked with collaborating toward a goal far away from the data. Things got messy.

Picking an idea: COMMITMENT

When Pat started talking about sketching and storyboarding, the general mood was almost playful. The really tough part came when we had to go back to our walls of sticky-note ideas and pick just FIVE to sketch. Our group got off to a slow start because we could not decide on a method for choosing the “best ideas.” I now see how time-boxing helps, a term I had not heard until last Monday. It seems the design process does not have to be democratic, just open to all opinions and favorable to the better ones. Since there were lots of ideas to go through, it might have been easier if we had each picked three favorites (ours or otherwise) and then compared those as a group to get down to five.

Storyboarding: FILLING BLANK PAGES

Next we had to whittle our five sketches down to one idea to storyboard. This was hard, both because we were each attached to the ideas we had chosen to sketch and because each of us already had a vision of a product we would hypothetically want to build. Scenes were already filling storyboard pages in our minds. I think we settled on the “Metro Rewards” sketch because we were running out of time and it seemed like the simplest concept. And isn’t simple good?

Minimum Viable Product: CONSENSUS

Emiliano gave us an overview of what a Minimum Viable Product is and what it should mean to us. We should deliver something that gives value to users and tries to capture value in return. It should be delightful, or at least land on the delightful side of the fence, before we release it into the world. Then he tasked us with defining the MVP for our storyboarded idea and asking ourselves what our greatest assumptions were. Our idea had seemed simple when we storyboarded it, but now we each had a different conception of what we were actually making. We went around in circles debating features and the core product, sometimes losing sight of why we were building it in the first place. The easiest thing to agree on was that we were making a bunch of assumptions.

Testing our assumptions: HUH?

Finally, it was time to come up with a way to test our biggest assumption… How do we do that? What makes a good test before building anything? Do we have to spend our own money on incentives? If we do this, will the data we receive mean anything? How will we know? I felt completely lost. This is a problem space where I swim in circles in a vast ocean. Our “simple” idea seems impossible to test without rigorous data on how people use the bus system. Tomorrow we will be in the field with our script and our test, gathering data to see whether people actually respond to our idea-products, and then we will return for presentations and critiques. If I do not learn a good way to test our assumption, I at least hope to understand any and all flaws in the test we do try.

Looking back, I think it would have been helpful to reaffirm, at each step, the core question/problem we started with, from the prompt through the data, so that we would at least have a guidepost for why the thing should exist and a criterion for measuring success. We have that, to an extent, but it feels vague to me. Maybe this is the vague space to which we are supposed to accustom ourselves. Maybe this is just my new home.