One Person’s Dystopia
Reporters, satirists, and observers of all kinds have been commenting for years about the tendency to insert tech into situations for which we already have perfectly decent, cheap, analog solutions. From the Vessyl cup, which told you what liquid was inside it, to the $1,500 June Oven that figures out what food is placed inside it and (maybe, sometimes) cooks that food to perfection, technologists and businesses are putting a lot of effort into figuring out how recent advances in technology could improve every task we encounter in our lives.
And why wouldn’t they? Contrary to some thinkers out there, I don’t have a big problem with new products that seem like ridiculous, impractical, “smart” non-improvements of older products. New technologies almost never appear in highly usable forms right from the get-go. At Home, by Bill Bryson, shares a few particularly delightful examples of this historical tendency. For instance, Alice Vanderbilt (of those Vanderbilts) came dressed as an electric light to a costume ball in 1883 to celebrate that she had just had electricity installed at her Fifth Avenue home.
But she later had it all ripped out of the walls because it was suspected to have started a fire. Problems with early electricity were not isolated incidents: Franklin Pope, an inventor and former partner of Thomas Edison, accidentally electrocuted himself while working on the wiring of his own home in 1895. Nevertheless, the vast majority of people on earth today choose to live, work, and play in electrified environments, and there are a lot of compelling arguments for why this has genuinely improved their quality of life.
All of this is not to say that the Vessyl cup, June Oven, or (more sinisterly) the Alexa that is listening to all your conversations right now are not problematic. They are. But they’re also an inevitable byproduct of evolutions in technology, or even stepping stones on the way to better applications of current technology. Just because they are not yet more useful or secure than older products doesn’t mean that future iterations won’t one day be.
As an aside, I think it’s dangerous to assume that everyone buying these “smart products” is a dupe suckered in by the siren song of ever more efficient tech. Early adopters are early adopters in part because they tend to fall in love with the idealized possibilities of new tech, and they want to be part of the change. Through their purchasing power, personal influence, and (now) data, they help to shape the future even if they do not invent new products. In other words, they’re valuing something else over efficiency or usability.
Thus endeth the aside.
My optimism aside, tech doesn’t get better by magic, and human history has shown us often enough that dystopian scenarios are very possible. As designers, we do have an obligation to try to craft a better world out of the mess of possibilities afforded us today by the burgeoning expansion of tech capabilities. So, how can we make that better world, and not just a useless Internet of Things piece of crap?
Certainly one starting point is recognizing the limitations created by existing cultural norms. Everyone from sci-fi writer Bruce Sterling to scholars such as Steve Rathje, Byron Good, and Deborah Gordon has pointed out that the realities of how current systems are constructed necessarily limit the lenses with which we view the world. Whether it’s medical reimbursement structures disincentivizing doctors from listening to patients, or current tastes among the reading public limiting what gets written, we all reside within structures that shape our every thought and decision. And so inequitable power structures recreate themselves again and again. All well and good. Given that, how does innovation happen anyway?
There are so many strategies out there. While I confess that Ray Kurzweil’s (and many others’) idea of just changing humans themselves by replacing our current brains with some superior iteration (whether biological or not) holds a certain charm for me, that is a bit out of reach for this humble designer (at least this decade). That said, humans have been innovating forever using more mundane strategies. You can be inspired by the way things were done in the past, or the way they’re done in other cultures. You can imagine a Bizarro version of reality and then design something to fit that world. You can use any number of provocation strategies to deliberately force your mind to think differently about problems.
I tested out one of these strategies with my fellow AC4D classmates tonight: Worst Possible Idea.
Austin has famously bad traffic, and despite numerous efforts to fix this problem over the years, it is only getting worse. Since no one has had any particularly successful ideas for solving Austin’s traffic problem, I thought I would ask my classmates to come up with some terrible ones. I presented them with the following prompt:
And here are some of the ideas they came up with:
Roads drawn much more creatively (Maria Zub)
Though these ideas may not be the best ones out there, or even physically possible, they are innovative in that they break out of the current mold limiting what we often imagine as fixes to our traffic problems. And if we were to continue developing interventions to address the traffic situation in Austin, parts of these ideas, or the opposites of these ideas, or completely new ideas sparked by these weird ones could fall into that magical category of solutions that are both feasible and useful. Dystopian pieces of garbage on sticky notes or (to return to the beginning of this essay) in our homes can lead to more utopian futures. You just can’t stop at the first, second, or hundredth iteration of an idea. Radical innovation is not a thunderbolt, a scream of “Eureka!” from a full bathtub. It’s a process that takes years.
I don’t know how exactly the world will become a better, more equitable place. But I have an atheistic faith that it will, and that technology will be part of that transformation.
Or maybe we’ll slingshot babies to daycare.