At what point do extremist views become a danger to society?

[Chart: average daily tweets plotted by political valence, from MIT Technology Review]

https://www.technologyreview.com/s/611807/this-is-what-filter-bubbles-actually-look-like/

The graph above shows that the polarization looks even more extreme when the accounts are plotted according to their “valence,” a measure of how politically homogeneous their connections are. A valence of 0 means an account follows, or is followed by, only progressive accounts, while a valence of 1 means it is connected only to conservative accounts. The middle region is labeled “the silence of the center” because the center of the political universe is far quieter than the polarized wings: this plot of average daily tweets (vertical axis) from the network shows the extreme partisans on both sides screaming while the center whispers.
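To make the valence measure concrete, here is a minimal sketch (my own illustration, not the study’s actual method) of how such a score could be computed from an account’s connections:

```python
def valence(connections):
    """Score an account's political homogeneity from its connections.

    `connections` holds one label per account this account follows or
    is followed by: "progressive" or "conservative". Returns 0.0 if
    every connection is progressive, 1.0 if every connection is
    conservative, and a value in between for mixed networks.
    """
    if not connections:
        return None  # no connections, no valence
    conservative = sum(1 for c in connections if c == "conservative")
    return conservative / len(connections)

# An account connected to 9 conservative accounts and 1 progressive one
print(valence(["conservative"] * 9 + ["progressive"]))  # 0.9
```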

Although this graph deals only with politics, it shows that people at the extremes are more actively creating, consuming, and spreading content that reflects their point of view. The question I wanted to pose to myself is: at what point do extremist views become a danger to society?

To start, I will explain what filter bubbles are. The term “filter bubble” refers to the result of the algorithms that dictate what we encounter online. According to Eli Pariser, those algorithms create “a unique universe of information for each of us … which fundamentally alters the way we encounter ideas and information.”

I made a graph in which the x-axis runs from moderate thinkers on the left to radical thinkers on the right, and the y-axis runs from a wide filter bubble at the bottom to a narrow filter bubble at the top. I also divided the graph into four quadrants, each describing a different way people act on their views. The idea of this diagram was to understand how dangerous each quadrant could be. If you put together a group of people with radical points of view and narrow filter bubbles, and then add personalized ads, targeting of vulnerable users, and propaganda, that is where it can become dangerous. But whose fault is it? Who can do something about it?
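To make the diagram’s logic concrete, here is a toy encoding of the two axes (the 0.5 cutoffs are my own arbitrary choices, for illustration only):

```python
def quadrant(radicalness, bubble_narrowness):
    """Place a person in one of the diagram's four quadrants.

    radicalness: 0 = moderate thinker, 1 = radical thinker (x-axis).
    bubble_narrowness: 0 = wide filter bubble, 1 = narrow (y-axis).
    """
    x = "radical" if radicalness > 0.5 else "moderate"
    y = "narrow bubble" if bubble_narrowness > 0.5 else "wide bubble"
    return f"{x} / {y}"

# The quadrant I argue is most dangerous: radical views inside a
# narrow filter bubble, ripe for targeted ads and propaganda.
print(quadrant(0.9, 0.8))  # "radical / narrow bubble"
```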


What we talk about when we talk about ethics

Look no further than the morning briefing roll calls under the header TECH, and you’ll find articles on how data is the new oil of the tech gold rush; how China is using facial recognition technology to monitor and persecute its Uighur citizens, a religious minority; and how automation threatens to widen the poverty gap.

Ethics feels like a big word for big-ticket items. We can easily look to broad issues where there is a clear need for ethical consideration, but I think ethics is the stuff of everyday life. If I were going to put that into designer speak, I might say ethics are embedded in interaction. See what I did there?

Often we think of ethics in micro and macro extremes. Micro: as a private, internal matter – a standard we hold ourselves against. Macro: as a BIG E question of life and death – whom am I going to kill by pulling the lever of the trolley? What’s missing from both of those extremes is the space in which we discuss the activity around making ethical decisions, something I’ll refer to as situational ethics.

The framework I shared in my last blog post felt, and is, incredibly personal. When I tried holding it against a question or an issue, it was difficult to see how I might apply it outside the context of decision-making with other people. Which brings me around to the situation in which we will be having these conversations – the workplace.


Why it Matters

Earlier this month, there was a post on Medium from 12 employees of color at Facebook who had gathered stories about the racist behaviors and actions being taken against them.

How does this happen? Where are the colleagues? Why are they failing each other? If we aren’t addressing workplace ethics, how can we expect to have truthful conversations around ethics in the hypothetical or abstract? We need to create more capacity in our workplaces to have supportive, dialed-in conversations around ethics in that environment.


Dynamic Scaffolding 

I want the function of the framework to be one of support – to help me maximize the potential for growth and truth within those engagements. My framework, then, needs to have dynamic scaffolding to support how I might approach these steps in practice, at work.

Leaning into strategies I’ve learned over the last two quarters, as well as techniques I’ve tried during group facilitations, I can see how these could be used to ground, frame, and develop workplace conversation around ethics – to better explore and understand why we are putting something into the world and where intention vs impact come into conflict.


Being Grounded Leaders

Resmaa Menakem is a therapist, licensed social worker, and police trainer and consultant who specializes in trauma work, addressing conflict, and body-centered healing. He writes about the generational trauma that white bodies, black bodies, and police bodies have inherited, and how, oftentimes, people go to a therapist simply to be around a grounded body – to experience what it means to be grounded in the world.

An ethical center is something that grounds me. It grounds my decision-making, the way I lead, and the way I participate in conversation. If I’m not actively and regularly bringing ethics into practice within my workplace, how can I expect to have conversations around power/privilege, risk/consequence, and time/scale with those same decision-making teams?

From a credibility standpoint, if we’re going to be not just designers but design strategists capable of being a touchpoint across entire organizations, I need to ensure that I’m addressing these questions:

[Image: the list of questions to address]

Emergent Technologies And Thumb Scanning Out Of Work

Throughout this quarter we’ve been working on constructing our own ethical frameworks to better guide us through complex problems. I encountered an ethical dilemma I can’t seem to shake while living in China, where I taught at a for-profit school called Happy Goal Kids. It felt like working at the McDonald’s of English schools in Shanghai. I had no idea what I was getting myself into.

Throughout my time at Happy Goal, my privacy was challenged quite often. I recall one thing most specifically: a thumb scanner to get into work. I was required to scan in, and out, each time I left the facility. It felt like quite a risk to me. This company, which I already didn’t trust, had the ability to know my whereabouts within its building. It felt strange.

China is even more prevalent in the news today in regard to its treatment of the Uighur people in the Xinjiang region. A new technology is being used against a group of people for reasons that just don’t line up.

Something I wanted to pose to my ethics class at AC4D was their tolerance for low- and high-risk situations involving emergent technologies. While China is using facial recognition to track Uighur people, we are logging into our phones with our faces, retinal-scanning into schools, and thumb-scanning ourselves out of work. Our physical attributes are being used as identifying factors. The things we are being asked to opt into may pose more risk than we’re able to recognize at this point in time.

I gave each member of the class the strips of paper shown below.

[Image: strips of paper listing factors of identification]

I then asked people to write down where they would feel comfortable using these factors of identification – whether for civic, personal, or professional use. From there, I asked the class to get up and place where they felt each piece fell on a low-risk/high-risk axis.

WeChat: Consequences of a Social Credit System


the context.

The “social credit system,” first announced in 2014, aims to reinforce the idea that “keeping trust is glorious and breaking trust is disgraceful,” according to a government document.

Like private credit scores, a person’s social score can move up and down depending on their behavior. The exact methodology is a secret — but examples of infractions include bad driving, smoking in non-smoking zones, buying too many video games and posting fake news online.

WeChat has more than 1 billion users worldwide.

China’s internet is often referred to as “The Great Firewall,” because the Chinese government strictly regulates and monitors the content of the country’s roughly 800 million online citizens. WeChat has further strengthened that closed system; WeChat’s privacy policy makes it clear that information will be shared with the government when requested.

–   Business Insider

the exploration.

For this week’s ethical challenge, I decided to take the idea of WeChat’s social credit system and make it feel tangible for my fellow students by asking them to give themselves social ranking numbers based on their likelihood to partake in the following infractions. These are based on real criteria that the Chinese government uses to rate its citizens.

-loitering
-smoking in non-smoking areas
-playing video games too long
-wasting money on frivolous purchases
-posting on social media
-spreading ‘fake news’
-refusing military service
-walking dog without a leash
-traffic violations
-jaywalking

Based on the numbers my classmates come up with, there will be an individual with the lowest score and one with the highest score. The highest score represents the most social demerits; the lowest score is the cleanest record.

I will then reveal the real-life consequences of a high vs. low score.
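A minimal sketch of this classroom scoring might look like the following (the point values are my own invention; the real system’s methodology is secret):

```python
# Hypothetical demerit weights -- the real weights are not public.
INFRACTIONS = {
    "loitering": 1,
    "smoking in non-smoking areas": 2,
    "playing video games too long": 1,
    "wasting money on frivolous purchases": 1,
    "posting on social media": 1,
    "spreading fake news": 5,
    "refusing military service": 5,
    "walking dog without a leash": 1,
    "traffic violations": 3,
    "jaywalking": 1,
}

def demerit_score(admitted):
    """Sum the demerits for the infractions a 'citizen' admits to."""
    return sum(INFRACTIONS[i] for i in admitted)

scores = {
    "student_a": demerit_score(["jaywalking", "posting on social media"]),
    "student_b": demerit_score(["spreading fake news", "traffic violations"]),
}
print(max(scores, key=scores.get))  # the 'citizen' with the most demerits
```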

the challenge.

So, this all seems very risky and invasive. However, what if we attempted to see the positive impacts that a system like this may have on society? In order to challenge and solidify my own ethical framework, I wanted to take a stab at seeing how this may actually be a help.

“Despite the creepiness of the system — Human Rights Watch called it “chilling,” while Botsman called it “a futuristic vision of Big Brother out of control” — some citizens say it’s making them better people already.

“For example, when we drive, now we always stop in front of crosswalks. If you don’t stop, you will lose your points. At first, we just worried about losing points, but now we got used to it.”

In order to expand on the possibilities in class, I assigned each individual a random economic class. ‘Citizens’ with the lowest numbers are rewarded with real benefits that mirror ones existing within the current Chinese system. These benefits are varied, but all affect the financial mobility of the individual.

Over time, if those with good credit continue to benefit from the financial bonuses, there is the possibility of new kinds of mobility within the current class system.
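To see why this compounding matters, here is a toy calculation (the numbers are purely illustrative) of how a recurring financial bonus for a high score widens the gap between two otherwise identical ‘citizens’:

```python
# Two 'citizens' start with the same savings; one receives a
# hypothetical 5% annual benefit (cheaper loans, waived deposits, etc.).
savings_high_score = savings_low_score = 10_000
for year in range(10):
    savings_high_score *= 1.05  # the bonus compounds year over year
print(round(savings_high_score - savings_low_score))  # ~6,289 gap after a decade
```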

why is this important?

1. This is not hypothetical.

“WeChat, today, offers a combination of services available from several different companies in the West, including Facebook, Snapchat, Amazon, Google, PayPal, and Uber, to name a few. Its comprehensive nature has also made it one of the most powerful tools for government surveillance over Chinese citizens.

And apps and social networks in other parts of the world may soon be a lot more like it.” 

-Business Insider

This is not necessarily isolated to China. China is a growing power in our global society, and its influence is broad. The principles being enacted in WeChat’s social credit system are likely to be mimicked by other societies’ social apps.

2. Global trends are important.

Testing against your own ethical gut feeling is a good practice. If this is inevitable for Chinese citizens, as well as something that other societies (including American society) will likely lean toward in the future, we need to consider how we should proceed and how we maintain our ethical framework as a society.

3. There is no way to opt out.

When we are thinking about the importance of privacy and the way our systems may change and adapt in the face of existing and emergent technologies, it is important to consider the power and influence of social systems as the consequences of using them begin to cross over into our non-digital lives.

my takeaway.

When certain trends are inevitable, even if I don’t like or agree with them, it is important for me as a globally minded designer to test against my own bias in order to think more creatively about how to build safe systems and products within our technology-driven world.

How Leave No Trace ethics can make us better designers

The ethical framework I use most frequently is Leave No Trace. It is an impact-oriented set of rules to mitigate human impacts on nature when camping or exploring the outdoors.

The six principles are:
-Plan ahead and prepare
-Camp and travel on durable surfaces
-Dispose of wastes properly
-Leave what you find
-Use fire responsibly
-Respect wildlife


When I think about my natural orientation towards ethical questions, I can’t help but be influenced by my values and practices in relation to land use and sustainability, which is also largely impact-oriented.

A focus on impact as an ethical framework stands in contrast to other lenses, such as a focus on duty or virtue.

Duty: “It is my duty to not litter.”
Virtue: “It is morally wrong to take pine cones from the forest because they do not belong to me.”
Impact: “I must be careful to put out my campfire because a forest fire would be devastating to the plants and animals here.”

Impact seems like an important consideration in living an ethical life. A world of people all creating positive impacts feels like a world I want to live in, even more than a world of dutiful or virtuous people. However, impact is also the least ‘knowable’ aspect of an ethical decision. It is necessarily ‘post hoc.’ We won’t know how something is going to play out until it does.

How can we maximize positive impact given our naivete about the consequences of our actions?


Control for bias
We are bad at estimating negative impacts: we have a bias toward assuming good outcomes because we know our own good intentions. We tend to overestimate the likelihood and magnitude of positive outcomes and underestimate the likelihood and magnitude of negative ones.

As designers, we can recalibrate our risk tolerance and our bias by being aware of these tendencies and making an effort to add the possibility of negative outcomes to our planning. We can also practice the precautionary principle by assuming negative outcomes until we have evidence to the contrary.
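One way to picture this recalibration (a sketch of my own, not a formal method) is to deliberately discount hoped-for benefits and inflate potential harms before comparing them:

```python
def adjusted_impact(expected_benefit, expected_harm,
                    optimism_discount=0.5, harm_inflation=2.0):
    """Counteract optimism bias with deliberately pessimistic weights.

    We tend to overestimate benefits and underestimate harms, so this
    sketch discounts the former and inflates the latter. The weights
    are arbitrary; the point is to force negative outcomes into the
    plan, in the spirit of the precautionary principle.
    """
    return expected_benefit * optimism_discount - expected_harm * harm_inflation

# A feature we guess will help a lot (+10) and harm only a little (-3):
print(adjusted_impact(10, 3))  # -1.0 -- assume net harm until evidence says otherwise
```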

Do your homework
To better estimate the possible impacts of our work we have to do our homework. Sometimes this is in the form of prototyping and testing iterations of our designs with populations before releasing them at scale.

Sometimes it means investing in educating yourself about the struggles of marginalized people, fragile environments, unequal systems, and vulnerable stakeholders so you can better estimate the consequences of your design choices. Impact-oriented designers create a space for people with outsider knowledge or positioning.

Find your center
When thinking about our impacts, we can be more intentional if we are deliberate about whom we are centering when evaluating impact. In LNT, the wilderness ecosystem is at the center. When striving to make positive change, it’s easy to say that we are ‘human-centered’ designers, but what does that really mean in practice? And what other interests are at risk of stealing our focus?

It’s not enough to just center the entirety of human experience. Being more precise about who our technology is for and how it will help them allows us to more accurately estimate impact. We can’t be all things to all people when trying to orient design work to impact.


Through controlling for bias, doing our homework, and finding a center, we can apply lessons from decades of Leave No Trace ethics to design work.

Building trust

We are closing out our class on Ethics in Design by talking about emergent technologies. We read about and discussed topics like filter bubbles that insulate our perspectives, the biases built into algorithms and their consequences, the battle between encryption and public safety, and the scary reality of facial recognition technology.

This thread of facial recognition is where I focused my narrative, but the overall theme extends to all emergent technologies. They have a wildly massive power to change our society. As seen in China, this power is being harnessed for unethical reasons. Stories like this bring to light the very real threat that we are confronted with and need to be aware of. The flip side of this threat is the potential these technologies have to be used for good. If we imagine machine learning being applied to sift through medical records and find patterns of disease or cancer, the quality of our healthcare could be directly increased. The trouble I see in front of us is a lack of trust in those wielding the technological power.

Trust has been emerging as the backbone of my ethical framework. I think trust directly relates to being ethical, because it is based on the good intention of helping the greater society.

Take these examples. We will hear two pitches about a new hypothetical service that utilizes facial recognition.

scenario 1

We love to see your face at Starbucks – so much so that your pretty mug can get you a pretty mug of coffee, for free! Just sign up for Expresso Line, our new facial recognition software that will automatically order your favorite drink as soon as you walk through the door. No lines, no hassle. Sign up today with your smartphone and upload a picture of your pretty mug. Every 4th time you come in, the coffee is on us. Restrictions and exclusions apply.

(yes I wrote Expresso on purpose)

scenario 2

We are Beautiful Beans, a startup cafe that wants to grow our business with you in mind. Our goal is to use facial recognition to make your morning routine just a little less hectic. Our new facial recognition software will automatically order your favorite drink as soon as you step into the scanning zone. No lines, no hassle, just set your drink preference on our app interface. If you don’t want to be scanned, just order at the kiosk.
The images we take will be secured in our database and will not be shared with anyone else. If we go under, the data will disappear too, that’s our promise. We have also teamed up with BlueHealth to give you the option to have your skin data sent to their lab for review and we can alert you of early warning signs of cancerous cells. No charge, we just care about your health.

Reading these scenarios, we realize they both use the same technology to do generally the same thing, yet the feeling of trust differs between the two.

In this scenario, is there a point where trust will be built?

I can envision the consistency that a large corporation like Starbucks may have helping to build trust, delivering on the promise that every 4th visit earns a free drink. There is value to be had when you follow through.

So where does the trust break?

In scenario 1, it may start with the name Starbucks. Capitalism has proven that greed and the bottom line tend to rule all, so they may have a bigger hurdle to climb from the get-go: they need to regain our trust. There is also the absent information about how and with whom my data will be shared. The system also assumes the right to scan everyone who walks through the door, whether they are participating or not, and raises a question about the power dynamic: who gets the most value out of this transaction?

Contrast this to scenario 2 with Beautiful Beans. Where is the trust being built?

On the surface, it appears that they have a genuinely good intention of helping people remove hassle from their morning, and they even offer the option of a free health exam. It’s not blatant, but one could assume they receive value from the medical company, which is looking for data to help detect skin cancer. The customer receives value in return by being warned of any health dangers. Trust is further built by being clear about how your data will be used, and by giving people the option to opt out of being scanned. The user chooses whether the value of no lines and medical screening is worth volunteering their data.
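As a tiny illustration of that opt-in design (my own sketch – Beautiful Beans and its data model are hypothetical), the scan path would only ever match customers who enrolled, and would keep nothing about anyone else:

```python
# Hypothetical sketch: only customers who opted in via the app exist
# in the database at all; anyone else's scan is discarded on the spot.
OPTED_IN = {
    "face_hash_123": {"name": "Dana", "drink": "flat white"},
}

def handle_entry(face_hash):
    """Auto-order for enrolled customers; store nothing for anyone else."""
    customer = OPTED_IN.get(face_hash)
    if customer is None:
        return "Not enrolled: scan discarded, please order at the kiosk"
    return f"Ordering a {customer['drink']} for {customer['name']}"

print(handle_entry("face_hash_123"))  # enrolled customer gets their drink
print(handle_entry("face_hash_999"))  # passer-by, nothing is retained
```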

So where does the trust break here?

First of all, who is Beautiful Beans? I’ve never heard of them, so why should I believe anything they say? In an initial introduction to someone, there is usually a lack of credibility. In this case, however, while they have yet to break our trust, they also have not yet earned it.

So what are the steps we as designers need to take to earn the trust of users? It helps to look at this through the lens of meeting a new friend.

First, we need to have genuine, good intentions. We make an acquaintance and can usually begin to see whether the person has good intentions. Once we realize this person is decent, we start to give them our trust, but only a little. It takes time, and they have to follow through on any promises they have made. I’ll pick you up at 4 pm – boom, there they are. I’ll help you move apartments – well, what do you know, here he is. Finally, they need to do this repeatedly. Consistency in delivering on a promise is what builds trust, so time and repetition are key.

Building trust in the use of emergent technologies is no different, although at this point most people would lean toward having to regain trust rather than build it from scratch. That’s where a designer’s ethical framework comes in. These are my key points for building – or regaining – trust:

Reliability – consistent positive experiences

Protection – minimizing the user’s exposure to risk

Inclusiveness – knowing your intention is to help the greater whole, not select groups

Transparency – being honest about how you interact with the user

Accountability – taking responsibility for your actions now, and in the future

These can be applied to our daily duties as designers. Building trust with customers is valuable, and we should leverage that with our employers or employees. Showcase your ethical framework. Show your boss you have integrity, and that integrity adds value to the company. Focus on customers. Speak up when you see untrustworthy actions that may compromise your users’ trust; public interest over personal interest will have more longevity. Stay accountable. Building trust from the ground up is hard, but regaining it is even harder. Make plans beyond onboarding to support your users and maintain their trust.

Emergent technology needs to be used responsibly, and in a manner that leads people to trust it. To me, this is the biggest hurdle we need to clear in order to harness the positive power we can all receive from emergent technology.

Myanmar, Facebook, and Regulation

Several years ago, I had the opportunity to visit Myanmar for a few days while traveling in Southeast Asia. It was the beginning of 2016, and Myanmar had only been open to the outside world for three or four years at that point. For me, it was amazing to see a country so untouched by Western influence – there wasn’t a Starbucks or McDonald’s in sight, though there was one KFC at the airport.

Of anywhere I’ve traveled, the Burmese people were by far the warmest and friendliest. I happened to be there on a national holiday; a street had been closed off by the people who lived nearby, and both kids and adults were enjoying various relay races.


When my husband, sister, and I stopped to watch the fun, a few teenagers immediately came over to us to offer us a balloon and to invite us to join in on the fun.

The country has gone through massive changes since it opened to outside investment. When the telecom market was initially deregulated in 2012, less than 1% of Burmese people had access to the internet. By the time I visited just a few years later, over half of the country had a cell phone and was accessing the internet regularly. Facebook served as the main app for these new mobile adopters, and for many, news and Facebook were one and the same.

It wasn’t long before fake news started to spread, and fear began to build about potential future violence from the Rohingya minority. This led to extremist attacks against the Rohingya, and eventually to a genocide against this minority group carried out by the Burmese government.

There has been, and still is, a huge amount of conversation around Facebook’s unwillingness to quickly respond to and regulate the inciting comments that built up online against this population. Many go so far as to blame Facebook for the genocide. But the situation got even more complicated when studies started to uncover that it may well have been an organized campaign against the Rohingya, planned by the very Burmese government that eventually carried out the violence – the same government that seemed to have so radically shifted to an open and democratic system just a few years before.

Using this example as a way of fleshing out my personal ethical framework, I’ve found myself asking two questions that seem at odds with each other. First, should a tool like Facebook be a platform or a publisher? And second, and maybe more importantly: you may trust the regulator now, but what if the regulator changes?


What I’m most struck by is that this isn’t the first time the Burmese government has persecuted this population. Thanks to Facebook, however, it might be the first time the people of Myanmar have had an opportunity to hear and read outside voices on what has really happened in their country.

Could we give users the power to curate?

Censorship is no longer just a matter of prohibiting content. With the massive democratization of publishing platforms, the influx of content has created a new opportunity for censorship: information overload and attention redirection. 500 hours of video are uploaded to YouTube every minute; The New York Times posts 250 pieces of content every day; our president tweets over 4,000 times per year. It’s a lot to manage. The curation of content has become the biggest vector for censorship.

Personalization of content creates filter bubbles that amplify existing biases and essentially force us to live in different realities. This personalized reality decreases the quality of information we consume, lowers the likelihood that we will consider (or even hear) opposing viewpoints, and ultimately ruins civil discourse. Since the 2016 presidential election, the polarization and manipulation of content have been widely discussed around the globe.

Curation of content is not simply a taste issue or an entertainment issue; it is at the core of a productive democracy. For something so important, we must ask: could we give users the power to curate the content they consume?

Ultimately, the goal of effective curation would be to develop an unbiased understanding of the world that is free of fractured realities or perspectives. Differences of opinion are welcome – but those conversations should be able to exist on the same plane. If curation continues to polarize, there will be no equal ground to stand on. 

Risks and Benefits of User Curation

To approach this question, I first wanted to clearly lay out the risks and benefits associated with user-controlled curation.

What are the risks associated with giving users the power to curate?

  • Users prefer echo chambers. These filter bubbles offer the reassurance of your opinion, reinforce existing biases, and keep you engaged with content that you’ve been proven to enjoy. This could worsen divides. 
  • Curation requires prior knowledge. To curate a truly broad and representative view of a topic, prior knowledge is helpful. How can you represent multiple viewpoints on a topic if you don’t understand it?
  • Information overload could cause opt-out. If users aren’t fully empowered to curate content effectively, they could be overwhelmed and opt-out completely. Is biased information better than none at all?
  • Do they want power? If not given the proper tools to curate effectively, the cognitive load associated with decision-making could be too great. What if you don’t want to think? 
  • Will misinformation worsen? Are users informed and engaged enough to fight social control and propaganda?

What are the benefits associated with giving users the power to curate?

  • Creates awareness of biases. By actively engaging in content curation to counteract bias, you will become more acutely aware of existing biases. 
  • Rebalances power dynamics. In the most extreme cases, from content bans in China to nipple bans on Instagram, the ability to curate is in the hands of the powerful. By giving control back to users, we can work to redistribute power.
  • Respects user autonomy. In addition to rebalancing power, giving control back to the user also respects their autonomy, intellect, and ability to choose. 
  • Teaches to combat misinformation. This is an area that is becoming increasingly urgent. As deepfakes and AI-assisted content creation become more popular, it’s vital for citizens to continue to fine-tune their filters for real content and misinformation. Relying completely on platforms to filter content trains users to be complacent over time. We must continue to ask ourselves: is this a credible source? Is this content logical? Can I fact-check this before sharing?

Applying My Ethical Framework

With these benefits and risks clearly displayed, I ran this problem through my framework. With individual autonomy and respect as core tenets of my ethical framework, I strongly believe that we should design products and services that give users more power to curate their content. The strongest argument for me lies in avoiding learned helplessness: if we never give users the power to curate, how will we ever learn to identify biased, false, or misleading information?

[Diagram: mapping where existing platforms fall on the curation landscape]

This artifact helped me understand where existing platforms lie, and where there are areas of opportunity. Escape Your Bubble, ConsiderIt, and Balancer were all tools we read about that counteract bias and create a more informed user. Despite the effectiveness of these tools, most of our day-to-day consumption exists in the echo-chamber section. Because these platforms are personalized, they give us a false sense of control and showcase content that feels resonant. This false sense of control keeps us from seeking more autonomy and keeps us complacent with the content that is given to us.

How can we actually give users control while still keeping them engaged?

Ethics and Creativity

Imagine you’re the parent of a teenager, Alex.

It’s Saturday morning, 8 AM. As you walk outside to grab the mail, Alex is getting dropped off at the edge of the driveway. Surprising, because Alex was in bed when you went to sleep the night before. As Alex exits an unfamiliar car, it speeds off towards the end of the road. Avoiding eye contact while dressed in what looks like yesterday’s clothes, Alex darts inside the house.

You have a discussion and hear what Alex is saying, but you don’t necessarily believe it. You want to read Alex’s phone. Alex refuses.

Assuming the role of the parent in this situation, you are conflicted.

You want to protect your child and exercise control over your household while maintaining trust and respect as integral parts of your relationship. You prioritize your values of order, trust, and security, but they are in direct conflict. With the goal of maintaining and upholding these values, you find yourself in an ethical dilemma.

Creative Problem-Solving

In this post, I suggest ethical dilemmas are at the root of almost all the wicked problems facing modern society. With this assumption, I aim to use creativity when attempting to solve them. Edward de Bono proposes the problem-solving method of lateral thinking. Lateral thinking is a way to suspend judgment in order to arrive at creative solutions. He argues that creativity and judgment are in opposition to one another. Where judgment forces us to maintain routine patterns of thinking, lateral thinking allows us to disrupt those patterns so we can discover new and unexpected ideas.

Lateral Thinking

In an effort to suspend judgment, we must adopt new perspectives. In this example of parent and child, we can begin to question what values are also driving Alex in refusing to hand over their phone. These values might include autonomy, freedom, justice, and fairness.

Alternating Thinking Patterns

De Bono also introduces tools for alternating our thinking patterns. In this example, we can weigh the logical negatives and positives of both actions – reading through the phone or leaving it with Alex. Where negatives introduce caution and risks, positives look forward to the proposed solution and find something of value.

We can then begin to identify the benefits and risks of taking and leaving Alex’s phone. The benefits of taking the phone might include maintained control, with risks including compromised trust and retaliation. Alternatively, by leaving Alex with their phone, you, as a parent, uphold mutual trust but risk future manipulation.
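A simple way to externalize that weighing (my own de Bono-style sketch, using only the benefits and risks named above) is to tabulate each action with its positives and negatives:

```python
# Each candidate action, with the positives and negatives named above.
options = {
    "take Alex's phone": {
        "benefits": ["maintained control"],
        "risks": ["compromised trust", "retaliation"],
    },
    "leave Alex's phone": {
        "benefits": ["upheld mutual trust"],
        "risks": ["future manipulation"],
    },
}

for action, sides in options.items():
    print(f"{action}:")
    print("  +", "; ".join(sides["benefits"]))
    print("  -", "; ".join(sides["risks"]))
```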

Recontextualizing the Problem

The author Richard Buchanan also describes placements as a tool for establishing temporary boundaries when considering complex problems. Placements effectively build a frame around a problem to help the problem-solver see it in a new way. Using placements, designers can re-contextualize each problem to establish new hypotheses. Placements should be dynamic and interchangeable, so we can find the strongest frame for each unique problem space.

Viewing this example through the frame of power and privilege, we can ask questions concerning freedom, autonomy, and access. This allows us to isolate how power influences the relationship operating in this problem space.

  • Who creates the rules? Parent?
  • Who enforces rules? Parent?
  • Who must follow the rules? Child?
  • Who regulates the rules? Child and parent?

Inspecting privilege in this example, we look for differences in privilege between parent and child. What changes if Alex is a boy, girl, or non-binary? Does it make a difference if the relationship is between father and daughter? What if Alex is 12, 15, or 18? What else might change the discussion?

After answering these questions, we can begin to generate ideas. What actions could we take to reinforce existing positions of power? In what ways might proposed actions be adapted? What actions could we take to share power? What if the parent handed over their own phone as a trade?

The greater the number of ideas generated, the more opportunity to discover options that better suit the problem without costing you your values.

Larger Implications

This type of thinking can be scaled to approach larger, more wicked problems. Exploring parental relationships between governments and their citizens, we can look at the recent example of surveillance and censorship in India. How do the answers in this example reflect what is happening between the Indian government, Facebook, and the crimes incited by the spread of fake news generated through WhatsApp?

A Third Option

Ethical decision-making requires a problem-solving approach with the same rigor and creativity as design problems. Victor Papanek says, “In a fast-accelerating, increasingly complex society, the designer is faced with more and more problems that can be solved only through new basic insights.” Ethical dilemmas are interwoven into each of these complex problems. Creative problem-solving and design thinking can help us to approach these dilemmas in new and unexpected ways – without having to sacrifice our values in the process.

As designers, we can both uphold our values and solve complex problems by finding a third option, a compromise.

“…We normally go along the main track without even noticing the side track. But if – somehow – we get across to the side track then, in hindsight, the route becomes obvious.” – De Bono

In Search of Techno-Utopia

Lately, our class has been reading about how emerging technologies have been used as tools of oppression. The most heart-rending case is what’s happening in China to the Uighur population: under the guise of fighting terrorism, the largely Muslim Uighurs are being monitored and tracked, their past and present behaviors used to identify individuals in need of “re-education.”

We are all familiar with the concept of the techno-dystopian future. Movies like Blade Runner and The Matrix are classic examples in film, and the entire genre of cyberpunk is built upon the idea. Our fear of future technologies has driven the creation of countless stories exploring the potential ramifications of our technological advancement. But there are fewer ideas concerning what a techno-utopia might look like. Given that we are tasked with improving society via design, I want to explore what this might look like. I’ve created a diagram below to capture some of the tenets that I associate with techno-dystopian and techno-utopian futures.

[Diagram: Techno-Utopia and Techno-Dystopia]

The concepts of dystopia and utopia would presumably exist in totally separate spheres, and most of these traits seem to be in diametric opposition. However, there is a tension between the two. With decentralization of power and personal autonomy comes the potential for majority rule without regard for society’s most vulnerable. As technology erases the power of gatekeepers to direct information and shape public opinion, techno-utopia risks turning into the worst corners of the Internet: driven by tribalism, fake news, and fear-mongering.


Risk of Mob Rule

This leaves us with the following question:

How can we protect the rights of minority populations in a world of decentralized authority?

There’s no easy answer to it, but this question must guide our ethical frameworks as designers. As we attempt to steer ourselves toward techno-utopia, this may be the most important question that we face.