Throughout this quarter, we have been tasked with creating an ethics framework to guide our decisions as designers. This has been no easy feat. There is no universal set of ethics, and almost every decision of importance requires trade-offs. Even when you think you are designing something that benefits everyone, your design may have unintended effects or be co-opted and used for nefarious purposes by others. The field of design is strewn with such cases, whether it be Airbnb giving hosts the agency to choose their guests (resulting in discrimination) or Cambridge Analytica adopting algorithmic prediction software to sway political elections.
Ethics can be considered from various approaches, and two in particular inform this framework: the consequentialist Common Good Approach, which stipulates that good actions should consider and benefit the whole of society, with special consideration for the most vulnerable; and the non-consequentialist Rights Approach, which determines good by evaluating the impact of an action on the rights of those affected by it, emphasizing that people must never be treated merely as means to an end.
The framework presented here was influenced by two earlier models of mine, the Star of Good Design and the Identity Rainbow. These ethical approaches and previous studies, culled from readings and discussions on design patterns, privacy and identity, and emerging technology, inform the basis of my logic and the nature of my questions.
The dark red questions are open-ended. If you find yourself in the bottom right corner ("For whom?"), reflect and return to the "Am I okay with this?" box above, then proceed to the agency question. This is a work in progress.
When facing the benefits and drawbacks of an ambiguous situation (or in all scenarios, really), consider the following questions:
I’ve used this model on a few different scenarios. For example, imagine that a health organization operates an app that provides medication reminders and menus of daily meal options that meet nutritional goals. The organization would now like to encourage people to exercise more.
You have been asked to design a request asking users to create an exercise regimen every time they receive their medication reminder or set up a meal plan. This notification cannot be turned off, and must be declined each time. The organization thinks that, with enough prodding, users will eventually create a plan, and that they will ultimately be thankful once they set it up.
If I follow my framework, I will eventually get to the question of whether this causes unintended harm. It very well could: annoying reminders could drive users to stop using the app altogether, leaving them without access to the medication reminders and meal plans. So I would have to ask myself additional questions to determine the severity of the situation and what options I have to influence it.
I would proceed in the following way:
- I would make an argument as to why I think this is a bad idea and pitch alternatives.
- If unsuccessful in my efforts, I would ask if I could be reassigned from the project.
- If I had to, I would work on the project. There would be drawbacks to implementation, but also benefits. The drawbacks are not significant enough to quit the position.
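The escalation steps above can be sketched as a simple decision flow. This is only an illustration of the reasoning, not part of the framework itself; the function name, parameters, and return strings below are all hypothetical stand-ins for conversations and judgment calls that no code can actually make.

```python
def respond_to_questionable_project(
    pitch_accepted: bool,
    reassignment_granted: bool,
    drawbacks_outweigh_benefits: bool,
) -> str:
    """Return the action taken, following the three escalation steps.

    All parameters are hypothetical placeholders for real-world outcomes:
    whether the counter-pitch lands, whether reassignment is possible,
    and whether the harms are severe enough to walk away.
    """
    # Step 1: argue why the idea is bad and pitch alternatives.
    if pitch_accepted:
        return "redesign using the proposed alternatives"
    # Step 2: if unsuccessful, ask to be reassigned from the project.
    if reassignment_granted:
        return "move to another project"
    # Step 3: work on the project, unless the drawbacks are
    # significant enough to justify quitting the position.
    if drawbacks_outweigh_benefits:
        return "decline the work and accept the consequences"
    return "implement, with reservations"
```

In the medication-reminder scenario, the drawbacks are real but not severe, so the flow bottoms out at "implement, with reservations"; in the medical-records scenario discussed below, it would not.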
I believe that this sort of reasoning must be employed when making ambiguous decisions. For example, if I were asked to craft a deceptive terms-and-conditions acceptance flow, I would protest, but ultimately acquiesce if necessary, assuming that the terms are no more dangerous than industry standards. Since people are already accustomed to sharing private information online, it would not be worth losing my job to try to change one company’s protocol.
But let’s return to the hypothetical health organization above. Imagine that the next initiative was to obtain users’ medical records, so that the app could make tailored meal plan recommendations based on their health conditions. I would not work on this project. Combining medical records with shopping behaviors could result in discriminatory action by health insurers and government agencies, if they were to obtain this information. By storing this data together, I would be creating existential risk for our users.
In the end, my takeaway is as follows:
Consider the ramifications of your work. Consider your responsibilities to yourself and to others. Seek outside input. Stand up for your values. Do no deliberate harm.
Again, this framework is a work in progress. If you have thoughts and would like to provide feedback or engage in conversation about the limits of this model, please email me at firstname.lastname@example.org. I would love to hear from you.