The British drama series “Black Mirror” is a quasi-sci-fi exploration of the morally ambiguous ways our relationship to (slash dependence on) technology can be exploited. In this Earthling chat, we use the show to open a conversation about the responsibility (if any) developers, as engineers of tomorrow’s technology, have to mitigate the deleterious impacts of their “creations.”
Erin:
What responsibility do you think developers have to consider the moral implications of the programs they produce, whether for personal consumption like Facebook, personal organizing apps, Instagram, etc. or for institutional purposes (data gathering, automated services)?
Simon:
Ultimately, the sense of responsibility any individual developer feels will probably depend on their cultural or generational background. Take the second episode of season one (Fifteen Million Merits), where people are forced to view advertisements in order to continue viewing content or be penalized. That functionality (tracking eye movement and forcing a viewer to actually watch the ad) is entirely conceivable now that commercials that can’t be fast-forwarded are more and more common. I’m sure there will be developers who are completely comfortable engineering this type of functionality. Chances are good that they’ll be from a generation that has grown up in an ad-saturated world and already accepts the “invasion” of advertising into their daily lives.
I don’t think that the moral responsibility rests with the developers themselves but rather with society as a whole to create laws which limit the ways technology can be abused.
Pitt:
Okay, but while ads might be annoying, there’s nothing inherently immoral about the presentation of the ad itself. The Fifteen Million Merits episode plays with the idea of choice and definitely exploits our natural anxiety about advertising, but to me, a developer’s responsibility is more about the purpose of the program they are creating and whether or not the end user depends on the accuracy of the information or execution for their well-being. If you’re building a program to dispense medical advice, prescriptions, or directions, there is more of an obligation to ensure accuracy and transparency. Otherwise, people make a decision to engage with a screen, so I don’t believe there is any moral encumbrance in disconnecting people from other people, nature, etc.
Allison:
But it’s important for producers of technology to be conscious, on some level, of their product’s cultural and environmental impact. That said, the reality is that once a product is in use, its creators no longer have full control over it. We need to do a better job of educating ourselves and one another about the responsibility we take on when we USE a product, rather than adopting a paradigm that limits creative production.
Josh:
I see what you’re saying, but the key for me is trying to have empathy for the people who use our products. This is kind of the struggle for me. When I think about these things, the questions ‘Can it be done?’ and ‘Would it provide value?’ usually trump ‘Should it be done?’ If I were working on a biometric product like the grain in Season 1, Ep. 3 (The Entire History of You), I probably would have thought it would be a lot of fun to create. We are very likely to jump at opportunities to create products where we see value and would find the work challenging.
So I guess I am saying maybe we should have some responsibility, but let’s just leave it up to the Product Owners 😉 (emphasis on the wink).
Erin:
Alright then: what if you were offered a huge sum of money to develop a program that you knew would be completely kick-ass, revolutionary, and fun as hell to work on, but there was a 5% chance it could be used to exploit people on the scale depicted in Black Mirror if it got into the wrong hands? Would you do it? Why or why not?
Simon:
To be honest, I’m really not sure. After seeing the Christmas episode (White Christmas), I’d have to completely rethink how I would approach this situation.
Josh:
Uh-uh. No, I wouldn’t. If I knew about the chance of exploitation, I would not want to be involved. But that’s usually not really the issue. The challenge is knowing whether the people involved in these products have the foresight, or even the empathy, to understand the chance of exploitation.
Pitt:
Hmmm… I would strongly consider it if there were some societal benefit to the proposed product; that would weigh strongly in favor of working on the project. But, generally speaking, you cannot predict with great accuracy all of the potential uses of a technology, especially something groundbreaking, so I would probably work on the project knowing that there’s no way to account for every way the technology could be abused.
Allison:
Absolutely! This sort of goes back to what I was saying about user responsibility. I believe that progress is never a bad thing, even if the ways some people take advantage of that progress are. It’s critical, not only to our success as a species but to the intellectual enrichment of every member of it, that we foster an environment of creativity uninhibited by the fear of misuse. Instead, we should focus on using products responsibly and protecting ourselves from those who don’t. Should Henry Ford have been wary of pioneering the assembly line because it would be used in sweatshops? I don’t think so.
:::
Curious about Black Mirror and haven’t seen it yet? You can watch episodes on Netflix or on the Channel 4 website.