Software is eating the world. We, developers and architects, are a major force influencing software, technology, and the world it creates. We don’t have the privilege of being unaware of our actions. If we really want to create a better world, we must understand the intersection of technology and humanity. We need to open our eyes to the link between developer ethics and software.
Rotem Hermon is lead architect at SAP Customer Data Cloud, and has been building and designing back-end systems for a long time. He presented a talk at Codemotion Amsterdam 2019 about the issue of developers and ethics.
Why is ethics for software developers important now?
According to Rotem, there are two trends that increase the need for developers to be mindful of ethics in their work:
- Software is becoming the technology underlying everything that happens in the world. And as technology advances, it becomes less and less noticeable: the more it pervades our lives, the less we pay attention to it.
- There’s a decline in liberal arts education, especially in technical education; it’s not considered something worth investing in. This is despite developers “working in one of the most influential professions that there is. And yet the whole education that we build to make those professionals doesn’t include any compulsory curriculum that talks about the social, cultural and ethical implications of what we do.”
Examples of issues that involve software developer ethics
Autonomous cars and ‘the trolley problem’
Imagine a runaway train. If it keeps going down the track, it will hit five people. You’re standing near a lever that can divert the train to another track. If you pull the lever, the train is diverted, but it will kill one person.
So what should you do? Should you pull the lever or not? This is a classic ethical dilemma with no single right answer: different value systems and ethical considerations lead to different answers.
Now imagine an autonomous car that is about to hit a group of people. It can swerve to the side, but then it will hit one person instead.
So what should the car do?
- What if it’s one adult in front and a child to the side?
- What if it’s five adults and one child?
- How do we make this decision?
- How do you build an algorithm that makes the decision?
- Who is responsible? The car manufacturer or the software developer who was in charge of the algorithm?
- Or maybe no one is responsible. “And what are the social implications if we’re saying you cannot have anyone responsible for these kinds of accidents or deaths? These are the problems that we’re going to have to deal with.”
As Rotem states: “We’re building systems to make decisions. And that’s always been what we do with computers. We program our computers to make all kinds of decisions, but the nature of those decisions is changing. It’s not just straightforward computations anymore; we’re starting to ask open-ended questions like who should the car hit?”
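To make this concrete, here is a minimal, purely hypothetical sketch (not any real manufacturer’s code) of what encoding such a policy could look like. The weights below are invented for this illustration; the point is that the moment software scores collision outcomes, someone’s value judgments become explicit, reviewable parameters.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    adults_hit: int
    children_hit: int

# These weights ARE the ethical decision. Who chooses them, and on what
# grounds? Weighting a child at 2.0 is an arbitrary assumption of this sketch.
ADULT_WEIGHT = 1.0
CHILD_WEIGHT = 2.0

def harm(outcome: Outcome) -> float:
    """Score an outcome; lower is 'better' under this particular value system."""
    return outcome.adults_hit * ADULT_WEIGHT + outcome.children_hit * CHILD_WEIGHT

def choose(outcomes: list) -> Outcome:
    """Pick the outcome with the least weighted harm."""
    return min(outcomes, key=harm)

# 'Five adults ahead vs. one child to the side' becomes a computation:
stay = Outcome(adults_hit=5, children_hit=0)    # harm = 5.0
swerve = Outcome(adults_hit=0, children_hit=1)  # harm = 2.0
print(choose([stay, swerve]))  # Outcome(adults_hit=0, children_hit=1)
```

Trivial as it is, the sketch shows where the responsibility question bites: the answer to “who should the car hit?” ends up sitting in a couple of named constants that somebody had to commit to a repository.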
Loomis v Wisconsin
Machine learning algorithms are currently used in law enforcement, and in sentencing and parole systems, to calculate how likely a person is to commit another crime.
Loomis v Wisconsin is a case that occurred in Wisconsin in 2013. A man named Eric Loomis was found guilty of participating in a drive-by shooting. The court’s use of closed-source risk assessment software during sentencing resulted in a sentence of six years in prison.
Loomis challenged his sentence on the grounds that he wasn’t able to examine how the algorithm reached its assessment, because it was developed by a private company as a closed-box system. The Wisconsin Supreme Court later ruled against Loomis, but the case raises an interesting question about using such systems in the judicial process.
As Rotem asks:
“Are we okay with relying on privately developed algorithms in our courts? Who is responsible for them? Who should they answer to about the kind of algorithms they are building and the consequences?”
A large part of our judicial system is built on the diversity of human opinion, from juries to different judges who may hold different views and thus give different rulings. “If we’re saying we’re going to use software systems as part of decision making in the court, then, being software, they are deterministic. Are we OK socially and culturally with giving up on diversity in decision making?”
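The determinism point is easy to demonstrate. The tool used in Loomis is closed source, so the features and weights below are entirely invented; the sketch only shows that a software system, unlike a bench of human judges, returns exactly the same answer for the same inputs, every time.

```python
# Hypothetical toy risk score; real tools like the one in Loomis are
# proprietary, so every feature and weight here is made up.
def risk_score(prior_offenses: int, age: int, employed: bool) -> float:
    """Toy recidivism score in [0, 1]."""
    score = 0.1 * prior_offenses
    score += 0.3 if age < 25 else 0.0
    score -= 0.2 if employed else 0.0
    return max(0.0, min(1.0, score))

# Every court, every day, the same answer for the same profile. There is
# no dissenting judge inside the function.
assert risk_score(3, 22, False) == risk_score(3, 22, False)
print(risk_score(3, 22, False))  # ~0.6
```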
Captions by Norman AI
A team from MIT wanted to show the effect that training data has on the output of machine learning algorithms. They took a generic AI model for image captioning and trained it on images taken from some of the darker subreddits.
Then they took images from a Rorschach test and showed them both to the algorithm trained on the darker images and to the same algorithm trained on a regular image set. The difference in worldview between the two trained models was evident: the regular model produced neutral captions of everyday scenes, while the model trained on the dark data, dubbed Norman, saw death and violence in the same inkblots.
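The original Norman code isn’t public, but the mechanism is easy to reproduce in miniature. Below is a hypothetical, self-contained sketch: the same trivial bigram “caption” model trained twice, once on a harmless toy corpus and once on a darker one. The algorithm is identical; only the data differs, and so does the output.

```python
import random
from collections import defaultdict

def train(corpus):
    """Build a bigram table: word -> list of words that followed it."""
    table = defaultdict(list)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            table[a].append(b)
    return table

def caption(table, seed, length=6, rng=None):
    """Generate a 'caption' by walking the bigram table from a seed word."""
    rng = rng or random.Random(0)
    words = [seed]
    for _ in range(length - 1):
        followers = table.get(words[-1])
        if not followers:
            break
        words.append(rng.choice(followers))
    return " ".join(words)

# Two invented corpora standing in for "regular" vs. "dark" training sets.
regular = [
    "a bird sitting on a branch",
    "a group of people flying kites",
    "a vase of flowers on a table",
]
dark = [
    "a man falling from a building",
    "a group of people near a burning car",
    "a shadow of a man in a dark alley",
]

# Same training code, same generation code, different data:
print(caption(train(regular), "a"))  # e.g. "a vase of flowers on a ..."
print(caption(train(dark), "a"))     # e.g. "a man falling from a building"
```

The bias isn’t written anywhere in the code; it arrives with the data, which is exactly why it is so easy to miss.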
What can we do about developer ethics?
Rotem notes, “There’s a tendency to think that we’re better off using algorithms to make decisions because they are less biased than people. But the reality is that these algorithms can often carry biases that were buried in the data and that we don’t even notice.” So what can we do about it?
- Keep asking questions: “Take the time to think about what we do, what the companies we work in do, and just don’t take things for granted. We’re lucky because we’re still in a position where technical talent is scarce, and therefore we have some level of influence over the direction of the companies and the industry. And we’re starting to see cases where employees actually take a stand and affect what is happening in the workplace.”
- We can voice our opinion if we identify foul play. Whistleblowing has a long and respected history in tech.
- We can decide where we work and who we work for. As Rotem suggests: “So we can pick companies that really are trying to do less evil, or at least just don’t work for the obviously evil ones.”
- We can decide where we invest our resources and drive the industry in that direction.