Najla Said is IBM Cloud Data Science Team Manager in Italy. She takes a look at how the benefits of AI come at a cost: the use of intelligent machines not only introduces new kinds of issues, but also widens our attack surface, and is triggering huge philosophical debates. This article dives into some of the issues raised around AI ethics, but we also encourage you to take a look at the presentation below:
According to Najla, the artificial intelligence hype started in the 1960s and halted in the 1970s, a period known as the AI winter, “due to the lack of the computational power and data that are needed to build an artificial intelligence system. And then in the 1990s, we started to work again on artificial intelligence, until 2016, when we faced something we didn’t expect.”
Tay bot
In 2016 machine learning enthusiasts were shaken by Tay, an artificial intelligence chatterbot originally released by Microsoft via Twitter on March 23, 2016. Controversy followed when the bot began to post inflammatory and offensive tweets through its Twitter account, forcing Microsoft to shut down the service only 16 hours after its launch. Microsoft attributed the problem to trolls who “attacked” the service, as the bot made replies based on its interactions with people on Twitter.
According to Najla, “Discussions about regulation, and about how to manage these kinds of issues, started to grow at the decision-making level, too. AI is complex, and like every complex technology it comes with a lot of power but also a lot of potential issues. So we should be aware of these issues and understand how to be sure that we use the technology well.”
Simplification
Najla notes that not everyone can be a data scientist. Critical skills are required in mathematics, programming and statistics.
“So when you start to work on a project with artificial intelligence, you have to be sure that you have the right competencies at the table and that you work with a well-skilled team.”
Interpretation problems: Data can be misinterpreted. Data science and artificial intelligence projects usually require both technological knowledge and domain knowledge. “So we need experts who work with our data scientists, guide them through the work, and are there to check whether the results are meaningful in that domain. Domain experts should always participate in AI projects.”
Cognitive bias is a pain point of AI ethics
According to Najla, “When we take a decision, we are affected by more than 180 cognitive biases.”
Artificial intelligence systems are built by humans and trained on human data. AI systems can inherit this bias, and bias can be introduced at every step of the process. We should always check our solutions for possible biases: “We can check for bias in pre-processing on the datasets, we can check during training by creating bias-resistant algorithms, and we can keep checking regularly, but we have to do it.”
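The talk doesn’t prescribe a specific tool for this, but a first bias check at the pre-processing stage can be as simple as comparing favorable-outcome rates across groups. The sketch below is purely illustrative: the column names, groups, and data are made up, and the two metrics shown are standard fairness measures rather than anything specific to Najla’s workflow.

```python
# A minimal, illustrative pre-processing bias check: compare
# favorable-outcome rates between two groups in a labeled dataset.
# Column names, groups, and values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [1,   1,   0,   1,   1,   0,   0,   0],
})

# Favorable-outcome rate per group.
rates = df.groupby("group")["hired"].mean()
privileged, unprivileged = rates["A"], rates["B"]

# Statistical parity difference (near 0 is balanced) and
# disparate impact ratio (near 1 is balanced).
print("statistical parity difference:", unprivileged - privileged)
print("disparate impact ratio:", unprivileged / privileged)
```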
Robustness
AI systems based on deep learning or machine learning introduce new surfaces for attack. As Najla notes, “Where before an attacker could exploit a vulnerability in an application, now there are other things to attack. You can act on the training datasets, or you can act on the algorithm itself by feeding it adversarial examples in order to obtain an unwanted outcome. These are really important issues, and we have to harden our datasets and our code in order to be sure that we are not vulnerable to these kinds of attacks.”
Fortunately, help is at hand. The Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. ART provides tools that enable developers and researchers to evaluate, defend, certify and verify Machine Learning models and applications against the adversarial threats of Evasion, Poisoning, Extraction, and Inference.
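As a rough illustration of the workflow ART supports, the sketch below mounts a Fast Gradient Method evasion attack against an ordinary scikit-learn classifier and compares accuracy on clean versus perturbed inputs. The dataset, model, and eps value are illustrative choices, not recommendations from the talk.

```python
# A minimal sketch of an ART evasion attack: craft adversarial
# examples against a trained classifier and measure the damage.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

# Train an ordinary scikit-learn model on features scaled to [0, 1].
X, y = load_iris(return_X_y=True)
X = MinMaxScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = SVC(C=1.0, kernel="linear").fit(X_train, y_train)

# Wrap it so ART can compute gradients and generate adversarial inputs.
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))
attack = FastGradientMethod(estimator=classifier, eps=0.1)  # eps is illustrative
X_test_adv = attack.generate(x=X_test)

# Accuracy drops on the adversarially perturbed test set.
clean_acc = np.mean(np.argmax(classifier.predict(X_test), axis=1) == y_test)
adv_acc = np.mean(np.argmax(classifier.predict(X_test_adv), axis=1) == y_test)
print(f"clean accuracy: {clean_acc:.2f}  adversarial accuracy: {adv_acc:.2f}")
```

Hardening then follows the same pattern in reverse: ART’s defence modules can retrain or wrap the classifier so that attacks like this one become less effective.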
Explainability
The explainability of an AI system helps in testing quality and in gaining trust in its decisions. “It’s a tool in our hands for gaining trust in our solution. Some algorithms are explainable by themselves, but other algorithms, like neural networks, work as black boxes. In that situation we should create ways to explain the outcome of these algorithms, and we can do it locally, on a single outcome, or globally. Global explainability is a little more difficult than local explainability. However, it is worth doing, because it helps in understanding the quality and the possible issues of the solution.”
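The talk doesn’t name an explainability tool, but the local-versus-global distinction can be sketched with the open-source SHAP library; the model and dataset below are purely illustrative.

```python
# A hedged sketch of local vs. global explanations with SHAP.
# The model and dataset are illustrative, not from the talk.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Local explainability: per-feature contributions to one prediction.
print(dict(zip(X.columns, shap_values[0].round(2))))

# Global explainability: mean |contribution| of each feature over all data.
print(dict(zip(X.columns, np.abs(shap_values).mean(axis=0).round(2))))
```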
Value alignment
How do we make sure machines will act as we expect? Trust is a critical part of AI ethics. As Najla explains, “I want to make sure that my Artificial Intelligence makes the right decision when the decision is not only some kind of business decision but is an ethical decision.”
According to Najla, “If I have a driving licence and I make a decision while driving, I have the responsibility for that decision. When I use a self-driving car, who has the responsibility? How can I insert my ethics inside my code? This is a very tough problem, because ethics is not static: it changes with culture, it changes with time. So it’s very difficult to understand how to determine globally, equally valid ethics to add to our code.”
An example of this at work is MIT’s Moral Machine, a platform for gathering human perspectives on moral decisions made by machine intelligence, such as self-driving cars. It presents moral dilemmas in which a driverless car must decide, for example, between killing two passengers or five pedestrians. Najla shares, “It’s a very tantalising tool to use. You can see how your answers compare with the median answers of other people, so it’s quite compelling.”
Any discussion around ethics in AI will clearly require a shift that includes not only technical people but also people who work on philosophical and ethical questions every day. It will take time to solve these problems. We need multidisciplinary discussion and considered regulation: everyone’s ethics are required to reach a real answer.