Definitions

Martin Gibert (IVADO, Centre for Research on Ethics at Université de Montréal) is a research officer specializing in the ethics of artificial intelligence (AI) and big data. As part of an effort to raise awareness of current and future ethical issues surrounding artificial intelligence, he recently took part in a “Q&A” exercise to lay the foundations for further reflection.

What is meant by the ethics of artificial intelligence?

In a nutshell, it’s ethics applied to artificial intelligence systems and, more broadly, digital systems. It’s about questioning whether these systems and their uses are good or bad, just or unjust, virtuous or vicious, and so on. It’s making a moral assessment of a robot or an application: for example, if system X is rolled out, will it generate or reproduce discrimination? Will it improve people’s lives, or be detrimental to them?

And what is the ethics of algorithms?

This is a subset of the ethics of artificial intelligence (which is itself a subset of the ethics of technology) that looks at programming from a moral perspective. How do we program a “good” robot? Asking ourselves what a self-driving car should do in the case of an unavoidable accident, or how to configure a social distancing app, comes under the ethics of algorithms, while pondering whether we should even use those vehicles, or anti-pandemic apps, is the purview of the ethics of AI.

Issues

How do we decide on the best course of action in controversial cases?

By seeking consensus. Ethics researchers are experts in reasoning, and certain principles are indispensable. Obviously, though, there are more sensitive situations where principles come into conflict: moral reality is complex, and not every dilemma has a solution that everyone will agree on. Concretely, when arbitration is required (for example, if a self-driving car must absolutely “choose” the life of one person over another), you can either establish a hierarchy of principles or choose at random; a minimal sketch of both strategies follows.
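
To make those two strategies concrete, here is a minimal Python sketch. The principles, option fields and scores are all hypothetical, purely for illustration: options are narrowed down through a fixed hierarchy of principles, and the choice falls back to randomness when the principles cannot separate the remaining options.

```python
import random

# A minimal sketch (all names and criteria hypothetical) of the two
# arbitration strategies described above: work down a fixed hierarchy
# of principles, and fall back on a random choice when they can't decide.

# Each principle scores an option; a higher score is morally preferable.
PRINCIPLES = [
    lambda option: option["lives_saved"],       # 1. minimize loss of life
    lambda option: -option["injury_severity"],  # 2. then minimize injuries
]

def arbitrate(options):
    """Narrow the options with each principle in turn; break ties randomly."""
    candidates = list(options)
    for principle in PRINCIPLES:
        best = max(principle(o) for o in candidates)
        candidates = [o for o in candidates if principle(o) == best]
        if len(candidates) == 1:
            return candidates[0]
    return random.choice(candidates)  # principles exhausted: choose at random

# Usage: two outcomes tied on the first principle, decided by the second.
choice = arbitrate([
    {"name": "swerve left", "lives_saved": 1, "injury_severity": 3},
    {"name": "swerve right", "lives_saved": 1, "injury_severity": 1},
])
print(choice["name"])  # -> "swerve right"
```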

But shouldn’t we simply make sure self-driving cars don’t end up in such situations?

Ideally, yes: a self-driving car should never have to “make a choice” between two victims. We can trust that such cars will be safer and that more accidents will be prevented, but we can’t rule out the possibility of such situations arising. In that case, even if we wouldn’t be entirely satisfied with the car’s “moral behaviour,” we can still imagine that it “wouldn’t be as bad” as with a human driver, who in any case has no time to think in an accident situation.

Does that mean self-driving cars are ethical?

Far from it! But asking the question this way doesn’t make much sense. Sadly, we can’t just slap a label that says “ethical” on this or that AI system—and this is different from law, where it’s easier to determine whether something is legal or illegal. There are in fact many ethical arguments against self-driving cars: from an environmental standpoint, for example, it doesn’t seem wise to develop individual modes of transport. In short, the notion of an “ethical self-driving car” is inseparable from the context in which we find ourselves.

What types of problems do we encounter in the ethics of AI?

This is a relatively young field of research and things are evolving all the time. Categories of problems are emerging, however. For example, there are short-term issues (lack of diversity in data and industry, algorithmic biases), medium-term issues (job losses caused by automation, increasing inequality, the drift toward a surveillance society) and long-term issues (the emergence of a hostile superintelligence).

What about hijackings by ill-intentioned people? They’re possible, right?

Yes, that’s one category. If a self-driving car is vulnerable to a hacker who could commandeer it for a terrorist attack, for example, that’s obviously a problem. Likewise, we must ensure the reliability of a given system. For instance, with an anti-pandemic app designed to estimate the risk of the user becoming infected, we need to decide on the acceptable level of false positives (i.e., people identified as being at risk but who actually aren’t) and false negatives (people identified as not being at risk but who are nonetheless carriers of the disease).
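A toy Python sketch can illustrate that trade-off (the risk scores, ground-truth labels and threshold values are invented for illustration): lowering the app’s risk threshold flags more people, reducing false negatives at the cost of more false positives, while raising it does the opposite.

```python
# Toy sketch (hypothetical data) of the false-positive / false-negative
# trade-off described above for a risk-scoring anti-pandemic app.

def error_rates(risk_scores, truly_infected, threshold):
    """Return (false_positive_rate, false_negative_rate) at a given threshold."""
    fp = sum(1 for s, inf in zip(risk_scores, truly_infected)
             if s >= threshold and not inf)   # flagged, but not infected
    fn = sum(1 for s, inf in zip(risk_scores, truly_infected)
             if s < threshold and inf)        # not flagged, but infected
    negatives = sum(1 for inf in truly_infected if not inf)
    positives = sum(1 for inf in truly_infected if inf)
    return fp / negatives, fn / positives

# Invented scores from a hypothetical exposure model, with ground truth.
scores = [0.9, 0.7, 0.4, 0.2, 0.8, 0.1]
infected = [True, False, True, False, False, False]
for t in (0.3, 0.5, 0.8):
    fpr, fnr = error_rates(scores, infected, t)
    print(f"threshold={t}: false positives={fpr:.2f}, false negatives={fnr:.2f}")
```

Deciding where to set such a threshold is precisely the kind of value judgment the ethics of algorithms is concerned with: the numbers come from the model, but the acceptable balance of errors does not.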

Is all this new?

Yes and no. Generally speaking, in improving our capacity to act, AI systems create new moral responsibilities for us. These may be novel: for example, deciding what moral criteria to use to grant rights to a robot. But in many cases, there’s nothing new under the sun. Rather, AI systems lead to variations on a set of problems that already existed. Mass propaganda, for example, has been around since at least the invention of radio and the movies. The use of big data and social networks to manipulate people can be viewed as a contemporary, compounded version of the same problem.

Upcoming events

We regularly organize and support initiatives aimed at raising awareness of these issues within our community. Keep an eye on our calendar, newsletters and social media feeds to stay up to date!

View our events

Our community’s involvement

For the past two years, in partnership with the International Observatory on the Societal Impacts of AI and Digital Technology (OBVIA), we have been funding Sylvain Munger’s postdoctoral fellowship on the subject of “Power, Inequality and Discrimination in AI.” More specifically, under the supervision of Jean-François Gagné (Université de Montréal Political Science Department), Sylvain’s research topic is “Visions of the future of artificial intelligence: Power and prestige of the socio-technical imagination among Montréal-based entrepreneurs.”

Learn more

In association with the Fonds de Recherche du Québec and OBVIA, we are supporting the tech and culture podcast Humaniteq. Designed, produced and hosted by Oriane Morriet, a PhD student in the Film Studies Department at Université de Montréal, it promotes reflection on the social issues raised by new technology.

Listen to podcast episodes (in French)

We’re pleased to have been among the sponsors of the Cafés de bioéthique 2019! This series of roundtables created opportunities for exchange between experts from various disciplines and members of the public to collectively study the ethical and sociopolitical challenges posed by AI, big data and health from various angles: responsibility, law, confidentiality, privacy, transparency, accessibility, resource allocation, responsible conduct in research, etc.

Recordings of the roundtables are available on YouTube (English translations of the topics are provided here, but the content is in French):

Citizens and Connected Objects: What Happens to our Data?
Citizens and Genetic Data: For Whom and What For?
Citizens as New Actors in Public Health: What Challenges and Opportunities?