– People know when to break the rules, but machines don’t

What is machine ethics? How can we teach a robot the difference between right and wrong? Associate Professor Marija Slavkovik explains this complicated concept, and introduces us to her upcoming seminar on the topic.
Published: 3 May 2019

Marija Slavkovik is an associate professor at the University of Bergen in Norway. Her area of research is Artificial Intelligence (AI), with a focus on collective reasoning. Slavkovik is active in the AI subdisciplines of multi-agent systems, machine ethics and computational social choice.

On Monday April 29th, Slavkovik held a short talk on machine ethics as part of the extensive program at this year’s robotics-themed Christie Conference in Bergen. The conference is one of Norway’s most important meeting places for academia and business, culture and society.

On Wednesday May 8th, she will be holding her own AI Agora seminar on machine ethics at Media City Bergen. Earlier in 2019, she also gained some international attention after being quoted in an article about machine ethics in the British newspaper the Daily Mail.

I went to her office at the Department of Information Science and Media Studies to interview her about machine ethics.

Two Types of Robots

The first thing I wanted to know was: Why do we need machine ethics? I asked Associate Professor Slavkovik to explain the basics.

Sitting in her high-backed comfy chair, Slavkovik immediately started giving me the historical background. “Up until recently,” Slavkovik began, “we had two types of systems. We had systems that had great power to actually crush your skull or, you know, damage you in some way.”

Slavkovik explained that these systems were typically industrial robots, which operated in a restricted environment. The environment in which they operate is called a working envelope, which is segregated from normal space. Only people who are professionally trained to operate this machinery are allowed inside the working envelope.

“In contrast to this, you had your domestic robot, which was a toy-level kind of system,” Slavkovik went on, giving newer examples such as the Roomba or the Pepper robot. These robots share the same living spaces as humans, but they did not have the capacity to do any physical damage. As they were not connected to the internet, they couldn’t do any other type of damage, either. So these robots share the human environment, but they have no power.

 

"What changed is that we wanted to have capable, powerful machines that share the same space as us."

- Marija Slavkovik

 

Changing Uses and New Limits

It used to be that AI systems were built by professionals, to be used by professionals to solve a very specific problem, such as making train schedules. The professional would be able to analyze the situation and the context of any problems, and anything that could go wrong would be engineered away.

“Whereas now we have, in contrast, programs that anybody can download from the internet, and then they can be used in any context you want,” Slavkovik explained. “So the people who write software, for example, they don’t necessarily know how this software is going to be used.”

“What changed”, said Slavkovik, “is that we wanted to have capable, powerful machines that share the same space as us.”

Now, with the internet, there is a lot more connectivity. It used to be that your computer was your own device, and you were the only one manipulating it, while now, your browser works for somebody else. Much of your data goes to the cloud.

So the environment in which we use technology has changed in general, not just for AI. Things have changed across the board, such as the amount of information we create. And the way this information changes hands is no longer trackable.

“All this, in general, says that we have to somehow impose limits to the influence and impact that this technology can have on our society and our personal lives,” explained Slavkovik.

Because we now want powerful machines in our living spaces, Slavkovik concluded, we have to either train everybody, or impose rules that make sure the machine doesn’t do anything bad.

People Know When to Break the Rules, But Machines Don’t

In your opinion, in which specific situations would it be most necessary for machines to have the ability to make ethical considerations? Can you give some examples?

“When you replace a person, or when you replace part of what a person does, with some kind of a software, you have to make sure that you take in the full impact of the activity that you are replacing,” Slavkovik answered. “And sometimes we are very much blinded to what that is, because we take a lot of things for granted.”

“Every time you replace a personal activity with a machine activity, you should do some sort of environmental analysis,” she added.

Slavkovik used the example of Amazon giving the HR task of reading and evaluating resumes to an AI, which ended up recommending only men for the company to hire.

A person can adapt to the environment and to new values, but a machine can only make decisions based on the data it already possesses, meaning the past. That data is inadequate. A person deciding whom to hire, however, doesn’t make a decision based solely on past experiences.
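As a purely illustrative sketch, and not Amazon’s actual system, consider a recommender that learns only from past hiring decisions; it will reproduce whatever bias those decisions contain. The data, feature and scoring function below are invented for illustration.

```python
# A toy illustration of the point above: a system that learns purely from
# past hiring data reproduces whatever bias is in that data.
# This is NOT Amazon's system; the records and numbers are made up.

past_hires = [
    {"gender": "m", "hired": 1}, {"gender": "m", "hired": 1},
    {"gender": "m", "hired": 1}, {"gender": "f", "hired": 0},
]

def learned_score(candidate_gender: str) -> float:
    """'Learn' the historical hiring rate per gender and use it as the score."""
    same = [r for r in past_hires if r["gender"] == candidate_gender]
    return sum(r["hired"] for r in same) / len(same) if same else 0.0

print(learned_score("m"))  # 1.0 -> always recommended
print(learned_score("f"))  # 0.0 -> never recommended, because the past data says so
```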

Another factor is that “…when you talk to a person, they know when to break the rules. A lot of activities that people do have to do with breaking the rules.” A person interviewing you for a job might, for example, break the rules by deciding you need to know the real reason why you’re not getting hired; there are other situations, too, where it might be appropriate to bend the rules.

“Our society is built heavily upon knowing when the rules can be bent,” said Slavkovik, and gave a warning: “Whenever you are replacing a process where part of the process is to sometimes bend the rules, that’s something you have to be careful about.”

 

"When you replace a person, or when you replace part of what a person does, with some kind of a software, you have to make sure that you take in the full impact of the activity that you are replacing."

- Marija Slavkovik

 

Trading Personal Data for Services

Who, in your opinion, should and shouldn’t decide what is right and wrong for a machine?

“Everybody who will be impacted by the decisions should be asked what the behavior of the system should be like,” Slavkovik replied.

We take care of the customers all the time when we’re designing devices such as toasters and so on, she reasoned, so we should also take care of the customers when we’re designing the behavior of systems.

Slavkovik told me, emphasizing that she was speaking more as an end user than as a researcher, that she has the feeling that we’re eroding the responsibility towards the customer in the software industry. When you buy a device like a toaster, you know who’s made it, but when you download an app, you don’t necessarily know who made it. Especially when it’s a free app, with no payment received from the user, the whole chain of responsibility breaks down.

Slavkovik feels that there’s a lack of understanding about the fact that what we use for “free” is not really free. We trade our personal data for a service, and people don’t think about how intrusive info gathering is. At the same time, she thinks that things are changing, and that people are becoming more aware.

As for who shouldn’t be allowed to decide what’s right and wrong for a machine, Slavkovik replied that it’s not so much a question of who shouldn’t decide, but that as a manufacturer or a person who develops things, you should probably try to limit the abuse possibilities. “And you should also be aware of what you are exporting, and which values you are hardcoding into your software.”

What would you say are abuse possibilities? Any examples?

“Well, think of an exoskeleton, right? All of a sudden it gives you more power than you normally have. You shouldn’t really use it to beat people up— this is like a caricature example, but that’s the idea.”

Another example she gave was that of a self-driving car designed to slow down every time children are detected, and the children figuring out when and why it slows down, and turning it into a game.

MACHINE ETHICS: Associate Professor Marija Slavkovik explaining why we need machine ethics in her office at the Department of Information Science and Media Studies, at the University of Bergen. Photo: Ingvild Abildgaard Jansen.

Data Mining and Personal Choices

When it comes to personal, private information and the privacy of end users, do you have any comments about the challenges? Because a lot of these machines, in order to do their jobs, have to be fed so much information. So which information is it okay to give to the machine, and what are the possible risks?

Slavkovik explained that it’s not really about black-and-white rules about which information should or should not be given, although there are certain things for which we can make such rules. For example, in advertising, you shouldn’t be able to target vulnerable groups of people.

Children, teenagers with low self-esteem who are feeling isolated, and people who are in a bad marriage, for example, are vulnerable in the sense that they would not necessarily be able to make decisions soberly or sensibly. Depressed people, for instance, can be identified by analyzing their posts on social media.

Slavkovik then recommended an article by Zeynep Tufekci called “Think You’re Discreet Online? Think Again”, which deals with the subject of personal data, online behavior and targeted advertising.

If we have to trade data for services, Slavkovik argued, then we should be able to choose which data we give out. It’s up to the individual what kind of information they’re willing to give out, and what they’re comfortable doing. It’s very personal. Just like some people might be more comfortable being helped by a health care robot while some might prefer a human, some people don’t want their devices to know where they are during the day, while others are fine with it.

Her point is that there’s a lack of choice. People should be able to have more control over what information they want to give out, and what it should be used for. “We should have the opportunity for feedback, I think that’s what’s missing,” said Slavkovik.

“It’s a case of making technology that defends us against the technology that we have already made,” Slavkovik said of finding a solution for bringing personal choice into the process of dealing with the data-mining “free” services.
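Speaking hypothetically, such choice could look something like a per-category consent setting that a service has to check before collecting anything, and that the user can inspect and change. The categories and the small API below are invented for illustration, not a description of any existing system.

```python
# A hypothetical sketch of user-controlled data sharing: the service must
# check the user's per-category consent before collecting anything.
# Category names and the API are invented for illustration.

consent = {
    "location": False,        # user opted out of location tracking
    "purchase_history": True, # user is fine with this
    "social_media_posts": False,
}

def collect(category: str, value: str, store: list) -> bool:
    """Collect a piece of data only if the user has opted in to that category."""
    if consent.get(category, False):
        store.append((category, value))
        return True
    return False

collected = []
collect("location", "60.39N,5.32E", collected)     # blocked: user opted out
collect("purchase_history", "toaster", collected)  # allowed
print(collected)  # [('purchase_history', 'toaster')]
```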

 

"If we have to trade data for services, then we should be able to choose which data we give out."

- Marija Slavkovik

 

A Machine That Decides for Itself

At the end of your Christie Conference talk, you seemed to conclude that the ethical decision making of robots should still be a human endeavor, even though we do it through robots. The Daily Mail article on machine ethics mentioned that the House of Lords Artificial Intelligence Committee in the UK was against making a robot a legal person, responsible for making decisions. In your talk, you seemed to be heading in the same direction?

“Well, I’m just saying we cannot do better right now,” Slavkovik clarified. “There are people investigating how and when a robot should say no, how to design an off button– there is research going on, but we do not know how to build a machine that decides for itself.”

What Marija Slavkovik mentioned on this topic in the Christie Conference talk was the concept of an ethical governor, a program which tells the robot what to do. You have to know, however, the context in which the robot will be used, Slavkovik told me during the interview. If the robot is used outside that context, the ethical programming might not work or be appropriate, and there might be negative consequences.
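To make the idea concrete, here is a minimal, hypothetical sketch of an ethical governor: a filter that sits between a robot’s planner and its actuators and only approves actions that are explicitly permitted in the current context. The contexts, rules and action names are invented for illustration and are not taken from Slavkovik’s work.

```python
# A minimal, hypothetical sketch of an "ethical governor": a filter between a
# robot's planner and its actuators. Contexts, rules and actions are invented.

# Actions permitted per context. Outside a known context the governor refuses
# everything, reflecting the point that ethical programming written for one
# context may not be appropriate in another.
PERMITTED_ACTIONS = {
    "hospital_ward": {"deliver_medication", "call_nurse", "navigate_slowly"},
    "warehouse": {"lift_pallet", "navigate_fast", "deliver_medication"},
}

def ethical_governor(context: str, proposed_action: str) -> bool:
    """Return True only if the action is explicitly permitted in this context."""
    allowed = PERMITTED_ACTIONS.get(context)
    if allowed is None:
        # Unknown context: the robot is being used outside the environment it
        # was designed for, so the governor blocks the action by default.
        return False
    return proposed_action in allowed

# The planner proposes actions; the governor decides whether they may run.
for action in ["navigate_fast", "deliver_medication"]:
    if ethical_governor("hospital_ward", action):
        print(f"Executing: {action}")
    else:
        print(f"Blocked by governor: {action}")
```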

Just as with simpler devices, like a toaster, there will be limits to how and where you can use the robot. Likewise, you can’t have a robot that can make decisions about things it’s never encountered before, Slavkovik explained.

 

"There is research going on, but we do not know how to build a machine that decides for itself."

- Marija Slavkovik

 

Machine Ethics in Hawaii

In January of 2019, Slavkovik attended the ACM (Association for Computing Machinery) conference on Artificial Intelligence, Ethics and Society in Hawaii. Her presentation was based on a paper called Building Jiminy Cricket: An Architecture for Moral Agreements Among Stakeholders (2018). The paper was co-written by Slavkovik herself, Beishui Liao from Zhejiang University and Leendert van der Torre from the University of Luxembourg. Slavkovik is currently working on a revised, updated version of this paper.

“An autonomous system is constructed by a manufacturer, operates in a society subject to norms and laws, and is interacting with end-users,” the paper explains, and addresses the challenge of “how the moral values and views of all stakeholders can be integrated and reflected in the moral behavior of an autonomous system.” The paper proposes “an artificial moral agent architecture that uses techniques from normative systems and formal argumentation to reach moral agreements among stakeholders”.

The Jiminy Advisor, the artificial moral agent architecture proposed in Slavkovik’s paper, is named after the character Jiminy Cricket, who acted as the wooden boy Pinocchio’s moral conscience in Disney’s 1940 version of the story.
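As a rough illustration only, and not the formal argumentation machinery of the paper, the stakeholder-agreement idea can be sketched as an advisor that collects the norms of manufacturer, society and user about a proposed action and flags conflicts that would need to be resolved before the system acts. All stakeholder names, norms and actions below are invented examples.

```python
# A rough, illustrative sketch of the stakeholder-agreement idea. This is NOT
# the formal argumentation machinery of the Jiminy paper; all norms, actions
# and stakeholders are invented examples.

from dataclasses import dataclass

@dataclass
class Norm:
    stakeholder: str
    action: str
    verdict: str  # "permit" or "forbid"

def jiminy_style_advice(action: str, norms: list) -> str:
    """Return a single verdict if the relevant norms agree, else flag a conflict."""
    verdicts = {n.verdict for n in norms if n.action == action}
    if len(verdicts) == 1:
        return verdicts.pop()
    if not verdicts:
        return "no applicable norm"
    # Conflicting norms: in the real architecture this is where argumentation
    # between stakeholders would be used to reach a moral agreement.
    return "conflict: needs resolution"

norms = [
    Norm("manufacturer", "report_owner_to_police", "forbid"),
    Norm("society", "report_owner_to_police", "permit"),
    Norm("user", "report_owner_to_police", "forbid"),
]
print(jiminy_style_advice("report_owner_to_police", norms))  # -> conflict: needs resolution
```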

A ROBOT’S VOICE OF MORALITY: Jiminy Cricket acted as the wooden boy Pinocchio’s moral conscience in Disney’s 1940 version of the story. Illustration: Wikimedia Commons.

 

Machine Ethics in the Daily Mail

When I asked Slavkovik how she ended up in the Daily Mail, she told me it all started with the conference in Hawaii.

Slavkovik explained that somebody from the New Scientist saw her presentation at the conference in Hawaii, and wrote an article called “AIs could debate whether a smart assistant should snitch on you”.

She ended up in the Daily Mail because the Daily Mail discovered the New Scientist article and wrote its own piece in response, with the lengthy title “Alexa, call the police! Smart assistants should come with a ‘moral AI’ to decide whether to report their owners for breaking the law, experts say”.

The Daily Mail article was subsequently referenced and spread by many other news outlets. I was not surprised by how quickly it spread, not only because of the dramatic title and dramatic representation of the topic, but because, as Slavkovik commented on the topic of AI during our interview: “Everybody’s talking about AI, and it’s becoming a little bit like, oh, it’s the thing to talk about, you know? It’s like it’s modern somehow.”

These days, AI is getting a lot of attention in general.

UIB IN THE DAILY MAIL: A screenshot from the Daily Mail article “Alexa, call the police! Smart assistants should come with a ‘moral AI’ to decide whether to report their owners for breaking the law, experts say”.

How do you feel about the Daily Mail article?

“I find it remarkably un-clickbaity,” Slavkovik said, even though she thinks the research from the original academic paper was misinterpreted by the Daily Mail.

The Daily Mail article was based on the New Scientist article rather than on the paper from the presentation in Hawaii. And since articles from other news outlets were in turn based on the Daily Mail article, the coverage ended up drifting further and further away from the original paper.

Slavkovik explained to me that, contrary to how it seems in the Daily Mail article, she and her colleagues were not saying that smart assistants like Amazon’s cloud-based voice service Alexa should call the police and report their owners for breaking the law. Slavkovik and her colleagues don’t work for a company; they do theoretical research, and were just using Alexa as an example.

The example originated from Slavkovik’s colleague Louise A. Dennis, a lecturer in the Autonomy and Verification Laboratory at the University of Liverpool. Dennis has done research in which she asked teenagers (among other questions) whether they thought smart assistants should call the police if they detected marijuana smoke in a teenager’s bedroom. The Daily Mail article focused on this example rather than on the approach the researchers were actually proposing, which Slavkovik said didn’t surprise her.

Slavkovik also thinks the article gave the impression that they have a built system, rather than theoretical work. “Sometimes I think it’s because people don’t really appreciate foundational work,” Slavkovik remarked. “They don’t know what to think of it, or what role it plays. We don’t solve problems, we just find problems.”

 

"There are no killer robots, or robots stealing your job or anything."

- Marija Slavkovik

 

The AI Agora Seminar on Machine Ethics

Your lecture at the Christie Conference was only about 15 minutes long. What’s the difference between the Christie Conference lecture and the AI Agora lecture, apart from the length? What will you be covering? What can we expect?

Marija Slavkovik told me that during her AI Agora seminar, she will introduce the concepts of ethical use of AI, explainable AI and machine ethics, and explain the differences between the three.

Slavkovik will also explain what they do within the field of machine ethics, and what the motivation was for the paper from the Hawaii conference.

Why should people come to your AI Agora seminar?

Slavkovik wants to show the breadth of AI: how many different things are now called AI.

“It’s very difficult for people to invest time to understand something. And so, what I’m hoping to do, is save people some time to understand what this whole noise about AI and ethics is,” said Slavkovik.

“And also, you know, give you an overview of the type of work that we do there. And the specific things— when you look at the specific things, everything becomes very mundane. There are no killer robots, or robots stealing your job or anything. Everything becomes very graspable and real. So that’s what I would like to do, to save people some time, and help them understand.”

Slavkovik’s AI Agora seminar will be held on Wednesday May 8th at 14:15 – 15:30, in Seminar Room 20 (the “Forskningslab”) at Media City Bergen.
