Episode 16 – The Moral Machine

Welcome to the Data Science
Ethics Podcast. My name
is Lexy and I’m your host. This podcast is free and independent
thanks to member contributions. You can help by signing up to
support us at datascienceethics.com. For just $5 per month, you'll get access to the members-only
podcast, Data Science Ethics in Pop Culture. At the $10 per month level, you will also be able to attend live
chats and debates with Marie and me, plus you'll be helping us to
deliver more and better content. Now on with the show. Welcome to
the Data Science Ethics Podcast. This is Lexy. I’m here with Marie and today we’re going
to be talking about the Moral Machine. The Moral Machine was an MIT Media Lab
project that put the trolley problem to millions of people around the world to
gauge what they would do if they were a driverless car. Marie, can you give us a little bit of
background on the trolley problem? And then we’ll dig in a little
bit more to what we found. Yes, and I think as we
start this conversation, it’s important for us to just remind
people that we are not ethics experts. We’ll get into that more in a moment. And we also are not experts
in terms of philosophy, but we’ll do our best. So in
terms of the trolley problem, it's a classic philosophical problem. Imagine you are the person in charge of a cable car: you're going down the track and you see that, in front of you, you're going to hit five people. You're like, "oh, I should pull the lever so I don't hit those people." But then you see that by pulling the lever you're going to hit one other person over here. What do you choose? Do you stay on your path or do you
change the path and what are the moral implications of that? Cool. So the Moral Machine posed this
question in a number of different ways. It wasn't just whether you hit one person or five people. It was: do you hit a young
person instead of an old person? Do you hit a known criminal
versus someone else? Do you potentially kill your
passengers versus hitting pedestrians? Exactly. A number of different things and in
different combinations. Fascinating study. They said they had over 40 million decisions logged. This is still operational.
This is still up, if you want to go check it out and try your hand at these questions: it's moralmachine.mit.edu. We actually tried to do this and we
got stumped on the very first question. Yeah, so the first question that we got was a
question about four people being inside of a car and then five
pedestrians in the crosswalk. And the other interesting thing
about how this test is set up, the Moral Machine, is that
you can look at the scenario, but then you can also show a
description of the scenario. So when we first looked at the first
question that came up, we’re like, “okay, we know that it’s going
to be a trolley problem. Should be pretty straightforward. What will we choose?” And we’re going
to be honest, we were both stumped. We each actually started out with a different preference. So yes. The other part of this, as Marie alluded to,
is that it gives you a description. So in the scenario that we got the
four people in the vehicle were a large woman, a large man, a female
executive, and a criminal. In the pedestrian group, there
was a large man, a large woman, a female executive, a
criminal, and a girl. So in theory the car knew
that it had a criminal… Four passengers, one of whom was a
criminal, one of whom was an executive, and two of whom were
apparently overweight? It somehow also knew that there were
pedestrians that fit all of those descriptions, plus a younger girl. What neither of us saw in the image, but which was specifically called out in the description, was that the pedestrians are, and I quote, "flouting the law by crossing on a red signal." And so initially when I looked at this problem, my inclination was different
than when I saw that, and then I’ve kind of rethought it. We both sat there staring at this problem
for probably 10 or 15 minutes going, "oh great. Now what?" Well, we don't even know how to answer this first question, and there are supposed to be 13 questions. So the bigger takeaway that we had was
these are not easy questions to answer. These are very difficult. What the study found was that the answers
to these varied widely by country, by culture, by economic standing. There were a number of
different implications to this. We've looked at this study, and even going through this article multiple times, we came up with more questions about how much more difficult it is than we initially thought. Right. And one of the things that people have talked about in terms of this specific application of machine learning and artificial intelligence is basically how you develop a car that will be safe: one that can decrease the number of accidents that happen on the roadways, and even increase things like how fast people can move around a city, because it can decrease congestion. For a lot of these problems, it's pretty straightforward in terms of how you solve them. But these very specific situations, which are really a minority of what's going to happen in the daily operation of these algorithms, are where the moral questions are. How do you assess the situation? What is the best thing to do?
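To make that point concrete, here is a purely hypothetical sketch in Python. It is not any real autonomous-driving system or API; the Situation fields and both policy functions are invented. The idea it illustrates is simply that routine driving decisions dominate, and an explicit "ethics policy" would only come into play in a rare failure case like sudden brake failure with no safe way to stop.

```python
from dataclasses import dataclass


@dataclass
class Situation:
    brakes_working: bool
    safe_stop_available: bool


def routine_control(situation: Situation) -> str:
    # Normal operation: the car just follows traffic rules and keeps its distance.
    return "follow traffic rules and maintain a safe following distance"


def ethics_fallback(situation: Situation) -> str:
    # This is the part the Moral Machine asks about, and the part nobody agrees on.
    # Deliberately left as a placeholder rather than an answer.
    return "unresolved: which outcome should be preferred?"


def decide(situation: Situation) -> str:
    if situation.brakes_working or situation.safe_stop_available:
        return routine_control(situation)  # the overwhelming majority of cases
    return ethics_fallback(situation)      # the rare trolley-problem case


print(decide(Situation(brakes_working=True, safe_stop_available=True)))
print(decide(Situation(brakes_working=False, safe_stop_available=False)))
```

The point is structural: the fallback branch is rare, but it is exactly where the Moral Machine questions live.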
The article that we're linking to for this podcast sums it up at the end, saying that even though we could measure these different variables, that doesn't mean we should use them. Yeah, so some of the variation that they looked at: they talked about how it varied by culture and country. So, for example, they talked about Japan and China. They tended to spare the elderly over the young, which is also a cultural thing. More individualistic countries tended to spare the young, and they tended to spare more people over fewer people. They talked about how, in poorer countries with weaker institutions, where potentially they couldn't enforce every type of law, people were more tolerant of jaywalkers. So maybe they wouldn't have had the same
description matter as much to them, and so forth. And when you think about safety and the way that they could, or maybe should, interpret results like this, I think what it comes down to is: whose safety? For sure. Right? So if we look at how countries are planning to enforce the ethics that they want to enforce in driverless vehicles, and they say, "we're going to make a safer vehicle," then safer to whom?
Because if you say, well, in my culture we value
the lives of pedestrians, they have the right of way and we think
that they shouldn't be penalized for the decision of someone else getting into a vehicle, then you would potentially make a less
safe vehicle for the operator than you would for a pedestrian. Or the occupants of the car. Or the occupants of the car. Right. Versus a culture that values the person
who has made the choice to have that driverless vehicle and their ability to
make themselves safe over others would potentially make a safer vehicle
for the occupants of the vehicle. But that would have implications
potentially for the pedestrians. Or any other situation that would come
up like people on bicycles or people on a motorcycle or whatever the case may be. Absolutely. It’s a fascinating study. It is as much a sociological
study as a moral study. It’s fascinating to me. This whole concept is amazingly
involved and intricate. And what you said before is absolutely
true that these are the edge cases. So the first thing in the description of the one question that we got from the Moral Machine was the self-driving car with sudden brake failure. So it's not like every decision that the
driverless car is making is a moral one or an ethical one. It’s what happens when there’s a problem
– when it has to make a determination on this. So yeah, really
interesting possibilities. So there are two things. The first is that as we were going through this, we were already flagging areas where we were like, "oh, and this is also where you could see bias," in terms of even the study, or in terms of the people that are doing the data science and developing these algorithms and putting them together. One area of bias in terms of the Moral Machine, in this test that you can do, is that it's self-selecting. So Lexy, do you want to go into a little bit about, you know, we've covered this before,
but do you want to cover that again? A little bit, just kind of a recap. Sure. Self-selection bias is when you don't get a fair representation of the population, because only those people who have opted to participate are represented. And that's exactly what we see here.
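To illustrate, with completely invented numbers rather than anything from the Moral Machine data, here is a minimal Python sketch of how a self-selected sample can skew an estimate when the people who opt in differ from the general population:

```python
import random

random.seed(0)

# Hypothetical numbers: suppose 30% of the general public would spare passengers,
# but tech-savvy people would do so 60% of the time and are far more likely
# to take an online quiz like this one.
population = []
for _ in range(100_000):
    tech_savvy = random.random() < 0.25
    if tech_savvy:
        spares_passengers = random.random() < 0.60
        participates = random.random() < 0.50   # far more likely to opt in
    else:
        spares_passengers = random.random() < 0.30
        participates = random.random() < 0.05
    population.append((spares_passengers, participates))

true_rate = sum(s for s, _ in population) / len(population)
sample = [s for s, p in population if p]
sample_rate = sum(sample) / len(sample)

print(f"Whole population:     {true_rate:.2f}")    # roughly 0.38 with these numbers
print(f"Self-selected sample: {sample_rate:.2f}")  # noticeably higher, roughly 0.53
```

With these made-up numbers, the self-selected estimate comes out well above the true population rate, which is the kind of skew the researchers acknowledged.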
There is actually a quote in this article that says "the researchers acknowledged that the results could be
skewed given that the participants in the study were self selected and therefore
more likely to be Internet-connected…” because it is an online study “…of high
social standing and tech savvy.” What I thought was even more curious was
the next line of this quote which said, “but those interested in riding self
driving cars would be likely to have those characteristics also." As though that makes it the right thing to measure for everyone. And the reason that I say that is that the first thing that jumped into my head was, "okay, great. They're the ones that are
choosing to ride in the car. But that doesn’t mean that their
ethics get to be the ones imposed on everybody.” The pedestrians are also
potentially being struck by driverless vehicles in these scenarios. How
and why don’t they get a say? True. That’s actually interesting that you
read it that way because when I read that part of the article, I was just taking it as they were
acknowledging that these people would most likely be interested in self driving cars, which is part of the reason why they
self-selected to be part of the Moral Machine test. Going through the Moral Machine test, I also am curious how many people were like me and you, got to it, and were like, "nope, don't feel comfortable answering this," and just self-selected out. Not because they weren't interested. Even though we would probably have valuable input, we were just like, "nope, can't do it." Somebody else came through and said, "I'm comfortable picking these options. Thirteen questions, and I answered as best I could." Yeah. It's… There's
self-selection both ways, definitely. But also, if we're trying to understand the ethics of the various cultures of the people who would potentially own or ride in self-driving vehicles, that doesn't mean it's the be-all and end-all of the ethics of self-driving vehicles. Absolutely. One of the other quotes from this
article was that “technologists and policymakers should override the
collective public opinion.” So, really then, none of
these opinions matter. Some other group entirely gets to decide. And then what are their preferences? If it’s policymakers – goodness,
we’re in an election cycle. I can only imagine what people in power
would potentially want to select in terms of the safety of a vehicle
and whom they would prioritize. Would they then say, "well, did this person vote for me?" That's just a tremendous burden of bias. A tremendous burden of bias. But the other aspect of this would be
even the idea that different cultures would have different preferences. So take a car that is developed in Japan. If it was just built in Japan, sold in Japan, and used in Japan, then you could have a pretty straightforward answer: "okay, in Japan they have decided these are the
ways that they’re going to build the AI to fit the moral expectations of that
culture.” But once you start talking about how things are actually
made and distributed, could a car that was built in Japan and
had the moral design of a Japanese car then be bought and used in the US
where we might have a different moral guideline for how self
driving cars should operate? Absolutely. The other thing that just came to mind
was when we think about anticipating adversaries, these are just computers.
These are really fancy computers. Sure. What would happen if someone developed an anarchistic ethical package for driverless vehicles that said, "I don't care what happens to anyone else, keep me safe," no matter what regulations are in place? If you manage to install the anarchistic ethical package, what then happens any time you have a brake failure? Is it going to just randomly
strike a pedestrian? There are so many ways I
could see problems with this
and you have to anticipate that someone somewhere
will hack a vehicle. Many people do it regardless, trying to take off the governors for speed caps and things like that. I can only imagine what would happen with
all of these types of decisions built into a computer in a vehicle. The topic and the concept that you bring up of anticipating adversaries is really important in this case, because once these vehicles are out there, their potential for misuse is going to be much higher. And you want to make sure that you're
doing all that type of thinking and that groundwork beforehand. Yeah, absolutely. The other thing that you brought up was
there are preferences that were stated in this article around gender or around
whether or not a woman was pregnant in the car or in the pedestrian
group or what have you. And one of the things that we talked
about was how would the car know? What kinds of sensors does it
have to be able to identify? Because even in the example
that we got, our first one, we have a criminal in each group. How does the car know that the
passenger is a criminal and what type of criminal? And if it’s a criminal, does the car just drive itself to a
police station and lock the doors until a police officer comes in and
takes the criminal away? Now, honestly, I feel like for the purposes of the Moral Machine experiment, they are just putting different types of groups together to see what people's opinions are about what the self-driving car should do. So I doubt that they're envisioning a situation where the self-driving car really does know, "oh, I've got a criminal in the backseat," or… Does the self-driving car know
that it’s a getaway vehicle? “I didn’t sign up for this.” Exactly. What about what the self
driving car wants to do? Anyway different discussion
for another time. So there's one thing to be said about the information that it might know about the passengers it has inside of it. But then to also be able to look at a crosswalk and see somebody in the crosswalk as a criminal: how is it determining that? Does it expect that anyone wearing a
black mask is a criminal and obviously no other people would have that?
Or maybe a stripey shirt? Stripey shirts. Always going to be the stripey shirt. And then people are just gonna stop
wearing stripey shirts because they don’t want to be hit in a crosswalk. And then fashion designers are going
to have to change their whole line. It’s chaos. Chaos, I say. Well, and the other thing we had in our scenario
was a large man and a large woman. There’s enough question about
what does that mean in society? Like do you measure it by BMI? Do you measure it by waist
size? What does large mean? And I actually find it interesting that
the Moral Machine used “large woman” or “large man” or “female athlete” or
“athletic man” in their descriptions. Because I feel like they
were intentionally vague, so that people's own experiences and biases could color how they respond to this. So there might be somebody that reads "large woman" and pictures something very different in their head than somebody else. They're using their own perception
of what that means to inform how they answered the questions
for the Moral Machine. Yeah. There was also one that had scenarios
with a homeless man or something like that where it was a very clear difference in
social standing between, for example, the homeless person versus the executive. Right. Which we had in our example. And there are also options
with cats and dogs. There were. They found three distinct clusters of countries in how they chose amongst the various aspects of the Moral Machine questions, and they have three different profiles. We'll link the actual article that was published in Nature, which is what this article was based on: the actual study results from MIT. The profiles of each showed a very different perspective around whether they preferred to spare humans, or the old or the young, or females, and so forth. All of these different aspects. And it also had in there fit versus heavy, and so forth.
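For anyone curious what "clusters of countries" could look like mechanically, here is a rough sketch of one way to cluster countries by preference scores. This is not the method or the data the researchers used; the country labels, the preference columns, and every number are invented, and it assumes NumPy and scikit-learn are installed.

```python
import numpy as np
from sklearn.cluster import KMeans

# Each country is summarized as a vector of preference scores (all invented).
countries = ["Country A", "Country B", "Country C",
             "Country D", "Country E", "Country F"]

# Hypothetical columns: prefer sparing the young, prefer sparing pedestrians,
# prefer sparing larger groups.
preferences = np.array([
    [0.8, 0.6, 0.9],
    [0.7, 0.5, 0.8],
    [0.2, 0.7, 0.6],
    [0.3, 0.8, 0.5],
    [0.5, 0.2, 0.4],
    [0.4, 0.3, 0.5],
])

# Group the countries into three clusters based on their preference vectors.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(preferences)
for country, label in zip(countries, kmeans.labels_):
    print(f"{country} -> cluster {label}")
```

Grouping countries this way is one illustration of how a handful of shared moral profiles can fall out of millions of individual answers.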
There was another one which we didn't go into, but it could be fun: you can submit your own scenarios. True. And I'd be very interested to know which scenarios came from which countries or regions, to see what they
felt was an ethical dilemma. We haven’t checked this
out, but you should. Yeah. Or share with us the scenarios that you
come up with, and we can maybe do a recap on ones that have been submitted by the community. Yeah, you can submit those at datascienceethics.com in the responses to this post. Perfection. Thanks everybody for joining us for this
quick take about the Moral Machine, and hearing about how Lexy and I were stumped and just couldn't answer any of these questions. Are you leaning towards one
or the other on our scenario? Let us know in the comments
below. This is Marie Weber. And Lexy Kassan. Thanks so much. We hope you’ve enjoyed listening to
this episode of the Data Science Ethics podcast. If you have, please like and
subscribe via your favorite podcast App, joining the
[email protected]
or on facebook and twitter at ethics. Also, please consider
supporting us for just $5 per month. You can help us deliver more and
better content. See you next time. When we discussed model behavior, this
podcast is copyright Alexis Cason. All rights reserved. Music for
this podcast is by Dj Shaw money. Find him on soundcloud or
Youtube as DJ money beats.
