I'm going to start by asking you a very profound and important question.
How do you feel about blue cheese?
Many of us have very strong opinions one way or the other.
I'd like you to think of someone who does not share your opinion on blue cheese,
and consider whether you'd be happy to have this person
as a neighbour,
as a close friend,
or as a romantic partner.
Keep these feelings in mind.
Here's another controversial issue:
abortion.
Should it be legal or illegal?
Think about your own opinion and now
tell me if you'd be happy to have someone who does not
share your opinion on abortion
as a neighbour,
a close friend or a romantic partner.
The questions I've just asked you
are adapted from research by Linda Skitka
at the University of Illinois.
Her research has shown that there's something special about moral attitudes,
like opposition to abortion
relative to non-moral attitudes, like your feelings on blue cheese.
And as you might expect,
disagreeing on moral issues
is much more damaging to social relationships
than disagreeing on non-moral issues.
And why is it that a blue cheese lover is perfectly happy
to befriend, marry, even have kids with a blue cheese hater,
but there exist anti-abortion extremists
who think it's justifiable
to kill another human being
just because that person disagrees with them?
We have to figure this out,
because we now live in a world where extremists,
powered by their moral convictions,
can do a lot of damage.
We can start by asking how it is that we know what is right and what is wrong.
And actually, this question doesn't even make sense to a lot of us,
because people often experience moral beliefs
as if they are objective facts about the world.
We have some evidence for this
from research by Geoffrey Goodwin and colleagues;
he's now at the University of Pennsylvania.
He presented subjects with a series of statements in the following categories:
facts, like "Boston is further north than Los Angeles",
ethics, like "Opening gun fire on a crowded city street is wrong",
norms, like "Wearing pyjamas to a TED Talk is wrong",
and tastes, like "Classical music is better than rock music".
For each of these statements, subjects had to answer
yes or no to the following question:
"Does this statement have an objectively correct answer?"
And here's what they found: not surprisingly,
people felt most strongly that facts had a correct answer, while tastes did not.
But notice that the statements of ethics
looked more like facts than like tastes.
And we see this overlap between facts and values in the brain as well.
Sam Harris and colleagues scanned people's brains
while they evaluated the truthfulness
of factual statements, ethical statements
and religious statements.
They found that a brain region called the medial prefrontal cortex, shown here,
was more active when people believed the statement
to be true rather than false,
but importantly, this region did not differentiate
between the different categories of beliefs.
So mathematical beliefs, like 2+2 = 4,
showed a similar pattern of activity
to ethical beliefs, like
"It's wrong to take pleasure at another's suffering."
The upshot of all this is that we think
that there's a right answer to moral questions.
And here's the rub.
If you and I disagree
and we both can't be right,
well, clearly it's me who's right.
My facts trump your facts.
And therefore you must be stupid or unreasonable.
Of course this kind of language is all too common in politics these days.
But there's an important and dangerous difference
between disagreeing on facts
and disagreeing on moral values.
Because you see, if you think that 1+1=3,
I might think you're stupid or a little strange.
But if you and I disagree on a moral issue,
not only do I think you are stupid and unreasonable,
but also a bad person,
maybe even less than human.
Moral values are like facts on steroids.
They have really strong emotions attached to them.
And unfortunately these emotions often come
with a motivation to harm or eliminate the other side.
And this is a big problem.
Because while we readily accept
that tastes and opinions can change,
facts are facts.
You have your facts and I have my facts.
And we're both so committed to those realities
that it's senseless to expect that either of us
will ever change.
Imagine trying to convince someone who is red-green color blind
that these 2 circles are different colors.
There is nothing you can say to convince this person
to see the world the way you see it.
And the same unfortunately appears to be true
with differences in moral viewpoints.
Values seem like facts
and facts are fixed properties of reality.
So where do we go from here?
I wanted to understand how and why it is
we hold on so tightly to our moral convictions,
myself included.
And I'm a neuroscientist, so naturally
I started poking around in people's brains.
And I found out that our moral values
are a lot less stable than they appear to be.
What if I told you that a pill could change
your judgment of what's right and what's wrong?
Or what if I told you that your sense of fairness
could depend in part on what you had for breakfast this morning?
You're probably thinking that
this sounds like science fiction. Right?
Neurons in the brain use chemicals called neurotransmitters to talk to each other.
Here we have two neurons. The gap between them is called a synapse.
To transmit a message across the synapse,
one neuron must release neurotransmitters into the synapse
where they bind to receptors on the other side
and propagate the message.
Our brains produce and release these chemicals
in response to various situations.
My colleagues and I wanted to know
whether manipulating people's neurotransmitters
could change the way people respond to moral situations.
In one study, we presented our subjects
with a series of moral dilemmas like the following.
There's a trolley and it's headed out of control
towards five workers on the tracks
who will die if you do nothing.
However, you can
stop the trolley by pushing a man,
who is carrying a heavy briefcase, onto the tracks.
And he will die, but the 5 others will be saved.
The question is: is it morally acceptable
to harm this one person in order to save the others?
Now of course there's no objectively correct answer to this question,
but there are 2 schools of moral thought
that take opposing views.
The utilitarian school, rooted in the works of philosopher David Hume,
judges the merit of actions
based on the outcomes they produce.
So morally appropriate actions are those that result in the greatest good for the greatest number.
In contrast, the deontological school, grounded in the works of philosopher Immanuel Kant,
judges the actions themselves.
So there are right actions and wrong actions and outcomes are irrelevant.
In the example I just described to you,
utilitarians would say it's appropriate
to push the man onto the tracks
because more lives are saved in the end,
whereas deontologists would say it's inappropriate
because harming is just fundamentally wrong.
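To make the contrast concrete, here is a minimal sketch, not anything from the study itself, of how the two decision rules might be written down for the trolley case I just described; the function names, and the idea of reducing each school to a single rule, are my own illustration.

```python
# Toy illustration of the two schools applied to the trolley case.
# The function names and numbers are hypothetical, not from the study.

def utilitarian_judgment(lives_saved: int, lives_lost: int) -> str:
    # Judge the action by its outcome: act if it saves more lives than it costs.
    return "appropriate" if lives_saved > lives_lost else "inappropriate"

def deontological_judgment(action_harms_someone: bool) -> str:
    # Judge the action itself: actively harming a person is wrong, whatever the outcome.
    return "inappropriate" if action_harms_someone else "appropriate"

# Pushing the man: one person dies, five are saved.
print(utilitarian_judgment(lives_saved=5, lives_lost=1))   # -> appropriate
print(deontological_judgment(action_harms_someone=True))   # -> inappropriate
```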
My colleagues and I asked 30 people
to judge right or wrong in moral dilemmas
like the one I described.
And we wanted to know whether tinkering
with a specific brain chemical called serotonin
would change people's judgments of right and wrong.
We used a drug called a selective serotonin reuptake inhibitor, or SSRI,
similar to the antidepressant Prozac,
and these drugs basically work by enhancing
the actions of serotonin in the brain.
In one session, our volunteers
made judgments on moral dilemmas
while under the influence of the SSRI
and in another session they made moral judgments
while on a placebo pill.
Here's what we found: on placebo our volunteers said
it was appropriate to harm one to save many others
in about 40% of the cases that we presented to them.
And when we gave them the SSRI,
they were significantly less likely to say
it was acceptable to harm one to save many.
Now take a second to think about these results.
The debate between utilitarians and deontologists
has been raging for hundreds of years.
And we gave people a pill
and without their even knowing it,
they gave different answers to this question
of whether it's okay to harm one to save many others.
Could the difference between Hume and Kant
all boil down to a couple of chemicals in their brains?
And on a more serious note,
what are the implications of this for other ethical questions?
So taking this idea further, my colleagues and I wanted to know
whether changing serotonin levels could influence the way
people respond to being treated unfairly.
We used a game from economics
called the ultimatum game.
There are two players, a proposer and a responder.
The proposer suggests a way to split
a sum of money with the responder.
And the responder can either accept,
in which case both players are paid accordingly,
or he can reject,
in which case neither player gets any money.
Many studies have now shown that responders
will typically reject offers
that they perceive to be unfair.
Which makes sense. I think a lot of us
are willing to give up some money in order to punish
someone else who's treated us unfairly.
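For anyone who wants the rule spelled out, here is a minimal sketch of the ultimatum game's payoff structure, plus a responder who rejects offers below some fairness threshold; the stake, the offer, and the 30% threshold are hypothetical values chosen purely for illustration, not numbers from the study.

```python
# Minimal sketch of the ultimatum game payoff rule.
# The stake, offer, and fairness threshold are hypothetical illustration values.

def ultimatum_payoffs(stake: float, offer: float, responder_accepts: bool):
    # Returns (proposer_payoff, responder_payoff).
    if responder_accepts:
        return stake - offer, offer
    return 0.0, 0.0  # rejection: neither player gets any money

def responder_accepts(stake: float, offer: float, fairness_threshold: float = 0.3) -> bool:
    # A responder who punishes unfairness rejects offers below 30% of the stake.
    return offer >= stake * fairness_threshold

stake, offer = 10.0, 2.0                           # an unfair 8/2 split
accepted = responder_accepts(stake, offer)
print(ultimatum_payoffs(stake, offer, accepted))   # -> (0.0, 0.0): $2 given up to punish
```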
The question was: can we shift around
people's responses to unfairness
by changing their serotonin levels?
We did this by manipulating people's diets.
The raw ingredient for serotonin
is called tryptophan, and it's an amino acid.
We must constantly replenish our supply of tryptophan
by eating protein-rich foods.
In the lab we can lower people's serotonin levels in their brains
by giving them a protein shake that lacks tryptophan.
And in the placebo control treatment
we give them a protein shake that looks and tastes the same.
The only difference is that it does contain
2.5 grams of tryptophan.
So we gave these drinks to our volunteers
and had them play the ultimatum game
while in the role of responder.
We measured rejection rates for unfair, medium and fair offers.
And here's the placebo data.
As you can see, people reject a lot of the unfair offers
and they hardly ever reject the fair 50/50 splits,
but when we lower their serotonin levels,
rejection rates go up for the unfair offers.
So again just take a second and consider these results.
The only difference between the placebo and depletion conditions
is 2.5 grams of tryptophan in the diet.
That's it. Our volunteers didn't feel any difference
between the two treatments,
and they didn't notice any changes in their behavior.
And yet the subtle difference in the diet
was enough to change the amount of money
people were willing to give up
to punish someone who treated them unfairly.
Now in these experiments we artificially manipulated
people's serotonin levels,
but out in the real world,
serotonin levels fluctuate naturally
in response to changes in things like diets and stress levels.
What this means is that our moral values
are probably shifting a little bit all the time
without us even knowing it.
And we do have some evidence that this kind of thing is happening
out in the real world.
Shai Danziger and colleagues looked at judges' decisions
of whether or not to grant parole to prisoners.
Here on the vertical axis we have the proportion
of cases where they did grant parole
and on the bottom we have basically time of day,
the order in which cases were heard.
These vertical dotted lines here,
those are the judges' meal breaks.
It turns out if you are coming up for parole,
all things considered,
you are more likely to be granted parole
if your hearing takes place after the judge has had a snack.
This is a huge effect and it survives even if you control for other important factors,
like whether or not it's a repeat offence
or whether or not the prisoner's involved in a rehab program.
Now I hope that this worries you, at least a little bit.
And more seriously,
I hope that I have convinced you
that our moral values are a lot less stable
than they appear to be.
And this is important because it turns out
that simply believing
that moral values are changeable
as opposed to fixed
can have dramatic effects on our willingness
to compromise and cooperate with each other.
The Israel-Palestine conflict is one of the biggest
ideological clashes of our time.
It's resulted in thousands of deaths on both sides,
and a huge cost in quality of life.
Eran Halperin, Carol Dweck and colleagues recently reported
that beliefs about whether groups have a changeable versus a fixed nature
can influence Israeli and Palestinian attitudes towards each other
and their willingness to compromise for peace.
In their experiment, they randomly assigned
Israelis and Palestinians to read one of two articles.
One article suggested that aggressive groups have a fixed nature
and the other article suggested that aggressive groups have a changeable nature.
Those who read the article about changeable groups
were more willing to meet with the other side and hear their point of view
and more willing to negotiate and compromise
on issues like the status of Jerusalem
and settlements in the West Bank.
What this means is that
if we can wrap our heads around the idea
that moral values are not fixed but can change,
we are more likely to listen to each other.
And here's a kind of crazy idea.
If pills can shift our moral values,
what if negotiators popped a few moral enhancers
before going to the table?
Such an intervention might make it easier
for opponents to see each other's side.
Now of course we have a long way to go
before we fully understand which neurotransmitters
shape which kinds of ethical beliefs.
But I do think it's plausible that one day we will have the expertise
to identify brain systems driving preferences
for conflicting ethical principles.
As long as we believe
that moral values are unshakeable,
we will continue to invest our resources
into fighting with each other
rather than searching for a middle way.
Instead, can't we cultivate
a healthy skepticism of our own sense of right and wrong?
Because, you know, once we accept
that our values can be shifted
by factors beyond our awareness and control,
maybe we'll become a little less attached to them.
And the sooner we can let go of this attachment,
the better, because we've got some scary problems
threatening our collective survival.
But we're not fixing them because
we're so caught up in bickering amongst ourselves.
So, you know, I just hope that we can realize that
we're caught in this ocean of hate and fear
and it's blinding us to our common humanity
and the amazing things that we can achieve
if we can put our differences aside and our heads and hearts together.
It's time to wake up. Thank you.
(Applause)