Horizon: How You Really Make Decisions


Transcript


Each line of dialogue is followed by its start (From) and end (To) timecodes.

We like to think that as a species, we are pretty smart.

0:00:040:00:08

We like to think we are wise, rational creatures.

0:00:110:00:15

I think we all like to think of ourselves as Mr Spock to some degree.

0:00:150:00:18

You know, we make rational, conscious decisions.

0:00:180:00:20

But we may have to think again.

0:00:210:00:22

It's mostly delusion, and we should just wake up to that fact.

0:00:240:00:28

In every decision you make,

0:00:310:00:32

there's a battle in your mind between intuition and logic.

0:00:320:00:36

It's a conflict that plays out in every aspect of your life.

0:00:380:00:41

What you eat.

0:00:430:00:44

What you believe.

0:00:450:00:46

Who you fall in love with.

0:00:470:00:49

And most powerfully, in decisions you make about money.

0:00:500:00:54

The moment money enters the picture, the rules change.

0:00:540:00:58

Scientists now have a new way

0:00:580:01:00

to understand this battle in your mind,

0:01:000:01:03

how it shapes the decisions you take, what you believe...

0:01:030:01:07

And how it has transformed our understanding

0:01:090:01:11

of human nature itself.

0:01:110:01:13

TAXI HORN BLARES

0:01:260:01:27

Sitting in the back of this New York cab is Professor Danny Kahneman.

0:01:340:01:38

He's regarded as one of the most influential psychologists alive today.

0:01:420:01:46

Over the last 40 years,

0:01:480:01:50

he's developed some extraordinary insights

0:01:500:01:52

into the way we make decisions.

0:01:520:01:54

I think it can't hurt to have a realistic view of human nature

0:01:580:02:02

and of how the mind works.

0:02:020:02:03

His insights come largely from puzzles.

0:02:070:02:09

Take, for instance, the curious puzzle of New York cab drivers

0:02:130:02:17

and their highly illogical working habits.

0:02:170:02:20

Business varies according to the weather.

0:02:230:02:26

On rainy days, everyone wants a cab.

0:02:270:02:29

But on sunny days, like today, fares are hard to find.

0:02:300:02:34

Logically, they should spend a lot of time driving on rainy days,

0:02:340:02:39

because it's very easy to find passengers on rainy days,

0:02:390:02:43

and if they are going to take leisure, it should be on sunny days

0:02:430:02:46

but it turns out this is not what many of them do.

0:02:460:02:50

Many do the opposite, working long hours on slow, sunny days

0:02:520:02:57

and knocking off early when it's rainy and busy.

0:02:570:02:59

Instead of thinking logically, the cabbies are driven by an urge

0:03:010:03:05

to earn a set amount of cash each day, come rain or shine.

0:03:050:03:10

Once they hit that target, they go home.

0:03:110:03:13

They view being below the target as a loss

0:03:170:03:19

and being above the target as a gain, and they care

0:03:190:03:22

more about preventing the loss than about achieving the gain.

0:03:220:03:26

So when they reach their goal on a rainy day, they stop...

0:03:280:03:31

..which really doesn't make sense.

0:03:350:03:37

If they were trying to maximise their income,

0:03:370:03:39

they would take their leisure on sunny days

0:03:390:03:42

and they would drive all day on rainy days.

0:03:420:03:44
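
The logic of that argument can be made concrete with a little arithmetic. Below is a minimal sketch in Python, using invented hourly rates and a made-up daily target (none of these numbers come from the programme), comparing a driver who stops at a fixed income target with one who works the same total hours but concentrates them on the rainy day.

```python
# Minimal sketch with illustrative numbers (not from the programme): compare an
# income-targeting cabbie with one who shifts work onto the busy rainy day.
RAINY_RATE = 30.0    # assumed earnings per hour when it rains (fares are easy)
SUNNY_RATE = 15.0    # assumed earnings per hour when it is sunny (fares are scarce)
DAILY_TARGET = 240.0 # assumed daily cash target
MAX_SHIFT = 12.0     # assumed maximum hours in a shift

def hours_to_hit_target(rate):
    """Hours an income-targeting driver works: quit at the target or at the shift limit."""
    return min(DAILY_TARGET / rate, MAX_SHIFT)

# One rainy day plus one sunny day for the income-targeting driver.
target_hours = hours_to_hit_target(RAINY_RATE) + hours_to_hit_target(SUNNY_RATE)
target_income = (min(DAILY_TARGET, RAINY_RATE * MAX_SHIFT)
                 + min(DAILY_TARGET, SUNNY_RATE * MAX_SHIFT))

# A maximising driver working the same total hours, shifted onto the rainy day.
rainy_hours = min(MAX_SHIFT, target_hours)
sunny_hours = target_hours - rainy_hours
max_income = RAINY_RATE * rainy_hours + SUNNY_RATE * sunny_hours

print(f"Income-targeting driver: {target_hours:.0f} h -> £{target_income:.0f}")
print(f"Maximising driver:       {target_hours:.0f} h -> £{max_income:.0f}")
```

With these illustrative figures both drivers work 20 hours, but the income-targeting driver ends up with £420 against the maximiser's £480.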

It was this kind of glitch in thinking

0:03:490:03:51

that Kahneman realised could reveal something profound

0:03:510:03:54

about the inner workings of the mind.

0:03:540:03:56

Anyone want to take part in an experiment?

0:03:580:04:00

And he began to devise a series of puzzles and questions

0:04:000:04:03

which have become classic psychological tests.

0:04:030:04:06

-It's a simple experiment.

-It's a very attractive game...

0:04:100:04:13

Don't worry, sir. Nothing strenuous.

0:04:130:04:14

..Posing problems where you can recognise in yourself

0:04:140:04:17

that your intuition is going the wrong way.

0:04:170:04:21

The type of puzzle where the answer that intuitively springs to mind,

0:04:210:04:25

and that seems obvious, is, in fact, wrong.

0:04:250:04:27

Here is one that I think works on just about everybody.

0:04:300:04:34

I want you to imagine a guy called Steve.

0:04:340:04:36

You tell people that Steve, you know, is a meek and tidy soul

0:04:360:04:42

with a passion for detail and very little interest in people.

0:04:420:04:48

He's got a good eye for detail.

0:04:480:04:49

And then you tell people he was drawn at random.

0:04:490:04:52

From a census of the American population.

0:04:520:04:54

What's the probability that he is a farmer or a librarian?

0:04:540:04:59

So do you think it's more likely

0:04:590:05:01

that Steve's going to end up working as a librarian or a farmer?

0:05:010:05:03

What's he more likely to be?

0:05:030:05:06

Maybe a librarian.

0:05:060:05:07

Librarian.

0:05:070:05:09

Probably a librarian.

0:05:090:05:10

A librarian.

0:05:100:05:11

Immediately, you know, the thought pops to mind

0:05:110:05:15

that it's a librarian, because he resembled the prototype of librarian.

0:05:150:05:20

Probably a librarian.

0:05:200:05:22

In fact, that's probably the wrong answer,

0:05:220:05:25

because, at least in the United States,

0:05:250:05:28

there are 20 times as many male farmers as male librarians.

0:05:280:05:32

-Librarian.

-Librarian.

0:05:320:05:33

So there are probably more meek and tidy souls you know

0:05:330:05:37

who are farmers than meek and tidy souls who are librarians.

0:05:370:05:40
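
The gap between the intuitive answer and the statistical one can be made explicit with Bayes' rule. The sketch below is illustrative only: the 20-to-1 ratio of male farmers to male librarians is the figure quoted above, but the probabilities that each profession matches Steve's description are assumptions picked for the example.

```python
# A minimal Bayes'-rule sketch of the Steve puzzle. The 20:1 base rate comes
# from the programme; the "fits the description" probabilities are assumed.
P_FARMER = 20 / 21             # prior: 20 male farmers for every male librarian
P_LIBRARIAN = 1 / 21
P_FITS_GIVEN_LIBRARIAN = 0.9   # assumed: most librarians match the stereotype
P_FITS_GIVEN_FARMER = 0.1      # assumed: few farmers do

p_fits = (P_FITS_GIVEN_LIBRARIAN * P_LIBRARIAN
          + P_FITS_GIVEN_FARMER * P_FARMER)

p_librarian_given_fits = P_FITS_GIVEN_LIBRARIAN * P_LIBRARIAN / p_fits
p_farmer_given_fits = P_FITS_GIVEN_FARMER * P_FARMER / p_fits

print(f"P(librarian | description) ≈ {p_librarian_given_fits:.2f}")  # ≈ 0.31
print(f"P(farmer    | description) ≈ {p_farmer_given_fits:.2f}")     # ≈ 0.69
# Even with a strongly librarian-like description, the base rate makes
# "farmer" roughly twice as likely under these assumptions.
```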

This type of puzzle seemed to reveal a discrepancy

0:05:450:05:49

between intuition and logic.

0:05:490:05:51

Another example is...

0:05:510:05:52

Imagine a dictionary. I'm going to pull a word out of it at random.

0:05:520:05:55

Which is more likely,

0:05:550:05:56

that a word that you pick out at random has the letter

0:05:560:06:02

R in the first position or has the letter R in the third position?

0:06:020:06:07

-Erm, start with the letter R.

-OK.

0:06:070:06:09

People think the first position,

0:06:100:06:12

because it's easy to think of examples.

0:06:120:06:15

-Start with it.

-First.

0:06:150:06:16

In fact, there are nearly three times as many words with

0:06:170:06:20

R as the third letter as words that begin with R,

0:06:200:06:24

but that's not what our intuition tells us.

0:06:240:06:27
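
The claim is easy to check directly against a word list. A minimal sketch, assuming a plain-text dictionary is available locally (the path below is a common Unix location, not something mentioned in the programme), and the exact ratio will depend on the list used:

```python
# Count words with "r" in the first versus the third position in a word list.
def count_r_positions(path="/usr/share/dict/words"):
    first = third = 0
    with open(path) as f:
        for line in f:
            word = line.strip().lower()
            if len(word) >= 3 and word.isalpha():
                first += word[0] == "r"   # booleans count as 0 or 1
                third += word[2] == "r"
    return first, third

if __name__ == "__main__":
    first, third = count_r_positions()
    print(f"R in first position: {first}, R in third position: {third}")
```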

So we have examples like that. Like, many of them.

0:06:270:06:31

Kahneman's interest in human error was first sparked in the 1970s

0:06:340:06:38

when he and his colleague, Amos Tversky,

0:06:380:06:41

began looking at their own mistakes.

0:06:410:06:43

It was all in ourselves.

0:06:450:06:47

That is, all the mistakes that we studied were mistakes

0:06:470:06:50

that we were prone to make.

0:06:500:06:52

In my hand here, I've got £100.

0:06:520:06:55

Kahneman and Tversky found a treasure trove of these puzzles.

0:06:550:06:59

Which would you prefer?

0:06:590:07:00

They unveiled a catalogue of human error.

0:07:000:07:03

Would you rather go to Rome with a free breakfast?

0:07:040:07:07

And opened a Pandora's Box of mistakes.

0:07:070:07:10

A year and a day.

0:07:100:07:12

25?

0:07:120:07:14

But the really interesting thing about these mistakes is,

0:07:140:07:17

they're not accidents.

0:07:170:07:18

75?

0:07:180:07:20

They have a shape, a structure.

0:07:200:07:22

I think Rome.

0:07:220:07:23

Skewing our judgment.

0:07:230:07:25

20.

0:07:250:07:26

What makes them interesting is that they are not random errors.

0:07:260:07:29

They are biases, so the difference between a bias

0:07:290:07:32

and a random error is that a bias is predictable.

0:07:320:07:36

It's a systematic error that is predictable.

0:07:360:07:38

Kahneman's puzzles prompt the wrong reply again.

0:07:400:07:44

More likely.

0:07:440:07:45

And again.

0:07:450:07:46

-More likely.

-More likely?

0:07:460:07:48

And again.

0:07:480:07:49

Probably more likely?

0:07:490:07:50

It's a pattern of human error that affects every single one of us.

0:07:510:07:55

On their own, they may seem small.

0:07:570:07:59

Ah, that seems to be the right drawer.

0:08:010:08:04

But by rummaging around in our everyday mistakes...

0:08:040:08:07

That's very odd.

0:08:070:08:09

..Kahneman started a revolution in our understanding of human thinking.

0:08:090:08:13

A revolution so profound

0:08:150:08:17

and far-reaching that he was awarded a Nobel prize.

0:08:170:08:20

So if you want to see the medal, that's what it looks like.

0:08:220:08:27

That's it.

0:08:300:08:31

Psychologists have long strived to pick apart the moments

0:08:380:08:42

when people make decisions.

0:08:420:08:43

Much of the focus has been on our rational mind,

0:08:450:08:48

our capacity for logic.

0:08:480:08:51

But Kahneman saw the mind differently.

0:08:510:08:55

He saw a much more powerful role for the other side of our minds,

0:08:550:08:58

intuition.

0:08:580:09:00

And at the heart of human thinking,

0:09:020:09:04

there's a conflict between logic and intuition that leads to mistakes.

0:09:040:09:08

Kahneman and Tversky started this trend

0:09:090:09:11

of seeing the mind differently.

0:09:110:09:13

They found these decision-making illusions,

0:09:150:09:17

these spots where our intuitions just make us decide

0:09:170:09:20

these things that just don't make any sense.

0:09:200:09:23

The work of Kahneman and Tversky has really been revolutionary.

0:09:230:09:26

It kicked off a flurry of experimentation

0:09:290:09:31

and observation to understand the meaning of these mistakes.

0:09:310:09:35

People didn't really appreciate, as recently as 40 years ago,

0:09:370:09:40

that the mind didn't really work like a computer.

0:09:400:09:45

We thought that we were very deliberative, conscious creatures

0:09:450:09:48

who weighed up the costs and benefits of action,

0:09:480:09:51

just like Mr Spock would do.

0:09:510:09:52

By now, it's a fairly coherent body of work

0:09:560:10:00

about ways in which

0:10:000:10:01

intuition departs from the rules, if you will.

0:10:010:10:07

And the body of evidence is growing.

0:10:110:10:13

Some of the best clues to the working of our minds come not

0:10:160:10:19

when we get things right, but when we get things wrong.

0:10:190:10:22

In a corner of this otherwise peaceful campus,

0:10:350:10:38

Professor Chris Chabris is about to start a fight.

0:10:380:10:42

All right, so what I want you guys to do is stay in this area over here.

0:10:420:10:49

The two big guys grab you

0:10:490:10:51

and sort of like start pretending to punch you, make some sound effects.

0:10:510:10:54

All right, this looks good.

0:10:540:10:56

All right, that seemed pretty good to me.

0:10:580:10:59

It's part of an experiment

0:11:010:11:03

that shows a pretty shocking mistake that any one of us could make.

0:11:030:11:06

A mistake where you don't notice

0:11:080:11:10

what's happening right in front of your eyes.

0:11:100:11:12

As well as a fight, the experiment also involves a chase.

0:11:190:11:22

It was inspired by an incident in Boston in 1995,

0:11:260:11:29

when a young police officer, Kenny Conley,

0:11:290:11:32

was in hot pursuit of a murder suspect.

0:11:320:11:34

It turned out that this police officer,

0:11:370:11:39

while he was chasing the suspect,

0:11:390:11:41

had run right past some other police officers

0:11:410:11:43

who were beating up another suspect, which, of course,

0:11:430:11:46

police officers are not supposed to do under any circumstances.

0:11:460:11:49

When the police tried to investigate this case of police brutality,

0:11:500:11:54

he said, "I didn't see anything going on there,

0:11:540:11:57

"all I saw was the suspect I was chasing."

0:11:570:11:59

And nobody could believe this

0:11:590:12:00

and he was prosecuted for perjury and obstruction of justice.

0:12:000:12:04

Everyone was convinced that Conley was lying.

0:12:050:12:08

We don't want you to be, like, closer than about...

0:12:080:12:11

Everyone, that is, apart from Chris Chabris.

0:12:110:12:13

He wondered if our ability to pay attention

0:12:170:12:19

is so limited that any one of us could run past a vicious fight

0:12:190:12:23

without even noticing.

0:12:230:12:24

And it's something he's putting to the test.

0:12:270:12:29

Now, when you see someone jogging across the footbridge,

0:12:290:12:32

then you should get started.

0:12:320:12:34

Jackie, you can go.

0:12:340:12:35

In the experiment,

0:12:400:12:41

the subjects are asked to focus carefully on a cognitive task.

0:12:410:12:44

They must count the number of times

0:12:450:12:47

the runner taps her head with each hand.

0:12:470:12:49

Would they, like the Boston police officer,

0:12:560:12:58

be so blinded by their limited attention

0:12:580:13:00

that they would completely fail to notice the fight?

0:13:000:13:03

About 45 seconds or a minute into the run, there was the fight.

0:13:070:13:10

And they could actually see the fight from a ways away,

0:13:100:13:14

and it was about 20 feet away from them when they got closest to them.

0:13:140:13:17

The fight is right in their field of view,

0:13:200:13:22

and at least partially visible from as far back as the footbridge.

0:13:220:13:26

It seems incredible that anyone would fail to notice something

0:13:340:13:37

so apparently obvious.

0:13:370:13:38

They completed the three-minute course

0:13:420:13:44

and then we said, "Did you notice anything unusual?"

0:13:440:13:47

Yes.

0:13:470:13:48

-What was it?

-It was a fight.

0:13:480:13:50

Sometimes they would have noticed the fight and they would say,

0:13:500:13:53

"Yeah, I saw some guys fighting", but a large percentage of people said

0:13:530:13:57

"We didn't see anything unusual at all."

0:13:570:13:59

And when we asked them specifically

0:13:590:14:00

about whether they saw anybody fighting, they still said no.

0:14:000:14:03

In fact, nearly 50% of people in the experiment

0:14:100:14:12

completely failed to notice the fight.

0:14:120:14:15

Did you see anything unusual during the run?

0:14:170:14:20

No.

0:14:200:14:21

OK. Did you see some people fighting?

0:14:210:14:23

No.

0:14:230:14:25

We did it at night time and we did it in the daylight.

0:14:280:14:31

Even when we did it in daylight,

0:14:320:14:34

many people ran right past the fight and didn't notice it at all.

0:14:340:14:38

Did you see anything unusual during the run?

0:14:410:14:43

No, not really.

0:14:430:14:45

OK, did you see some people fighting?

0:14:450:14:47

No.

0:14:470:14:48

-You really didn't see anyone fighting?

-No.

0:14:480:14:50

Does it surprise you that you would have missed that?

0:14:500:14:52

They were about 20 feet off the path.

0:14:520:14:55

-Oh!

-You ran right past them.

0:14:550:14:56

-Completely missed that, then.

-OK.

0:14:560:14:58

Maybe what happened to Conley was,

0:15:000:15:02

when you're really paying attention to one thing

0:15:020:15:04

and focusing a lot of mental energy on it, you can miss things

0:15:040:15:07

that other people are going to think are completely obvious, and in fact,

0:15:070:15:10

that's what the jurors said after Conley's trial.

0:15:100:15:12

They said "We couldn't believe that he could miss something like that".

0:15:120:15:15

It didn't make any sense. He had to have been lying.

0:15:150:15:17

It's an unsettling phenomenon called inattentional blindness

0:15:200:15:24

that can affect us all.

0:15:240:15:26

Some people have said things like

0:15:270:15:29

"This shatters my faith in my own mind",

0:15:290:15:30

or, "Now I don't know what to believe",

0:15:300:15:32

or, "I'm going to be confused from now on."

0:15:320:15:35

But I'm not sure that that feeling really stays with them very long.

0:15:350:15:38

They are going to go out from the experiment, you know,

0:15:380:15:40

walk to the next place they're going or something like that

0:15:400:15:43

and they're going to have just as much inattentional blindness

0:15:430:15:45

when they're walking down the street that afternoon as they did before.

0:15:450:15:48

This experiment reveals a powerful quandary about our minds.

0:15:530:15:56

We glide through the world blissfully unaware

0:15:580:16:00

of most of what we do and how little we really know our minds.

0:16:000:16:04

For all its brilliance,

0:16:070:16:08

the part of our mind we call ourselves is extremely limited.

0:16:080:16:12

So how do we manage to navigate our way through

0:16:150:16:17

the complexity of daily life?

0:16:170:16:19

Every day, each one of us

0:16:310:16:32

makes somewhere between 2,000 and 10,000 decisions.

0:16:320:16:35

When you think about our daily lives,

0:16:400:16:42

it's really a long, long sequence of decisions.

0:16:420:16:45

We make decisions probably at a frequency

0:16:470:16:50

that is close to the frequency we breathe.

0:16:500:16:52

Every minute, every second, you're deciding where to move your legs,

0:16:520:16:56

and where to move your eyes, and where to move your limbs,

0:16:560:16:58

and when you're eating a meal,

0:16:580:17:00

you're making all kinds of decisions.

0:17:000:17:01

And yet the vast majority of these decisions,

0:17:030:17:05

we make without even realising.

0:17:050:17:07

It was Danny Kahneman's insight that we have two systems

0:17:130:17:17

in the mind for making decisions.

0:17:170:17:19

Two ways of thinking:

0:17:210:17:23

fast and slow.

0:17:230:17:25

You know, our mind has really two ways of operating,

0:17:300:17:33

and one is sort of fast-thinking, an automatic effortless mode,

0:17:330:17:39

and that's the one we're in most of the time.

0:17:390:17:41

This fast, automatic mode of thinking, he called System 1.

0:17:440:17:48

It's powerful, effortless and responsible for most of what we do.

0:17:510:17:54

And System 1 is, you know, that's what happens most of the time.

0:17:560:18:00

You're there, the world around you provides all kinds of stimuli

0:18:000:18:05

and you respond to them.

0:18:050:18:07

Everything that you see and that you understand, you know,

0:18:070:18:11

this is a tree, that's a helicopter back there,

0:18:110:18:14

that's the Statue of Liberty.

0:18:140:18:16

All of this visual perception, all of this comes through System 1.

0:18:160:18:20

The other mode is slow, deliberate, logical and rational.

0:18:210:18:26

This is System 2 and it's the bit you think of as you,

0:18:270:18:32

the voice in your head.

0:18:320:18:34

The simplest example of the two systems is

0:18:340:18:36

really two plus two is on one side, and 17 times 24 is on the other.

0:18:360:18:42

What is two plus two?

0:18:420:18:43

-Four.

-Four.

-Four.

0:18:430:18:45

Fast System 1 is always in gear, producing instant answers.

0:18:450:18:49

-And what's two plus two?

-Four.

0:18:490:18:51

A number comes to your mind.

0:18:510:18:53

-Four.

-Four.

-Four.

0:18:530:18:55

It is automatic. You do not intend for it to happen.

0:18:550:18:59

It just happens to you. It's almost like a reflex.

0:18:590:19:01

And what's 22 times 17?

0:19:010:19:04

That's a good one.

0:19:040:19:05

But when we have to pay attention to a tricky problem,

0:19:100:19:13

we engage slow-but-logical System 2.

0:19:130:19:16

If you can do that in your head,

0:19:170:19:19

you'll have to follow some rules and to do it sequentially.

0:19:190:19:22

And that is not automatic at all.

0:19:240:19:26

That involves work, it involves effort, it involves concentration.

0:19:260:19:30

22 times 17?

0:19:300:19:32

There will be physiological symptoms.

0:19:360:19:38

Your heart rate will accelerate, your pupils will dilate,

0:19:380:19:41

so many changes will occur while you're performing this computation.

0:19:410:19:45

Three's...oh, God!

0:19:460:19:49

That's, so...

0:19:520:19:53

220 and seven times 22 is...

0:19:530:19:56

..54. 374?

0:20:000:20:02

-OK. And can I get you to just walk with me for a second?

-OK.

0:20:020:20:06

Who's the current...?

0:20:060:20:07

System 2 may be clever, but it's also slow, limited and lazy.

0:20:070:20:13

I live in Berkeley during summers and I walk a lot.

0:20:130:20:16

And when I walk very fast, I cannot think.

0:20:160:20:19

-Can I get you to count backwards from 100 by 7?

-Sure.

0:20:190:20:24

93, 86... It's hard when you're walking.

0:20:240:20:28

It takes up, interestingly enough,

0:20:280:20:31

the same kind of executive function as...as thinking.

0:20:310:20:36

Forty...four?

0:20:360:20:39

If you are expected to do something that demands a lot of effort,

0:20:410:20:46

you will stop even walking.

0:20:460:20:48

Eighty...um...six?

0:20:480:20:52

51.

0:20:520:20:53

Uh....

0:20:550:20:56

16,

0:20:570:20:59

9, 2?

0:20:590:21:00

Everything that you're aware of in your own mind

0:21:030:21:06

is part of this slow, deliberative System 2.

0:21:060:21:09

As far as you're concerned, it is the star of the show.

0:21:090:21:13

Actually, I describe System 2 as not the star.

0:21:140:21:19

I describe it as, as a minor character who thinks he is the star,

0:21:190:21:23

because, in fact,

0:21:230:21:25

most of what goes on in our mind is automatic.

0:21:250:21:28

You know, it's in the domain that I call System 1.

0:21:280:21:31

System 1 is an old, evolved bit of our brain, and it's remarkable.

0:21:310:21:34

We couldn't survive without it because System 2 would explode.

0:21:340:21:38

If Mr Spock had to make every decision for us,

0:21:380:21:41

it would be very slow and effortful and our heads would explode.

0:21:410:21:45

And this vast, hidden domain is responsible

0:21:450:21:48

for far more than you would possibly believe.

0:21:480:21:51

Having an opinion, you have an opinion immediately,

0:21:510:21:55

whether you like it or not, whether you like something or not,

0:21:550:21:58

whether you're for something or not, liking someone or not liking them.

0:21:580:22:02

That, quite often, is something you have no control over.

0:22:020:22:04

Later, when you're asked for reasons, you will invent reasons.

0:22:040:22:08

And a lot of what System 2 does is, it provides reason.

0:22:080:22:11

It provides rationalisations which are not necessarily the true reasons

0:22:110:22:17

for our beliefs and our emotions and our intentions and what we do.

0:22:170:22:22

You have two systems of thinking that steer you through life...

0:22:280:22:31

Fast, intuitive System 1 that is incredibly powerful

0:22:340:22:38

and does most of the driving.

0:22:380:22:40

And slow, logical System 2

0:22:420:22:45

that is clever, but a little lazy.

0:22:450:22:48

Trouble is, there's a bit of a battle between them

0:22:480:22:52

as to which one is driving your decisions.

0:22:520:22:54

And this is where the mistakes creep in,

0:23:020:23:06

when we use the wrong system to make a decision.

0:23:060:23:09

Just going to ask you a few questions.

0:23:090:23:10

We're interested in what you think.

0:23:100:23:12

This question concerns this nice bottle of champagne I have here.

0:23:120:23:15

Millesime 2005, it's a good year, genuinely nice, vintage bottle.

0:23:150:23:19

These people think they're about to use slow, sensible System 2

0:23:200:23:25

to make a rational decision about how much they would pay

0:23:250:23:28

for a bottle of champagne.

0:23:280:23:30

But what they don't know is that their decision

0:23:300:23:33

will actually be taken totally

0:23:330:23:35

without their knowledge by their hidden, fast auto-pilot, System 1.

0:23:350:23:40

And with the help of a bag of ping-pong balls,

0:23:420:23:45

we can influence that decision.

0:23:450:23:46

I've got a set of numbered balls here from 1 to 100 in this bag.

0:23:480:23:52

I'd like you to reach in and draw one out at random for me,

0:23:520:23:55

if you would.

0:23:550:23:56

First, they've got to choose a ball.

0:23:560:23:58

-The number says ten.

-Ten.

-Ten.

-Ten.

0:23:580:24:00

They think it's a random number, but in fact, it's rigged.

0:24:000:24:04

All the balls are marked with the low number ten.

0:24:040:24:07

This experiment is all about the thoughtless creation of habits.

0:24:070:24:13

It's about how we make one decision, and then other decisions follow it

0:24:130:24:17

as if the first decision was actually meaningful.

0:24:170:24:20

What we do is purposefully,

0:24:220:24:24

we give people a first decision that is clearly meaningless.

0:24:240:24:28

-Ten.

-Ten, OK. Would you be willing to pay ten pounds

0:24:280:24:30

for this nice bottle of vintage champagne?

0:24:300:24:33

I would, yes.

0:24:330:24:34

-No.

-Yeah, I guess.

-OK.

0:24:340:24:37

This first decision is meaningless,

0:24:370:24:40

based as it is on a seemingly random number.

0:24:400:24:43

But what it does do is lodge the low number ten in their heads.

0:24:440:24:49

-Would you buy it for ten pounds?

-Yes, I would. Yes.

-You would? OK.

0:24:490:24:53

Now for the real question where we ask them

0:24:540:24:57

how much they'd actually pay for the champagne.

0:24:570:25:00

What's the maximum amount you think you'd be willing to pay?

0:25:000:25:03

-20?

-OK.

0:25:030:25:05

-Seven pounds.

-Seven pounds, OK.

0:25:050:25:07

Probably ten pound.

0:25:070:25:08

A range of fairly low offers.

0:25:080:25:10

But what happens if we prime people with a much higher number,

0:25:120:25:15

65 instead of 10?

0:25:150:25:17

-What does that one say?

-65.

-65, OK.

0:25:200:25:22

-65.

-OK.

0:25:220:25:23

It says 65.

0:25:230:25:25

How will this affect the price people are prepared to pay?

0:25:270:25:31

What's the maximum you would be willing to pay for this

0:25:310:25:33

bottle of champagne?

0:25:330:25:35

-40?

-£45.

-45, OK.

0:25:350:25:37

50.

0:25:370:25:39

-40 quid?

-OK.

0:25:390:25:41

-£50?

-£50?

-Yeah, I'd pay between 50 and £80.

0:25:410:25:45

-Between 50 and 80?

-Yeah.

0:25:450:25:48

Logic has gone out of the window.

0:25:480:25:50

The price people are prepared to pay is influenced by nothing more

0:25:520:25:55

than a number written on a ping-pong ball.

0:25:550:25:58
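
Summarising the answers heard above makes the effect visible. A minimal sketch, using the handful of offers quoted in this sequence and representing the "between £50 and £80" answer by its midpoint as a simplification:

```python
# Offers quoted in the sequence above; the "£50 to £80" answer is represented
# by its midpoint (£65) purely as a simplification.
low_anchor_offers = [20, 7, 10]            # group shown the number 10
high_anchor_offers = [40, 45, 50, 40, 65]  # group shown the number 65

def mean(values):
    return sum(values) / len(values)

print(f"Mean offer after anchor 10: £{mean(low_anchor_offers):.2f}")   # ≈ £12.33
print(f"Mean offer after anchor 65: £{mean(high_anchor_offers):.2f}")  # £48.00
# The gap between the two means, driven only by a meaningless number on a
# ping-pong ball, is the signature of the anchoring effect.
```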

It suggests that when we can't make decisions,

0:26:010:26:04

we don't evaluate the decision in itself.

0:26:040:26:06

Instead what we do is,

0:26:060:26:08

we try to look at other similar decisions we've made in the past

0:26:080:26:12

and we take those decisions as if they were good decisions

0:26:120:26:16

and we say to ourselves,

0:26:160:26:17

"Oh, I've made this decision before.

0:26:170:26:19

"Clearly, I don't need to go ahead and solve this decision.

0:26:190:26:23

"Let me just use what I did before and repeat it,

0:26:230:26:25

"maybe with some modifications."

0:26:250:26:27

This anchoring effect comes from the conflict

0:26:290:26:32

between our two systems of thinking.

0:26:320:26:35

Fast System 1 is a master of taking short cuts

0:26:370:26:41

to bring about the quickest possible decision.

0:26:410:26:45

What happens is, they ask you a question

0:26:450:26:48

and if the question is difficult

0:26:480:26:50

but there is a related question that is a lot...that is somewhat simpler,

0:26:500:26:54

you're just going to answer the other question and...

0:26:540:26:58

and not even notice.

0:26:580:27:00

So the system does all kinds of short cuts to feed us

0:27:000:27:02

the information in a faster way and we can make actions,

0:27:020:27:06

and the system is accepting some mistakes.

0:27:060:27:08

We make decisions using fast System 1

0:27:110:27:13

when we really should be using slow System 2.

0:27:130:27:17

And this is why we make the mistakes we do,

0:27:180:27:22

systematic mistakes known as cognitive biases.

0:27:220:27:26

Nice day.

0:27:280:27:30

Since Kahneman first began investigating the glitches in our thinking,

0:27:360:27:41

more than 150 cognitive biases have been identified.

0:27:410:27:45

We are riddled with these systematic mistakes

0:27:460:27:49

and they affect every aspect of our daily lives.

0:27:490:27:52

Wikipedia has a very big list of biases

0:27:540:27:57

and we are finding new ones all the time.

0:27:570:28:00

One of the biases that I think is the most important

0:28:000:28:03

is what's called the present-focus bias.

0:28:030:28:05

It's the fact that we focus on now

0:28:050:28:07

and don't think very much about the future.

0:28:070:28:09

That's the bias that causes things like overeating and smoking,

0:28:090:28:13

and texting and driving, and having unprotected sex.

0:28:130:28:16

Another one is called the halo effect,

0:28:160:28:19

and this is the idea

0:28:190:28:21

that if you like somebody or an organisation,

0:28:210:28:25

you're biased to think that all of its aspects are good,

0:28:250:28:29

that everything is good about it.

0:28:290:28:30

If you dislike it, everything is bad.

0:28:300:28:33

People really are quite uncomfortable, you know,

0:28:330:28:35

by the idea that Hitler loved children, you know.

0:28:350:28:39

He did.

0:28:390:28:40

Now, that doesn't make him a good person,

0:28:400:28:43

but we feel uncomfortable to see an attractive trait

0:28:430:28:47

in a person that we consider, you know, the epitome of evil.

0:28:470:28:51

We are prone to think that what we like is all good

0:28:510:28:54

and what we dislike is all bad.

0:28:540:28:56

That's a bias.

0:28:560:28:57

Another particular favourite of mine is the bias to get attached

0:28:570:29:01

to things that we ourselves have created.

0:29:010:29:04

We call it the IKEA effect.

0:29:040:29:06

Well, you've got loss aversion, risk aversion, present bias.

0:29:060:29:10

Spotlight effect, and the spotlight effect is the idea

0:29:100:29:13

that we think that other people pay a lot of attention to us

0:29:130:29:16

when in fact, they don't.

0:29:160:29:17

Confirmation bias. Overconfidence is a big one.

0:29:170:29:20

But what's clear is that there's lots of them.

0:29:200:29:23

There's lots of ways for us to get things wrong.

0:29:230:29:25

You know, there's one way to do things right

0:29:250:29:27

and many ways to do things wrong, and we're capable of many of them.

0:29:270:29:31

These biases explain so many things that we get wrong.

0:29:370:29:41

Our impulsive spending.

0:29:420:29:45

Trusting the wrong people.

0:29:450:29:48

Not seeing the other person's point of view.

0:29:480:29:51

Succumbing to temptation.

0:29:510:29:53

We are so riddled with these biases,

0:29:560:29:59

it's hard to believe we ever make a rational decision.

0:29:590:30:02

But it's not just our everyday decisions that are affected.

0:30:100:30:13

What happens if you're an expert,

0:30:280:30:30

trained in making decisions that are a matter of life and death?

0:30:300:30:34

Are you still destined to make these systematic mistakes?

0:30:380:30:41

On the outskirts of Washington DC,

0:30:440:30:47

Horizon has been granted access to spy on the spooks.

0:30:470:30:52

Welcome to Analytical Exercise Number Four.

0:30:560:31:00

Former intelligence analyst Donald Kretz

0:31:010:31:04

is running an ultra-realistic spy game.

0:31:040:31:07

This exercise will take place in the fictitious city of Vastopolis.

0:31:080:31:13

Taking part are a mixture of trained intelligence analysts

0:31:150:31:19

and some novices.

0:31:190:31:20

Due to an emerging threat...

0:31:230:31:25

..a terrorism taskforce has been stood up.

0:31:260:31:30

I will be the terrorism taskforce lead

0:31:300:31:31

and I have recruited all of you to be our terrorism analysts.

0:31:310:31:36

The challenge facing the analysts

0:31:390:31:41

is to thwart a terrorist threat against a US city.

0:31:410:31:45

The threat at this point has not been determined.

0:31:470:31:50

It's up to you to figure out the type of terrorism...

0:31:500:31:53

..and who's responsible for planning it.

0:31:540:31:56

The analysts face a number of tasks.

0:31:590:32:02

They must first investigate any groups who may pose a threat.

0:32:020:32:06

Your task is to write a report.

0:32:060:32:08

The subject in this case is the Network of Dread.

0:32:080:32:11

The mayor has asked for this 15 minutes from now.

0:32:110:32:14

Just like in the real world, the analysts have access

0:32:170:32:21

to a huge amount of data streaming in, from government agencies,

0:32:210:32:25

social media, mobile phones and emergency services.

0:32:250:32:29

The Network of Dread turns out to be

0:32:330:32:35

a well-known international terror group.

0:32:350:32:38

They have the track record, the capability

0:32:380:32:41

and the personnel to carry out an attack.

0:32:410:32:45

The scenario that's emerging

0:32:450:32:46

is a bio-terror event,

0:32:460:32:48

meaning it's a biological terrorism attack

0:32:480:32:51

that's going to take place against the city.

0:32:510:32:53

If there is an emerging threat, they are the likely candidate.

0:32:560:33:00

We need to move onto the next task.

0:33:000:33:02

It's now 9th April.

0:33:040:33:06

This is another request for information,

0:33:060:33:09

this time on something or someone called the Masters of Chaos.

0:33:090:33:13

The Masters of Chaos are a group of cyber-hackers,

0:33:160:33:21

a local bunch of misfits with no history of violence.

0:33:210:33:24

And while the analysts continue to sift through the incoming data,

0:33:270:33:31

behind the scenes, Kretz is watching their every move.

0:33:310:33:36

In this room, we're able to monitor what the analysts are doing

0:33:360:33:39

throughout the entire exercise.

0:33:390:33:41

We have set up a knowledge base

0:33:420:33:44

into which we have been inserting data

0:33:440:33:46

throughout the course of the day.

0:33:460:33:48

Some of them are related to our terrorist threat,

0:33:480:33:50

many of them are not.

0:33:500:33:52

Amidst the wealth of data on the known terror group,

0:33:530:33:56

there's also evidence coming in

0:33:560:33:58

of a theft at a university biology lab

0:33:580:34:00

and someone has hacked into the computers of a local freight firm.

0:34:000:34:05

Each of these messages represents, essentially, a piece of the puzzle,

0:34:060:34:10

but it's a puzzle that you don't have the box top to,

0:34:100:34:14

so you don't have the picture in advance,

0:34:140:34:17

so you don't know what pieces go where.

0:34:170:34:20

Furthermore, what we have is a bunch of puzzle pieces that don't

0:34:200:34:23

even go with this puzzle.

0:34:230:34:25

The exercise is part of a series of experiments to investigate

0:34:270:34:31

whether expert intelligence agents

0:34:310:34:33

are just as prone to mistakes from cognitive bias as the rest of us,

0:34:330:34:37

or whether their training and expertise makes them immune.

0:34:370:34:43

I have a sort of insider's point of view of this problem.

0:34:440:34:47

I worked a number of years as an intelligence analyst.

0:34:470:34:51

The stakes are incredibly high.

0:34:510:34:53

Mistakes can often be life and death.

0:34:530:34:57

We roll ahead now. The date is 21st May.

0:35:000:35:02

If the analysts are able to think rationally,

0:35:040:35:07

they should be able to solve the puzzle.

0:35:070:35:09

But the danger is, they will fall into the trap set by Kretz

0:35:100:35:15

and only pay attention to the established terror group,

0:35:150:35:18

the Network of Dread.

0:35:180:35:20

Their judgment may be clouded by a bias called confirmation bias.

0:35:210:35:27

Confirmation bias is the most prevalent bias of all,

0:35:270:35:30

and it's where we tend to search for information

0:35:300:35:33

that supports what we already believe.

0:35:330:35:35

Confirmation bias can easily lead people

0:35:370:35:39

to ignore the evidence in front of their eyes.

0:35:390:35:42

And Kretz is able to monitor if the bias kicks in.

0:35:440:35:47

We still see that they're searching for Network of Dread.

0:35:480:35:51

That's an indication that we may have a confirmation bias operating.

0:35:510:35:55

The Network of Dread are the big guys -

0:35:570:35:59

they've done it before, so you would expect they'd do it again.

0:35:590:36:04

And I think we're starting to see some biases here.

0:36:040:36:07

Analysts desperately want to get to the correct answer,

0:36:080:36:11

but they're affected by the same biases as the rest of us.

0:36:110:36:15

So far, most of our analysts seem to believe

0:36:150:36:19

that the Network of Dread is responsible for planning this attack,

0:36:190:36:24

and that is completely wrong.

0:36:240:36:26

How are we doing?

0:36:300:36:31

It's time for the analysts to put themselves on the line

0:36:330:36:36

and decide who the terrorists are and what they're planning.

0:36:360:36:40

So what do you think?

0:36:420:36:43

It was a bio-terrorist attack.

0:36:430:36:47

-I had a different theory.

-What's your theory?

0:36:470:36:49

Cos I may be missing something here, too.

0:36:490:36:51

They know that the Network of Dread is a terrorist group.

0:36:510:36:54

They know that the Masters of Chaos is a cyber-hacking group.

0:36:540:36:58

Either to the new factory or in the water supply.

0:36:580:37:01

Lots of dead fish floating up in the river.

0:37:010:37:05

The question is, did any of the analysts manage to dig out

0:37:060:37:09

the relevant clues and find the true threat?

0:37:090:37:12

In this case, the actual threat is due to the cyber-group,

0:37:140:37:18

the Masters of Chaos,

0:37:180:37:20

who become increasingly radicalised throughout the scenario

0:37:200:37:23

and decide to take out their anger on society, essentially.

0:37:230:37:27

Who convinced them to switch from cyber-crime to bio-terrorism?

0:37:270:37:31

Or did they succumb to confirmation bias

0:37:310:37:34

and simply pin the blame on the usual suspects?

0:37:340:37:38

Will they make that connection?

0:37:380:37:40

Will they process that evidence and assess it accordingly,

0:37:400:37:43

or will their confirmation bias drive them

0:37:430:37:46

to believe that it's a more traditional type of terrorist group?

0:37:460:37:50

I believe that the Masters of Chaos are actually the ones behind it.

0:37:500:37:53

It's either a threat or not a threat, but the Network of Dread?

0:37:530:37:56

And time's up. Please go ahead and save those reports.

0:37:560:37:59

At the end of the exercise,

0:38:020:38:03

Kretz reveals the true identity of the terrorists.

0:38:030:38:08

We have a priority message from City Hall.

0:38:080:38:10

The terrorist attack was thwarted,

0:38:120:38:15

the planned bio-terrorist attack by the Masters of Chaos

0:38:150:38:20

against Vastopolis was thwarted.

0:38:200:38:22

The mayor expresses his thanks for a job well done.

0:38:220:38:25

Show of hands, who...who got it?

0:38:250:38:29

Yeah.

0:38:300:38:32

Out of 12 subjects, 11 of them got the wrong answer.

0:38:320:38:37

The only person to spot the true threat was in fact a novice.

0:38:380:38:42

All the trained experts fell prey to confirmation bias.

0:38:440:38:48

It is not typically the case

0:38:530:38:55

that simply being trained as an analyst

0:38:550:38:57

gives you the tools you need to overcome cognitive bias.

0:38:570:39:00

You can learn techniques for memory improvement.

0:39:020:39:05

You can learn techniques for better focus,

0:39:050:39:08

but techniques to eliminate cognitive bias

0:39:080:39:11

just simply don't work.

0:39:110:39:13

And for intelligence analysts in the real world,

0:39:190:39:21

the implications of making mistakes from these biases are drastic.

0:39:210:39:26

Government reports and studies over the past decade or so

0:39:280:39:32

have cited experts as believing that cognitive bias

0:39:320:39:35

may have played a role

0:39:350:39:37

in a number of very significant intelligence failures,

0:39:370:39:41

and yet it remains an understudied problem.

0:39:410:39:44

Heads.

0:39:510:39:53

Heads.

0:39:540:39:55

But the area of our lives

0:39:580:39:59

in which these systematic mistakes have the most explosive impact

0:39:590:40:04

is in the world of money.

0:40:040:40:06

The moment money enters the picture, the rules change.

0:40:070:40:11

Many of us think that we're at our most rational

0:40:130:40:16

when it comes to decisions about money.

0:40:160:40:18

We like to think we know how to spot a bargain, to strike a good deal,

0:40:200:40:24

sell our house at the right time, invest wisely.

0:40:240:40:28

Thinking about money the right way

0:40:290:40:32

is one of the most challenging things for human nature.

0:40:320:40:36

But if we're not as rational as we like to think,

0:40:380:40:41

and there is a hidden force at work shaping our decisions,

0:40:410:40:45

are we deluding ourselves?

0:40:450:40:47

Money brings with it a mode of thinking.

0:40:470:40:51

It changes the way we react to the world.

0:40:510:40:54

When it comes to money,

0:40:550:40:57

cognitive biases play havoc with our best intentions.

0:40:570:41:00

There are many mistakes that people make when it comes to money.

0:41:020:41:05

Kahneman's insight into our mistakes with money

0:41:180:41:21

were to revolutionise our understanding of economics.

0:41:210:41:25

It's all about a crucial difference in how we feel

0:41:270:41:29

when we win or lose

0:41:290:41:32

and our readiness to take a risk.

0:41:320:41:35

I would like to take a risk.

0:41:350:41:37

Take a risk, OK?

0:41:370:41:39

Let's take a risk.

0:41:390:41:40

Our willingness to take a gamble is very different depending on

0:41:400:41:44

whether we are faced with a loss or a gain.

0:41:440:41:47

Excuse me, guys, can you spare two minutes to help us

0:41:470:41:49

with a little experiment? Try and win as much money as you can, OK?

0:41:490:41:52

-OK.

-OK?

0:41:520:41:54

In my hands here I have £20, OK?

0:41:540:41:56

Here are two scenarios.

0:41:560:41:58

And I'm going to give you ten.

0:41:580:41:59

In the first case, you are given ten pounds.

0:41:590:42:03

That's now yours. Put it in your pocket, take it away,

0:42:030:42:06

spend it on a drink on the South Bank later.

0:42:060:42:08

-OK.

-OK?

0:42:080:42:09

OK.

0:42:090:42:10

Then you have to make a choice about how much more you could gain.

0:42:100:42:15

You can either take the safe option, in which case,

0:42:150:42:17

I give you an additional five,

0:42:170:42:19

or you can take a risk.

0:42:190:42:21

If you take a risk, I'm going to flip this coin.

0:42:210:42:24

If it comes up heads, you win ten,

0:42:240:42:28

but if it comes up tails,

0:42:280:42:30

you're not going to win any more.

0:42:300:42:32

Would you choose the safe option and get an extra five pounds

0:42:320:42:36

or take a risk and maybe win an extra ten or nothing?

0:42:360:42:40

Which is it going to be?

0:42:400:42:41

I'd go safe.

0:42:450:42:47

-Safe, five?

-Yeah.

0:42:470:42:49

-Take five.

-You'd take five?

-Yeah, man.

-Sure? There we go.

0:42:490:42:52

Most people presented with this choice

0:42:520:42:54

go for the certainty of the extra fiver.

0:42:540:42:56

-Thank you very much.

-Told you it was easy.

0:42:560:42:59

In a winning frame of mind, people are naturally rather cautious.

0:42:590:43:03

That's yours, too.

0:43:030:43:04

That was it?

0:43:040:43:06

-That was it.

-Really?

-Yes.

-Eh?

0:43:060:43:09

But what about losing?

0:43:110:43:13

Are we similarly cautious when faced with a potential loss?

0:43:130:43:17

In my hands, I've got £20 and I'm going to give that to you.

0:43:170:43:20

That's now yours.

0:43:200:43:21

-OK.

-You can put it in your handbag.

0:43:210:43:23

This time, you're given £20.

0:43:230:43:26

And again, you must make a choice.

0:43:280:43:30

Would you choose to accept a safe loss of £5 or would you take a risk?

0:43:320:43:37

If you take a risk, I'm going to flip this coin.

0:43:390:43:42

If it comes up heads, you don't lose anything,

0:43:420:43:45

but if it comes up tails, then you lose ten pounds.

0:43:450:43:48

In fact, it's exactly the same outcome.

0:43:490:43:52

In both cases, you face a choice between ending up with

0:43:520:43:55

a certain £15 or tossing a coin to get either ten or twenty.

0:43:550:44:00
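
That equivalence is worth spelling out. A minimal sketch checking that every combination of choice and coin toss gives the same final amount in both framings:

```python
# Confirm that the "gain" framing and the "loss" framing describe the same
# final outcomes, so a purely logical System 2 should treat them alike.
def gain_frame(safe: bool, coin_heads: bool) -> int:
    start = 10  # you are handed £10
    return start + (5 if safe else (10 if coin_heads else 0))

def loss_frame(safe: bool, coin_heads: bool) -> int:
    start = 20  # you are handed £20
    return start - (5 if safe else (0 if coin_heads else 10))

for safe in (True, False):
    for heads in (True, False):
        assert gain_frame(safe, heads) == loss_frame(safe, heads)

print("Safe choice, either frame:   £", gain_frame(True, True))    # £15
print("Gamble, heads, either frame: £", gain_frame(False, True))   # £20
print("Gamble, tails, either frame: £", gain_frame(False, False))  # £10
```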

-I will risk losing ten or nothing.

-OK.

0:44:010:44:06

But the crucial surprise here is that when the choice is framed

0:44:060:44:09

in terms of a loss, most people take a risk.

0:44:090:44:12

-Take a risk.

-Take a risk, OK.

0:44:130:44:16

-I'll risk it.

-You'll risk it? OK.

0:44:160:44:18

Our slow System 2 could probably work out

0:44:180:44:20

that the outcome is the same in both cases.

0:44:200:44:23

And that's heads, you win.

0:44:230:44:24

But it's too limited and too lazy.

0:44:240:44:27

That's the easiest £20 you'll ever make.

0:44:270:44:29

Instead, fast System 1 makes a rough guess based on change.

0:44:290:44:34

-And that's all there is to it, thank you very much.

-Oh, no! Look.

0:44:340:44:38

And System 1 doesn't like losing.

0:44:380:44:41

If you were to lose £10 in the street today

0:44:470:44:49

and then find £10 tomorrow, you would be financially unchanged

0:44:490:44:53

but actually we respond to changes,

0:44:530:44:55

so the pain of the loss of £10 looms much larger, it feels more painful.

0:44:550:45:00

In fact, you'd probably have to find £20

0:45:000:45:02

to offset the pain that you feel by losing ten.

0:45:020:45:05

Heads.

0:45:050:45:06

At the heart of this, is a bias called loss aversion,

0:45:060:45:10

which affects many of our financial decisions.

0:45:100:45:13

People think in terms of gains and losses.

0:45:130:45:17

-Heads.

-It's tails.

-Oh!

0:45:170:45:20

And in their thinking, typically, losses loom larger than gains.

0:45:200:45:25

We even have an idea by...by how much,

0:45:260:45:29

by roughly a factor of two or a little more than two.

0:45:290:45:33

That is loss aversion,

0:45:340:45:36

and it certainly was the most important thing

0:45:360:45:38

that emerged from our work.

0:45:380:45:40
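
That "factor of two or a little more than two" corresponds to the loss-aversion coefficient in Kahneman and Tversky's prospect theory. A minimal sketch of the value function, using the parameter estimates commonly cited from their 1992 paper (an illustration, not figures given in the programme):

```python
# Prospect-theory value function behind "losses loom larger than gains".
# Parameters are the commonly cited 1992 estimates, used here for illustration.
ALPHA = 0.88   # diminishing sensitivity to both gains and losses
LAMBDA = 2.25  # loss aversion: losses weigh roughly twice as much as gains

def value(x: float) -> float:
    """Subjective value of a gain (x > 0) or loss (x < 0) relative to a reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

print(f"value(+£10) = {value(10):.2f}")    # ≈  7.59
print(f"value(-£10) = {value(-10):.2f}")   # ≈ -17.07
print(f"loss/gain ratio = {abs(value(-10)) / value(10):.2f}")  # = LAMBDA ≈ 2.25
```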

It's a vital insight into human behaviour,

0:45:430:45:46

so important that it led to a Nobel prize

0:45:460:45:49

and the founding of an entirely new branch of economics.

0:45:490:45:53

When we think we're winning, we don't take risks.

0:45:550:45:58

But when we're faced with a loss, frankly, we're a bit reckless.

0:45:590:46:04

But loss aversion doesn't just affect people

0:46:140:46:16

making casual five pound bets.

0:46:160:46:19

It can affect anyone at any time,

0:46:190:46:23

including those who work in the complex system of high finance,

0:46:230:46:27

in which trillions of dollars are traded.

0:46:270:46:30

In our current complex environments,

0:46:320:46:35

we now have the means as well as the motive

0:46:350:46:39

to make very serious mistakes.

0:46:390:46:42

The bedrock of economics is that people think rationally.

0:46:440:46:48

They calculate risks, rewards and decide accordingly.

0:46:480:46:52

But we're not always rational.

0:46:550:46:57

We rarely behave like Mr Spock.

0:46:570:47:00

For most of our decisions, we use fast, intuitive,

0:47:010:47:05

but occasionally unreliable System 1.

0:47:050:47:08

And in a global financial market,

0:47:110:47:13

that can lead to very serious problems.

0:47:130:47:16

I think what the financial crisis did was, it simply said,

0:47:180:47:23

"You know what?

0:47:230:47:25

"People are a lot more vulnerable to psychological pitfalls

0:47:250:47:30

"than we really understood before."

0:47:300:47:33

Basically, human psychology is just too flawed

0:47:330:47:38

to expect that we could avert a crisis.

0:47:380:47:41

Understanding these pitfalls has led to a new branch of economics.

0:47:480:47:53

Behavioural economics.

0:47:540:47:56

Thanks to psychologists like Hersh Shefrin,

0:47:580:48:01

it's beginning to establish a toehold in Wall Street.

0:48:010:48:04

It takes account of the way we actually make decisions

0:48:050:48:09

rather than how we say we do.

0:48:090:48:11

Financial crisis, I think, was as large a problem as it was

0:48:170:48:21

because certain psychological traits like optimism,

0:48:210:48:24

over-confidence and confirmation bias

0:48:240:48:27

played a very large role among a part of the economy

0:48:270:48:31

where serious mistakes could be made, and were.

0:48:310:48:36

But for as long as our financial system assumes we are rational,

0:48:440:48:48

our economy will remain vulnerable.

0:48:480:48:50

I'm quite certain

0:48:520:48:53

that if the regulators listened to behavioural economists early on,

0:48:530:48:58

we would have designed a very different financial system

0:48:580:49:01

and we wouldn't have had the incredible increase

0:49:010:49:05

in the housing market and we wouldn't have had this financial catastrophe.

0:49:050:49:10

And so when Kahneman collected his Nobel prize,

0:49:110:49:15

it wasn't for psychology,

0:49:150:49:17

it was for economics.

0:49:170:49:19

The big question is, what can we do about these systematic mistakes?

0:49:250:49:29

Can we hope to find a way round our fast-thinking biases

0:49:300:49:34

and make better decisions?

0:49:340:49:36

To answer this,

0:49:390:49:40

we need to know the evolutionary origins of our mistakes.

0:49:400:49:43

Just off the coast of Puerto Rico

0:49:460:49:48

is probably the best place in the world to find out.

0:49:480:49:52

The tiny island of Cayo Santiago.

0:49:590:50:03

So we're now in the boat, heading over to Cayo Santiago.

0:50:050:50:08

This is an island filled with a thousand rhesus monkeys.

0:50:080:50:12

Once you pull in, it looks

0:50:120:50:13

a little bit like you're going to Jurassic Park. You're not sure

0:50:130:50:16

what you're going to see. Then you'll see your first monkey,

0:50:160:50:19

and it'll be comfortable, like,

0:50:190:50:21

"Ah, the monkeys are here, everything's great."

0:50:210:50:24

It's an island devoted to monkey research.

0:50:300:50:33

You see the guys hanging out on the cliff up there?

0:50:340:50:37

Pretty cool.

0:50:370:50:38

The really special thing about Cayo Santiago

0:50:410:50:43

is that the animals here,

0:50:430:50:44

because they've grown up over the last 70 years around humans,

0:50:440:50:47

they're completely habituated,

0:50:470:50:49

and that means we can get up close to them, show them stuff,

0:50:490:50:52

look at how they make decisions.

0:50:520:50:53

We're able to do this here

0:50:530:50:54

in a way that we'd never be able to do it anywhere else, really.

0:50:540:50:57

It's really unique.

0:50:570:50:58

Laurie Santos is here to find out

0:51:020:51:04

if monkeys make the same mistakes in their decisions that we do.

0:51:040:51:08

Most of the work we do is comparing humans and other primates,

0:51:110:51:14

trying to ask what's special about humans.

0:51:140:51:16

But really, what we want to understand is,

0:51:160:51:18

what's the evolutionary origin of some of our dumber strategies,

0:51:180:51:21

some of those spots where we get things wrong?

0:51:210:51:23

If we could understand where those came from,

0:51:230:51:26

that's where we'll get some insight.

0:51:260:51:28

If Santos can show us

0:51:310:51:32

that monkeys have the same cognitive biases as us,

0:51:320:51:35

it would suggest that they evolved a long time ago.

0:51:350:51:38

And a mental strategy that old would be almost impossible to change.

0:51:430:51:49

We started this work around the time of the financial collapse.

0:51:500:51:53

So, when we were thinking about

0:51:560:51:57

what dumb strategies could we look at in monkeys, it was pretty obvious

0:51:570:52:01

that some of the human economic strategies

0:52:010:52:03

which were in the news might be the first thing to look at.

0:52:030:52:06

And one of the particular things we wanted to look at was

0:52:060:52:09

whether or not the monkeys are loss averse.

0:52:090:52:12

But monkeys, smart as they are, have yet to start using money.

0:52:120:52:16

And so that was kind of where we started.

0:52:160:52:18

We said, "Well, how can we even ask this question

0:52:180:52:21

"of if monkeys make financial mistakes?"

0:52:210:52:24

And so we decided to do it by introducing the monkeys

0:52:240:52:27

to their own new currency and just let them buy their food.

0:52:270:52:31

So I'll show you some of this stuff we've been up to with the monkeys.

0:52:320:52:36

Back in her lab at Yale,

0:52:370:52:39

she introduced a troop of monkeys to their own market,

0:52:390:52:42

giving them round shiny tokens they could exchange for food.

0:52:420:52:46

So here's Holly.

0:52:480:52:50

She comes in, hands over a token

0:52:500:52:52

and you can see, she just gets to grab the grape there.

0:52:520:52:55

One of the first things we wondered was just,

0:52:550:52:57

can they in some sense learn

0:52:570:52:59

that a different store sells different food at different prices?

0:52:590:53:02

So what we did was, we presented the monkeys with situations

0:53:020:53:06

where they met traders who sold different goods at different rates.

0:53:060:53:10

So what you'll see in this clip is the monkeys meeting a new trader.

0:53:100:53:13

She's actually selling grapes for three grapes per one token.

0:53:130:53:17

And what we found is that in this case,

0:53:210:53:23

the monkeys are pretty rational.

0:53:230:53:24

So when they get a choice of a guy who sells, you know,

0:53:240:53:26

three goods for one token, they actually shop more at that guy.

0:53:260:53:30

Having taught the monkeys the value of money,

0:53:360:53:38

the next step was to see if monkeys, like humans,

0:53:380:53:42

suffer from that most crucial bias, loss aversion.

0:53:420:53:46

And so what we did was, we introduced the monkeys to traders

0:53:480:53:52

who either gave out losses or gains relative to what they should.

0:53:520:53:56

So I could make the monkey think he's getting a bonus

0:53:570:54:00

simply by having him trade

0:54:000:54:01

with a trader who's starting with a single grape

0:54:010:54:04

but then when the monkey pays this trader,

0:54:040:54:06

she actually gives him an extra, so she gives him a bonus.

0:54:060:54:08

At the end, the monkey gets two,

0:54:080:54:10

but he thinks he got that second one as a bonus.

0:54:100:54:13

We can then compare what the monkeys do with that guy

0:54:130:54:16

versus a guy who gives the monkey losses.

0:54:160:54:18

This is a guy who shows up,

0:54:180:54:19

who pretends he's going to sell three grapes,

0:54:190:54:22

but then when the monkey actually pays this trader,

0:54:220:54:25

he'll take one of the grapes away and give the monkeys only two.

0:54:250:54:28

The big question then is how the monkeys react

0:54:300:54:33

when faced with a choice between a loss and a gain.

0:54:330:54:36

So she'll come in, she's met these two guys before.

0:54:370:54:40

You can see she goes with the bonus option,

0:54:400:54:42

even waits patiently for her additional piece to be added here,

0:54:420:54:46

and then takes the bonus, avoiding the person who gives her losses.

0:54:470:54:51

So monkeys hate losing just as much as people.

0:54:570:55:00

And crucially, Santos found that monkeys, as well,

0:55:080:55:11

are more likely to take risks when faced with a loss.

0:55:110:55:15

This suggests to us

0:55:190:55:20

that the monkeys seem to frame their decisions

0:55:200:55:23

in exactly the same way we do.

0:55:230:55:24

They're not thinking just about the absolute,

0:55:240:55:27

they're thinking relative to what they expect.

0:55:270:55:29

And when they're getting less than they expect,

0:55:290:55:32

when they're getting losses, they too become more risk-seeking.

0:55:320:55:35

The fact that we share this bias with these monkeys suggests

0:55:380:55:42

that it's an ancient strategy etched into our DNA

0:55:420:55:45

more than 35 million years ago.

0:55:450:55:47

And what we learn from the monkeys is that

0:55:500:55:53

if this bias is really that old,

0:55:530:55:55

if we really have had this strategy for the last 35 million years,

0:55:550:55:59

simply deciding to overcome it is just not going to work.

0:55:590:56:02

We need better ways to make ourselves avoid some of these pitfalls.

0:56:020:56:06

Making mistakes, it seems, is just part of what it is to be human.

0:56:080:56:13

We are stuck with our intuitive inner stranger.

0:56:190:56:23

The challenge this poses is profound.

0:56:260:56:29

If it's human nature to make these predictable mistakes

0:56:310:56:34

and we can't change that, what, then, can we do?

0:56:340:56:38

We need to accept ourselves as we are.

0:56:400:56:43

The cool thing about being a human versus a monkey

0:56:430:56:46

is that we have a deliberative self that can reflect on our biases.

0:56:460:56:50

System 2 in us has for the first time realised

0:56:500:56:53

that there's a System 1, and with that realisation,

0:56:530:56:56

we can shape the way we set up policies.

0:56:560:56:59

We can shape the way we set up situations to allow ourselves

0:56:590:57:03

to make better decisions.

0:57:030:57:04

This is the first time in evolution that this has happened.

0:57:040:57:07

If we want to avoid mistakes, we have to reshape the environment

0:57:120:57:16

we've built around us rather than hope to change ourselves.

0:57:160:57:20

We've achieved a lot despite all of these biases.

0:57:270:57:31

If we are aware of them, we can probably do things

0:57:310:57:34

like design our institutions and our regulations

0:57:340:57:38

and our own personal environments and working lives to minimise

0:57:380:57:43

the effect of those biases

0:57:430:57:45

and help us think about how to overcome them.

0:57:450:57:49

We are limited, we're not perfect.

0:57:540:57:56

We're irrational in all kinds of ways,

0:57:560:57:58

but we can build a world that is compatible with this

0:57:580:58:01

and get us to make better decisions rather than worse decisions.

0:58:010:58:05

That's my hope.

0:58:050:58:06

And by accepting our inner stranger,

0:58:100:58:13

we may come to a better understanding of our own minds.

0:58:130:58:18

I think it is important, in general,

0:58:180:58:21

to be aware of where beliefs come from.

0:58:210:58:24

And if we think that we have reasons for what

0:58:270:58:30

we believe, that is often a mistake,

0:58:300:58:34

that our beliefs and our wishes and our hopes

0:58:340:58:38

are not always anchored in reasons.

0:58:380:58:41

They're anchored in something else

0:58:410:58:43

that comes from within and is different.

0:58:430:58:46
