Professor Alan Winfield - Bristol Robotics Laboratory HARDtalk



Transcript




Now it's time for HARDtalk.

0:00:000:00:08

Welcome to HARDtalk

with me, Stephen Sackur.

0:00:080:00:19

For now, the BBC employs human

beings like me to question the way

0:00:190:00:23

the world works. For how much

longer?

0:00:230:00:29

As research into artificial

intelligence intensifies,

0:00:290:00:30

is there any sphere of human

activity that won't be

0:00:300:00:33

revolutionised by AI and robotics?

0:00:330:00:34

My guest today is Alan Winfield,

a world-renowned professor

0:00:350:00:37

of robot ethics.

0:00:370:00:38

From driving, to education,

to work and warfare,

0:00:380:00:40

are we unleashing machines

which could turn the dark visions

0:00:400:00:43

of science fiction

into science fact?

0:00:430:01:11

Alan Winfield, welcome to HARDtalk.

0:01:110:01:17

Delighted to be here, Stephen.

0:01:170:01:18

You do have a fascinating title,

director of robot ethics,

0:01:190:01:21

I'm tempted to ask you,

what's more important to you,

0:01:210:01:25

the engineering, robotics

or the ethics, being an ethicist?

0:01:250:01:31

Both are equally important.

0:01:310:01:33

I'm fundamentally an engineer

so I bring an engineering

0:01:330:01:35

perspective to robotics but more

than half of my work now

0:01:350:01:38

is thinking about...

0:01:380:01:39

I'm kind of a professional

worrier now.

0:01:390:02:00

Would you say the balance has

shifted over the course

0:02:010:02:03

of your career?

0:02:030:02:04

You started out very much

in computers and engineering

0:02:040:02:06

but increasingly as you have dug

deeply into the subject,

0:02:060:02:09

in a sense the more philosophical

side of it has been writ

0:02:090:02:12

large for you?

0:02:120:02:13

Absolutely right.

0:02:130:02:14

It was really getting involved

in public engagement,

0:02:140:02:16

robotics public engagement, 15 years

ago that if you like alerted me

0:02:160:02:19

and sensitised me to the ethical

questions around robotics and AI.

0:02:190:02:22

Let's take this phrase,

artificial intelligence.

0:02:220:02:24

It raises an immediate

question in my mind,

0:02:240:02:26

how we define intelligence.

0:02:260:02:28

I wonder if you could

do that for me?

0:02:290:02:31

It's really difficult,

one of the fundamental philosophical

0:02:310:02:33

problems with AI is we don't

have a satisfactory definition

0:02:330:02:36

for natural intelligence.

0:02:360:02:37

Here's a simple definition,

it's doing the right thing

0:02:370:02:40

at the right time, but that's not

very helpful from a scientific

0:02:400:02:43

point of view.

0:02:430:02:44

One thing we can say

about intelligence is it's not one

0:02:440:02:47

thing we all have more or less of.

0:02:470:02:55

What about thinking?

0:02:550:02:58

Are we really, as in the course

of this conversation,

0:02:580:03:00

talking about the degree

to which human beings can make

0:03:000:03:03

machines that think?

0:03:030:03:04

I think thinking

is a dangerous word.

0:03:040:03:07

It's an anthropomorphisation

and in fact more than that it's

0:03:070:03:12

a humanisation of

the term intelligence.

0:03:120:03:14

A lot of the intelligence

you and I have is nothing to do

0:03:140:03:18

with conscious reflective thought.

0:03:180:03:22

One of the curious things about AI

is that what we thought would be

0:03:220:03:26

very difficult 60 years ago,

like playing board games,

0:03:260:03:28

chess, Go as it happens,

has turned out to be not easy

0:03:280:03:32

but relatively easy,

whereas what we thought would be

0:03:320:03:34

very easy 60 years ago,

like making a cup of tea in somebody

0:03:340:03:38

else's kitchen, has turned out to be

enormously difficult.

0:03:380:03:52

It's interesting you light

upon board games so quickly

0:03:520:03:55

because in the news

0:03:550:03:56

in the last few days we've seen

something quite interesting,

0:03:560:03:58

Google's DeepMind department has

this machine, computer,

0:03:590:04:00

call it what you will,

the AlphaGo Zero I think they call

0:04:000:04:04

it, which has achieved I think

astounding results playing this

0:04:040:04:06

game, I'm not familiar with it,

a game called Go, mainly played

0:04:060:04:10

in China, extremely complex,

more moves in it, more complexity

0:04:100:04:12

than chess, and this machine is now

capable of beating it seems any

0:04:120:04:16

human Grandmaster and the real thing

about it is it's a machine that

0:04:160:04:19

appears to learn unsupervised.

0:04:190:04:35

That's right.

0:04:350:04:36

I must admit, I'm somewhat baffled

by this, you said don't think

0:04:360:04:39

about thinking but it seems this

is a machine that thinks.

0:04:390:04:42

It's a machine that does

an artificial analogue of thinking,

0:04:420:04:45

it doesn't do it in

the way you and I do.

0:04:450:04:49

The technology is based

on what are called artificial neural

0:04:490:04:52

networks and they are if you like

an abstract model of biological

0:04:520:04:55

networks, neural networks,

brains in other words,

0:04:550:04:59

which actually we don't understand

very well curiously but we can

0:04:590:05:03

still make very simple

abstract models, and that's

0:05:030:05:05

what the technology is.

0:05:050:05:14
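
For readers who want a concrete picture of what a "very simple abstract model" of a neural network looks like, here is a minimal illustrative sketch in Python. It is not DeepMind's code; the layer sizes, weights and input values are invented purely for illustration. Each artificial neuron forms a weighted sum of its inputs and passes it through a nonlinearity, and "learning" means adjusting those weights.

```python
import numpy as np

def layer(x, weights, bias):
    """One layer of artificial neurons: weighted sums of the inputs passed through a nonlinearity."""
    return np.tanh(weights @ x + bias)

# A tiny two-layer network: three inputs -> two hidden neurons -> one output.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(2, 3)), np.zeros(2)   # hidden-layer weights and biases
w2, b2 = rng.normal(size=(1, 2)), np.zeros(1)   # output-layer weights and biases

x = np.array([0.5, -0.2, 0.1])                  # an arbitrary input
y = layer(layer(x, w1, b1), w2, b2)
print(y)  # training would mean adjusting w1, b1, w2, b2 to reduce an error signal
```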

The way to think about the way it

learns, and it is a remarkable

0:05:140:05:17

breakthrough, I don't

want to over-hype it because it only

0:05:170:05:20

plays Go, it can't make

a cup of tea for you,

0:05:200:05:23

but the very interesting thing

is the early generations effectively

0:05:230:05:26

had to be trained on data

that was gleaned from human experts

0:05:260:05:29

and many, many games of Go.

0:05:290:05:34

It had to be loaded

with external information?

0:05:340:05:36

Essentially, that's right.

0:05:370:05:38

That's what we call supervised

learning, whereas the new version,

0:05:380:05:42

and again, if I understand it

correctly, I only scanned the Nature

0:05:420:05:45

paper this morning, is doing

unsupervised learning.

0:05:450:05:51

We actually technically call it

reinforcement learning.

0:05:510:05:55

The idea is that the machine

is given nothing else

0:05:550:06:00

than if you like the rules

of the game and its world

0:06:000:06:03

is the board, the Go

board and the pieces,

0:06:040:06:06

and then it just essentially plays

against itself millions

0:06:060:06:08

and millions of times.

0:06:080:06:15

It's a bit like, you know,

a human infant learning how to,

0:06:150:06:18

I don't know, play with building

blocks, Lego, entirely

0:06:190:06:21

on his or her own by just learning

over and over again.

0:06:210:06:37

Of course, humans don't

actually learn like that,

0:06:370:06:39

mostly we learn with supervision,

with parents, teachers,

0:06:390:06:41

brothers and sisters,

family and so on.

0:06:410:06:43
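
To make the self-play idea concrete, here is a minimal, purely illustrative Python sketch on a far simpler game than Go: a single pile of stones, take one or two per turn, and whoever takes the last stone wins. This is not DeepMind's method (AlphaGo Zero combines deep neural networks with tree search); the game, the names and the parameters below are assumptions made for illustration only. The learner is given nothing but the rules and improves by repeatedly playing against itself.

```python
import random
from collections import defaultdict

Q = defaultdict(float)      # learned value of (stones_left, action) for the player to move
ALPHA, EPSILON = 0.1, 0.2   # learning rate and exploration rate

def choose(stones):
    """Pick a move: mostly the best-looking one, occasionally a random one to explore."""
    actions = [a for a in (1, 2) if a <= stones]
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(stones, a)])

def play_one_game(start=10):
    """Both 'players' share the same policy, so every game is the learner playing itself."""
    stones, history = start, []
    while stones > 0:
        a = choose(stones)
        history.append((stones, a))
        stones -= a
    # Whoever moved last took the final stone and wins (+1); the other side loses (-1).
    for i, (s, a) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(s, a)] += ALPHA * (reward - Q[(s, a)])

for _ in range(20000):      # "millions and millions of times", in miniature
    play_one_game()

# Inspect the move the learner now prefers at each pile size; a perfect player
# always leaves the opponent a multiple of three stones.
print({s: max((1, 2), key=lambda a: Q[(s, a)]) for s in range(2, 11)})
```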

You're prepared to use

a word like learning,

0:06:430:06:45

thinking you don't like,

learning you're prepared to apply

0:06:450:06:48

to a machine?

0:06:480:06:49

Yes.

0:06:490:06:51

What I want to get to,

before we go into the specifics

0:06:510:06:54

of driverless cars and autonomous

fighting machines and all of that,

0:06:540:06:58

I still want to stay

with big-picture stuff.

0:06:580:07:00

The human brain, you've already

mentioned the human brain,

0:07:000:07:03

it's the most complex mechanism

we know on this planet.

0:07:030:07:05

In the universe in fact.

0:07:050:07:11

Is it possible, talking

about the way that Google DeepMind

0:07:110:07:14

and others are developing

artificial intelligence,

0:07:140:07:15

that we can all look to create

machines that are as complex

0:07:160:07:19

with the billions and trillions

of moving parts, if I can put it

0:07:190:07:22

that way, that the human

brain possesses?

0:07:220:07:29

I would say in principle, yes,

but not for a very long time.

0:07:290:07:33

I think the problem of making an AI

or robot if you like,

0:07:330:07:36

a robot is just AI in a physical

body, that is comparable

0:07:360:07:40

in intelligence to a human being,

an average human being if you like,

0:07:400:07:44

averagely intelligent human being,

is extraordinarily difficult and part

0:07:440:07:47

of the problem, part of the reason

it's so difficult is we don't

0:07:470:07:54

actually have the design,

if you like, the architecture

0:07:540:07:56

of human minds.

0:07:560:08:06

But in principle you

think we can get it?

0:08:060:08:09

What I'm driving at really is this

principal philosophical question

0:08:090:08:11

of what the brain is.

0:08:110:08:13

To you, Professor, is the brain

in the end chemistry?

0:08:130:08:15

Is it material?

0:08:150:08:16

Is it a lump of matter?

0:08:160:08:21

Yes.

0:08:210:08:22

Does it have any spiritual

or any other intangible thing?

0:08:220:08:26

It is chemistry?

0:08:260:08:29

I'm a materialist, yes,

the brain is thinking meat.

0:08:290:08:32

That is a bit of a copout.

0:08:320:08:37

You said thinking meat,

it is meat and the way that meat

0:08:370:08:40

is arranged means it could think,

so you could create something

0:08:400:08:43

artificial where, if it was

as complex and well arranged

0:08:430:08:46

as human capacity could make it one

day, it could also think?

0:08:460:08:54

I believe in principle, yes.

0:08:540:08:56

But the key thing is architecture.

0:08:560:08:59

In a sense, the way to think

about the current work on artificial

0:08:590:09:03

intelligence, we have these

artificial neural networks

0:09:030:09:05

which are almost like

the building blocks.

0:09:050:09:11

It's a bit like having marble,

but just having a lot of wonderful

0:09:110:09:14

Italian marble doesn't mean

you can make a cathedral,

0:09:140:09:17

you need to have the design,

you need to have the architecture

0:09:170:09:20

and the know-how to build that

cathedral and we don't have

0:09:200:09:23

anything like that.

0:09:230:09:26

One more general point and then

I want to get down to the specifics.

0:09:260:09:30

Nick Bostrom at Oxford University,

you know him, I know you do

0:09:300:09:33

because he works in the same field

as you, he says we have to think of AI

0:09:330:09:39

as a fundamental game

changer for humanity.

0:09:390:09:43

It could be the last invention that

human intelligence ever needs

0:09:430:09:47

to make, he says, because it's

the beginning of a completely

0:09:470:09:50

new era, the machine intelligence

era and in a sense we are a bit

0:09:500:09:53

like children playing with something

we have picked up and it happens

0:09:530:09:57

to be an unexploded bomb

and we don't even know

0:09:570:09:59

the consequences that

could come with it.

0:09:590:10:01

Do you share that vision?

0:10:010:10:05

I partially share it.

0:10:050:10:08

Where I disagree with Nick is that

I don't think we are under threat

0:10:080:10:12

from a kind of runaway super

intelligence, which is the thesis

0:10:120:10:15

of his book on that subject,

Superintelligence, but I do think

0:10:150:10:19

we need to be ever so careful.

0:10:190:10:24

In a way, I alluded to this earlier,

we don't understand what natural

0:10:240:10:27

intelligence is, we don't have any

general scientific theory

0:10:270:10:30

of intelligence so trying to build

artificial general intelligence

0:10:300:10:33

is a bit like trying to do particle

physics at CERN without any theory,

0:10:330:10:37

without any underlying

scientific theory.

0:10:370:10:44

It seems to me that we need both

some serious theory,

0:10:450:10:49

which we don't have yet,

we have some but it isn't unified,

0:10:490:10:52

there isn't a single

theory, if you like, like

0:10:530:10:55

the standard model in physics.

0:10:550:11:02

We also need to do responsible

research and innovation.

0:11:020:11:05

In other words we need to innovate

ethically to make sure any as it

0:11:050:11:09

were unintended consequences

are foreseen and we head them off.

0:11:090:11:15

Let's talk in a more practical

sense, unintended consequences may

0:11:150:11:18

well come up.

0:11:180:11:20

Let's start with something I think

most of us are aware of now,

0:11:200:11:24

and regard as one of the most both

challenging and perhaps exciting

0:11:240:11:27

specific AI achievements,

that is the driverless car.

0:11:270:11:37

Yes.

0:11:370:11:39

It seems to me all sorts of issues

are raised by a world

0:11:390:11:42

in which cars are driverless.

0:11:420:11:44

A lot of moral and ethical issues

as well as practical ones.

0:11:440:11:47

You work with people in this field,

are you excited by driverless cars?

0:11:480:11:51

I am, yes.

0:11:510:11:52

I think driverless cars have

tremendous potential

0:11:520:11:54

for two things...

0:11:540:11:59

Do you see them as robots?

0:11:590:12:00

I do, yes, a driverless

car is a robot.

0:12:000:12:03

Typically once a robot becomes part

of normal life we stop calling it

0:12:030:12:06

a robot, like a vacuum cleaner.

0:12:060:12:12

I think there are two tremendous

advances from driverless cars we can

0:12:120:12:15

look forward to, one is reducing

the number of people killed in road

0:12:150:12:19

traffic accidents significantly,

if we can achieve that,

0:12:190:12:21

so I'm going to be cautious

when I speak more on this.

0:12:210:12:24

The other is giving mobility

to people, elderly people,

0:12:240:12:26

disabled people who

currently don't have that.

0:12:260:12:39

Both of those are very practical

but Science Magazine last year

0:12:390:12:42

studied a group of almost

2,000 people, asked them

0:12:430:12:46

about what they wanted to see

in terms of the moral algorithms

0:12:460:12:50

of driverless cars,

how the programming of the car

0:12:500:12:53

would be developed to ensure that,

for example, in a hypothetical,

0:12:530:12:56

if a car was on the road

and it was about to crash but if it

0:12:560:13:00

veered off the road to avoid a crash

it would hit a group

0:13:000:13:09

of schoolchildren being led

by a teacher down the road.

0:13:090:13:12

The public in this survey wanted

to know that the car

0:13:120:13:16

would in the end accept its own

destruction and that of its driver,

0:13:160:13:19

human passenger rather,

as opposed to saving itself

0:13:200:13:22

and ploughing into the children

on the side of the road.

0:13:220:13:25

How do you as a robot ethicist cope

with this sort of challenge?

0:13:250:13:32

The first thing I'd say is let's not

get it out of proportion.

0:13:330:13:39

You have to ask yourself as a human

driver, probably like me you've got

0:13:390:13:43

many years of experience

of driving, have you ever

0:13:430:13:45

encountered that situation?

0:13:450:13:47

Not in my case, but I want to know

if I ever step into a driverless car

0:13:470:13:52

someone has thought about this.

0:13:520:13:56

I think you're right.

0:13:560:13:58

The ethicists and the

lawyers are not clear.

0:13:580:14:01

The point is we need

to have a conversation.

0:14:010:14:03

I think it's really important that

if we have driverless cars that make

0:14:030:14:07

those kinds of ethical decisions,

you know, that essentially decide

0:14:070:14:10

whether to potentially

harm the occupants...

0:14:100:14:15

You're doing what you told me

off for doing, you're

0:14:150:14:18

anthropomorphising, it wouldn't be

making an ethical decision,

0:14:180:14:20

it would be reflecting

the values of the programmer.

0:14:200:14:24

Those rules need to be decided

by the whole of society.

0:14:240:14:30

The fact is, whatever those rules

are, there will be occasions

0:14:300:14:33

when the rules result

in consequences that we don't

0:14:330:14:36

like and therefore I think

the whole of society needs

0:14:360:14:38

to if you like own the

responsibility for those cases.

0:14:380:14:46

So you are making a call,

be it driverless cars or any other

0:14:460:14:50

examples we're thinking

about with AI, for the technological

0:14:500:14:53

developments to move in lockstep

with a new approach to monitoring,

0:14:530:14:55

regulation, universal

standardisation.

0:14:550:14:59

And a conversation, a big

conversation in society

0:14:590:15:09

so that we own the ethics

that we decide should be embedded.

0:15:090:15:14

But that will not help.

0:15:140:15:16

Much of the development here,

I mean, you work at Bristol

0:15:160:15:21

in a robotics lab but a lot

of cutting-edge work is being done

0:15:210:15:24

in the private sector.

0:15:240:15:25

Some of it is done by secretive

defence establishments.

0:15:250:15:28

There is no standardisation,

there is no cooperation.

0:15:280:15:30

It is a deeply competitive world.

0:15:300:15:35

It jolly well needs to be.

0:15:350:15:43

My view is simple.

0:15:430:15:47

The autopilot of a driverless car

should be subject to the same levels

0:15:470:15:50

of compliance with safety standards

as the autopilot of an aircraft.

0:15:500:15:56

We all accept...

0:15:560:16:03

You and I would not get

into an aircraft if we thought

0:16:030:16:07

that the autopilot had not met those

very high standards.

0:16:070:16:10

It is inconceivable that we could

allow driverless cars on our roads

0:16:100:16:13

that have not passed those kind

of safety certification processes.

0:16:130:16:24

Leaving driverless cars

and going to areas that are,

0:16:240:16:27

perhaps, more problematic

for human beings.

0:16:270:16:30

Let's develop the idea

of the machine, the future

0:16:300:16:32

intelligent machine,

taking jobs and roles that have

0:16:330:16:35

traditionally always been done

by human beings because they involve

0:16:350:16:37

things like empathy and care.

0:16:370:16:41

And compassion.

0:16:410:16:46

I'm thinking about roles

of social care and education.

0:16:460:16:50

Even, frankly, a sexual partner

because we all now read

0:16:500:16:53

about the sexbots that

are being developed.

0:16:530:16:55

In these roles, do you feel

comfortable with the notion that

0:16:550:16:59

machines will take over

from human beings?

0:16:590:17:04

No.

0:17:040:17:06

And I do not think they will.

0:17:060:17:08

But they already are...

0:17:080:17:10

Japan has care robots.

0:17:100:17:11

A care robot may well be

able to care for you,

0:17:110:17:14

for example, for your physical

needs, but it cannot care about you.

0:17:140:17:17

Only humans can care about either

humans or any other animal

0:17:180:17:25

or object. Robots cannot care

about people or things,

0:17:250:17:29

for that matter.

0:17:290:17:34

And the same is true for teachers.

0:17:340:17:38

Teachers typically care

about their classes.

0:17:380:17:42

You think some people are getting

way overheated about this?

0:17:420:17:46

One of the most well-known teachers

here in Britain now says that

0:17:460:17:49

in his vision of a future education

system, many children will be taught

0:17:490:17:53

one-on-one in a spectacular

new way by machines.

0:17:530:18:00

He says it is like giving

every child access to

0:18:000:18:03

the best private school.

0:18:030:18:08

Ultimately, there may well be...

0:18:080:18:14

And we are talking

about into the future,

0:18:140:18:17

some combination of machine

teaching and human teaching.

0:18:170:18:23

You cannot take the human out.

0:18:230:18:26

An important thing to remember

here is particularly human

0:18:260:18:28

characteristics of

empathy, sympathy...

0:18:280:18:32

Theory of mind, the ability

to anticipate, to read each other.

0:18:320:18:37

These are uniquely human

characteristics as is our creativity

0:18:370:18:40

and innovation and intuition.

0:18:400:18:41

These are things we have no idea how

to build artificially.

0:18:410:18:50

Jobs that involve

those things are safe.

0:18:500:18:58

Interesting.

0:18:580:18:58

People nowadays are looking

at doomsday scenarios

0:18:580:19:00

with the development of robotics

and AI where frankly,

0:19:000:19:04

most jobs one can think of -

and I was being flippant earlier

0:19:040:19:11

about being replaced by a robot -

but you suggest to me that so many

0:19:110:19:15

different jobs, not just blue-collar

but white-collar as well will be

0:19:150:19:18

done by machines...

0:19:180:19:19

Again, are we overstating it?

0:19:190:19:21

I think we are.

0:19:210:19:22

Yes.

0:19:230:19:23

I am not saying that will not happen

eventually but I think that

0:19:230:19:28

what we have is much more time

than people suppose to find

0:19:280:19:33

a harmonious, if you like,

accommodation between

0:19:330:19:35

human and machine.

0:19:350:19:39

That actually allows us to exploit

the qualities of humans

0:19:390:19:43

and the skills, the things

that humans want to do

0:19:430:19:46

which is, you know...

0:19:460:19:55

If you don't mind me saying,

you seem both extraordinarily

0:19:550:19:57

sanguine and comfortable

and optimistic about the way

0:19:570:20:00

in which AI is developing under

the control of human beings,

0:20:000:20:04

and your faith in humanity's ability

to co-operate on this and establish

0:20:040:20:08

standards seems to fly

in the face of the facts.

0:20:080:20:11

One area is weaponisation.

0:20:110:20:13

The notion that AI and robotics

will revolutionise warfare

0:20:130:20:16

and war fighting.

0:20:160:20:21

You were one of 1,000 senior

scientists who signed an appeal

0:20:210:20:24

for a ban on AI weaponry in 2015.

0:20:240:20:26

But that will not happen, will it?

0:20:260:20:35

The ban...

0:20:350:20:35

It may...

0:20:350:20:36

Did you see what

Vladimir Putin said?

0:20:360:20:38

He said that artificial intelligence

is the future for Russia

0:20:380:20:40

and all of humankind -

and this is the key bit,

0:20:400:20:44

"Whoever becomes the leader in this

sphere will become the ruler

0:20:440:20:46

of the world."

0:20:460:20:51

I am an optimist but I am also very

worried about exactly this thing.

0:20:510:20:55

We have already seen, if you like,

the political weaponisation of AI.

0:20:550:20:58

It is clear, isn't it,

that the evidence is mounting that

0:20:580:21:01

AI was used in recent elections?

0:21:010:21:04

You are talking about the hacking?

0:21:040:21:05

And some of that

we believe is from Russia?

0:21:060:21:11

That is political weaponisation

and we do need to be worried

0:21:110:21:14

about these things.

0:21:140:21:19

We do need to have ethical

standards and we need

0:21:200:21:22

to have worldwide agreement.

0:21:220:21:29

I am optimistic about a ban

on lethal autonomous weapons

0:21:290:21:32

systems.

0:21:320:21:35

The campaign is gaining traction,

there have been all kinds

0:21:350:21:38

of discussions and I know some

of the people involved quite well

0:21:380:21:41

in the United Nations.

0:21:410:21:45

But we know the limitations

of the United Nations

0:21:450:21:48

and the limitations of politics

and we know that human nature

0:21:480:21:51

usually leads to the striving

to compete and to win,

0:21:510:21:54

whether it be in politics

or in the battlefield.

0:21:540:21:57

Can I leave you with this thought?

0:21:570:21:58

It seems that there is a debate

within science and you are on one

0:21:590:22:02

side being sanguine

and optimistic.

0:22:020:22:04

Perhaps on the other side,

Stephen Hawking recently said

0:22:040:22:06

that the development of full

artificial intelligence could spell

0:22:060:22:09

the end of the human race.

0:22:090:22:10

Do you find that kind of thought

helpful or deeply unhelpful?

0:22:100:22:16

Deeply unhelpful.

0:22:160:22:17

The problem is that

it is not inevitable.

0:22:170:22:22

What he is talking about here

is a very small probability.

0:22:220:22:26

If you like it is a very long

series of if this happens,

0:22:260:22:30

then if that happens,

then this and so on and on.

0:22:300:22:33

I wrote about this in

the Observer back in 2014.

0:22:330:22:35

Before Stephen Hawking got

involved in this debate.

0:22:350:22:42

My view is that we are worrying

about an extraordinarily unlikely

0:22:420:22:46

event, that is the

intelligence explosion,...

0:22:460:22:55

Do you think we have actually been

too conditioned by science fiction

0:22:550:22:58

and by the Terminator concept?

0:22:580:23:01

We have.

0:23:010:23:02

There is no doubt.

0:23:020:23:03

The problem is that

we are fascinated.

0:23:030:23:05

It is a combination of fear

and fascination and that is why

0:23:050:23:08

we love science fiction.

0:23:080:23:14

But in your view it is fiction?

0:23:140:23:16

That scenario is fiction, but

you are quite right,

0:23:160:23:18

there are things we

should worry about now.

0:23:190:23:21

We need to worry about jobs,

we need to worry about weaponisation

0:23:210:23:24

of AI and we need to worry

about standards in driverless cars

0:23:240:23:27

and care robots, in

medical diagnosis AIs.

0:23:270:23:29

There are many things that

are here and now problems

0:23:290:23:32

in the sense that they are kind of more

to do with the fact that AI is not

0:23:320:23:36

very intelligent so we need to worry

about artificial stupidity.

0:23:370:23:46

That is a neat way of ending.

0:23:460:23:48

Alan Winfield, thank you very much.

0:23:480:23:51
