HARDtalk: Professor Alan Winfield - Bristol Robotics Laboratory
Line | From | To | |
---|---|---|---|
Now it's time for HARDtalk. | 0:00:00 | 0:00:08 | |
Welcome to HARDtalk
with me, Stephen Sackur. | 0:00:08 | 0:00:19 | |
For now, the BBC employs human
beings like me to question the way | 0:00:19 | 0:00:23 | |
the world works.
For how much longer? | 0:00:23 | 0:00:29 | |
As research into artificial
intelligence intensifies, | 0:00:29 | 0:00:30 | |
is there any sphere of human
activity that won't be | 0:00:30 | 0:00:33 | |
revolutionised by AI and robotics? | 0:00:33 | 0:00:34 | |
My guest today is Alan Winfield,
a world-renowned professor | 0:00:35 | 0:00:37 | |
of robot ethics. | 0:00:37 | 0:00:38 | |
From driving, to education,
to work and warfare, | 0:00:38 | 0:00:40 | |
are we unleashing machines
which could turn the dark visions | 0:00:40 | 0:00:43 | |
of science fiction
into science fact? | 0:00:43 | 0:01:11 | |
Alan Winfield, welcome to HARDtalk. | 0:01:11 | 0:01:17 | |
Delighted to be here, Stephen. | 0:01:17 | 0:01:18 | |
You do have a fascinating title,
director of robot ethics, | 0:01:19 | 0:01:21 | |
I'm tempted to ask you,
what's more important to you, | 0:01:21 | 0:01:25 | |
the engineering, robotics
or the ethics, being an ethicist? | 0:01:25 | 0:01:31 | |
Both are equally important. | 0:01:31 | 0:01:33 | |
I'm fundamentally an engineer
so I bring an engineering | 0:01:33 | 0:01:35 | |
perspective to robotics but more
than half of my work now | 0:01:35 | 0:01:38 | |
is thinking about... | 0:01:38 | 0:01:39 | |
I'm kind of a professional
worrier now. | 0:01:39 | 0:02:00 | |
Would you say the balance has
shifted over the course | 0:02:01 | 0:02:03 | |
of your career? | 0:02:03 | 0:02:04 | |
You started out very much
in computers and engineering | 0:02:04 | 0:02:06 | |
but increasingly as you have dug
deeply into the subject, | 0:02:06 | 0:02:09 | |
in a sense the more philosophical
side of it has been writ | 0:02:09 | 0:02:12 | |
large for you? | 0:02:12 | 0:02:13 | |
Absolutely right. | 0:02:13 | 0:02:14 | |
It was really getting involved
in public engagement, | 0:02:14 | 0:02:16 | |
robotics public engagement, 15 years
ago that if you like alerted me | 0:02:16 | 0:02:19 | |
and sensitised me to the ethical
questions around robotics and AI. | 0:02:19 | 0:02:22 | |
Let's take this phrase,
artificial intelligence. | 0:02:22 | 0:02:24 | |
It raises an immediate
question in my mind, | 0:02:24 | 0:02:26 | |
how we define intelligence. | 0:02:26 | 0:02:28 | |
I wonder if you could
do that for me? | 0:02:29 | 0:02:31 | |
It's really difficult,
one of the fundamental philosophical | 0:02:31 | 0:02:33 | |
problems with AI is we don't
have a satisfactory definition | 0:02:33 | 0:02:36 | |
for natural intelligence. | 0:02:36 | 0:02:37 | |
Here's a simple definition,
it's doing the right thing | 0:02:37 | 0:02:40 | |
at the right time, but that's not
very helpful from a scientific | 0:02:40 | 0:02:43 | |
point of view. | 0:02:43 | 0:02:44 | |
One thing we can say
about intelligence is it's not one | 0:02:44 | 0:02:47 | |
thing we all have more or less of. | 0:02:47 | 0:02:55 | |
What about thinking? | 0:02:55 | 0:02:58 | |
Are we really, as in the course
of this conversation, | 0:02:58 | 0:03:00 | |
talking about the degree
to which human beings can make | 0:03:00 | 0:03:03 | |
machines that think? | 0:03:03 | 0:03:04 | |
I think thinking
is a dangerous word. | 0:03:04 | 0:03:07 | |
It's an anthropomorphisation
and in fact, more than that, it's | 0:03:07 | 0:03:12 | |
a humanisation of
the term intelligence. | 0:03:12 | 0:03:14 | |
A lot of the intelligence
you and I have is nothing to do | 0:03:14 | 0:03:18 | |
with conscious reflective thought. | 0:03:18 | 0:03:22 | |
One of the curious things about AI
is that what we thought would be | 0:03:22 | 0:03:26 | |
very difficult 60 years ago,
like playing board games, | 0:03:26 | 0:03:28 | |
chess, Go as it happens,
has turned out to be not easy | 0:03:28 | 0:03:32 | |
but relatively easy,
whereas what we thought would be | 0:03:32 | 0:03:34 | |
very easy 60 years ago,
like making a cup of tea in somebody | 0:03:34 | 0:03:38 | |
else's kitchen, has turned out to be
enormously difficult. | 0:03:38 | 0:03:52 | |
It's interesting you light
upon board games so quickly | 0:03:52 | 0:03:55 | |
because in the news | 0:03:55 | 0:03:56 | |
in the last few days we've seen
something quite interesting, | 0:03:56 | 0:03:58 | |
Google's DeepMind department has
this machine, computer, | 0:03:59 | 0:04:00 | |
call it what you will,
the AlphaGo Zero I think they call | 0:04:00 | 0:04:04 | |
it, which has achieved I think
astounding results playing this | 0:04:04 | 0:04:06 | |
game, I'm not familiar with it,
a game called Go, mainly played | 0:04:06 | 0:04:10 | |
in China, extremely complex,
more moves in it, more complexity | 0:04:10 | 0:04:12 | |
than chess, and this machine is now
capable of beating it seems any | 0:04:12 | 0:04:16 | |
human Grandmaster and the real thing
about it is it's a machine that | 0:04:16 | 0:04:19 | |
appears to learn unsupervised. | 0:04:19 | 0:04:35 | |
That's right. | 0:04:35 | 0:04:36 | |
I must admit, I'm somewhat baffled
by this, you said don't think | 0:04:36 | 0:04:39 | |
about thinking but it seems this
is a machine that thinks. | 0:04:39 | 0:04:42 | |
It's a machine that does
an artificial analogue of thinking, | 0:04:42 | 0:04:45 | |
it doesn't do it in
the way you and I do. | 0:04:45 | 0:04:49 | |
The technology is based
on what are called artificial neural | 0:04:49 | 0:04:52 | |
networks and they are if you like
an abstract model of biological | 0:04:52 | 0:04:55 | |
networks, neural networks,
brains in other words, | 0:04:55 | 0:04:59 | |
which actually we don't understand
very well curiously but we can | 0:04:59 | 0:05:03 | |
still make very simple
abstract models, and that's | 0:05:03 | 0:05:05 | |
what the technology is. | 0:05:05 | 0:05:14 | |
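To make the idea of "a very simple abstract model" of a neural network concrete, here is a minimal sketch in Python with NumPy of a tiny feed-forward artificial neural network trained by gradient descent. The toy XOR task, the layer sizes and the learning rate are illustrative assumptions for this sketch, not details of DeepMind's system.

```python
# A minimal artificial neural network: a few "neurons" (weighted sums passed
# through a nonlinearity), loosely analogous to biological neurons.
# Illustrative toy only; real systems use vastly larger networks.
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: learn XOR, a function a single neuron cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(0, 1, (2, 4))
W2 = rng.normal(0, 1, (4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: weighted sums and nonlinearities.
    h = sigmoid(X @ W1)      # hidden activations
    out = sigmoid(h @ W2)    # network output

    # Backward pass: gradient of squared error, propagated layer by layer.
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update: "learning" is adjusting connection weights.
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```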
The way to think about the way it
learns, and it is a remarkable | 0:05:14 | 0:05:17 | |
breakthrough, I don't
want to over-hype it because it only | 0:05:17 | 0:05:20 | |
plays Go, it can't make
a cup of tea for you, | 0:05:20 | 0:05:23 | |
but the very interesting thing
is the early generations effectively | 0:05:23 | 0:05:26 | |
had to be trained on data
that was gleaned from human experts | 0:05:26 | 0:05:29 | |
and many, many games of Go. | 0:05:29 | 0:05:34 | |
It had to be loaded
with external information? | 0:05:34 | 0:05:36 | |
Essentially, that's right. | 0:05:37 | 0:05:38 | |
That's what we call supervised
learning, whereas the new version, | 0:05:38 | 0:05:42 | |
and again, if I understand it
correctly, I only scanned the Nature | 0:05:42 | 0:05:45 | |
paper this morning, is doing
unsupervised learning. | 0:05:45 | 0:05:51 | |
We actually technically call it
reinforcement learning. | 0:05:51 | 0:05:55 | |
The idea is that the machine
is given nothing else | 0:05:55 | 0:06:00 | |
than if you like the rules
of the game and its world | 0:06:00 | 0:06:03 | |
is the board, the Go
board and the pieces, | 0:06:04 | 0:06:06 | |
and then it just essentially plays
against itself millions | 0:06:06 | 0:06:08 | |
and millions of times. | 0:06:08 | 0:06:15 | |
It's a bit like, you know,
a human infant learning how to, | 0:06:15 | 0:06:18 | |
I don't know, play with building
blocks, Lego, entirely | 0:06:19 | 0:06:21 | |
on his or her own by just learning
over and over again. | 0:06:21 | 0:06:37 | |
Of course, humans don't
actually learn like that, | 0:06:37 | 0:06:39 | |
mostly we learn with supervision,
with parents, teachers, | 0:06:39 | 0:06:41 | |
brothers and sisters,
family and so on. | 0:06:41 | 0:06:43 | |
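The self-play idea described here can be sketched with a much simpler game. Below is a hedged illustration, in Python, of tabular reinforcement learning, where an agent given nothing but the rules of Nim (take 1 to 3 stones, taking the last stone wins) improves purely by playing against itself. The game, the pile size and the learning parameters are assumptions for the demo; this is not AlphaGo Zero's actual method, which combines deep networks with Monte Carlo tree search.

```python
# Self-play reinforcement learning on a toy game (Nim: take 1-3 stones per
# turn, taking the last stone wins). Hypothetical illustration: the agent is
# given only the rules and learns entirely by playing against itself.
import random
from collections import defaultdict

N = 21                      # starting pile size (assumption for the demo)
ALPHA, EPSILON = 0.2, 0.2   # learning rate and exploration rate
Q = defaultdict(float)      # Q[(stones, take)] = value for the player to move

def legal_moves(stones):
    return [a for a in (1, 2, 3) if a <= stones]

def best_move(stones):
    return max(legal_moves(stones), key=lambda a: Q[(stones, a)])

def train(episodes=20000):
    for _ in range(episodes):
        stones = N
        while stones > 0:
            # Epsilon-greedy: mostly exploit what has been learned, sometimes explore.
            moves = legal_moves(stones)
            a = random.choice(moves) if random.random() < EPSILON else best_move(stones)
            nxt = stones - a
            # Negamax-style target: winning now is +1; otherwise the position is
            # worth the negative of the opponent's best prospects from `nxt`.
            target = 1.0 if nxt == 0 else -max(Q[(nxt, b)] for b in legal_moves(nxt))
            Q[(stones, a)] += ALPHA * (target - Q[(stones, a)])
            stones = nxt    # hand the smaller pile to the other player

train()
# After enough self-play, the learned policy takes (stones % 4) whenever that
# is possible, which is the known optimal strategy for this game.
print([best_move(s) for s in range(1, 10)])
```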
You're prepared to use
a word like learning, | 0:06:43 | 0:06:45 | |
thinking you don't like,
learning you're prepared to apply | 0:06:45 | 0:06:48 | |
to a machine? | 0:06:48 | 0:06:49 | |
Yes. | 0:06:49 | 0:06:51 | |
What I want to get to,
before we go into the specifics | 0:06:51 | 0:06:54 | |
of driverless cars and autonomous
fighting machines and all of that, | 0:06:54 | 0:06:58 | |
I still want to stay
with big-picture stuff. | 0:06:58 | 0:07:00 | |
The human brain, you've already
mentioned the human brain, | 0:07:00 | 0:07:03 | |
it's the most complex mechanism
we know on this planet. | 0:07:03 | 0:07:05 | |
In the universe in fact. | 0:07:05 | 0:07:11 | |
Is it possible, talking
about the way that Google DeepMind | 0:07:11 | 0:07:14 | |
and others are developing
artificial intelligence, | 0:07:14 | 0:07:15 | |
that we can all look to create
machines that are as complex | 0:07:16 | 0:07:19 | |
with the billions and trillions
of moving parts, if I can put it | 0:07:19 | 0:07:22 | |
that way, that the human
brain possesses? | 0:07:22 | 0:07:29 | |
I would say in principle, yes,
but not for a very long time. | 0:07:29 | 0:07:33 | |
I think the problem of making an AI
or robot if you like, | 0:07:33 | 0:07:36 | |
a robot is just AI in a physical
body, that is comparable | 0:07:36 | 0:07:40 | |
in intelligence to a human being,
an average human being if you like, | 0:07:40 | 0:07:44 | |
averagely intelligent human being,
is extraordinarily difficult and part | 0:07:44 | 0:07:47 | |
of the problem, part of the reason
it's so difficult is we don't | 0:07:47 | 0:07:54 | |
actually have the design,
if you like, the architecture | 0:07:54 | 0:07:56 | |
of human minds. | 0:07:56 | 0:08:06 | |
But in principle you
think we can get it? | 0:08:06 | 0:08:09 | |
What I'm driving at really is this
principal philosophical question | 0:08:09 | 0:08:11 | |
of what the brain is. | 0:08:11 | 0:08:13 | |
To you, Professor, is the brain
in the end chemistry? | 0:08:13 | 0:08:15 | |
Is it material? | 0:08:15 | 0:08:16 | |
Is it a lump of matter? | 0:08:16 | 0:08:21 | |
Yes. | 0:08:21 | 0:08:22 | |
Does it have any spiritual
or any other intangible thing? | 0:08:22 | 0:08:26 | |
It is chemistry? | 0:08:26 | 0:08:29 | |
I'm a materialist, yes,
the brain is thinking meat. | 0:08:29 | 0:08:32 | |
That is a bit of a copout. | 0:08:32 | 0:08:37 | |
You said thinking meat,
it is meat and the way that meat | 0:08:37 | 0:08:40 | |
is arranged means it could think,
so you could create something | 0:08:40 | 0:08:43 | |
artificial where, if it was
as complex and well arranged | 0:08:43 | 0:08:46 | |
as human capacity could make it one
day, it could also think? | 0:08:46 | 0:08:54 | |
I believe in principle, yes. | 0:08:54 | 0:08:56 | |
But the key thing is architecture. | 0:08:56 | 0:08:59 | |
In a sense, the way to think
about the current work on artificial | 0:08:59 | 0:09:03 | |
intelligence, we have these
artificial neural networks | 0:09:03 | 0:09:05 | |
which are almost like
the building blocks. | 0:09:05 | 0:09:11 | |
It's a bit like having marble,
but just having a lot of wonderful | 0:09:11 | 0:09:14 | |
Italian marble doesn't mean
you can make a cathedral, | 0:09:14 | 0:09:17 | |
you need to have the design,
you need to have the architecture | 0:09:17 | 0:09:20 | |
and the know-how to build that
cathedral and we don't have | 0:09:20 | 0:09:23 | |
anything like that. | 0:09:23 | 0:09:26 | |
One more general point and then
I want to get down to the specifics. | 0:09:26 | 0:09:30 | |
Nick Bostrom at Oxford University,
you know him, I know you do | 0:09:30 | 0:09:33 | |
because he works in the same field
as you. He says you have to think of AI | 0:09:33 | 0:09:39 | |
as a fundamental game
changer for humanity. | 0:09:39 | 0:09:43 | |
It could be the last invention that
human intelligence ever needs | 0:09:43 | 0:09:47 | |
to make, he says, because it's
the beginning of a completely | 0:09:47 | 0:09:50 | |
new era, the machine intelligence
era and in a sense we are a bit | 0:09:50 | 0:09:53 | |
like children playing with something
we have picked up and it happens | 0:09:53 | 0:09:57 | |
to be an unexploded bomb
and we don't even know | 0:09:57 | 0:09:59 | |
the consequences that
could come with it. | 0:09:59 | 0:10:01 | |
Do you share that vision? | 0:10:01 | 0:10:05 | |
I partially share it. | 0:10:05 | 0:10:08 | |
Where I disagree with Nick is that
I don't think we are under threat | 0:10:08 | 0:10:12 | |
from a kind of runaway super
intelligence, which is the thesis | 0:10:12 | 0:10:15 | |
of his book on that subject,
Superintelligence, but I do think | 0:10:15 | 0:10:19 | |
we need to be ever so careful. | 0:10:19 | 0:10:24 | |
In a way, I alluded to this earlier,
we don't understand what natural | 0:10:24 | 0:10:27 | |
intelligence is, we don't have any
general scientific theory | 0:10:27 | 0:10:30 | |
of intelligence so trying to build
artificial general intelligence | 0:10:30 | 0:10:33 | |
is a bit like trying to do particle
physics at CERN without any theory, | 0:10:33 | 0:10:37 | |
without any underlying
scientific theory. | 0:10:37 | 0:10:44 | |
It seems to me that we need both
some serious theory, | 0:10:45 | 0:10:49 | |
which we don't have yet,
we have some but it isn't unified, | 0:10:49 | 0:10:52 | |
there isn't a single
theory, if you like, like | 0:10:53 | 0:10:55 | |
the standard model of physics. | 0:10:55 | 0:11:02 | |
We also need to do responsible
research and innovation. | 0:11:02 | 0:11:05 | |
In other words we need to innovate
ethically to make sure any as it | 0:11:05 | 0:11:09 | |
were unintended consequences
are foreseen and we head them off. | 0:11:09 | 0:11:15 | |
Let's talk in a more practical
sense, unintended consequences may | 0:11:15 | 0:11:18 | |
well come up. | 0:11:18 | 0:11:20 | |
Let's start with something I think
most of us are aware of now, | 0:11:20 | 0:11:24 | |
and regard as one of the most both
challenging and perhaps exciting | 0:11:24 | 0:11:27 | |
specific AI achievements,
that is the driverless car. | 0:11:27 | 0:11:37 | |
Yes. | 0:11:37 | 0:11:39 | |
It seems to me all sorts of issues
are raised by a world | 0:11:39 | 0:11:42 | |
in which cars are driverless. | 0:11:42 | 0:11:44 | |
A lot of moral and ethical issues
as well as practical ones. | 0:11:44 | 0:11:47 | |
You work with people in this field,
are you excited by driverless cars? | 0:11:48 | 0:11:51 | |
I am, yes. | 0:11:51 | 0:11:52 | |
I think driverless cars have
tremendous potential | 0:11:52 | 0:11:54 | |
for two things... | 0:11:54 | 0:11:59 | |
Do you see them as robots? | 0:11:59 | 0:12:00 | |
I do, yes, a driverless
car is a robot. | 0:12:00 | 0:12:03 | |
Typically once a robot becomes part
of normal life we stop calling it | 0:12:03 | 0:12:06 | |
a robot, like a vacuum cleaner. | 0:12:06 | 0:12:12 | |
I think there are two tremendous
advances from driverless cars we can | 0:12:12 | 0:12:15 | |
look forward to, one is reducing
the number of people killed in road | 0:12:15 | 0:12:19 | |
traffic accidents significantly,
if we can achieve that, | 0:12:19 | 0:12:21 | |
so I'm going to be cautious
when I speak more on this. | 0:12:21 | 0:12:24 | |
The other is giving mobility
to people, elderly people, | 0:12:24 | 0:12:26 | |
disabled people who
currently don't have that. | 0:12:26 | 0:12:39 | |
Both of those are very practical
but Science Magazine last year | 0:12:39 | 0:12:42 | |
studied a group of almost
2,000 people, asked them | 0:12:43 | 0:12:46 | |
about what they wanted to see
in terms of the moral algorithms | 0:12:46 | 0:12:50 | |
of driverless cars,
how the programming of the car | 0:12:50 | 0:12:53 | |
would be developed to ensure that,
for example, in a hypothetical, | 0:12:53 | 0:12:56 | |
if a car was on the road
and it was about to crash but if it | 0:12:56 | 0:13:00 | |
veered off the road to avoid a crash
it would hit a group | 0:13:00 | 0:13:09 | |
of schoolchildren being led
by a teacher down the road. | 0:13:09 | 0:13:12 | |
The public in this survey wanted
to know that the car | 0:13:12 | 0:13:16 | |
would in the end accept its own
destruction and that of its driver, | 0:13:16 | 0:13:19 | |
human passenger rather,
as opposed to saving itself | 0:13:20 | 0:13:22 | |
and ploughing into the children
on the side of the road. | 0:13:22 | 0:13:25 | |
How do you as a robot ethicist cope
with this sort of challenge? | 0:13:25 | 0:13:32 | |
The first thing I'd say is let's not
get it out of proportion. | 0:13:33 | 0:13:39 | |
You have to ask yourself as a human
driver, probably like me you've got | 0:13:39 | 0:13:43 | |
many years of experience
of driving, have you ever | 0:13:43 | 0:13:45 | |
encountered that situation? | 0:13:45 | 0:13:47 | |
Not in my case, but I want to know
if I ever step into a driverless car | 0:13:47 | 0:13:52 | |
someone has thought about this. | 0:13:52 | 0:13:56 | |
I think you're right. | 0:13:56 | 0:13:58 | |
The ethicists and the
lawyers are not clear. | 0:13:58 | 0:14:01 | |
The point is we need
to have a conversation. | 0:14:01 | 0:14:03 | |
I think it's really important that
if we have driverless cars that make | 0:14:03 | 0:14:07 | |
those kinds of ethical decisions,
you know, that essentially decide | 0:14:07 | 0:14:10 | |
whether to potentially
harm the occupants... | 0:14:10 | 0:14:15 | |
You're doing what you told me
off for doing, you're | 0:14:15 | 0:14:18 | |
anthropomorphising, it wouldn't be
making an ethical decision, | 0:14:18 | 0:14:20 | |
it would be reflecting
the values of the programmer. | 0:14:20 | 0:14:24 | |
Those rules need to be decided
by the whole of society. | 0:14:24 | 0:14:30 | |
The fact is, whatever those rules
are, there will be occasions | 0:14:30 | 0:14:33 | |
when the rules result
in consequences that we don't | 0:14:33 | 0:14:36 | |
like and therefore I think
the whole of society needs | 0:14:36 | 0:14:38 | |
to if you like own the
responsibility for those cases. | 0:14:38 | 0:14:46 | |
So you are making a call,
be it driverless cars or any other | 0:14:46 | 0:14:50 | |
examples we're thinking
about with AI, for the technological | 0:14:50 | 0:14:53 | |
developments to move in lockstep
with a new approach to monitoring, | 0:14:53 | 0:14:55 | |
regulation, universal
standardisation. | 0:14:55 | 0:14:59 | |
And a conversation, a big
conversation in society | 0:14:59 | 0:15:09 | |
so that we own the ethics
that we decide should be embedded. | 0:15:09 | 0:15:14 | |
But that will not help. | 0:15:14 | 0:15:16 | |
Much of the development here,
I mean, you work at Bristol | 0:15:16 | 0:15:21 | |
in a robotics lab but a lot
of cutting-edge work is being done | 0:15:21 | 0:15:24 | |
in the private sector. | 0:15:24 | 0:15:25 | |
Some of it is done by secretive
defence establishments. | 0:15:25 | 0:15:28 | |
There is no standardisation,
there is no cooperation. | 0:15:28 | 0:15:30 | |
It is a deeply competitive world. | 0:15:30 | 0:15:35 | |
It jolly well needs to be. | 0:15:35 | 0:15:43 | |
My view is simple. | 0:15:43 | 0:15:47 | |
The autopilot of a driverless car
should be subject to the same levels | 0:15:47 | 0:15:50 | |
of compliance with safety standards
as the autopilot of an aircraft. | 0:15:50 | 0:15:56 | |
We all accept... | 0:15:56 | 0:16:03 | |
You and I would not get
into an aircraft if we thought | 0:16:03 | 0:16:07 | |
that the autopilot had not met those
very high standards. | 0:16:07 | 0:16:10 | |
It is inconceivable that we could
allow driverless cars on our roads | 0:16:10 | 0:16:13 | |
that have not passed those kind
of safety certification processes. | 0:16:13 | 0:16:24 | |
Leaving driverless cars
and going to areas that are, | 0:16:24 | 0:16:27 | |
perhaps, more problematic
for human beings. | 0:16:27 | 0:16:30 | |
Let's develop the idea
of the machine, the future | 0:16:30 | 0:16:32 | |
intelligent machine,
taking jobs and roles that have | 0:16:33 | 0:16:35 | |
traditionally always been done
by human beings because they involve | 0:16:35 | 0:16:37 | |
things like empathy and care. | 0:16:37 | 0:16:41 | |
And compassion. | 0:16:41 | 0:16:46 | |
I'm thinking about roles
of social care and education. | 0:16:46 | 0:16:50 | |
Even, frankly, a sexual partner
because we all now read | 0:16:50 | 0:16:53 | |
about the sexbots that
are being developed. | 0:16:53 | 0:16:55 | |
In these roles, do you feel
comfortable with the notion that | 0:16:55 | 0:16:59 | |
machines will take over
from human beings? | 0:16:59 | 0:17:04 | |
No. | 0:17:04 | 0:17:06 | |
And I do not think they will. | 0:17:06 | 0:17:08 | |
But they already are... | 0:17:08 | 0:17:10 | |
Japan has carer robots. | 0:17:10 | 0:17:11 | |
A care robot may well be
able to care for you, | 0:17:11 | 0:17:14 | |
for example, for your physical
needs, but it cannot care about you. | 0:17:14 | 0:17:17 | |
Only humans can care about either
humans or any other animal. | 0:17:18 | 0:17:25 | |
Objects, robots, cannot care
about people or things, | 0:17:25 | 0:17:29 | |
for that matter. | 0:17:29 | 0:17:34 | |
And the same is true for teachers. | 0:17:34 | 0:17:38 | |
Teachers typically care
about their classes. | 0:17:38 | 0:17:42 | |
You think some people are getting
way overheated about this? | 0:17:42 | 0:17:46 | |
One of the most well-known teachers
here in Britain now says that | 0:17:46 | 0:17:49 | |
in his vision of a future education
system, many children will be taught | 0:17:49 | 0:17:53 | |
one-on-one in a spectacular
new way by machines. | 0:17:53 | 0:18:00 | |
He says it is like giving
every child access to | 0:18:00 | 0:18:03 | |
the best private school. | 0:18:03 | 0:18:08 | |
Ultimately, there may well be... | 0:18:08 | 0:18:14 | |
And we are talking
about into the future, | 0:18:14 | 0:18:17 | |
some combination of machine
teaching and human teaching. | 0:18:17 | 0:18:23 | |
You cannot take the human out. | 0:18:23 | 0:18:26 | |
An important thing to remember
here is the particularly human | 0:18:26 | 0:18:28 | |
characteristics of
empathy, sympathy... | 0:18:28 | 0:18:32 | |
Theory of mind, the ability
to anticipate, to read each other. | 0:18:32 | 0:18:37 | |
These are uniquely human
characteristics as is our creativity | 0:18:37 | 0:18:40 | |
and innovation and intuition. | 0:18:40 | 0:18:41 | |
These are things we have no idea how
to build artificially. | 0:18:41 | 0:18:50 | |
Jobs that involve
those things are safe. | 0:18:50 | 0:18:58 | |
Interesting. | 0:18:58 | 0:18:58 | |
People nowadays are looking
at doomsday scenarios | 0:18:58 | 0:19:00 | |
with the development of robotics
and AI where frankly, | 0:19:00 | 0:19:04 | |
most jobs one can think of -
and I was being flippant earlier | 0:19:04 | 0:19:11 | |
about being replaced by a robot -
but you suggest to me that so many | 0:19:11 | 0:19:15 | |
different jobs, not just blue-collar
but white-collar as well will be | 0:19:15 | 0:19:18 | |
done by machines... | 0:19:18 | 0:19:19 | |
Again, are we overstating it? | 0:19:19 | 0:19:21 | |
I think we are. | 0:19:21 | 0:19:22 | |
Yes. | 0:19:23 | 0:19:23 | |
I am not saying that will not happen
eventually but I think that | 0:19:23 | 0:19:28 | |
what we have is much more time
than people suppose to find | 0:19:28 | 0:19:33 | |
a harmonious, if you like,
accommodation between | 0:19:33 | 0:19:35 | |
human and machine. | 0:19:35 | 0:19:39 | |
That actually allows us to exploit
the qualities of humans | 0:19:39 | 0:19:43 | |
and the skills, the things
that humans want to do | 0:19:43 | 0:19:46 | |
which is, you know... | 0:19:46 | 0:19:55 | |
If you don't mind me saying,
you seem both extraordinarily | 0:19:55 | 0:19:57 | |
sanguine and comfortable
and optimistic about the way | 0:19:57 | 0:20:00 | |
in which AI is developing under
the control of human beings, | 0:20:00 | 0:20:04 | |
and your faith in humanity's ability
to co-operate on this and establish | 0:20:04 | 0:20:08 | |
standards seems to run
in the face of facts. | 0:20:08 | 0:20:11 | |
One area is weaponisation. | 0:21:11 | 0:21:13 | |
The notion that AI and robotics
will revolutionise warfare | 0:20:13 | 0:20:16 | |
and war fighting. | 0:20:16 | 0:20:21 | |
You were one of 1000 senior
scientists who signed an appeal | 0:20:21 | 0:20:24 | |
for a ban on AI weaponry in 2015. | 0:20:24 | 0:20:26 | |
That will not happen, will it? | 0:20:26 | 0:20:35 | |
The ban... | 0:20:35 | 0:20:35 | |
It may... | 0:20:35 | 0:20:36 | |
Did you see what
Vladimir Putin said? | 0:20:36 | 0:20:38 | |
He said that artificial intelligence
is the future for Russia | 0:20:38 | 0:20:40 | |
and all of humankind -
and this is the key bit, | 0:20:40 | 0:20:44 | |
"Whoever becomes the leader in this
sphere will become the ruler | 0:20:44 | 0:20:46 | |
of the world." | 0:20:46 | 0:20:51 | |
I am an optimist but I am also very
worried about exactly this thing. | 0:20:51 | 0:20:55 | |
We have already seen, if you like,
the political weaponisation of AI. | 0:20:55 | 0:20:58 | |
It is clear, isn't it,
that the evidence is mounting that | 0:20:58 | 0:21:01 | |
AI was used in recent elections? | 0:21:01 | 0:21:04 | |
You are talking about the hacking? | 0:21:04 | 0:21:05 | |
And some of that
we believe is from Russia? | 0:21:06 | 0:21:11 | |
That is political weaponisation
and we do need to be worried | 0:21:11 | 0:21:14 | |
about these things. | 0:21:14 | 0:21:19 | |
We do need to have ethical
standards and we need | 0:21:20 | 0:21:22 | |
to have worldwide agreement. | 0:21:22 | 0:21:29 | |
I am optimistic about a ban
on lethal autonomous weapons | 0:21:29 | 0:21:32 | |
systems. | 0:21:32 | 0:21:35 | |
The campaign is gaining traction,
there have been all kinds | 0:21:35 | 0:21:38 | |
of discussions and I know some
of the people involved quite well | 0:21:38 | 0:21:41 | |
in the United Nations. | 0:21:41 | 0:21:45 | |
But we know the limitations
of the United Nations | 0:21:45 | 0:21:48 | |
and the limitations of politics
and we know that human nature | 0:21:48 | 0:21:51 | |
usually leads to the striving
to compete and to win, | 0:21:51 | 0:21:54 | |
whether it be in politics
or in the battlefield. | 0:21:54 | 0:21:57 | |
Can I leave you with this thought? | 0:21:57 | 0:21:58 | |
It seems that there is a debate
within science and you are on one | 0:21:59 | 0:22:02 | |
side being sanguine
and optimistic. | 0:22:02 | 0:22:04 | |
Perhaps on the other side,
Stephen Hawking recently said | 0:22:04 | 0:22:06 | |
that the development of full
artificial intelligence could spell | 0:22:06 | 0:22:09 | |
the end of the human race. | 0:22:09 | 0:22:10 | |
Do you find that kind of thought
helpful or deeply unhelpful? | 0:22:10 | 0:22:16 | |
Deeply unhelpful. | 0:22:16 | 0:22:17 | |
The problem is that
it is not inevitable. | 0:22:17 | 0:22:22 | |
What he is talking about here
is a very small probability. | 0:22:22 | 0:22:26 | |
If you like it is a very long
series of if this happens, | 0:22:26 | 0:22:30 | |
then if that happens,
then this and so on and on. | 0:22:30 | 0:22:33 | |
I wrote about this in
the Observer back in 2014. | 0:22:33 | 0:22:35 | |
Before Stephen Hawking got
involved in this debate. | 0:22:35 | 0:22:42 | |
My view is that we are worrying
about an extraordinarily unlikely | 0:22:42 | 0:22:46 | |
event, that is the
intelligence explosion,... | 0:22:46 | 0:22:55 | |
Do you think we have actually been
too conditioned by science fiction | 0:22:55 | 0:22:58 | |
and by the Terminator concept? | 0:22:58 | 0:23:01 | |
We have. | 0:23:01 | 0:23:02 | |
There is no doubt. | 0:23:02 | 0:23:03 | |
The problem is that
we are fascinated. | 0:23:03 | 0:23:05 | |
It is a combination of fear
and fascination and that is why | 0:23:05 | 0:23:08 | |
we love science-fiction. | 0:23:08 | 0:23:14 | |
But in your view it is fiction? | 0:23:14 | 0:23:16 | |
That scenario is fiction but
you are quite right, | 0:23:16 | 0:23:18 | |
there are things we
should worry about now. | 0:23:19 | 0:23:21 | |
We need to worry about jobs,
we need to worry about weaponisation | 0:23:21 | 0:23:24 | |
of AI and we need to worry
about standards in driverless cars | 0:23:24 | 0:23:27 | |
and care robots, in
medical diagnosis AIs. | 0:23:27 | 0:23:29 | |
There are many things that
are here and now problems | 0:23:29 | 0:23:32 | |
in the sense that they are kind of more
to do with the fact that AI is not | 0:23:32 | 0:23:36 | |
very intelligent so we need to worry
about artificial stupidity. | 0:23:37 | 0:23:46 | |
That is a neat way of ending. | 0:23:46 | 0:23:48 | |
Alan Winfield, thank you very much. | 0:23:48 | 0:23:51 |