0:00:00 > 0:00:08Now it's time for HARDtalk.
0:00:08 > 0:00:19Welcome to HARDtalk with me, Stephen Sackur.
0:00:19 > 0:00:23For now, the BBC employs human beings like me to question the way
0:00:23 > 0:00:29the world works. For how much longer?
0:00:29 > 0:00:30As research into artificial intelligence intensifies,
0:00:30 > 0:00:33is there any sphere of human activity that won't be
0:00:33 > 0:00:34revolutionised by AI and robotics?
0:00:35 > 0:00:37My guest today is Alan Winfield, a world-renowned professor
0:00:37 > 0:00:38of robot ethics.
0:00:38 > 0:00:40From driving, to education, to work and warfare,
0:00:40 > 0:00:43are we unleashing machines which could turn the dark visions
0:00:43 > 0:01:11of science fiction into science fact?
0:01:11 > 0:01:17Alan Winfield, welcome to HARDtalk.
0:01:17 > 0:01:18Delighted to be here, Stephen.
0:01:19 > 0:01:21You do have a fascinating title, director of robot ethics,
0:01:21 > 0:01:25I'm tempted to ask you, what's more important to you,
0:01:25 > 0:01:31the engineering, robotics or the ethics, being an ethicist?
0:01:31 > 0:01:33Both are equally important.
0:01:33 > 0:01:35I'm fundamentally an engineer so I bring an engineering
0:01:35 > 0:01:38perspective to robotics but more than half of my work now
0:01:38 > 0:01:39is thinking about...
0:01:39 > 0:02:00I'm kind of a professional worrier now.
0:02:01 > 0:02:03Would you say the balance has shifted over the course
0:02:03 > 0:02:04of your career?
0:02:04 > 0:02:06You started out very much in computers and engineering
0:02:06 > 0:02:09but increasingly as you have dug deeply into the subject,
0:02:09 > 0:02:12in a sense the more philosophical side of it has been writ
0:02:12 > 0:02:13large for you?
0:02:13 > 0:02:14Absolutely right.
0:02:14 > 0:02:16It was really getting involved in public engagement,
0:02:16 > 0:02:19robotics public engagement, 15 years ago that if you like alerted me
0:02:19 > 0:02:22and sensitised me to the ethical questions around robotics and AI.
0:02:22 > 0:02:24Let's take this phrase, artificial intelligence.
0:02:24 > 0:02:26It raises an immediate question in my mind,
0:02:26 > 0:02:28how we define intelligence.
0:02:29 > 0:02:31I wonder if you could do that for me?
0:02:31 > 0:02:33It's really difficult, one of the fundamental philosophical
0:02:33 > 0:02:36problems with AI is we don't have a satisfactory definition
0:02:36 > 0:02:37for natural intelligence.
0:02:37 > 0:02:40Here's a simple definition, it's doing the right thing
0:02:40 > 0:02:43at the right time, but that's not very helpful from a scientific
0:02:43 > 0:02:44point of view.
0:02:44 > 0:02:47One thing we can say about intelligence is it's not one
0:02:47 > 0:02:55thing we all have more or less of.
0:02:55 > 0:02:58What about thinking?
0:02:58 > 0:03:00Are we really, as in the course of this conversation,
0:03:00 > 0:03:03talking about the degree to which human beings can make
0:03:03 > 0:03:04machines that think?
0:03:04 > 0:03:07I think thinking is a dangerous word.
0:03:07 > 0:03:12It's an anthropomorphisation and in fact more than that it's
0:03:12 > 0:03:14a humanisation of the term intelligence.
0:03:14 > 0:03:18A lot of the intelligence you and I have is nothing to do
0:03:18 > 0:03:22with conscious reflective thought.
0:03:22 > 0:03:26One of the curious things about AI is that what we thought would be
0:03:26 > 0:03:28very difficult 60 years ago, like playing board games,
0:03:28 > 0:03:32chess, Go as it happens, has turned out to be not easy
0:03:32 > 0:03:34but relatively easy, whereas what we thought would be
0:03:34 > 0:03:38very easy 60 years ago, like making a cup of tea in somebody
0:03:38 > 0:03:52else's kitchen, has turned out to be enormously difficult.
0:03:52 > 0:03:55It's interesting you light upon board games so quickly
0:03:55 > 0:03:56because in the news
0:03:56 > 0:03:58in the last few days we've seen something quite interesting,
0:03:59 > 0:04:00Google's DeepMind department has this machine, computer,
0:04:00 > 0:04:04call it what you will, the AlphaGo Zero I think they call
0:04:04 > 0:04:06it, which has achieved I think astounding results playing this
0:04:06 > 0:04:10game, I'm not familiar with it, a game called Go, mainly played
0:04:10 > 0:04:12in China, extremely complex, more moves in it, more complexity
0:04:12 > 0:04:16than chess, and this machine is now capable of beating it seems any
0:04:16 > 0:04:19human Grandmaster and the real thing about it is it's a machine that
0:04:19 > 0:04:35appears to learn unsupervised.
0:04:35 > 0:04:36That's right.
0:04:36 > 0:04:39I must admit, I'm somewhat baffled by this, you said don't think
0:04:39 > 0:04:42about thinking but it seems this is a machine that thinks.
0:04:42 > 0:04:45It's a machine that does an artificial analogue of thinking,
0:04:45 > 0:04:49it doesn't do it in the way you and I do.
0:04:49 > 0:04:52The technology is based on what are called artificial neural
0:04:52 > 0:04:55networks and they are if you like an abstract model of biological
0:04:55 > 0:04:59networks, neural networks, brains in other words,
0:04:59 > 0:05:03which actually we don't understand very well curiously but we can
0:05:03 > 0:05:05still make very simple abstract models, and that's
0:05:05 > 0:05:14what the technology is.
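The "abstract model" of biological neurons that Winfield refers to can be sketched very loosely as a single artificial neuron: a weighted sum of inputs squashed through a nonlinearity. This is an illustrative toy with hand-picked weights, not anything from DeepMind's systems.

```python
import math

# One artificial neuron: the basic building block of the neural
# networks described above. It computes a weighted sum of its inputs
# plus a bias, then squashes the result through a logistic function.
# The weights and bias below are illustrative, hand-picked values.

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # logistic "activation"

# With these weights the neuron behaves like a soft AND gate:
# its output is high only when both inputs are on.
out = neuron([1.0, 1.0], weights=[4.0, 4.0], bias=-6.0)  # roughly 0.88
```

Real networks stack many such units in layers and learn the weights from data, but each unit is no more mysterious than this.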
0:05:14 > 0:05:17The way to think about the way it learns, and it is a remarkable
0:05:17 > 0:05:20breakthrough, I don't want to over-hype it because it only
0:05:20 > 0:05:23plays Go, it can't make a cup of tea for you,
0:05:23 > 0:05:26but the very interesting thing is the early generations effectively
0:05:26 > 0:05:29had to be trained on data that was gleaned from human experts
0:05:29 > 0:05:34and many, many games of Go.
0:05:34 > 0:05:36It had to be loaded with external information?
0:05:37 > 0:05:38Essentially, that's right.
0:05:38 > 0:05:42That's what we call supervised learning, whereas the new version,
0:05:42 > 0:05:45and again, if I understand it correctly, I only scanned the Nature
0:05:45 > 0:05:51paper this morning, is doing unsupervised learning.
0:05:51 > 0:05:55We actually technically call it reinforcement learning.
0:05:55 > 0:06:00The idea is that the machine is given nothing else
0:06:00 > 0:06:03than if you like the rules of the game and its world
0:06:04 > 0:06:06is the board, the Go board and the pieces,
0:06:06 > 0:06:08and then it just essentially plays against itself millions
0:06:08 > 0:06:15and millions of times.
0:06:15 > 0:06:18It's a bit like, you know, a human infant learning how to,
0:06:19 > 0:06:21I don't know, play with building blocks, Lego, entirely
0:06:21 > 0:06:37on his or her own by just learning over and over again.
0:06:37 > 0:06:39Of course, humans don't actually learn like that,
0:06:39 > 0:06:41mostly we learn with supervision, with parents, teachers,
0:06:41 > 0:06:43brothers and sisters, family and so on.
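The self-play loop Winfield describes, a machine given only the rules that improves by playing against itself, can be sketched in miniature. This is an illustrative toy only: a tabular learner on the simple game of Nim-21, nothing like AlphaGo Zero's deep networks and Monte Carlo tree search.

```python
import random

# Toy self-play reinforcement learning: the learner is given only the
# rules of "Nim-21" (players alternately take 1-3 sticks from a pile
# of 21; whoever takes the last stick wins) and a board, and improves
# purely by playing against itself many times.

Q = {}                      # Q[(sticks_left, move)] -> value for the mover
ALPHA, EPSILON = 0.5, 0.1   # learning rate, exploration rate

def legal_moves(n):
    return [m for m in (1, 2, 3) if m <= n]

def choose(n, explore=True):
    """Pick a move: mostly greedy with respect to Q, sometimes random."""
    if explore and random.random() < EPSILON:
        return random.choice(legal_moves(n))
    return max(legal_moves(n), key=lambda m: Q.get((n, m), 0.0))

def train(games=20000):
    for _ in range(games):
        n, history = 21, []
        while n > 0:                      # both "players" share one Q table
            move = choose(n)
            history.append((n, move))
            n -= move
        reward = 1.0                      # whoever moved last took the win
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + ALPHA * (reward - old)
            reward = -reward              # alternate winner/loser credit

random.seed(0)
train()
```

After training, the table has rediscovered good endgame play, for instance, that facing three sticks you should take all three, with no human games ever shown to it. That is the sense in which such learning is "unsupervised" by human experts.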
0:06:43 > 0:06:45You're prepared to use a word like learning,
0:06:45 > 0:06:48thinking you don't like, learning you're prepared to apply
0:06:48 > 0:06:49to a machine?
0:06:49 > 0:06:51Yes.
0:06:51 > 0:06:54What I want to get to, before we go into the specifics
0:06:54 > 0:06:58of driverless cars and autonomous fighting machines and all of that,
0:06:58 > 0:07:00I still want to stay with big-picture stuff.
0:07:00 > 0:07:03The human brain, you've already mentioned the human brain,
0:07:03 > 0:07:05it's the most complex mechanism we know on this planet.
0:07:05 > 0:07:11In the universe in fact.
0:07:11 > 0:07:14Is it possible, talking about the way that Google DeepMind
0:07:14 > 0:07:15and others are developing artificial intelligence,
0:07:16 > 0:07:19that we can all look to create machines that are as complex
0:07:19 > 0:07:22with the billions and trillions of moving parts, if I can put it
0:07:22 > 0:07:29that way, that the human brain possesses?
0:07:29 > 0:07:33I would say in principle, yes, but not for a very long time.
0:07:33 > 0:07:36I think the problem of making an AI or robot if you like,
0:07:36 > 0:07:40a robot is just AI in a physical body, that is comparable
0:07:40 > 0:07:44in intelligence to a human being, an average human being if you like,
0:07:44 > 0:07:47averagely intelligent human being, is extraordinarily difficult and part
0:07:47 > 0:07:54of the problem, part of the reason it's so difficult is we don't
0:07:54 > 0:07:56actually have the design, if you like, the architecture
0:07:56 > 0:08:06of human minds.
0:08:06 > 0:08:09But in principle you think we can get it?
0:08:09 > 0:08:11What I'm driving at really is this principal philosophical question
0:08:11 > 0:08:13of what the brain is.
0:08:13 > 0:08:15To you, Professor, is the brain in the end chemistry?
0:08:15 > 0:08:16Is it material?
0:08:16 > 0:08:21Is it a lump of matter?
0:08:21 > 0:08:22Yes.
0:08:22 > 0:08:26Does it have any spiritual or any other intangible thing?
0:08:26 > 0:08:29It is chemistry?
0:08:29 > 0:08:32I'm a materialist, yes, the brain is thinking meat.
0:08:32 > 0:08:37That is a bit of a cop-out.
0:08:37 > 0:08:40You said thinking meat, it is meat and the way that meat
0:08:40 > 0:08:43is arranged means it could think, so you could create something
0:08:43 > 0:08:46artificial where, if it was as complex and well arranged
0:08:46 > 0:08:54as human capacity could make it one day, it could also think?
0:08:54 > 0:08:56I believe in principle, yes.
0:08:56 > 0:08:59But the key thing is architecture.
0:08:59 > 0:09:03In a sense, the way to think about the current work on artificial
0:09:03 > 0:09:05intelligence, we have these artificial neural networks
0:09:05 > 0:09:11which are almost like the building blocks.
0:09:11 > 0:09:14It's a bit like having marble, but just having a lot of wonderful
0:09:14 > 0:09:17Italian marble doesn't mean you can make a cathedral,
0:09:17 > 0:09:20you need to have the design, you need to have the architecture
0:09:20 > 0:09:23and the know-how to build that cathedral and we don't have
0:09:23 > 0:09:26anything like that.
0:09:26 > 0:09:30One more general point and then I want to get down to the specifics.
0:09:30 > 0:09:33Nick Bostrom at Oxford University, you know him, I know you do
0:09:33 > 0:09:39because he works in the same field as you, says you have to think of AI
0:09:39 > 0:09:43as a fundamental game changer for humanity.
0:09:43 > 0:09:47It could be the last invention that human intelligence ever needs
0:09:47 > 0:09:50to make, he says, because it's the beginning of a completely
0:09:50 > 0:09:53new era, the machine intelligence era and in a sense we are a bit
0:09:53 > 0:09:57like children playing with something we have picked up and it happens
0:09:57 > 0:09:59to be an unexploded bomb and we don't even know
0:09:59 > 0:10:01the consequences that could come with it.
0:10:01 > 0:10:05Do you share that vision?
0:10:05 > 0:10:08I partially share it.
0:10:08 > 0:10:12Where I disagree with Nick is that I don't think we are under threat
0:10:12 > 0:10:15from a kind of runaway super intelligence, which is the thesis
0:10:15 > 0:10:19of his book on that subject, Superintelligence, but I do think
0:10:19 > 0:10:24we need to be ever so careful.
0:10:24 > 0:10:27In a way, I alluded to this earlier, we don't understand what natural
0:10:27 > 0:10:30intelligence is, we don't have any general scientific theory
0:10:30 > 0:10:33of intelligence so trying to build artificial general intelligence
0:10:33 > 0:10:37is a bit like trying to do particle physics at CERN without any theory,
0:10:37 > 0:10:44without any underlying scientific theory.
0:10:45 > 0:10:49It seems to me that we need both some serious theory,
0:10:49 > 0:10:52which we don't have yet, we have some but it isn't unified,
0:10:53 > 0:10:55there isn't a single theory if you like like
0:10:55 > 0:11:02the standard model physics.
0:11:02 > 0:11:05We also need to do responsible research and innovation.
0:11:05 > 0:11:09In other words we need to innovate ethically to make sure any as it
0:11:09 > 0:11:15were unintended consequences are foreseen and we head them off.
0:11:15 > 0:11:18Let's talk in a more practical sense, unintended consequences may
0:11:18 > 0:11:20well come up.
0:11:20 > 0:11:24Let's start with something I think most of us are aware of now,
0:11:24 > 0:11:27and regard as one of the most both challenging and perhaps exciting
0:11:27 > 0:11:37specific AI achievements, that is the driverless car.
0:11:37 > 0:11:39Yes.
0:11:39 > 0:11:42It seems to me all sorts of issues are raised by a world
0:11:42 > 0:11:44in which cars are driverless.
0:11:44 > 0:11:47A lot of moral and ethical issues as well as practical ones.
0:11:48 > 0:11:51You work with people in this field, are you excited by driverless cars?
0:11:51 > 0:11:52I am, yes.
0:11:52 > 0:11:54I think driverless cars have tremendous potential
0:11:54 > 0:11:59for two things...
0:11:59 > 0:12:00Do you see them as robots?
0:12:00 > 0:12:03I do, yes, a driverless car is a robot.
0:12:03 > 0:12:06Typically once a robot becomes part of normal life we stop calling it
0:12:06 > 0:12:12a robot, like a vacuum cleaner.
0:12:12 > 0:12:15I think there are two tremendous advances from driverless cars we can
0:12:15 > 0:12:19look forward to, one is reducing the number of people killed in road
0:12:19 > 0:12:21traffic accidents significantly, if we can achieve that,
0:12:21 > 0:12:24so I'm going to be cautious when I speak more on this.
0:12:24 > 0:12:26The other is giving mobility to people, elderly people,
0:12:26 > 0:12:39disabled people who currently don't have that.
0:12:39 > 0:12:42Both of those are very practical but Science Magazine last year
0:12:43 > 0:12:46studied a group of almost 2,000 people, asked them
0:12:46 > 0:12:50about what they wanted to see in terms of the morality
0:12:50 > 0:12:53of using driverless cars, how the programming of the car
0:12:53 > 0:12:56would be developed to ensure that, for example, in a hypothetical,
0:12:56 > 0:13:00if a car was on the road and it was about to crash but if it
0:13:00 > 0:13:09veered off the road to avoid a crash it would hit a group
0:13:09 > 0:13:12of schoolchildren being led by a teacher down the road.
0:13:12 > 0:13:16The public in this survey wanted to know that the car
0:13:16 > 0:13:19would in the end accept its own destruction and that of its driver,
0:13:20 > 0:13:22human passenger rather, as opposed to saving itself
0:13:22 > 0:13:25and ploughing into the children on the side of the road.
0:13:25 > 0:13:32How do you as a robot ethicist cope with this sort of challenge?
0:13:33 > 0:13:39The first thing I'd say is let's not get it out of proportion.
0:13:39 > 0:13:43You have to ask yourself as a human driver, probably like me you've got
0:13:43 > 0:13:45many years of experience of driving, have you ever
0:13:45 > 0:13:47encountered that situation?
0:13:47 > 0:13:52Not in my case, but I want to know if I ever step into a driverless car
0:13:52 > 0:13:56someone has thought about this.
0:13:56 > 0:13:58I think you're right.
0:13:58 > 0:14:01The ethicists and the lawyers are not clear.
0:14:01 > 0:14:03The point is we need to have a conversation.
0:14:03 > 0:14:07I think it's really important that if we have driverless cars that make
0:14:07 > 0:14:10those kinds of ethical decisions, you know, that essentially decide
0:14:10 > 0:14:15whether to potentially harm the occupants...
0:14:15 > 0:14:18You're doing what you told me off for doing, you're
0:14:18 > 0:14:20anthropomorphising, it wouldn't be making an ethical decision,
0:14:20 > 0:14:24it would be reflecting the values of the programmer.
0:14:24 > 0:14:30Those rules need to be decided by the whole of society.
0:14:30 > 0:14:33The fact is, whatever those rules are, there will be occasions
0:14:33 > 0:14:36when the rules result in consequences that we don't
0:14:36 > 0:14:38like and therefore I think the whole of society needs
0:14:38 > 0:14:46to if you like own the responsibility for those cases.
0:14:46 > 0:14:50So you are making a call, be it driverless cars or any other
0:14:50 > 0:14:53examples we're thinking about with AI, for technological
0:14:53 > 0:14:55development to go in lockstep with a new approach to monitoring,
0:14:55 > 0:14:59regulation, universal standardisation.
0:14:59 > 0:15:09And a conversation, a big conversation in society
0:15:09 > 0:15:14so that we own the ethics that we decide should be embedded.
0:15:14 > 0:15:16But that will not help.
0:15:16 > 0:15:21Much of the development here, I mean, you work at Bristol
0:15:21 > 0:15:24in a robotics lab but a lot of cutting-edge work is being done
0:15:24 > 0:15:25in the private sector.
0:15:25 > 0:15:28Some of it is done by secretive defence establishments.
0:15:28 > 0:15:30There is no standardisation, there is no cooperation.
0:15:30 > 0:15:35It is a deeply competitive world.
0:15:35 > 0:15:43There jolly well needs to be.
0:15:43 > 0:15:47My view is simple.
0:15:47 > 0:15:50The autopilot of a driverless car should be subject to the same levels
0:15:50 > 0:15:56of compliance with safety standards as the autopilot of an aircraft.
0:15:56 > 0:16:03We all accept...
0:16:03 > 0:16:07You and I would not get into an aircraft if we thought
0:16:07 > 0:16:10that the autopilot had not met those very high standards.
0:16:10 > 0:16:13It is inconceivable that we could allow driverless cars on our roads
0:16:13 > 0:16:24that have not passed those kind of safety certification processes.
0:16:24 > 0:16:27Leaving driverless cars and going to areas that are,
0:16:27 > 0:16:30perhaps, more problematic for human beings.
0:16:30 > 0:16:32Let's develop the idea of the machine, the future
0:16:33 > 0:16:35intelligent machine, taking jobs and roles that have
0:16:35 > 0:16:37traditionally always been done by human beings because they involve
0:16:37 > 0:16:41things like empathy and care.
0:16:41 > 0:16:46And compassion.
0:16:46 > 0:16:50I'm thinking about roles of social care and education.
0:16:50 > 0:16:53Even, frankly, a sexual partner because we all now read
0:16:53 > 0:16:55about the sexbots that are being developed.
0:16:55 > 0:16:59In these roles, do you feel comfortable with the notion that
0:16:59 > 0:17:04machines will take over from human beings?
0:17:04 > 0:17:06No.
0:17:06 > 0:17:08And I do not think they will.
0:17:08 > 0:17:10But they already are...
0:17:10 > 0:17:11Japan has robot carers.
0:17:11 > 0:17:14A care robot may well be able to care for you,
0:17:14 > 0:17:17for example, for your physical needs, but it cannot care about you.
0:17:18 > 0:17:25Only humans can care about either humans or any other animal
0:17:25 > 0:17:29or object; robots cannot care about people or things,
0:17:29 > 0:17:34for that matter.
0:17:34 > 0:17:38And the same is true for teachers.
0:17:38 > 0:17:42Teachers typically care about their classes.
0:17:42 > 0:17:46You think some people are getting way overheated about this?
0:17:46 > 0:17:49One of the most well-known teachers here in Britain now says that
0:17:49 > 0:17:53in his vision of a future education system, many children will be taught
0:17:53 > 0:18:00one-on-one in a spectacular new way by machines.
0:18:00 > 0:18:03He says it is like giving every child access to
0:18:03 > 0:18:08the best private school.
0:18:08 > 0:18:14Ultimately, there may well be...
0:18:14 > 0:18:17And we are talking about into the future,
0:18:17 > 0:18:23some combination of machine teaching and human teaching.
0:18:23 > 0:18:26You cannot take the human out.
0:18:26 > 0:18:28An important thing to remember here is particularly human
0:18:28 > 0:18:32characteristics of empathy, sympathy...
0:18:32 > 0:18:37Theory of mind, the ability to anticipate, to read each other.
0:18:37 > 0:18:40These are uniquely human characteristics as is our creativity
0:18:40 > 0:18:41and innovation and intuition.
0:18:41 > 0:18:50These are things we have no idea how to build artificially.
0:18:50 > 0:18:58Jobs that involve those things are safe.
0:18:58 > 0:18:58Interesting.
0:18:58 > 0:19:00People nowadays are looking at doomsday scenarios
0:19:00 > 0:19:04with the development of robotics and AI where frankly,
0:19:04 > 0:19:11most jobs one can think of - and I was being flippant earlier
0:19:11 > 0:19:15about being replaced by a robot - but you suggest to me that so many
0:19:15 > 0:19:18different jobs, not just blue-collar but white-collar as well will be
0:19:18 > 0:19:19done by machines...
0:19:19 > 0:19:21Again, are we overstating it?
0:19:21 > 0:19:22I think we are.
0:19:23 > 0:19:23Yes.
0:19:23 > 0:19:28I am not saying that will not happen eventually but I think that
0:19:28 > 0:19:33what we have is much more time than people suppose to find
0:19:33 > 0:19:35a harmonious, if you like, accommodation between
0:19:35 > 0:19:39human and machine.
0:19:39 > 0:19:43That actually allows us to exploit the qualities of humans
0:19:43 > 0:19:46and the skills, the things that humans want to do
0:19:46 > 0:19:55which is, you know...
0:19:55 > 0:19:57If you don't mind me saying, you seem both extraordinarily
0:19:57 > 0:20:00sanguine and comfortable and optimistic about the way
0:20:00 > 0:20:04in which AI is developing under the control of human beings,
0:20:04 > 0:20:08and your faith in humanity's ability to co-operate on this and establish
0:20:08 > 0:20:11standards seems to fly in the face of the facts.
0:20:11 > 0:20:13One area is weaponisation.
0:20:13 > 0:20:16The notion that AI and robotics will revolutionise warfare
0:20:16 > 0:20:21and war fighting.
0:20:21 > 0:20:24You were one of 1,000 senior scientists who signed an appeal
0:20:24 > 0:20:26for a ban on AI weaponry in 2015.
0:20:26 > 0:20:35That ban will not happen, will it?
0:20:35 > 0:20:35The ban...
0:20:35 > 0:20:36It may...
0:20:36 > 0:20:38Did you see what Vladimir Putin said?
0:20:38 > 0:20:40He said that artificial intelligence is the future for Russia
0:20:40 > 0:20:44and all of humankind - and this is the key bit,
0:20:44 > 0:20:46"Whoever becomes the leader in this sphere will become the ruler
0:20:46 > 0:20:51of the world."
0:20:51 > 0:20:55I am an optimist but I am also very worried about exactly this thing.
0:20:55 > 0:20:58We have already seen, if you like, the political weaponisation of AI.
0:20:58 > 0:21:01It is clear, isn't it, that the evidence is mounting that
0:21:01 > 0:21:04AI was used in recent elections?
0:21:04 > 0:21:05You are talking about the hacking?
0:21:06 > 0:21:11And some of that, we believe, is from Russia?
0:21:11 > 0:21:14That is political weaponisation and we do need to be worried
0:21:14 > 0:21:19about these things.
0:21:20 > 0:21:22We do need to have ethical standards and we need
0:21:22 > 0:21:29to have worldwide agreement.
0:21:29 > 0:21:32I am optimistic about a ban on lethal autonomous weapons
0:21:32 > 0:21:35systems.
0:21:35 > 0:21:38The campaign is gaining traction, there have been all kinds
0:21:38 > 0:21:41of discussions and I know some of the people involved quite well
0:21:41 > 0:21:45in the United Nations.
0:21:45 > 0:21:48But we know the limitations of the United Nations
0:21:48 > 0:21:51and the limitations of politics and we know that human nature
0:21:51 > 0:21:54usually leads to the striving to compete and to win,
0:21:54 > 0:21:57whether it be in politics or in the battlefield.
0:21:57 > 0:21:58Can I leave you with this thought?
0:21:59 > 0:22:02It seems that there is a debate within science and you are on one
0:22:02 > 0:22:04side being sanguine and optimistic.
0:22:04 > 0:22:06Perhaps on the other side, Stephen Hawking recently said
0:22:06 > 0:22:09that the development of full artificial intelligence could spell
0:22:09 > 0:22:10the end of the human race.
0:22:10 > 0:22:16Do you find that kind of thought helpful or deeply unhelpful?
0:22:16 > 0:22:17Deeply unhelpful.
0:22:17 > 0:22:22The problem is that it is not inevitable.
0:22:22 > 0:22:26What he is talking about here is a very small probability.
0:22:26 > 0:22:30If you like it is a very long series of if this happens,
0:22:30 > 0:22:33then if that happens, then this and so on and on.
0:22:33 > 0:22:35I wrote about this in the Observer back in 2014.
0:22:35 > 0:22:42Before Stephen Hawking got involved in this debate.
0:22:42 > 0:22:46My view is that we are worrying about an extraordinarily unlikely
0:22:46 > 0:22:55event, that is, the intelligence explosion...
0:22:55 > 0:22:58Do you think we have actually been too conditioned by science fiction
0:22:58 > 0:23:01and by the Terminator concept?
0:23:01 > 0:23:02We have.
0:23:02 > 0:23:03There is no doubt.
0:23:03 > 0:23:05The problem is that we are fascinated.
0:23:05 > 0:23:08It is a combination of fear and fascination and that is why
0:23:08 > 0:23:14we love science-fiction.
0:23:14 > 0:23:16But in your view it is fiction?
0:23:16 > 0:23:18That scenario is fiction, but you are quite right,
0:23:19 > 0:23:21there are things we should worry about now.
0:23:21 > 0:23:24We need to worry about jobs, we need to worry about weaponisation
0:23:24 > 0:23:27of AI and we need to worry about standards in driverless cars
0:23:27 > 0:23:29and care robots, in medical diagnosis AIs.
0:23:29 > 0:23:32There are many things that are here and now problems
0:23:32 > 0:23:36in the sense that they are kind of more to do with the fact that AI is not
0:23:37 > 0:23:46very intelligent so we need to worry about artificial stupidity.
0:23:46 > 0:23:48That is a neat way of ending.
0:23:48 > 0:23:51Alan Winfield, thank you very much.