Professor Rosalind Picard - Founder of Affective Computing, MIT

Transcript

:00:00. > :00:15.Now it's time for HARDtalk. Welcome. Imagine a world

:00:16. > :00:21.where robots can think and feel like humans. One pioneering scientist in

:00:22. > :00:26.this field has advanced the capability of computers to recognise

:00:27. > :00:32.human emotions. My guest today is the American Professor Rosalind

:00:33. > :00:38.Picard from MIT. Could robots fitted with intelligent computers perform

:00:39. > :00:42.tasks like caring for the elderly or fight as soldiers on the

:00:43. > :01:12.battlefield? What would be the ethical implications of that?

:01:13. > :01:20.Rosalind Picard, welcome to HARDtalk. Thank you. How have you

:01:21. > :01:27.managed to make computers read human emotions? Computers are now able to

:01:28. > :01:30.see our facial expressions, listen to our vocal changes, and in some

:01:31. > :01:34.cases we can wear them as sensors that can read our physiological

:01:35. > :01:40.changes. That's the thing that looks like a sweat band on your wrist.

:01:41. > :01:45.Yes, it is one of several sensors in testing right now. How does it

:01:46. > :01:53.work? How does a computer recognise whether you are sad, happy, bored?

:01:54. > :01:56.The first thing we do is understand what people feel comfortable

:01:57. > :02:02.communicating in a certain situation. If they are comfortable

:02:03. > :02:04.having a camera look at them, it can see if they are smiling, frowning, looking

:02:05. > :02:08.interested. If you are out and about and there is no camera

:02:09. > :02:14.looking at you but you want to sense what is going on inside your body,

:02:15. > :02:19.you might wear a sensor. What about the computer itself expressing human

:02:20. > :02:26.emotions? Is that possible? Computers have been able to smile and

:02:27. > :02:40.change their tone of voice for a long time, ranging from the likes of

:02:41. > :02:43.Marvin the Paranoid Android to the Macintosh. But actually having

:02:44. > :02:48.mechanisms of emotion remains a challenge. And when we are talking

:02:49. > :02:56.about the emotional intelligence a computer might have, that is the

:02:57. > :03:00.area you are working on? Yes, we want computers to have emotions, not

:03:01. > :03:03.ones that are annoying or there for dramatic entertainment purposes, but ones that are

:03:04. > :03:09.smart about interacting with us so they are less frustrating to

:03:10. > :03:13.deal with. But how can you measure whether a computer is displaying

:03:14. > :03:19.emotional intelligence? I don't know if you remember when Microsoft had a

:03:20. > :03:22.little paperclip software. A lot of people wanted it to go away. They

:03:23. > :03:31.hated how it looked so happy when they were having misfortune. That

:03:32. > :03:34.was a sign of failure. Yet it had a lot of intelligence underneath. But

:03:35. > :03:39.not knowing when to smile and when not to smile was something that

:03:40. > :03:44.didn't work. What we will see when it succeeds in having emotional

:03:45. > :03:47.intelligence is that people will want to interact with it more. It

:03:48. > :03:50.won't be that the emotion looks like it's there, it will just feel like

:03:51. > :04:02.the interaction is smarter and nicer. You mentioned a

:04:03. > :04:07.groundbreaking concept, affective computing. You say we must give them

:04:08. > :04:17.the ability to recognise, understand and express emotions. Why would you

:04:18. > :04:22.want to do that? It's hard to see why, when most robots in the home

:04:23. > :04:27.are just vacuum cleaners. Are you talking about putting this

:04:28. > :04:36.emotionally intelligent computer inside a machine, a robot with limbs

:04:37. > :04:39.that can walk? That is the easiest example for people to

:04:40. > :04:43.understand. You are in the living room and the robot comes in to clean

:04:44. > :04:46.up. It makes a noise and distracts you in some way, and your natural

:04:47. > :04:51.response might be to scowl at it or frown in its direction. If it doesn't see

:04:52. > :04:56.that you don't like what it's doing, and maybe apologise and slink off, you

:04:57. > :05:03.feel irritated. Over time, as it keeps ignoring your desires, you are

:05:04. > :05:06.going to want to get rid of it. You have jumped a stage there. You're saying

:05:07. > :05:11.how we respond to this robot doing our cleaning for us. You have

:05:12. > :05:16.written about how robots are likely to play a central role in our lives in the

:05:17. > :05:25.future. Are you comfortable with your research contributing to this

:05:26. > :05:29.field? I think it's vital that we make interactions with

:05:30. > :05:36.these machines much smarter. Many people are familiar with how smart

:05:37. > :05:44.phones work, or if you wear glasses and it pops up your messages there, it's

:05:45. > :05:47.kind of insensitive. The natural way to indicate to a computer that it

:05:48. > :05:51.needs to do better is to show a negative piece of feedback, a frown

:05:52. > :05:54.or head shake. If it sees that and uses it to learn how to do a better

:05:55. > :06:01.job of presenting things at the right time, that is smart. How

:06:02. > :06:06.quickly do you think we will get to this era of robots playing a central

:06:07. > :06:12.role in our lives? Many people say by 2015 this is going to be a $50

:06:13. > :06:16.billion industry. Timing is driven by the people driving the business.

:06:17. > :06:22.The dollars are in the marketing and all that. I am more on the science

:06:23. > :06:24.side. That has progressed enormously. It goes much faster as

:06:25. > :06:29.we are able to get more people interested in sharing their data.

:06:30. > :06:35.Sharing their emotions, turning on their cameras and giving the

:06:36. > :06:40.computer feedback. Some people are saying that within a decade... There

:06:41. > :06:42.is a project at Birmingham University to create a head with two

:06:43. > :06:47.blinking eyes for humans to interact with. It is easier than a sort of TV

:06:48. > :06:54.screen. He says within a decade we will see humans interacting with

:06:55. > :07:03.robots. We have had them in a lab for a while. You can come to MIT and

:07:04. > :07:06.interact with robots that have a similar feel. Taking them to the

:07:07. > :07:12.market, to people's homes, is a different thing. I would hope so,

:07:13. > :07:16.but... We already have some technology that is out there right

:07:17. > :07:22.now, but if you opt in and turn on your camera it can read your facial

:07:23. > :07:26.expressions and you can then let the maker of a video or an advertiser

:07:27. > :07:36.know if you like what they are showing you or are offended by it or

:07:37. > :07:40.confused. Or just bored. So a lot of people share your ambition. In the

:07:41. > :07:46.UK, in Scotland, the chief executive of the National Health Service,

:07:47. > :07:49.Gordon Jensen, has a project which is about robots helping

:07:50. > :08:05.with the care of patients with dementia. Would

:08:06. > :08:11.you want to see a robot performing the duties of medical personnel? I

:08:12. > :08:16.don't think we are going to completely replace doctors and

:08:17. > :08:19.nurses. I think we'll still want the human element. That said, there are

:08:20. > :08:24.some places where people are already showing that they prefer software

:08:25. > :08:31.agents that are put in a computer body and rolled in. It has been shown that

:08:32. > :08:34.when patients are being discharged and given a bunch of instructions,

:08:35. > :08:43.they prefer to hear the instructions from a character on a

:08:44. > :08:46.screen than from a human being. You said it is not to replace doctors

:08:47. > :08:52.and nurses, but you don't have to pay a robot. For institutions that have to

:08:53. > :08:57.make sure that they balance their books, it is going to be attractive,

:08:58. > :09:01.isn't it? To replace a large number of their medical personnel, if they

:09:02. > :09:06.can, with robots that don't have to be paid. I think we will see them

:09:07. > :09:12.replacing some of the more rote and boring tasks that just require a

:09:13. > :09:19.lot of patient examination of information. The software can give

:09:20. > :09:22.you instructions and if you don't understand it can look a little

:09:23. > :09:25.concerned and repeat it with infinite patience. A nurse who is already late

:09:26. > :09:30.for her next task might not have that patience. But you can't

:09:31. > :09:35.guarantee that they will only be used in that restricted way. When

:09:36. > :09:38.people are involved, all kinds of things can happen. And people will still be

:09:39. > :09:45.entitled to let the robot take over, but I don't see that

:09:46. > :09:52.happening. There is a prediction that companies will soon sell robots

:09:53. > :09:56.designed to babysit children and to serve as companions to people with

:09:57. > :10:00.disabilities. She says this is demeaning and transgressive to our

:10:01. > :10:05.collective sense of humanity. I share some of her concerns. When you

:10:06. > :10:09.go to a nursing home and they have just handed somebody an artificial

:10:10. > :10:13.animal to appease them and keep them happy. That is pretty insensitive. I would

:10:14. > :10:19.not have wanted to leave that with my father when he was close to death. I

:10:20. > :10:24.wanted him to be with humans. At the same time, if somebody chooses, and

:10:25. > :10:30.you see this with kids who choose to comfort themselves stroking their

:10:31. > :10:34.stuffed animal, it is very soothing. And if it kind of wiggles and cosies

:10:35. > :10:38.up to them in a comfortable way when it senses their stress, there is a

:10:39. > :10:43.sweet spot for something there that can augment what we can do as

:10:44. > :10:51.humans. But the idea of artificial companionship becoming a new norm,

:10:52. > :10:55.she is alarmed by that. I think if you are trying to replace all of us

:10:56. > :11:02.with artificial stuff, I don't see it happening in the next decade. I

:11:03. > :11:06.remember years ago, my friend Marvin, one of my colleagues, said

:11:07. > :11:11.that computers will be so smart that we will be lucky if they keep us

:11:12. > :11:17.around as pets. I was bothered by that. I don't want to make that kind

:11:18. > :11:21.of computer. If we make that then maybe we deserve to be demeaned by

:11:22. > :11:26.that which we have created. We have choices as technology creators. My

:11:27. > :11:33.choice is to make technology that enables us to expand our compassion

:11:34. > :11:36.and emotional intelligence. But that is a real worry, that computers and

:11:37. > :11:43.robots could become superior to humans. It's not just the stuff of

:11:44. > :11:51.science fiction. It is. It is a choice that we make. You don't think

:11:52. > :11:57.it's possible? It's our choice. Do we choose to make them that way? I

:11:58. > :12:02.do the maths, I do the programming that creates the way that they

:12:03. > :12:05.function. If I choose to make them arrogant and demeaning towards us, I

:12:06. > :12:11.suffer from making something like that. You are saying it's not

:12:12. > :12:19.desirable, but I am asking if it is feasible. Professor Huw Price in

:12:20. > :12:24.the UK, he says the scientific community, people like you, should

:12:25. > :12:27.consider such issues. He says as robots and computers become smarter

:12:28. > :12:31.than humans we could find ourselves at the mercy of machines that are

:12:32. > :12:41.not malicious but whose interests don't include us. I see the

:12:42. > :12:44.temptation of that thought. They are going to be smarter than us, their

:12:45. > :12:49.interests don't include us. Deep inside that, what does that really

:12:50. > :12:53.mean? What does it mean to be smart? Is it simply holding more

:12:54. > :12:59.mathematical equations? Picking things up on the Web faster? The

:13:00. > :13:09.choice we have is how do we make them fit into the kind of future that we

:13:10. > :13:15.want to have? This is one of the focuses that we have, and it is why we have

:13:16. > :13:20.shifted from looking at making technology that is more intelligent

:13:21. > :13:25.than we are, to looking at making things that increase our

:13:26. > :13:29.intelligence. The idea of people attaching themselves to these

:13:30. > :13:35.artificial creatures, and the possibility of them becoming smarter

:13:36. > :13:38.and superior to us. The point I'm trying to get to here is under the

:13:39. > :13:42.umbrella of an issue that you yourself have raised, which is the

:13:43. > :13:48.greater the freedom of a machine, the more it will need moral standards.

:13:49. > :13:55.And in this field, in which you are so closely involved, we need ethical

:13:56. > :13:58.guidelines, don't we? I agree. As computers get more abilities to make

:13:59. > :14:04.decisions without our direct control, we have to face some hard

:14:05. > :14:09.questions about how they are going to make those decisions, and do we

:14:10. > :14:14.bias them toward one kind of preference and away from another? In

:14:15. > :14:18.accordance with perhaps my values, or those of someone with very different values

:14:19. > :14:22.to me, who wants to see them make decisions in a different way. Are

:14:23. > :14:26.you involved in these kinds of ethical discussions yourself? Or are

:14:27. > :14:30.you just involved in the technology that is going to bring about better

:14:31. > :14:35.computers that have emotional intelligence and robots? I am

:14:36. > :14:42.involved. When I wrote my book, some people said, why did you include a

:14:43. > :14:45.chapter about potential concerns of the technology, it hasn't even

:14:46. > :14:49.launched yet, it's like shooting yourself in the foot before you have

:14:50. > :14:53.taken the first ten steps. It is important that we think about what

:14:54. > :14:59.we can build, but also about what we should build. You say sure, but I'm

:15:00. > :15:03.trying to get an idea of how concerned you are. Another colleague

:15:04. > :15:09.of yours, the Professor of personal robots, says, now is the time to

:15:10. > :15:12.start hammering things out. People should have serious dialogues before

:15:13. > :15:16.these robots are in contact with vulnerable populations in

:15:17. > :15:20.particular, she says. Are you involved in these serious dialogues?

:15:21. > :15:27.And yes, you mentioned it in your book, Affective Computing, but are you

:15:28. > :15:30.perhaps preoccupied with the technological aspects, and not doing

:15:31. > :15:34.enough on the ethical field? There is so much to do, we can always

:15:35. > :15:39.wonder where to put the limited resources of time. I think the

:15:40. > :15:43.dialogues need to be much broader than those of us building the

:15:44. > :15:48.technology. We need to be involved, because ultimately we are the ones

:15:49. > :15:54.making those programming decisions, but they really need to be societal

:15:55. > :15:58.dialogues. So, kudos to you for raising this to an audience. The big

:15:59. > :16:05.thing that really worries many people is the lethal autonomous

:16:06. > :16:12.robotics, that can kill targets without the involvement of human

:16:13. > :16:18.handlers. Peter Singer, who is an expert on warfare at the Brookings

:16:19. > :16:23.Institution, says the robot warrior raises profound questions, from the

:16:24. > :16:27.military tactics you use on the ground, and also the bigger ethical

:16:28. > :16:32.questions like the ones we are discussing. It is kind of detached

:16:33. > :16:38.killing. We send out something to do it for us, and somehow it enables us

:16:39. > :16:43.to pull back emotionally. A machine on the battlefield deciding whether

:16:44. > :16:48.a human combatant should live or die. That is really quite repugnant

:16:49. > :16:51.to a lot of people. You have to realise that the algorithm the

:16:52. > :16:57.machine is using to decide is an algorithm that a bunch of people got

:16:58. > :16:59.together and asked, how do we make the best decision possible, given all

:17:00. > :17:06.the information that a machine and a person with sensors could sense?

:17:07. > :17:13.Just like future cars may be able to drive with fewer accidents, being

:17:14. > :17:18.100% vigilant with great sensors, it may be that in the future a robot

:17:19. > :17:22.deployed to a site where a casualty is, may have more medical expertise to

:17:23. > :17:27.make a life-saving decision than the first soldier to arrive. So, you

:17:28. > :17:34.say, get the technology right. You don't believe the American winner

:17:35. > :17:39.of the Nobel Peace Prize for her work against landmines, who says,

:17:40. > :17:44.killer robots loom over our existence if we don't take action

:17:45. > :17:51.to ban them now? She is very concerned. Do you agree? There are

:17:52. > :17:57.several countries around the world also researching that? We wish there

:17:58. > :18:02.was no war, and that everyone could sit down and play computer games

:18:03. > :18:07.instead of taking people's lives, so I would certainly be on the side

:18:08. > :18:11.of, why don't we put as much money into alternatives to help people

:18:12. > :18:16.get along, and to show off their prowess in some way besides

:18:17. > :18:22.murdering each other. Would you like to see a moratorium? I would like to

:18:23. > :18:27.see the huge amount of resources that are sunk into weapons being

:18:28. > :18:31.sunk into technology that helps people live healthier, more

:18:32. > :18:34.fulfilling lives. Especially people with limited abilities, extending

:18:35. > :18:39.their abilities to live a better life. But, you know, your research,

:18:40. > :18:43.as I said, you are one of the pioneers in this field, is enabling

:18:44. > :18:52.the killer robot, the warrior robot, whatever you want to call it. If you

:18:53. > :18:55.are to give such a robot a sense of compassion and caring for people's

:18:56. > :19:04.feelings, then it might deliberate out there about the killing, instead

:19:05. > :19:09.of the dystopian science fiction... Can you do that?

:19:10. > :19:16.Because machines lack morality and mortality, and as

:19:17. > :19:20.Christof Heyns says, as a result they should not have life-and-death

:19:21. > :19:26.powers over humans. They lack morality and mortality. You are

:19:27. > :19:30.saying they don't? Or they won't? It depends on the biases that we

:19:31. > :19:36.programme them with. If we programme them to value human life, perhaps,

:19:37. > :19:41.maybe the military is going to programme them to take human life, and if

:19:42. > :19:46.somebody hacks the squadron of robots to actually disobey the

:19:47. > :19:51.military and to value human life, there is a tantalising thought,

:19:52. > :19:55.right? It is much harder to hack a group of human soldiers who have

:19:56. > :19:59.sworn their allegiance, than a group of machines. That is a new kind of

:20:00. > :20:04.risk that they face, that the machines may become disobedient. Are

:20:05. > :20:09.you not really delving into the realms of creating humanoids, in a

:20:10. > :20:14.way? You are saying that these machines have worth, they are not

:20:15. > :20:18.worthless, you then give them dignity? And, if you give them dignity, then

:20:19. > :20:21.you have to give them morals, as you say, so they can make decisions when

:20:22. > :20:26.they are on the battlefield for instance. And for someone who

:20:27. > :20:29.started life as an atheist, but then as an adult became a committed

:20:30. > :20:35.Christian, does it not worry you that you are creating humanoids? I am not

:20:36. > :20:40.worried about making machines that have humanlike characteristics.

:20:41. > :20:47.There are a lot of questions in there. I think we, as we try to

:20:48. > :20:50.build something, it is a way of trying to understand something. As

:20:51. > :20:55.we try to build a computer vision system, we are trying to understand

:20:56. > :20:57.how our own eyes work, which turns out to be pretty amazing and

:20:58. > :21:02.miraculous, and we still don't fully understand it. We open them and

:21:03. > :21:06.they work, and you think it is pretty easy. As we try to build a

:21:07. > :21:18.complicated human being, which, by the way, we are very far from

:21:19. > :21:25.doing, it is a way of learning about

:21:26. > :21:31.how we are made. You are referring to your belief that God made us, and

:21:32. > :21:38.you are in awe of that. I don't deny what biology has shown, with

:21:39. > :21:42.genetics and the evolutionary process, the science I have no

:21:43. > :21:50.problem with. You don't want to be put in the tight category of a

:21:51. > :21:56.creationist versus evolutionist. I think they are false categories, I

:21:57. > :22:00.don't know anybody who fits into those categories. What science

:22:01. > :22:07.doesn't show is why we are here, what gave rise to the first

:22:08. > :22:12.particles, the first forces. Do you think that when somebody like

:22:13. > :22:15.Richard Dawkins says, one of the truly bad effects of religion is

:22:16. > :22:18.that it teaches us that it is a virtue to be satisfied with not

:22:19. > :22:23.understanding, should you, as a good scientist, say, there are no limits

:22:24. > :22:28.to what I can understand as a scientist, in order to be a really

:22:29. > :22:32.good scientist? I'm afraid his comments are misleading to people,

:22:33. > :22:38.and he speaks with authority that he really doesn't have, to make claims

:22:39. > :22:42.like that. It is not that these are in opposition, they address

:22:43. > :22:45.different things. Science addresses what we can measure, what is

:22:46. > :22:49.repeatable, it gives us mechanisms for describing something. It is a

:22:50. > :22:53.very powerful way to understand things. I don't believe we should

:22:54. > :22:58.take something that we don't understand scientifically, and just

:22:59. > :23:04.say, a miracle happened and God did it. That is called the God of the

:23:05. > :23:09.gaps, and I don't practice that. He needs to see that there are plenty

:23:10. > :23:15.of scientists who are actually quite devout Christians, and it is not

:23:16. > :23:18.incompatible. Finally, I should say that the research that you have

:23:19. > :23:22.worked on is being used in the area of trying to help autistic people,

:23:23. > :23:26.and people who have difficulty recognising human emotions, and so

:23:27. > :23:30.on, and there is a lot of work being done in that area. But, as the

:23:31. > :23:37.mother of three young, energetic boys, would you like to have a robot

:23:38. > :23:41.helping you in your duties, as well as those of their father, to help you

:23:42. > :23:50.bring them up? I would love to have a robot in the house to help out.

:23:51. > :23:53.Tasks like helping clean up, and nagging them to make them do some

:23:54. > :23:57.more sometimes. I have three amazing sons and an incredible husband who

:23:58. > :24:02.all work together to get through each day. But a robot would be

:24:03. > :24:05.welcome, we would rather have a robot than a dog or a cat. Thank you

:24:06. > :24:34.for coming on HARDtalk. My pleasure.