Professor Rosalind Picard - Founder of Affective Computing, MIT (HARDtalk)
Line | From | To | |
---|---|---|---|
Now it's time for HARDtalk. Welcome. Imagine a world | :00:00. | :00:15. | |
where robots can think and feel like humans. One pioneering scientist in | :00:16. | :00:21. | |
this field has advanced the capability of computers to recognise | :00:22. | :00:26. | |
human emotions. My guest today is the American Professor Rosalind | :00:27. | :00:32. | |
Picard from MIT. Could robots fitted with intelligent computers perform | :00:33. | :00:38. | |
tasks like caring for the elderly or fight as soldiers on the | :00:39. | :00:42. | |
battlefield? What would be the ethical implications of that? | :00:43. | :01:12. | |
Rosalind Picard, welcome to HARDtalk. Thank you. How have you | :01:13. | :01:20. | |
managed to make computers read human emotions? Computers are now able to | :01:21. | :01:27. | |
see our facial expressions, listen to our vocal changes, and in some | :01:28. | :01:30. | |
cases we can wear them as sensors that can read our physiological | :01:31. | :01:34. | |
changes. That's the thing that looks like a sweat band on your wrist. | :01:35. | :01:40. | |
Yes, it is one of several sensors in testing right now. How does it | :01:41. | :01:45. | |
work? How does a computer recognise whether you are sad, happy, bored? | :01:46. | :01:53. | |
The first thing we do is understand what people feel comfortable | :01:54. | :01:56. | |
communicating in a certain situation. If they are comfortable | :01:57. | :02:02. | |
having a camera look at them, it can see if they are smiling, frowning or looking | :02:03. | :02:04. | |
interested. If you're out and about and there is no camera | :02:05. | :02:08. | |
looking at you but you want to sense what is going on inside your body, | :02:09. | :02:14. | |
you might wear a sensor. What about the computer itself expressing human | :02:15. | :02:19. | |
emotions? Is that possible? Computers have been able to smile and | :02:20. | :02:26. | |
change their tone of voice for a long time, ranging from | :02:27. | :02:40. | |
Marvin the Paranoid Android to the Macintosh. But actually having | :02:41. | :02:43. | |
mechanisms of emotion remains a challenge. And when we are talking | :02:44. | :02:48. | |
about the emotional intelligence a computer might have, that is the | :02:49. | :02:56. | |
area you are working on? Yes, we don't want computers to have emotions that | :02:57. | :03:00. | |
are annoying, or there for dramatic or entertainment purposes, but that are | :03:01. | :03:03. | |
smart about interacting with us so they are less frustrating to | :03:04. | :03:09. | |
deal with. But how can you measure whether a computer is displaying | :03:10. | :03:13. | |
emotional intelligence? I don't know if you remember when Microsoft had a | :03:14. | :03:19. | |
little paperclip software. A lot of people commanded it to go away. They | :03:20. | :03:22. | |
hated how it looked so happy when they were having misfortune. That | :03:23. | :03:31. | |
was a sign of failure. Yet it had a lot of intelligence underneath. But | :03:32. | :03:34. | |
not knowing when to smile and when not to smile was something that | :03:35. | :03:39. | |
didn't work. What we will see when it succeeds in having emotional | :03:40. | :03:44. | |
intelligence is that people will want to interact with it more. It | :03:45. | :03:47. | |
won't be that the emotion looks like it's there, it will just feel like | :03:48. | :03:50. | |
the interaction is smarter and nicer. You mentioned a | :03:51. | :04:02. | |
groundbreaking aspect of affective computing. You say we must give computers | :04:03. | :04:07. | |
the ability to recognise, understand and express emotions. Why would you | :04:08. | :04:17. | |
want to do that? It's hard to picture when most robots you see in the home | :04:18. | :04:22. | |
are just vacuum cleaners. Are you talking about putting this | :04:23. | :04:27. | |
emotional intelligence computer inside a machine, a robot with limbs | :04:28. | :04:36. | |
that can walk? That is the easiest example for people to | :04:37. | :04:39. | |
understand. You are in the living room and the robot comes in to clean | :04:40. | :04:43. | |
up. It makes a noise and distracts you in some way. Your natural | :04:44. | :04:46. | |
response might be to scowl at it, to glare in its direction. If it doesn't see | :04:47. | :04:51. | |
that you don't like what it's doing, and maybe apologise and slink off, you | :04:52. | :04:56. | |
feel irritated. Over time, as it keeps ignoring your desires, you are | :04:57. | :05:03. | |
going to want to get rid of it. You've jumped ahead of me there. You're saying | :05:04. | :05:06. | |
how we respond to this robot doing your cleaning for you. You have | :05:07. | :05:11. | |
written about how robots are likely to play a central role in our lives in the | :05:12. | :05:16. | |
future. Are you comfortable with your research contributing to this | :05:17. | :05:25. | |
field? I think it's vital that we make interactions with | :05:26. | :05:29. | |
these machines much smarter. Many people are familiar with how smart | :05:30. | :05:36. | |
phones work, or if you wear glasses and they pop your messages up there, it's | :05:37. | :05:44. | |
kind of insensitive. The natural way to indicate to a computer that it | :05:45. | :05:47. | |
needs to do better is to show a negative piece of feedback, a frown | :05:48. | :05:51. | |
or a head shake. If it sees that and uses it to learn how to do a better | :05:52. | :05:54. | |
job of presenting things at the right time, that is smart. How | :05:55. | :06:01. | |
quickly do you think we will get to this era of robots playing a central | :06:02. | :06:06. | |
role in our lives? Many people say by 2015 this is going to be a $50 | :06:07. | :06:12. | |
billion industry. Timing is driven by the people driving the business. | :06:13. | :06:16. | |
The dollars are on the marketing and all that. I am more on the science | :06:17. | :06:22. | |
side. That has progressed enormously. It goes much faster as | :06:23. | :06:24. | |
we are able to get more people interested in sharing their data. | :06:25. | :06:29. | |
Sharing their emotions, turning on their cameras and giving the | :06:30. | :06:35. | |
computer feedback. Some people are saying that within a decade... There | :06:36. | :06:40. | |
is a project at Birmingham University to create a head with two | :06:41. | :06:42. | |
blinking eyes for humans to interact with. That is easier than a sort of TV | :06:43. | :06:47. | |
screen. He says within a decade we will see humans interacting with | :06:48. | :06:54. | |
robots. We have had them in a lab for a while. You can come to MIT and | :06:55. | :07:03. | |
interact with robots that have a similar feel. Taking them to | :07:04. | :07:06. | |
market, into people's homes, is a different thing. I would hope so, | :07:07. | :07:12. | |
but... We already have some technology that is out there | :07:13. | :07:16. | |
now, but if you opt in and turn on your camera it can read your facial | :07:17. | :07:22. | |
expressions and you can then let the maker of a video or an advertisement | :07:23. | :07:26. | |
know if you like what they are showing you or are offended by it or | :07:27. | :07:36. | |
confused. Or just bored. So a lot of people share your ambition. In the | :07:37. | :07:40. | |
UK, in Scotland, the chief executive of the National Health Service, | :07:41. | :07:46. | |
Gordon Jensen, has a project which is about robots helping | :07:47. | :07:49. | |
with the care of patients with dementia. Would | :07:50. | :08:05. | |
you want to see a robot performing the duties of medical personnel? I | :08:06. | :08:11. | |
don't think we are going to completely replace doctors and | :08:12. | :08:16. | |
nurses. I think we'll still want the human element. That said, there are | :08:17. | :08:19. | |
some places where people are already showing that they prefer software | :08:20. | :08:24. | |
agents that are put in a computer body and rolled in. It has been shown that | :08:25. | :08:31. | |
when patients are being discharged and given a bunch of instructions, | :08:32. | :08:34. | |
they prefer to hear the instructions from a character on a | :08:35. | :08:43. | |
screen than from a human being. You say it is not to replace doctors | :08:44. | :08:46. | |
and nurses, but you don't have to pay a robot. For institutions that have to | :08:47. | :08:52. | |
make sure that they balance their books, it is going to be attractive, | :08:53. | :08:57. | |
isn't it, to replace a large number of their medical personnel, if they | :08:58. | :09:01. | |
can, with robots that don't have to be paid? I think we will see this | :09:02. | :09:06. | |
replacing some of the more rote and boring tasks that just require a | :09:07. | :09:12. | |
lot of patient explanation of information. The software can give | :09:13. | :09:19. | |
you instructions and if you don't understand it can look a little | :09:20. | :09:22. | |
concerned and repeat it with infinite patience. A nurse who is already late | :09:23. | :09:25. | |
for her next task might not have that patience. But you can't | :09:26. | :09:30. | |
guarantee that they will only be used in that restricted way. When | :09:31. | :09:35. | |
people are involved, all kinds of things can happen. And people will still be | :09:36. | :09:38. | |
entitled to let the robot take over, but I don't see that | :09:39. | :09:45. | |
happening. There is a prediction that companies will soon sell robots | :09:46. | :09:52. | |
designed to babysit children and serve as companions to people with | :09:53. | :09:56. | |
disabilities. She says this is demeaning and transgressive to our | :09:57. | :10:00. | |
collective sense of humanity. I share some of her concerns. When you | :10:01. | :10:05. | |
go to a nursing home and they have just handed somebody an artificial | :10:06. | :10:09. | |
animal to appease them and keep them happy. That is pretty insensitive. I would | :10:10. | :10:13. | |
not have wanted to leave that with my father when he was close to death. I | :10:14. | :10:19. | |
wanted him to be with humans. At the same time, if somebody chooses, and | :10:20. | :10:24. | |
you see this with kids who choose to comfort themselves stroking their | :10:25. | :10:30. | |
stuffed animal, it is very soothing. And if it kind of wiggles and cosies | :10:31. | :10:34. | |
up to them in a comfortable way with a sense of their stress, there is a | :10:35. | :10:38. | |
sweet spot for something there that can augment what we can do as | :10:39. | :10:43. | |
humans. But the idea of artificial companionship becoming a new norm, | :10:44. | :10:51. | |
she is alarmed by that. I think if you are trying to replace all of us | :10:52. | :10:55. | |
with artificial stuff, I don't see it happening in the next decade. I | :10:56. | :11:02. | |
remember years ago, my friend Marvin, one of my colleagues, said | :11:03. | :11:06. | |
that computers will be so smart that we will be lucky if they keep us | :11:07. | :11:11. | |
around as pets. I was bothered by that. I don't want to make that kind | :11:12. | :11:17. | |
of computer. If we make that then maybe we deserve to be demeaned by | :11:18. | :11:21. | |
that which we have created. We have choices as technology creators. My | :11:22. | :11:26. | |
choice is to make technology that enables us to expand our compassion | :11:27. | :11:33. | |
and emotional intelligence. But that is a real worry, that computers and | :11:34. | :11:36. | |
robots could become superior to humans. It's not just the stuff of | :11:37. | :11:43. | |
science fiction. It is. It is a choice that we make. You don't think | :11:44. | :11:51. | |
it's possible? It's our choice. Do we choose to make them that way? I | :11:52. | :11:57. | |
do the maths, I do the programming that creates the way that they | :11:58. | :12:02. | |
function. If I choose to make them arrogant and demeaning towards us, I | :12:03. | :12:05. | |
suffer from making something like that. You are saying it's not | :12:06. | :12:11. | |
desirable, but I am asking if it is feasible. Professor Hugh Price in | :12:12. | :12:19. | |
the UK, he says the scientific community, people like you, should | :12:20. | :12:24. | |
consider such issues. He says as robots and computers become smarter | :12:25. | :12:27. | |
than humans we could find ourselves at the mercy of machines that are | :12:28. | :12:31. | |
not malicious but whose interests don't include us. I see the | :12:32. | :12:41. | |
temptation to say that they are going to be smarter than us, their | :12:42. | :12:44. | |
interests don't include us. Deep inside that, what does that really | :12:45. | :12:49. | |
mean? What does it mean to be smart? Is it simply holding more | :12:50. | :12:53. | |
mathematical equations? Picking things up on the Web faster? The | :12:54. | :12:59. | |
choice we have is how do we make them into the kind of future that we | :13:00. | :13:09. | |
want to have? This is one of the focuses that we have, why we have | :13:10. | :13:15. | |
shifted from looking at making technology that is more intelligent | :13:16. | :13:20. | |
than we are, to looking at making things that increase our | :13:21. | :13:25. | |
intelligence. The idea of people attaching themselves to these | :13:26. | :13:29. | |
artificial creatures, and the possibility of them becoming smarter | :13:30. | :13:35. | |
and superior to us. The point I'm trying to get to here is under the | :13:36. | :13:38. | |
umbrella of an issue that you yourself have raised, which is the | :13:39. | :13:42. | |
greater the freedom of a machine, the more it will need moral standards. | :13:43. | :13:48. | |
And in this field, in which you are so closely involved, we need ethical | :13:49. | :13:55. | |
guidelines, don't we? I agree. As computers get more abilities to make | :13:56. | :13:58. | |
decisions without our direct control, we have to face some hard | :13:59. | :14:04. | |
questions about how they are going to make those decisions, and do we | :14:05. | :14:09. | |
bias them toward one kind of preference and away from another? In | :14:10. | :14:14. | |
accordance with, perhaps, my values, or those of someone with very different values | :14:15. | :14:18. | |
to me, who wants to see them make decisions in a different way? Are | :14:19. | :14:22. | |
you involved in these kinds of ethical discussions yourself? Or are | :14:23. | :14:26. | |
you just involved in the technology that is going to bring about better | :14:27. | :14:30. | |
computers that have emotional intelligence and robots? I am | :14:31. | :14:35. | |
involved. When I wrote my book, some people said, why did you include a | :14:36. | :14:42. | |
chapter about potential concerns of the technology, it hasn't even | :14:43. | :14:45. | |
launched yet, it's like shooting yourself in the foot before you have | :14:46. | :14:49. | |
taken the first ten steps. It is important that we think about what | :14:50. | :14:53. | |
we can build, but also about what we should build. You say 'should', but I'm | :14:54. | :14:59. | |
trying to get an idea of how concerned you are. Another colleague | :15:00. | :15:03. | |
of yours, the Professor of Personal Robots, says now is the time to | :15:04. | :15:09. | |
start hammering things out. People should have serious dialogues before | :15:10. | :15:12. | |
these robots are in contact with vulnerable populations in | :15:13. | :15:16. | |
particular, she says. Are you involved in these serious dialogues? | :15:17. | :15:20. | |
I know you mentioned it in your book, Affective Computing, but are you | :15:21. | :15:27. | |
perhaps preoccupied with the technological aspects, and not doing | :15:28. | :15:30. | |
enough on the ethical side? There is so much to do, we can always | :15:31. | :15:34. | |
wonder where to put the limited resources of time. I think the | :15:35. | :15:39. | |
dialogues need to be much broader than those of us building the | :15:40. | :15:43. | |
technology. We need to be involved, because ultimately we are the ones | :15:44. | :15:48. | |
making those programming decisions, but they really need to be societal | :15:49. | :15:54. | |
dialogues. So, kudos to you for raising this to an audience. The big | :15:55. | :15:58. | |
thing that really worries many people is the lethal autonomous | :15:59. | :16:05. | |
robots, which can kill targets without the involvement of human | :16:06. | :16:12. | |
handlers. Peter Singer, who is an expert on warfare at the Brookings | :16:13. | :16:18. | |
Institution, says the robot warrior raises profound questions, from the | :16:19. | :16:23. | |
military tactics you use on the ground to the bigger ethical | :16:24. | :16:27. | |
questions like the ones we are discussing. It is kind of detached | :16:28. | :16:32. | |
killing. We send out something to do it for us, and somehow it enables us | :16:33. | :16:38. | |
to pull back emotionally. A machine on the battlefield deciding whether | :16:39. | :16:43. | |
a human combatant should live or die. That is really quite repugnant | :16:44. | :16:48. | |
to a lot of people. You have to realise that the algorithm the | :16:49. | :16:51. | |
machine is using to decide is an algorithm that a bunch of people got | :16:52. | :16:57. | |
together and asked: how do we make the best decision possible, given all | :16:58. | :16:59. | |
the information that a machine and a person with sensors could sense? | :17:00. | :17:06. | |
Just like future cars may be able to drive with fewer accidents, being | :17:07. | :17:13. | |
100% vigilant with great sensors, it may be that in the future a robot | :17:14. | :17:18. | |
deployed to a site where a casualty is may have more medical expertise to | :17:19. | :17:22. | |
make a life-saving decision than the first soldier to arrive. So, you | :17:23. | :17:27. | |
say, get the technology right. You don't believe the American winner | :17:28. | :17:34. | |
of the Nobel Peace Prize for her work against landmines, who says, | :17:35. | :17:39. | |
killer robots loom over our existence, if we don't take action | :17:40. | :17:44. | |
to ban them now? She is very concerned. Do you agree? There are | :17:45. | :17:51. | |
several countries around the world also researching that? We wish there | :17:52. | :17:57. | |
was no war, and that everyone could sit down and play computer games | :17:58. | :18:02. | |
instead of taking people's lives, so I would certainly be on the side | :18:03. | :18:07. | |
of: why don't we put as much money into alternatives to help people | :18:08. | :18:11. | |
get along, and to show off their prowess in some way besides | :18:12. | :18:16. | |
murdering each other. Would you like to see a moratorium? I would like to | :18:17. | :18:22. | |
see the huge amount of resources that are sunk into weapons being | :18:23. | :18:27. | |
sunk into technology that helps people live healthier, more | :18:28. | :18:31. | |
fulfilling lives. Especially people with limited abilities, extending | :18:32. | :18:34. | |
their abilities to live a better life. But, you know, your research, | :18:35. | :18:39. | |
as I said, you are one of the pioneers in this field, is enabling | :18:40. | :18:43. | |
the killer robot, the warrior robot, whatever you want to call it. If you | :18:44. | :18:52. | |
are to give such a robot a sense of compassion and caring for people's | :18:53. | :18:55. | |
feelings, then it might deliberate out there about the killing, instead | :18:56. | :19:04. | |
of the dystopian science fiction... Can you do that? | :19:05. | :19:09. | |
Because machines lack morality and mortality, and as | :19:10. | :19:16. | |
Christof Heyns says, as a result they should not have life-and-death | :19:17. | :19:20. | |
powers over humans. They lack morality and mortality. Are you | :19:21. | :19:26. | |
saying they don't? Or they won't? It depends on the biases that we | :19:27. | :19:30. | |
programme them with. If we programme them to value human life... Perhaps | :19:31. | :19:36. | |
the military is going to programme them to take human life, and then | :19:37. | :19:41. | |
somebody hacks the squadron of robots to actually disobey the | :19:42. | :19:46. | |
military and to value human life. There is a tantalising thought, | :19:47. | :19:51. | |
right? It is much harder to hack a group of human soldiers who have | :19:52. | :19:55. | |
sworn their allegiance, than a group of machines. That is a new kind of | :19:56. | :19:59. | |
risk that they face, that the machines may become disobedient. Are | :20:00. | :20:04. | |
you not really delving into the realms of creating humanoids, in a | :20:05. | :20:09. | |
way? You are saying that these machines have worth, they are not | :20:10. | :20:14. | |
worthless, you have then given them dignity? And, if you give them dignity, then | :20:15. | :20:18. | |
you have to give them morals, as you say, so they can make decisions when | :20:19. | :20:21. | |
they are on the battlefield for instance. And for someone who | :20:22. | :20:26. | |
started life as an atheist, but then as an adult became a committed | :20:27. | :20:29. | |
Christian, does it not worry you that you are creating humanoids? I am not | :20:30. | :20:35. | |
worried about making machines that have humanlike characteristics. | :20:36. | :20:40. | |
There are a lot of questions in there. I think, as we try to | :20:41. | :20:47. | |
build something, it is a way of trying to understand something. As | :20:48. | :20:50. | |
we try to build a computer vision system, we are trying to understand | :20:51. | :20:55. | |
how our own eyes work, which turns out to be pretty amazing and | :20:56. | :20:57. | |
miraculous, and we still don't fully understand it. We open them and | :20:58. | :21:02. | |
they work, and you think it is pretty easy. As we build a | :21:03. | :21:06. | |
complicated human being, which, by the way, we are very far from | :21:07. | :21:18. | |
doing, we are learning about | :21:19. | :21:25. | |
how we are made. You are referring to your belief that God made us, and | :21:26. | :21:31. | |
you are in awe of that. I don't deny what biology has shown, with | :21:32. | :21:38. | |
genetics and the evolutionary process, the science I have no | :21:39. | :21:42. | |
problem with. You don't want to be put in the tight category of a | :21:43. | :21:50. | |
creationist versus evolutionist. I think they are false categories; I | :21:51. | :21:56. | |
don't know anybody who fits into those categories. What science | :21:57. | :22:00. | |
doesn't show is why we are here, what gave rise to the first | :22:01. | :22:07. | |
particles, the first forces. You think that when somebody like | :22:08. | :22:12. | |
Richard Dawkins says, one of the truly bad effects of religion is | :22:13. | :22:15. | |
that it teaches us that it is a virtue to be satisfied with not | :22:16. | :22:18. | |
understanding, should you, as a good scientist, say there are no limits | :22:19. | :22:23. | |
to what I can understand as a scientist, in order to be a really | :22:24. | :22:28. | |
good scientist? I'm afraid his comments are misleading to people, | :22:29. | :22:32. | |
and he speaks with authority that he really doesn't have, to make claims | :22:33. | :22:38. | |
like that. It is not that these are in opposition, they address | :22:39. | :22:42. | |
different things. Science addresses what we can measure, what is | :22:43. | :22:45. | |
repeatable, it gives us mechanisms for describing something. It is a | :22:46. | :22:49. | |
very powerful way to understand things. I don't believe we should | :22:50. | :22:53. | |
take something that we don't understand scientifically, and just | :22:54. | :22:58. | |
say, a miracle happened and God did it. That is called the God of the | :22:59. | :23:04. | |
gaps, and I don't practice that. He needs to see that there are plenty | :23:05. | :23:09. | |
of scientists who are actually quite devout Christians, and it is not | :23:10. | :23:15. | |
incompatible. Finally, I should say that the research that you have | :23:16. | :23:18. | |
worked on is being used in the area of trying to help autistic people, | :23:19. | :23:22. | |
and people who have difficulty recognising human emotions, and so | :23:23. | :23:26. | |
on, and there is a lot of work being done in that area. But, as the | :23:27. | :23:30. | |
mother of three young, energetic boys, would you like to have a robot | :23:31. | :23:37. | |
helping you in your duties, as well as those of their father, to help you | :23:38. | :23:41. | |
bring them up? I would love to have a robot in the house to help out. | :23:42. | :23:50. | |
Tasks like helping clean up, and nagging them to make them do some | :23:51. | :23:53. | |
chores sometimes. I have three amazing sons and an incredible husband who | :23:54. | :23:57. | |
all work together to get through each day. But a robot would be | :23:58. | :24:02. | |
welcome; we would rather have a robot than a dog or a cat. Thank you | :24:03. | :24:05. | |
for coming on HARDtalk. My pleasure. | :24:06. | :24:34. | |