:01:11. > :01:15.Just before Christmas, I witnessed something extraordinary. I stood in
:01:15. > :01:20.a room with two robots that were starting to do something distinctly
:01:20. > :01:27.human. They were learning like young children do, when they
:01:27. > :01:30.discover the world for the first time. They were developing a
:01:30. > :01:36.language, one that I didn't understand. It's quite scary. Oh,
:01:36. > :01:46.gosh, it's asked me to do something. More and more of the things we once
:01:46. > :01:47.
:01:47. > :01:53.thought made us human are now being taken on by machines. I want to
:01:53. > :01:59.find out how close we are to creating artificial intelligence.
:01:59. > :02:05.And what that might mean for us as humans. As a mathematician, should I be
:02:05. > :02:10.worried that they may put me out of a job, or could they extend human
:02:10. > :02:20.intelligence in unimaginable new ways? What are these machines?
:02:20. > :02:41.
:02:42. > :02:48.In February 2011, New York was the scene of a very public test of how
:02:48. > :02:55.clever machines have become. A showdown watched by millions took
:02:56. > :02:59.place between two men and a supercomputer. The arena for this
:02:59. > :03:04.battle was a quiz show called Jeopardy. It's one of the most
:03:04. > :03:14.popular and long-running quiz shows in America. It's famous for its cryptic questions.
:03:14. > :03:19.
:03:19. > :03:27.Fortunately, those in mankind's corner had no problem with these
:03:27. > :03:34.questions. One of the two human contestants was Brad Rutter, the
:03:34. > :03:41.show's biggest-ever money winner. He became a five-time champion,
:03:42. > :03:45.earning a record $3.2 million. For as long as I can remember, I grew up
:03:45. > :03:48.with Jeopardy and by the time I was in high school I started realising
:03:48. > :03:52.I knew the answers to most questions and if I ever get a
:03:52. > :03:58.chance to try out, I think I can do pretty well. If you had told me
:03:59. > :04:08.how well I would actually do I would have said you were crazy, but
:04:08. > :04:11.$3.2 million later, here I am. Then there was the newcomer. This is Watson.
:04:11. > :04:20.Watson was one of the most sophisticated AI computers ever
:04:20. > :04:23.built. Sitting in the audience was his creator. We wanted to do this
:04:23. > :04:28.kind of experiment, this illustration of what it would take
:04:28. > :04:34.for a computer to play Jeopardy against a human. A brain that fits
:04:34. > :04:38.in a shoebox is powered on a glass of water and a sandwich; then you have a
:04:38. > :04:43.computer needing 80 kilowatts of electricity and 20 tonnes of air
:04:43. > :04:51.conditioning. This is what it took. In the lead-up to the match, Watson
:04:51. > :04:56.had to learn how to play the game. In effect, it had to be coached.
:04:56. > :04:59.One of the most difficult tasks it faced was to get to grips with the
:04:59. > :05:09.cryptic format of the game, where the contestants are first given the
:05:09. > :05:11.
:05:11. > :05:16.answer and have to come up with the question. This queen fled to London.
:05:16. > :05:23.It had to go through the 200 million pages of information it had
:05:23. > :05:30.been programmed with and find the answer in just three seconds.
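The pipeline just described — search a mass of text and surface a best answer, fast — can be caricatured in a few lines. This is purely a toy sketch, not IBM's actual system: the two-entry "knowledge base", its wording, and the keyword-overlap scoring are all made up for illustration.

```python
import re

def words(text):
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def best_response(clue, passages):
    """Score each passage by keyword overlap with the clue and
    phrase the best hit as a Jeopardy-style question."""
    best = max(passages, key=lambda p: len(words(clue) & words(p["text"])))
    return f"Who is {best['answer']}?"

# Illustrative entries only; Watson searched some 200 million pages
# and ranked candidate answers with statistical confidence scores.
knowledge = [
    {"answer": "Franz Liszt",
     "text": "a virtuoso pianist and composer of Hungarian rhapsodies"},
    {"answer": "Bozo",
     "text": "a famous clown whose name also means any incompetent fool"},
]

print(best_response("A famous clown, or any incompetent fool?", knowledge))
# -> Who is Bozo?
```

The hard part Watson solved is everything this sketch skips: parsing puns and riddles, generating many candidate answers, and knowing how confident to be before buzzing in.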
:05:30. > :05:39.A famous clown or any incompetent fool? Who is Bozo? After four
:05:39. > :05:46.years of development, Watson was considered ready for the challenge.
:05:46. > :05:52.Let's play Jeopardy. Here we go. It was a really, really tense moment.
:05:52. > :05:57.If we were going to fail, we were going to fail in front of - This
:05:57. > :06:02.was real jeopardy? This is real. You were putting yourself on the
:06:02. > :06:06.line. This was real and live. People are going to root for the
:06:06. > :06:13.people rather than the machine, so we felt a little bit of pressure,
:06:13. > :06:18.but also a great opportunity to kind of represent humanity against
:06:18. > :06:25.encroaching technology, shall we say? Four-letter word for a vantage
:06:25. > :06:32.point or belief? What is a view. You have adrenaline going. It was
:06:32. > :06:38.an open playing field. Anything could happen. Who is Grendel?
:06:38. > :06:46.After the first round of the first game, Watson was tied
:06:46. > :06:49.with Brad at about $5,000. Into a tie for the lead. I think everyone
:06:50. > :06:55.sat there and thought at least we didn't lose. In other words, at
:06:55. > :06:58.least at this point, the audience is
:06:58. > :07:03.understanding that the system is good enough to tie a grand champion. Too
:07:03. > :07:08.close for comfort. I remember that moment when I first heard about
:07:08. > :07:12.Watson. I thought what a key moment for AI. We have already programmed
:07:12. > :07:18.a computer to beat humans at chess, but chess is a vertical thought
:07:18. > :07:24.process, where you are making deep, logical connections. Watson is
:07:24. > :07:27.doing something completely different and thinking horizontally,
:07:27. > :07:33.accessing a huge database. This is machine intelligence taken in a new
:07:33. > :07:37.direction. The computer is starting to think in a multi-dimensional way.
:07:37. > :07:41.How the game would unfold was to change the way many people think
:07:41. > :07:46.about artificial intelligence, including me. But in order to
:07:46. > :07:56.understand the real significance of Watson, I need to know where this
:07:56. > :08:00.
:08:00. > :08:06.dream to replicate our intelligence came from. For thousands of years, we have
:08:06. > :08:10.invented machines to do things for us. But there was one moment in
:08:10. > :08:20.history where we realised they weren't just slaves, but had the
:08:20. > :08:24.
:08:24. > :08:32.potential to be our rivals, or even more. I've come to the place where this
:08:32. > :08:36.realisation first came to life. It was in this building, that the
:08:36. > :08:46.search for artificial intelligence began. What happened here still
:08:46. > :08:47.
:08:47. > :08:55.resonates today. The idea that machines could be intelligent like
:08:55. > :09:04.us was sparked by one man: the brilliant Alan Turing, and
:09:04. > :09:08.he's a real hero of mine. When he was a young man in his 20s he came
:09:08. > :09:13.to work here at Bletchley Park, as a code breaker. This was the home
:09:13. > :09:18.of British intelligence at the time. It was the 1930s and Britain was
:09:18. > :09:24.gearing up to war with Germany. As well as needing the brute force of
:09:24. > :09:28.the military, the Government were gathering a crack team here. And
:09:28. > :09:38.his genius was precisely what was needed at this time of national crisis.
:09:38. > :09:54.
:09:55. > :09:58.The main problem that he had to wrestle with whilst here, was this.
:09:59. > :10:02.It's an enigma machine and it's an incredibly clever piece of kit. It
:10:03. > :10:09.was used by the German military to scramble secret messages into code
:10:10. > :10:13.that nobody could understand. Cracking this code was incredibly
:10:13. > :10:23.difficult for humans. The Germans believed that the code was
:10:23. > :10:26.
:10:26. > :10:31.uncrackable. It was up to human minds to decipher the cryptic
:10:31. > :10:34.communication that was picked up on the airwaves. This was an
:10:34. > :10:44.impossibly long task. We were on the brink of war and didn't have
:10:44. > :10:49.time to waste. The machine consists of a keyboard
:10:49. > :10:53.that I will type my message on to try to encode it. If I press an R,
:10:53. > :10:58.then the wires in the machine go through and light up a letter at
:10:58. > :11:02.the top here, so the R was encoded by the letter N, but the incredible
:11:02. > :11:07.thing is, about the machine, if I press R again it doesn't get
:11:07. > :11:11.encoded by N the next time, because these rotors have kicked on one step
:11:11. > :11:17.and so the wires are connecting up the letters in a completely
:11:17. > :11:21.different way now. So when I now press R it is F. The operators had
:11:21. > :11:26.different ways that they could set up the machine, so they can make a
:11:26. > :11:29.choice, for example, of which rotors they were using, so there were five
:11:29. > :11:33.different rotors and they would choose three and they could put
:11:33. > :11:38.them inside in different orders. Then they would set each of the
:11:38. > :11:42.rotors on one of 26 different settings and then not only that,
:11:42. > :11:49.they had some plugs at the front here, which would also scramble up
:11:49. > :11:53.all of the letters, so the combined effect is that there are over 150
:11:53. > :12:03.million, million, million different ways
:12:03. > :12:13.that the machine can be set up. It
:12:13. > :12:13.
:12:13. > :12:17.really was human against machine. Alan Turing and his team realised
:12:17. > :12:24.the human mind wasn't fast enough to do this and the only hope they
:12:24. > :12:28.had of cracking the code was to create another machine to do it.
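The number quoted above multiplies up from the choices just described. A quick sanity check under the standard wartime setup; the figure of ten plugboard leads is an assumption (the narration doesn't say how many plugs were connected):

```python
from math import factorial

# Choose 3 of 5 rotors, in order
rotor_orders = 5 * 4 * 3              # 60 ordered choices

# Each of the 3 rotors starts on one of 26 positions
rotor_settings = 26 ** 3              # 17,576 starting positions

# 10 plugboard leads pair up 20 of the 26 letters:
# 26! / (6! * 10! * 2^10) unordered pairings
plugboard = factorial(26) // (factorial(6) * factorial(10) * 2 ** 10)

total = rotor_orders * rotor_settings * plugboard
print(f"{total:.3e}")  # about 1.6e20 -- over 150 million million million
```

That key space is why searching by hand was hopeless, and why the only realistic answer was another machine.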
:12:28. > :12:33.This is the incredible machine that he and his colleagues created. It's
:12:33. > :12:37.called the Bombe. It was from this mesh of cogs and switches held
:12:37. > :12:41.together by over ten miles of wire that finally came the ability to
:12:41. > :12:46.crack the code on a daily basis. Something that no human had come
:12:46. > :12:49.close to being able to do. What this machine did changed the course
:12:49. > :12:59.of the war and many regard it as crucial to the Allied victory over Germany.
:12:59. > :13:03.
:13:03. > :13:06.In one sense, this machine was just a collection of cogs and wheels.
:13:06. > :13:14.But it could do something that some of the finest minds in Britain
:13:14. > :13:24.couldn't. It was an unsettling moment. Was this machine cleverer
:13:24. > :13:27.than a human? That is an incredibly satisfying sound. It must have been
:13:27. > :13:30.an amazing moment when they switched it on for the first time,
:13:30. > :13:33.seeing the cogs clicking through all the different possibilities,
:13:33. > :13:39.finding in ten minutes a possible setting that would have taken
:13:39. > :13:44.humans weeks to find. To see this machine solve a problem that
:13:44. > :13:48.many regarded as impossible, it must have really fired Alan
:13:48. > :13:51.Turing's imagination. This was built to crack the Enigma code, but
:13:51. > :14:01.he must have begun to think about what was the limit of what machines
:14:01. > :14:10.could do? Once the war was over, his ideas about machines that could
:14:10. > :14:15.mimic the human mind really started to take shape. But it was when he
:14:15. > :14:19.asked the question 'Can a machine think?' in a paper in 1950, that the
:14:19. > :14:25.idea finally sparked people's imaginations. For many, it conjured
:14:25. > :14:30.up terrifying visions of the future and sparked furious debate. But it
:14:30. > :14:38.was this idea, above any other, that inspired scientists in labs
:14:38. > :14:42.across the world to try to re-create what is essentially us. The
:14:42. > :14:49.hunt for artificial intelligence has become tied up with one
:14:49. > :14:58.technology, the computer. At the heart of it is the simple idea that
:14:58. > :15:04.humans and computers aren't so very different. The central thesis of AI
:15:04. > :15:08.is that actually all human thought is just computation, that the mind
:15:08. > :15:13.and the brain are just a sophisticated sort of machine. So,
:15:13. > :15:22.if you could make a big enough computer, with the best software,
:15:22. > :15:27.you can make a copy of the human mind. This is why I was so riveted by the man versus
:15:27. > :15:32.machine Jeopardy game. Because at its heart, it was so much more than
:15:32. > :15:41.just a TV showdown. Here was a machine that could grapple with
:15:41. > :15:48.natural language as well as any human. Including puns,
:15:48. > :15:52.riddles and complex questions across a broad range of knowledge.
:15:52. > :15:58.Please welcome Watson. At the end of the first round, the
:15:58. > :16:03.supercomputer Watson was neck and neck with human Brad. But as the
:16:03. > :16:10.second round began, the computer started to show what it was really
:16:10. > :16:16.capable of. Watson? Who is... Franz Liszt. Who is the Church Lady? It
:16:16. > :16:23.all fell apart for Ken and me the second day. Watson went on a tear.
:16:23. > :16:30.What is violin? It exceeded expectations. Watson dominated the
:16:30. > :16:37.entire thing. The final blow for humankind was when Watson went on
:16:37. > :16:40.to win a bonus question. Answer... Which doubled his score. When
:16:40. > :16:46.Watson hit the daily double, that was pretty much the end of the road.
:16:46. > :16:53.You could see the look on Ken's face. We knew, the guys in the team,
:16:53. > :16:57.that "It was over, we've won." Watson's landslide victory made
:16:57. > :17:02.headlines across America. I would have thought technology like this
:17:02. > :17:08.was years away but it is here now. I have the bruised ego to prove it.
:17:08. > :17:12.For days after, the internet was alive with chatter about the demise
:17:12. > :17:18.of humanity. Here was a computer that was displaying uniquely human
:17:18. > :17:25.qualities. This machine could understand all the complexities,
:17:25. > :17:35.puns and riddles much faster than any human. But was this true
:17:35. > :17:36.
:17:36. > :17:39.artificial intelligence? Was Watson really thinking like us? In the 1980s,
:17:39. > :17:42.the American philosopher John Searle challenged the idea that
:17:42. > :17:47.a machine could ever really think. He came up with an interesting
:17:47. > :17:52.thought experiment to try and prove his idea. It is called the Chinese
:17:52. > :18:02.Room. I am going to try and put that into practice to explore his idea.
:18:02. > :18:12.
:18:12. > :18:22.He imagined a non-Chinese speaking person like me, in a bare room,
:18:22. > :18:23.
:18:23. > :18:26.with an instruction manual, and a selection of Chinese characters. He
:18:26. > :18:31.then imagined someone who could speak Chinese, to be outside the
:18:31. > :18:41.room, communicating with me, by a series of messages written using
:18:41. > :18:49.
:18:49. > :18:56.Chinese characters. Can you speak Chinese? The messages are posted to
:18:56. > :19:02.me through a door. And I have to come up with a response, even
:19:02. > :19:08.though I don't understand a word of Chinese. OK I haven't got a clue
:19:08. > :19:13.what this says. But if I follow the instructions in this book, I should
:19:13. > :19:17.be able to give some sort of response to this question, which
:19:17. > :19:25.will make some sort of sense, so let's have a look. What do I have
:19:25. > :19:30.to find? There is a little one over X symbol. The book tells me to find
:19:30. > :19:34.an exact match of the message on the top half of the page. And then
:19:34. > :19:39.to reply by copying the Chinese characters from the bottom half of
:19:39. > :19:45.the page. That is interesting. Some of the characters appear in the
:19:45. > :19:50.answer, which makes some sense. Gosh, I don't know whether these
:19:50. > :20:00.are the right way round. That looks like a little house with a roof on
:20:00. > :20:03.
:20:03. > :20:08.the top. I need this character here. Oh yes. Bingo! Who knows what I
:20:08. > :20:13.have just said! Can you speak Chinese? He compared the person in
:20:13. > :20:23.the room making up the Chinese message, to a computer reading a
:20:23. > :20:45.
:20:45. > :20:51.bit of code. Have you been to China? No, I haven't been to China, but I
:20:51. > :20:55.would one day like to. I am just faking that I can speak Chinese. So just
:20:55. > :20:59.how convincing was I? Your Chinese was perfect. If we were on
:20:59. > :21:03.line having a chat like that, you would have been convinced you were
:21:03. > :21:10.talking to... I really really would. I would have thought you were
:21:10. > :21:13.someone who spoke Chinese. The Chinese
:21:13. > :21:17.important thought experiment. Not for the answers it offers but more
:21:17. > :21:20.for the questions it raises, because I certainly didn't understand a
:21:20. > :21:26.word of what I was writing down there, yet I seem to be able to
:21:26. > :21:31.fake the fact I was speaking Mandarin, so if a computer is
:21:31. > :21:36.following instructions, it isn't really thinking, is it? But then
:21:36. > :21:40.again, what is my mind doing when I am actually articulating words now?
:21:40. > :21:45.It is following a set of instructions, so it raises the
:21:45. > :21:54.question, where is the threshold? At what point can you say a machine
:21:54. > :21:58.is really understanding and thinking? So, are bigger and faster
:21:59. > :22:03.machines going to bring us closer to this threshold? Since John
:22:03. > :22:10.Searle came up with the Chinese Room experiment, computer power has been
:22:10. > :22:16.doubling every two years. I am off to see one of the most powerful
:22:16. > :22:22.computers that exists in the world today. As we enter the lab, I need to
:22:22. > :22:27.warn you it will be very noisy. You will need some ear protection.
:22:27. > :22:31.What is the noise? So that is all cold air blowing through the racks,
:22:31. > :22:34.to cool off the computer chips, because they are working so hard.
:22:34. > :22:42.This is the first time I have done
:22:42. > :22:47.an interview with ear plugs in. This beast of a machine called Blue
:22:47. > :22:51.Gene is a phenomenal piece of engineering. It is used to perform
:22:52. > :22:59.the kind of calculations that my brain could never do, and it never
:23:00. > :23:07.has to sleep. Fred Mintzer is the master of
:23:07. > :23:13.Blue Gene. So this is a first-generation Blue Gene rack.
:23:13. > :23:19.Its computational performance is 5.6 teraflops. A teraflop
:23:20. > :23:26.is a million million operations per second. Faster than I can go.
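The arithmetic behind those figures is worth spelling out; this little check uses only the numbers quoted in this scene (5.6 teraflops per rack, and the 16 racks and 91 teraflops mentioned next):

```python
# One rack: 5.6 teraflops = 5.6 million million operations per second.
ops_per_second = 5.6e12

# If a person managed one operation per second, matching a single
# second of one rack's work would take on the order of 180,000 years.
seconds_per_year = 60 * 60 * 24 * 365
years = ops_per_second / seconds_per_year
print(f"{years:,.0f} years")

# All 16 racks working in parallel:
total_teraflops = 16 * 5.6
print(total_teraflops)  # ~90, in line with the 91 teraflops quoted
```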
:23:26. > :23:32.Faster than you can go. How many have you got? We have 16 racks,
:23:32. > :23:41.they all work in parallel. That is 60,000 processors and a performance
:23:41. > :23:45.of 91 teraflops a second. Blue Gene is solving complex calculations
:23:45. > :23:51.about biological processes. It would take me the rest of my life
:23:51. > :23:59.to do what this computer can do in an hour. But is it really being
:23:59. > :24:06.intelligent? Or is it just doing what it is told? The real
:24:06. > :24:10.intelligence comes from the programmer. It is totally
:24:11. > :24:15.programmed intelligence. Would you say it's a big number cruncher? It
:24:15. > :24:20.is an enormous number cruncher. But it is doing the things our brains
:24:20. > :24:30.could never get anywhere near. Computers are good at computation,
:24:30. > :24:34.
:24:34. > :24:37.brains do much more. It is impressive to see just how powerful
:24:37. > :24:40.they have become. This firepower you have with Blue Gene is
:24:40. > :24:45.something my brain could get nowhere near, but then again,
:24:45. > :24:50.when I am doing maths it is not just number crunching. I am
:24:50. > :24:54.doing more than that. The brain is working, making leaps, with a lot of
:24:54. > :25:00.imagination; there are a lot of choices involved, which often
:25:00. > :25:04.involve a kind of aesthetic sensibility. I am not saying a
:25:05. > :25:08.machine like that might not be able to do that, but at the moment it is
:25:08. > :25:18.sheer brute-force crunching. It hasn't captured what my brain can
:25:18. > :25:21.do. These other qualities that are inherent in human
:25:21. > :25:26.intelligence, like creativity and imagination, are what make us so
:25:26. > :25:31.unique. And they are the reason that the quest for artificial
:25:31. > :25:34.intelligence is such an interesting and difficult challenge. There is
:25:34. > :25:40.no reason to think that a computer couldn't one day develop these
:25:40. > :25:50.qualities too. But to do this, scientists have got to get to grips
:25:50. > :26:01.
:26:01. > :26:07.with exactly how we develop them. At the heart of human intelligence
:26:07. > :26:14.lies our ability to learn to do things we have never done before. Which is
:26:14. > :26:20.what has brought me here. And that is not me, by the way. I have joined
:26:21. > :26:26.the circus and my challenge is to walk the tightrope. Not something
:26:26. > :26:33.mathematicians normally do. My instructor for the day is Lila.
:26:33. > :26:39.You are nervous. It is not that. There we go. This is a training wire.
:26:39. > :26:42.So... This isn't the real thing. It is. It is the same as you will feel on
:26:42. > :26:49.the high wire. You don't have the fear factor, because you can
:26:49. > :26:59.stand over it safely. On the other one there is a bit of jeopardy. No, it
:26:59. > :27:00.
:27:00. > :27:10.is just... You could get a body double. He is wearing a red
:27:10. > :27:11.
:27:11. > :27:14.shirt! See, no movement. So once upon a time he couldn't do that. He
:27:14. > :27:19.wasn't born being able to do that. That is something he learned. So
:27:19. > :27:25.I should have the confidence that I can learn this new skill. You know
:27:25. > :27:33.the younger you learn something... What are you saying? I am saying it
:27:33. > :27:41.is easier. You're saying I am too old? No, not at all. I am going to do
:27:41. > :27:47.this. Lila can't programme me with a set of rules. Unlike a computer,
:27:47. > :27:52.my brain and my body have to master this by themselves. As a
:27:52. > :27:56.mathematician I know there is a formula to describe what is going
:27:56. > :28:06.on here. But I can't calculate my way out of this. I just have to do it.
:28:06. > :28:13.
:28:13. > :28:18.Let your muscles do the thinking. Turn your brain off a bit. I have to
:28:18. > :28:23.do things with my body and I have to think and counteract certain
:28:23. > :28:33.things which I shouldn't do. There is a guy over there, who can just
:28:33. > :28:38.
:28:38. > :28:42.sort of skip and dance across. Hands up. Hands up. Hands up! I am
:28:42. > :28:46.trying to concentrate on the wall over there and keep my hand up, I
:28:46. > :28:50.am trying to keep this leg straight, and maybe I am thinking too much.
:28:50. > :28:54.Don't think - we have to get some of it into muscle memory so you don't
:28:54. > :29:00.think about it, so that it comes naturally. As I flail round on the
:29:00. > :29:04.tightrope, I can see the complicated physics I am asking my
:29:04. > :29:08.brain and body to master in this new skill. It is this idea that the
:29:08. > :29:18.brain and body work together that is taking the hunt for AI in a new direction.
:29:18. > :29:31.
:29:31. > :29:41.I have come to meet a scientist who has built the world's first
:29:41. > :29:47.anthropomimetic robot, which means it mimics the human body.
:29:47. > :29:51.Because he believes that our physical form actually helps shape the way
:29:51. > :29:57.we think. Without a body, artificial intelligence cannot
:29:57. > :30:01.exist. What we are interested in is how the brain controls the body, and
:30:01. > :30:07.having built the body we have realised what a serious problem
:30:07. > :30:15.that is. Can I interact with it? Would you like to shake hands with it?
:30:15. > :30:25.Yes, provided it doesn't clonk me anywhere sensitive. Give its
:30:25. > :30:29.
:30:29. > :30:33.hand a good squeeze and a shake. It's squeezing back. It's really
:30:33. > :30:37.responding to contact with me, picking up something. It gives
:30:37. > :30:47.people the sensation of interacting with something that is in a way
:30:47. > :30:47.
:30:47. > :30:55.really like themselves. This robot, called Ecce Robot, has no blood or
:30:56. > :31:00.skin or flesh, but has bones, joints, muscles and tendons that move like
:31:00. > :31:03.ours, so it's able to have human interactions. This is an
:31:03. > :31:09.extraordinary piece of engineering, but how does it help in getting
:31:09. > :31:14.insight into intelligence? We are interested in what is known as
:31:14. > :31:18.embodied intelligence. This is the idea that your intelligence is not
:31:18. > :31:22.an abstract free intelligence, but tied very much to having a body and
:31:22. > :31:29.in particular to having a human body. In order to investigate this,
:31:29. > :31:33.what we had to do was build a robot that had as near to a human
:31:33. > :31:37.body as we can manage and investigate the way it interacts
:31:37. > :31:41.with the world and enables it to develop a particular sort of
:31:41. > :31:48.intelligence for dealing with objects and so on. Once we get on
:31:48. > :31:52.that track we'll be able to see how that intelligence, how that is
:31:52. > :31:58.actually determined and conditioned by its body. You think it will be
:31:58. > :32:04.distinct from the sort of intelligence that - does it know
:32:04. > :32:07.I'm talking about it? The sort of intelligence they were
:32:07. > :32:11.doing in robots and computers before, that doesn't have a sense of a
:32:11. > :32:15.body - it's a box? What happened when people started trying to make
:32:15. > :32:19.robots that were controlled by AI and computers, is that they found
:32:19. > :32:24.that the things that we thought would be difficult like playing
:32:24. > :32:28.draughts or something like that were extremely easy. What was difficult
:32:28. > :32:31.was moving the pieces. The interaction with the world is very,
:32:31. > :32:37.very difficult and requires a lot of computation, but we are
:32:37. > :32:40.completely unaware of this. What about the ultimate goal of things
:32:40. > :32:45.like imagination or consciousness - could this actually become
:32:45. > :32:48.conscious of its own body? I like to think it's the most interesting
:32:48. > :32:52.question of all, but there are lots of problems to overcome before we
:32:52. > :32:57.get to that stage. I happen to believe it will be possible one day
:32:58. > :33:02.to build a machine that will have a kind of consciousness, whether it's
:33:02. > :33:06.the same consciousness as ours, whatever that means, will be an
:33:06. > :33:14.issue. I don't really believe we are going to arrive at
:33:14. > :33:17.consciousness with a disembodied computer. This robot suggests that
:33:17. > :33:27.human intelligence simply cannot be divorced from the human body that
:33:27. > :33:30.created it. The hunt for AI has ended up on an alternative path.
:33:30. > :33:40.It's not about hardware and software, but about trying to
:33:40. > :33:41.
:33:41. > :33:44.replicate human biology. Back at the circus school, I'm still having
:33:44. > :33:50.problems with my own body as I continue to struggle to get across
:33:50. > :33:57.the tightrope. Take that leg off right away. Good. Up, up. Come on,
:33:57. > :34:02.fight it. It's a battle. You think this isn't going to work, but the
:34:02. > :34:10.minute you reach up you are pulling yourself back in line. It doesn't
:34:10. > :34:14.seem logical. It really doesn't. I wasn't thinking that would do that. I
:34:14. > :34:19.feel there are lots of instructions going on here and I've got to try
:34:19. > :34:27.to put them together and maybe I'm - well, I don't know, whether I'm
:34:27. > :34:33.ever going to be able to do this. Is this
:34:33. > :34:36.something instinctive I'll be able to learn, or whatever? Move, move,
:34:36. > :34:45.move. If you are doing maths, you are thinking, thinking, you need to
:34:45. > :34:49.clear that. This feels really alien. But when I take a step back I
:34:49. > :34:55.realise that I've done something very similar to this before. That's
:34:55. > :35:01.when I learnt to walk as a child. Really relax your brain. Hands up.
:35:01. > :35:04.Step. Come on. I need to get mean now. Hands up. What's truly
:35:04. > :35:08.remarkable about our brains is that everything we learn to do from
:35:08. > :35:18.seeing, thinking, walking and talking, becomes automatic with
:35:18. > :35:19.
:35:19. > :35:26.practice. Go, go, go. Don't rush. Hands up. Hands up! We'll try a
:35:26. > :35:30.little bit of military technique. Use a whip now! This ability to
:35:30. > :35:35.learn to do something instinctively, without even thinking about it, is
:35:35. > :35:45.what makes us human. Getting a machine to do this is one hell of a
:35:45. > :35:51.
:35:51. > :35:59.task. Getting a computer to see has become one of the most difficult
:35:59. > :36:09.challenges in AI. Our vision is key to our intelligence. Without it how
:36:09. > :36:16.
:36:16. > :36:20.would we perceive the world around us? For one
:36:20. > :36:29.scientist, the challenge has been to get a computer to be able to
:36:29. > :36:32.monitor and interpret CCTV footage, like a human does. You would think
:36:32. > :36:36.with the power of computers and a mainframe in here scanning all the
:36:36. > :36:40.programmes we can write, you would be able to pick this up. We haven't
:36:40. > :36:45.written the programme that can do this sort of thing. It's so
:36:45. > :36:49.difficult for a computer to pick out the needles in the haystacks.
:36:49. > :36:54.Our brains have learnt to control our eye movements effortlessly,
:36:54. > :36:59.taking in the whole scene. We instantly combine this with our
:36:59. > :37:03.previous knowledge of what a car, bus, person or building looks like.
:37:03. > :37:09.In less than a second, we can make sense of all the activity that's
:37:09. > :37:14.going on in busy Piccadilly Circus. For a computer to recognise just
:37:14. > :37:22.one of these elements, is almost impossible. James wants to show me
:37:22. > :37:30.why this is. This is a simple programme to demonstrate how
:37:30. > :37:33.difficult vision is. What it does is remove our hardware, our vision
:37:34. > :37:38.processing hardware from the problem, so we have to process each
:37:38. > :37:43.bit of the image piece by piece, rather like a computer has to.
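The demonstration just described can be sketched in a few lines: a viewer that only ever "sees" one small window of an image at a time, the way a computer must work through pixels piece by piece. The grid of brightness values here is made up for illustration, not the lab's actual software.

```python
# A tiny made-up image: 0 is dark, 9 is bright.
image = [
    [0, 0, 9, 9, 0, 0],
    [0, 9, 0, 0, 9, 0],
    [0, 9, 0, 0, 9, 0],
    [0, 0, 9, 9, 0, 0],
]

def windows(img, size=2):
    """Yield every size-by-size patch, one local view at a time."""
    for r in range(len(img) - size + 1):
        for c in range(len(img[0]) - size + 1):
            yield [row[c:c + size] for row in img[r:r + size]]

# Each patch in isolation tells you almost nothing about the whole
# shape; recognition means integrating all of these local views,
# which human vision does without any conscious effort.
patch_count = sum(1 for _ in windows(image))
print(patch_count)  # 15 local views, even for this tiny 4x6 grid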
:37:43. > :37:48.My challenge is to try to recognise a few familiar faces, viewed through a
:37:48. > :37:55.tiny dot on the screen. Is this somebody I should know? Yep, you
:37:55. > :38:00.should know all of these. OK. I've found the eye. OK. I've got a bit
:38:00. > :38:04.of clothing there. Check shirt. Exactly. You can put all this
:38:05. > :38:12.together, Marcus. Then I should get it? Yeah. I don't know. I give in on
:38:12. > :38:22.that one. Stephen Hawking. Again. Another face. It's weird, I
:38:22. > :38:22.
:38:22. > :38:27.can pick out eyes and nose and mouth. Putting that all together in
:38:27. > :38:37.bits, but it doesn't have enough hair for Einstein. That's not relevant
:38:37. > :38:38.
:38:38. > :38:46.for this programme. OK. Alan Turing. We'll have one more. Beyond picking
:38:46. > :38:54.out that this is a face, I am just taking in the little pieces
:38:54. > :38:59.piece by piece - all right, it's me. I don't spend ages looking at me.
:38:59. > :39:04.It really demonstrates how difficult it is. A computer can
:39:04. > :39:06.only interpret one small bit of the image at a time. But we have
:39:06. > :39:10.learnt to piece all the pixels together and see the whole thing
:39:10. > :39:15.with no effort at all. James has been wrestling with the problems
:39:16. > :39:20.for years. He has done something really interesting, I think. He's
:39:20. > :39:26.developed a piece of software that uses ground-breaking technology to
:39:26. > :39:33.recognise individuals in a crowd. He wants to use me as a guinea pig.
:39:33. > :39:37.What we'll do now is take you out on to the streets and track you out
:39:37. > :39:40.and about. This means tracking you as you go within a camera and then
:39:40. > :39:44.when you move on to a different view, we'll track you into the new
:39:44. > :39:54.camera and try to find you there and carry on to see how far we can
:39:54. > :39:54.
:39:54. > :40:00.take you. I'll be passing under the gaze of the dozens of CCTV cameras in
:40:00. > :40:04.and around Piccadilly Circus. It's difficult for a
:40:04. > :40:08.computer to pick out, because as I walk through the different camera
:40:08. > :40:16.shots my size and position changes all the time. Whereas we can
:40:16. > :40:20.separate what is me and what isn't without even thinking about it.
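The re-identification step just described, deciding whether a figure in a new camera view is the same person seen in the last one, can be sketched as nearest-neighbour matching on appearance descriptors. Everything here (the descriptors, the threshold, the helper names) is an illustrative assumption, not the actual software shown in the programme:

```python
# Describe each detected person as a short list of numbers (an
# appearance descriptor) and re-find them in the next camera by
# nearest-neighbour matching.
import math

def distance(a, b):
    """Euclidean distance between two appearance descriptors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def reidentify(target, candidates, threshold=1.0):
    """Return the id of the closest candidate, or None if none is
    within the threshold (the person may have left the scene)."""
    best_id, best_dist = None, threshold
    for person_id, desc in candidates.items():
        d = distance(target, desc)
        if d < best_dist:
            best_id, best_dist = person_id, d
    return best_id

# Descriptor of the person tracked in camera 1...
tracked = [0.9, 0.1, 0.5]
# ...and the people detected in camera 2.
camera2 = {"A": [0.2, 0.8, 0.1],
           "B": [0.85, 0.15, 0.45],
           "C": [0.1, 0.1, 0.9]}
print(reidentify(tracked, camera2))   # closest descriptor: B
```

The threshold matters: without it, the system would always pick somebody, even when the tracked person has left the scene entirely.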
:40:20. > :40:22.James' computer system gives each person a sequence of numbers and
:40:22. > :40:28.then attempts to recognise individuals by matching their
:40:28. > :40:31.number between all the different camera views. On the right-hand
:40:31. > :40:36.side of the screen we can see where the computer has successfully
:40:36. > :40:40.picked me out of the crowd on my journey around central London. In
:40:40. > :40:50.the end, I was tracked across five views as I walked a quarter of a
:40:50. > :40:51.
:40:51. > :40:58.mile. James is making real progress in getting a computer to recognise
:40:58. > :41:03.what it's looking at. But there's a deeper question at stake here.
:41:03. > :41:07.It's not just about
:41:07. > :41:12.recognising objects, but understanding what we see. It's something that humans working
:41:13. > :41:17.here don't have a problem with. I don't think we'll get rid of the
:41:17. > :41:21.human intervention. On a Friday or Saturday night, we see lots of
:41:21. > :41:24.people, the ones who have had a drink will mess about and play
:41:24. > :41:26.fight. I don't think a computer would be able to tell the
:41:26. > :41:31.difference between a real and play fight, because we can look at the
:41:31. > :41:35.expressions on faces and see if they're smiling or if they're angry,
:41:35. > :41:41.so I don't think we're at that stage. The chaos of the urban
:41:41. > :41:46.jungle. Exactly. The human eye is still better. We are programmed to
:41:46. > :41:53.pick out something unusual. That's weird. What have you found there? A
:41:53. > :41:56.pair of shoes on a seat. I was curious as to why they are there.
:41:56. > :42:00.It's kind of strange. I've looked around to see who they might belong
:42:00. > :42:05.to why they might be there, a couple of people have sat down next
:42:05. > :42:09.to them and walked away, so you know they're not theirs, but I
:42:09. > :42:16.don't think a computer can be programmed with the level of
:42:16. > :42:19.curiosity and that's a natural human emotion. This desire to
:42:19. > :42:25.understand meaning is absolutely at the forefront of artificial
:42:25. > :42:35.intelligence right now. Our brains do it by learning through
:42:35. > :42:38.
:42:39. > :42:43.experience, and it's a really complex process.
:42:44. > :42:47.I'm being reminded
:42:48. > :42:55.today what a great human thing it is to be able to learn something
:42:55. > :43:01.through experience. You would learn this pretty quick if you thought a
:43:01. > :43:06.crocodile was under you. Trying to get across a wire encapsulates all
:43:06. > :43:09.the problems involved in getting a machine to do a new task. Through
:43:09. > :43:16.the process of learning by experience we are able to combine
:43:16. > :43:22.all the different abilities that are needed. Like body control,
:43:22. > :43:30.reasoning and instinct. I've tried everything to get across this wire.
:43:30. > :43:37.Nothing seems to be working. But then Leila comes up with one little
:43:37. > :43:47.thing. Try a little song, any little tune. It releases the tension. I'm
:43:47. > :43:52.
:43:53. > :43:59.chatting to you now to try to relax you a bit. La, la, la, la. Keep
:43:59. > :44:08.coming, don't stop. Keep the leg down. Keep coming. Keep it small.
:44:08. > :44:13.Hands up. Hands up. Come on! That was so easy. You were there. So
:44:13. > :44:17.close, and then my hand went down. What you just did there, right to
:44:17. > :44:20.that point, looked really easy. I thought you had cracked this. I
:44:20. > :44:25.don't know, sometimes the last, the anticipation, it's the last step or
:44:25. > :44:29.something. This is the last bit. You've got it, so keep that
:44:29. > :44:39.momentum going. We can't fully explain how our brains learn in
:44:39. > :44:39.
:44:40. > :44:49.this way. I don't know why the humming is helping, but it is.
:44:50. > :45:09.
:45:09. > :45:19.Don't rush. Oh, yes, I'm rushing. Mm, mm. Hand up. Leg off, hand up. Hand up.
:45:19. > :45:21.
:45:21. > :45:25.Little steps. Hands up. Hand up. Hand up. Yes! It wasn't elegant.
:45:25. > :45:29.It was pretty elegant. Perfect. Well done. That is an
:45:29. > :45:32.amazing feeling to be able to get from one end of the tightwire to
:45:32. > :45:38.the other. But that is the amazing thing about human intelligence,
:45:38. > :45:41.just how adaptable it is, how I can go from nothing to learning a new
:45:41. > :45:44.skill like doing the tightwire. If you think about a computer, you
:45:44. > :45:50.would have to programme it specially to be able to do such a task,
:45:50. > :46:00.and if you asked it to do something different it would be lost again. Perhaps we can try
:46:00. > :46:01.
:46:01. > :46:09.the trapeze now? Yes, sure, great! One other human quality we like to
:46:09. > :46:18.celebrate, and often link to intelligence, is creativity. I have
:46:18. > :46:25.come to Paris, where the streets are steeped in artistic culture. I
:46:25. > :46:28.am here to visit an unusual artist's studio, run by computer
:46:29. > :46:37.scientist Simon Colton, who is normally based at Imperial College
:46:37. > :46:42.London. All these images have been created not by him but by
:46:42. > :46:48.a computer. Because Simon wants to find out if it is possible to break
:46:48. > :46:51.down the creative process into a set of instructions. That is why I
:46:51. > :46:55.started simulating paint strokes. And this is a good example of that.
:46:55. > :47:00.As well as programming it with painting techniques, Simon gives
:47:00. > :47:04.the computer rules about different artistic styles. We can choose a
:47:05. > :47:10.painting style according to the emotion of the sitter. So it is
:47:10. > :47:15.kind of feeding off its sitter and trying to tap in and appreciate
:47:15. > :47:21.his or her mood, and express that through the art. I can see
:47:21. > :47:26.what a difficult task this is. But there is one image that has really
:47:26. > :47:31.caught my eye. I love this one. Lots of comments from people.
:47:31. > :47:38.These are what, dancers? That is right. But these people don't really exist.
:47:38. > :47:42.This isn't a photo or a video it has looked at. It is taking some
:47:42. > :47:46.concept of people fed into it. When you watch it being painted, it
:47:46. > :47:50.paints it all in one stroke. This is a single line. Is that what
:47:50. > :47:54.is giving it this sense of energy and motion? That is right. Yes.
:47:54. > :47:58.That is a classic example of where you put in a rule, and you get
:47:58. > :48:05.more out of it than you put in. I had no idea that I would get this
:48:05. > :48:08.kind of effect. I had no idea it would give such a dynamism. I say
:48:08. > :48:12.that real creativity makes you think more, so the software we are
:48:12. > :48:18.developing, that will make us think more not less. Just like every
:48:18. > :48:21.artist or musician or poet wants us to think more. So, I see a
:48:21. > :48:28.different future for AI, one where software is out there to
:48:28. > :48:36.challenge you. So if these creations are meant to challenge us,
:48:36. > :48:45.what do the city's artists make of them? There is something sexual in
:48:45. > :48:52.this artwork. Interesting. Like bacon, like monkey. You are seeing
:48:52. > :48:57.bacon and monkey? That is interesting. Is it a man or machine
:48:57. > :49:02.who does the paintings? Why do you think that? It is done
:49:02. > :49:09.automatically. How can you tell? You are right, but I am interested.
:49:09. > :49:13.You can see it. For me I am not an artist, so I would be hard pushed.
:49:13. > :49:18.What gives it away? There is no sensibility. Really? You can pick
:49:18. > :49:23.that up. That is interesting. I quite like the image, or some of
:49:23. > :49:28.them anyway. It is clear that trying to mimic something as
:49:28. > :49:38.elusive as creativity remains a huge challenge. But
:49:38. > :49:39.
:49:39. > :49:42.scientists are starting to put down the first important building blocks.
:49:42. > :49:49.100 years after the birth of the visionary mathematician Alan Turing,
:49:49. > :49:55.we are discovering that the hunt for AI is a more difficult problem than he
:49:55. > :49:59.or we ever imagined. Machines have surpassed us at specific tasks, but
:49:59. > :50:06.the human brain sets the bar for artificial intelligence extremely
:50:06. > :50:16.high, in its ability to constantly learn new skills. Hardware and
:50:16. > :50:18.
:50:18. > :50:25.software, are struggling to mimic our biology. But maybe there is
:50:25. > :50:29.another way to create a thinking machine. That can capture the
:50:29. > :50:35.versatility of the human brain. I have come to Berlin, for the
:50:35. > :50:39.highlight of my journey. To meet a scientist who believes that for
:50:39. > :50:45.something to be truly intelligent, it needs to develop and evolve like
:50:45. > :50:49.we did at the very beginning of our lives and as a species. If you want
:50:49. > :50:55.to understand how to create or understand intelligence in some way,
:50:55. > :50:59.we have to capture this evolutionary adaptive aspect of it.
:50:59. > :51:03.So we cannot programme intelligence? We cannot hope to
:51:03. > :51:11.give it the purpose and do the learning for it. A lot of people think
:51:11. > :51:17.that today, but we have to understand this continuous adaptive
:51:17. > :51:25.nature, this flexibility, also this motivation to continue to learn, to
:51:25. > :51:30.be excited about new things, to seek out new things. Luc is taking me to
:51:30. > :51:40.a lab on the outskirts of the city. To show me some robots that are
:51:40. > :51:40.
:51:40. > :51:45.starting to do something remarkable. Three of them. They are much
:51:45. > :51:55.larger than I thought they were going to be. These robots
:51:55. > :51:58.
:51:58. > :52:03.have been set up to develop. They are beginning to understand
:52:03. > :52:08.how their bodies work, by looking at a reflection of themselves. You
:52:08. > :52:12.have a mirror, so what is the.... Well, the experiment is that
:52:12. > :52:18.the robot would learn something about its own body, because in
:52:18. > :52:22.order to move in the world, in order to control it, in order to
:52:22. > :52:26.also recognise the movements of another, you need to have some sort
:52:26. > :52:33.of model of your own body. Right. The way that the model is going to
:52:33. > :52:39.be built up is that the robot is doing actions, and watching itself.
:52:39. > :52:44.Performing the actions. So to get a relation between the visual image,
:52:44. > :52:50.and the movement of the motors. So here you see it looking at
:52:50. > :52:55.its hand. You also see very much how it was trying to keep balance.
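The idea Luc describes, trying actions and watching the result, can be sketched as "motor babbling": issue commands, record what the camera saw, then invert that remembered experience to reach a visual target. The one-joint linear "arm" and all the names below are illustrative assumptions, not the real robot's controller:

```python
# Motor babbling sketch: build a motor-to-vision model from
# experience, then reuse it in reverse to reach a target.

def observe_hand(command):
    """Stand-in for the camera: hand position produced by a command
    (a simple linear arm keeps the example self-contained)."""
    return 2.0 * command + 1.0

# 1. Babbling phase: sweep commands (a fixed grid, for
#    reproducibility) and remember what each one looked like.
experience = [(cmd, observe_hand(cmd))
              for cmd in (i / 10 for i in range(-10, 11))]

# 2. Inverse lookup: to put the hand at a target position, reuse the
#    remembered command whose observed outcome was closest.
def command_for(target):
    return min(experience, key=lambda e: abs(e[1] - target))[0]

print(command_for(2.0))   # the command that placed the hand at 2.0: 0.5
```

A real robot would babble randomly in many joints at once, but the principle is the same: the body model is learned from paired commands and observations, not programmed in.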
:52:55. > :53:04.It is very impressive. Now all the motor commands are in an early
:53:04. > :53:11.phase, right. Well caught! It has just woken up. I know that feeling.
:53:11. > :53:15.I think this was a beautiful example of feedback, and of finding
:53:15. > :53:23.balance. It is extraordinary. It really does look like it is
:53:23. > :53:28.encountering itself for the first time. But what is even more
:53:28. > :53:33.remarkable about Luc's robots, is that once they are able to
:53:33. > :53:40.recognise themselves, they start to evolve their own language and
:53:40. > :53:45.communicate with each other. So what will happen now? Well, one of
:53:45. > :53:48.them is going to speak, then he is going to ask the other one to do
:53:48. > :53:54.an action. He is going to invent a word, because he doesn't have yet
:53:54. > :53:58.the word to name that action. OK, so then he says the word and
:53:58. > :54:05.this one isn't sure what it could mean, this is a brand-new
:54:05. > :54:08.word, so he will make a guess, and if the guess is OK, totally by luck,
:54:08. > :54:12.well they both know this word and they know for the future they can
:54:12. > :54:17.use this word to communicate with each other. What if he gets it
:54:17. > :54:24.wrong? If it gets it wrong this one will say this is not what I had in
:54:24. > :54:33.mind. He will show it. OK. I mean I don't know which one is going to
:54:33. > :54:40.speak first. OK, he is speaking first. He is doing the action.
:54:40. > :54:43.That is fantastic. OK. Notice how he looked. So he is... He is doing the
:54:43. > :54:50.real action. So now there is another interaction going to happen
:54:50. > :54:57.again. I don't know which one is going to speak. OK. Is that the
:54:57. > :55:03.word it just learned? Kimato? He is doing it. He will say yes,
:55:03. > :55:10.presumably. Yes. So would they interact with me, to learn this
:55:10. > :55:19.language? Yes, we could try it. Do you remember any one of the
:55:19. > :55:24.words? We will see. It is scary. Gosh, it has asked me to do
:55:24. > :55:34.something. Kimato. What was that? Was it that? Have I got it right?
:55:34. > :55:35.
:55:36. > :55:41.OK, you tell me what Kimato is. OK, that is right. All right. Tomima.
:55:41. > :55:49.Tomima. I think that was lifting my right arm. Did I get it right? Yes.
:55:49. > :55:53.I am a learning robot! What would you say to people who would suggest
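The word game Luc describes, in which a speaker invents a word when it lacks one, the listener guesses, and the word becomes shared after a lucky guess or a demonstration, can be sketched as a toy "naming game". The agents, syllables and action names below are illustrative assumptions, not the robots' actual code:

```python
# Toy naming game: two agents converge on a shared invented
# vocabulary for a small set of actions.
import random

random.seed(0)                        # reproducible run
ACTIONS = ["raise_left_arm", "raise_right_arm", "nod"]

def invent_word():
    """Invent a random three-syllable word, in the spirit of 'kimato'."""
    return "".join(random.choice(["ki", "ma", "to", "bo", "ta"])
                   for _ in range(3))

def play_round(speaker, listener):
    """One interaction: the speaker names an action, the listener guesses."""
    action = random.choice(ACTIONS)
    if action not in speaker:         # no word yet for this action:
        speaker[action] = invent_word()   # invent one
    word = speaker[action]
    known = [a for a, w in listener.items() if w == word]
    guess = known[0] if known else random.choice(ACTIONS)
    listener[action] = word           # lucky guess or shown the action:
    return guess == action            # either way, the word is now shared

robot_a, robot_b = {}, {}
for _ in range(50):                   # alternate speaker and listener
    play_round(robot_a, robot_b)
    play_round(robot_b, robot_a)
print(robot_a == robot_b)             # a shared vocabulary emerges: True
```

Once an action has been spoken even once, both agents agree on its word from then on, which is why the two vocabularies end up identical with no central dictionary ever being programmed in.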
:55:53. > :55:57.that, well, this looks simple. They are doing a few actions, swapping
:55:57. > :56:03.words, what is the big deal? The first thing I would say to people is,
:56:03. > :56:08.just try it yourself, OK, just try to build a robot that stands up.
:56:08. > :56:12.And then you will already see it's extraordinarily complicated. How fast
:56:12. > :56:16.do you think the development will be of the intelligence of these
:56:16. > :56:20.robots? They are beginning to create a new language between
:56:20. > :56:25.themselves, but how long will it be before they are, I don't know,
:56:25. > :56:30.creating great works of art, the sort of things that we can do?
:56:30. > :56:36.You know, if we have these two robots and we leave them alone, we come back
:56:36. > :56:39.at the end of the day and they will have developed a vocabulary
:56:39. > :56:44.for co-ordinating their action, once they start talking about
:56:44. > :56:48.shapes and colours and we don't know when they say Bobita do they
:56:49. > :56:53.mean a colour or a position in space. You will be going through
:56:53. > :57:01.the same process that the robot did to acquire that language, I mean...
:57:01. > :57:05.Yeah. The exciting thing about the robots, is they have their own
:57:05. > :57:11.evolutionary journey to go on. Tomima. Which could happen much
:57:11. > :57:18.quicker than ours. And they open up the possibility for intelligences
:57:18. > :57:22.very different from our own to emerge. That was a striking example
:57:22. > :57:25.of emerging intelligence, something which wasn't
:57:25. > :57:30.preprogrammed. Those robots really started with a pretty blank sheet,
:57:30. > :57:35.and by the end of the day just talking to each other, they had
:57:35. > :57:39.evolved their own language, a language I couldn't really understand.
:57:39. > :57:49.Turing asked: can machines think? I think he would be excited by what I
:57:49. > :57:50.have seen.
:57:50. > :57:54.I have discovered that computers
:57:54. > :58:00.have far surpassed humans in many fields but are unlikely to put us
:58:00. > :58:07.out of business yet. Because these super machines haven't come close
:58:07. > :58:11.to capturing the true nature of our intelligence. But I also learned
:58:11. > :58:15.how closely our intelligence is linked to our biology. And that
:58:15. > :58:25.perhaps the secret to creating thinking machines lies in letting
:58:25. > :58:33.them learn and evolve for themselves.
:58:33. > :58:39.But the one thought that has been creeping up on me throughout all of
:58:39. > :58:44.this, is that maybe we have become obsessed with our own abilities.
:58:44. > :58:48.And are blinded by being human. It does raise the question about
:58:48. > :58:54.whether we should be trying to recreate what makes us human, when
:58:54. > :58:59.machines can do something so different from what human beings can.