Your Face or Mine


Transcript

0:00:02 > 0:00:06Now on BBC News, it's time for Click.

0:00:06 > 0:00:09This week, lady becomes model.

0:00:09 > 0:00:10Man becomes ape.

0:00:10 > 0:00:15And Spen becomes Trump.

0:00:15 > 0:00:17AS DONALD TRUMP: Great, the best.

0:00:43 > 0:00:45OK, movie quiz time.

0:00:45 > 0:00:47Five points if you can name this film.

0:00:47 > 0:00:47Correct - it's Raiders of the Lost Ark.

0:00:49 > 0:00:52No, that is not Harrison Ford, that is the face of Nicolas Cage.

0:00:52 > 0:00:58OK, try this one.

0:00:58 > 0:01:05Yes, it is The Fellowship of the Ring.

0:01:05 > 0:01:08100 points if you spotted Nicolas Cage, Nicolas Cage

0:01:08 > 0:01:12and Nicolas Cage.

0:01:12 > 0:01:14So, what on Earth is going on?

0:01:14 > 0:01:16We're just about getting used to the idea that there

0:01:16 > 0:01:18are loads of fakes online.

0:01:18 > 0:01:20Fake news, fake tweets, fake photoshopped images.

0:01:20 > 0:01:22But these videos are a whole level above anything

0:01:22 > 0:01:25that we've seen before, and they may have consequences that

0:01:25 > 0:01:38go far beyond just switching out a few movie stars.

0:01:38 > 0:01:41A lot of what we talk about over the dinner table is,

0:01:41 > 0:01:45we live in a diverse world.

0:01:45 > 0:01:47Researchers at the University of Washington released

0:01:47 > 0:01:50this video last year, which used a computer vision

0:01:50 > 0:01:52algorithm to very convincingly doctor Obama's mouth movements

0:01:52 > 0:01:55to make him lip-sync to something he said in a different interview.

0:01:55 > 0:01:58A lot of kids, the doors that have been opened to me

0:01:58 > 0:01:59aren't open to them.

0:02:00 > 0:02:02And with the tricks and tools of machine learning becoming

0:02:02 > 0:02:05better and easier to use, it's now possible to do this without

0:02:05 > 0:02:11a particularly powerful computer.

0:02:11 > 0:02:13Remember the Nick Cage videos from earlier?

0:02:13 > 0:02:16Well, this mix of Donald Trump and Angela Merkel was created

0:02:16 > 0:02:25using the same tool, a tool called Deepfakes.

0:02:25 > 0:02:29To be clear, this is not just a face swap like you might see on Snapchat.

0:02:29 > 0:02:34This is artificial intelligence that has learned what Trump's face looks

0:02:34 > 0:02:37like and then made it copy Merkel's facial expressions.

0:02:37 > 0:02:40What's fascinating is that these weren't made by a team

0:02:40 > 0:02:42of researchers, or a Hollywood visual effects department.

0:02:42 > 0:02:44These were made by individuals following an online tutorial

0:02:44 > 0:02:53on a desktop machine.

0:02:53 > 0:02:56Now, to see how easy it is, we're going to do it.

0:02:56 > 0:02:59We're going to take my face and make me President.

0:02:59 > 0:03:02We trained a neural network by feeding it video of some

0:03:02 > 0:03:03of my past appearances.

0:03:03 > 0:03:06We mixed it with President Trump's State of the Union Address.

0:03:06 > 0:03:08The software broke the video into individual frames,

0:03:08 > 0:03:11ran them through the network and, in less than a day,

0:03:11 > 0:03:20this was the result.
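
The pipeline described here (split the clip into frames, push each frame through the trained network, then stitch the frames back together) can be sketched in a few lines of Python. It is a rough illustration only: swap_face() is a hypothetical stand-in for the trained model, and the real Deepfakes tool has its own training and conversion steps.

    # Minimal sketch of the frame-by-frame pipeline described above.
    # swap_face() is a hypothetical stand-in for the trained face-swap network.
    import cv2

    def swap_face(frame):
        # Placeholder: a real model would detect the face, encode it, and decode
        # it with the target identity's decoder. Here we return the frame
        # unchanged so the script runs end to end.
        return frame

    def convert_video(in_path, out_path):
        cap = cv2.VideoCapture(in_path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
        while True:
            ok, frame = cap.read()          # 1. break the video into frames
            if not ok:
                break
            writer.write(swap_face(frame))  # 2. run each frame through the network
        cap.release()
        writer.release()                    # 3. reassemble the output video

    convert_video("state_of_the_union.mp4", "swapped.mp4")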

0:03:20 > 0:03:28All of us, together, as one team...

0:03:28 > 0:03:30So, this is the original video of Trump.

0:03:30 > 0:03:32And this is me, on his head.

0:03:32 > 0:03:33We all share the same home.

0:03:33 > 0:03:37I'm not sure it's an improvement, but that does seem to be

0:03:37 > 0:03:37President Spenley Trump.

0:03:37 > 0:03:40The other half of the experiment didn't go quite so well.

0:03:40 > 0:03:45This is Click presenter Donald Kelly.

0:03:45 > 0:03:48Now, this was a very short and quick experiment.

0:03:48 > 0:03:49It's far from perfect.

0:03:49 > 0:03:52It's blurry, you can see the edges and sometimes -

0:03:52 > 0:03:53well, it's just downright scary.

0:03:53 > 0:03:57But had we left the network to train for longer, on better videos,

0:03:57 > 0:03:59we could have got much more convincing results.

0:03:59 > 0:04:00Now, Deepfakes has hit the headlines in recent weeks,

0:04:00 > 0:04:07but it's not because of Trump or Nicolas Cage, or even me.

0:04:07 > 0:04:10It's because of porn.

0:04:10 > 0:04:12An online community has been using AI to put Hollywood

0:04:13 > 0:04:14actresses' faces onto adult movie stars' bodies.

0:04:14 > 0:04:17Now, obviously we can't show you much of the resulting videos,

0:04:18 > 0:04:20but they feature a number of female celebrities.

0:04:20 > 0:04:23The videos are stolen, and so are the faces.

0:04:23 > 0:04:26I was surprised but, at the same time, not really,

0:04:26 > 0:04:29because of how technology is advancing so much that it just

0:04:29 > 0:04:30seems kind of inevitable.

0:04:30 > 0:04:32However, the context of the material, taking a celebrity

0:04:33 > 0:04:35or a well-known face - or even anyone's face,

0:04:35 > 0:04:38for that matter - and putting it on a porn performer's body,

0:04:38 > 0:04:40and essentially creating this fake, non-consensual

0:04:40 > 0:04:51porn, it's disturbing.

0:04:51 > 0:04:53It was definitely a disturbing thing to see.

0:04:53 > 0:04:56Are people going to take, like, pictures of ex-girlfriends and put

0:04:56 > 0:05:05them on porn performers?

0:05:05 > 0:05:08Issues surrounding consent and copyright have led the website

0:05:08 > 0:05:10hosting this content to ban Deepfake pornography, and the

0:05:10 > 0:05:11communities making it.

0:05:11 > 0:05:13Reddit too, the site where the community started,

0:05:13 > 0:05:15changed its rules to forbid involuntary fake pornography.

0:05:15 > 0:05:18Away from porn, it doesn't take much imagination to see how one

0:05:18 > 0:05:20could create international outrage by making fake statements

0:05:20 > 0:05:21from world leaders.

0:05:21 > 0:05:24Something that may become possible very soon, thanks to some software

0:05:24 > 0:05:26that we looked at last year.

0:05:26 > 0:05:29This is Lyrebird.

0:05:29 > 0:05:32The idea here is that I can train a neural network with samples

0:05:32 > 0:05:36of my voice and then it will be able to speak like me.

0:05:36 > 0:05:54Harry hoped he would see some success from the current project.

0:05:54 > 0:05:56Parents should look out for...

0:05:56 > 0:05:59The software asks you to read out at least 30 sentences

0:05:59 > 0:06:02of its choosing, from which it can pull out the basic building blocks

0:06:02 > 0:06:05of words, the phonemes, that can then be put back together

0:06:05 > 0:06:06in any order.

0:06:06 > 0:06:13In other words, "in other words".
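
As a toy illustration of that idea, reassembling recorded building blocks in a new order, here is a short Python sketch. The phoneme bank is invented (synthetic tones rather than recorded speech), and real systems such as Lyrebird learn a generative model of the voice rather than looking snippets up in a table.

    # Toy illustration of concatenating recorded building blocks in a new order.
    import numpy as np

    SAMPLE_RATE = 16000

    # Pretend each phoneme maps to a short recorded snippet (here: synthetic tones).
    def fake_snippet(pitch_hz, dur_s=0.12):
        t = np.linspace(0, dur_s, int(SAMPLE_RATE * dur_s), endpoint=False)
        return 0.2 * np.sin(2 * np.pi * pitch_hz * t)

    PHONEME_BANK = {
        "IH": fake_snippet(220), "N": fake_snippet(180),
        "AH": fake_snippet(240), "DH": fake_snippet(200),
        "ER": fake_snippet(260), "W": fake_snippet(170),
        "D": fake_snippet(190), "Z": fake_snippet(210),
    }

    def synthesise(phonemes):
        # "Put the building blocks back together in any order."
        return np.concatenate([PHONEME_BANK[p] for p in phonemes])

    # "In other words" as a rough phoneme sequence.
    audio = synthesise(["IH", "N", "AH", "DH", "ER", "W", "ER", "D", "Z"])
    print(audio.shape, "samples at", SAMPLE_RATE, "Hz")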

0:06:13 > 0:06:15I've always been a big fan of One Direction.

0:06:16 > 0:06:20They were, quite frankly, better than the Beatles.

0:06:20 > 0:06:27SPENCER LAUGHS.

0:06:27 > 0:06:30Although the creators of Lyrebird are aware that this technology

0:06:30 > 0:06:34could be misused, they say that by releasing it as a free tool,

0:06:34 > 0:06:36well, at least the public will become aware that fake voices

0:06:37 > 0:06:38are already a reality.

0:06:38 > 0:06:39AS DONALD TRUMP: Great, the best!

0:06:39 > 0:06:42One idea that we are considering is to watermark the audio

0:06:42 > 0:06:43samples that we produce.

0:06:43 > 0:06:49So we are able to detect immediately if it is generated by us.
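
One simple way to picture that watermarking idea is to add a faint, known carrier signal to the generated audio and later check for it by correlation. The sketch below does exactly that; the 19 kHz carrier and the detection threshold are arbitrary choices for illustration, not Lyrebird's actual scheme.

    # Minimal sketch of the watermarking idea: embed a faint, known carrier in the
    # generated audio and later detect it by correlation.
    import numpy as np

    SAMPLE_RATE = 44100
    CARRIER_HZ = 19000          # near-inaudible for most listeners and speakers
    STRENGTH = 0.002

    def carrier(n_samples):
        t = np.arange(n_samples) / SAMPLE_RATE
        return np.sin(2 * np.pi * CARRIER_HZ * t)

    def embed_watermark(audio):
        return audio + STRENGTH * carrier(len(audio))

    def detect_watermark(audio, threshold=0.0005):
        # Correlate against the known carrier; watermarked audio scores higher.
        score = abs(np.dot(audio, carrier(len(audio)))) / len(audio)
        return score > threshold

    clean = np.random.uniform(-0.1, 0.1, SAMPLE_RATE)  # stand-in "voice" signal
    marked = embed_watermark(clean)
    print(detect_watermark(clean), detect_watermark(marked))  # typically: False, True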

0:06:49 > 0:06:52So, how do we protect ourselves from having our online photos,

0:06:52 > 0:06:54videos and sound recordings used to create fake us-es?

0:06:54 > 0:06:57At the moment, we are in a wild, wild west situation.

0:06:57 > 0:07:00We don't know the attitude of the courts to this problem.

0:07:00 > 0:07:03We don't have a clear piece of legislation that would cover it.

0:07:03 > 0:07:06We have piecemeal laws on privacy, copyright, trademark and passing off

0:07:06 > 0:07:09that would be useful to somebody in trying to stop

0:07:09 > 0:07:10this from happening.

0:07:10 > 0:07:13We don't have a clear legal definition and we don't have a clear

0:07:14 > 0:07:16piece of legislation that is exactly on point.

0:07:16 > 0:07:18And until we have that, this legal uncertainty will continue.

0:07:18 > 0:07:21The morality and the legality of Deepfakes are murky issues.

0:07:21 > 0:07:24Just as we are wrestling with the fact that we can't trust

0:07:24 > 0:07:28what we read, very soon we will need to confront the fact

0:07:28 > 0:07:49that we can't trust anything we see or hear either.

0:07:49 > 0:07:51Hello, and welcome to The Week In Tech.

0:07:51 > 0:07:54It was the week that Google's dedicated health arm announced it

0:07:54 > 0:07:57has created an algorithm that can predict high blood pressure,

0:07:57 > 0:08:00as well as risk of heart attacks or strokes, simply by scanning

0:08:00 > 0:08:02a person's eyes.

0:08:02 > 0:08:05As Boston Dynamics keep on showing us how intelligent

0:08:05 > 0:08:07robots are becoming, experts this week warned that

0:08:07 > 0:08:10artificial intelligence in the wrong hands is ripe for exploitation.

0:08:10 > 0:08:20Drones turned into missiles, fake videos manipulating public

0:08:20 > 0:08:23opinion and automated hacking are just three of the threats the experts highlighted.

0:08:24 > 0:08:26And Amazon CEO Jeff Bezos is getting a new clock.

0:08:26 > 0:08:29But it's not one you can pick up from a shop.

0:08:29 > 0:08:33Costing a huge $42 million - or £30 million - it is designed

0:08:33 > 0:08:35to run for 10,000 years and construction began this week

0:08:35 > 0:08:37inside a hollowed-out mountain in Texas.

0:08:37 > 0:08:39Facebook's Vice President of Ads found himself

0:08:39 > 0:08:42in hot water this week, after tweeting that Russian-backed

0:08:42 > 0:08:44ads were not designed to sway the US elections.

0:08:44 > 0:08:47Rob Goldman's tweet came after 13 Russians were charged with meddling

0:08:47 > 0:08:55in the election via social media.

0:08:55 > 0:08:57His views were not those of Facebook itself.

0:08:57 > 0:09:00And a conspiracy theory video claiming that survivors of last

0:09:00 > 0:09:02week's Florida shooting are "crisis actors" somehow became YouTube's

0:09:02 > 0:09:14number one trending clip before being removed.

0:09:14 > 0:09:16And finally, pictures of Samsung's latest phone,

0:09:16 > 0:09:19the Galaxy S9, were leaked ahead of its launch at the

0:09:19 > 0:09:21Mobile World Congress, starting on Sunday -

0:09:21 > 0:09:26ironically, via an app released by the company itself.

0:09:26 > 0:09:28We'll move on now to video game news.

0:09:28 > 0:09:30Remember Nintendo's Switch, its hugely successful console that's

0:09:30 > 0:09:33both mobile and plugs into a TV?

0:09:33 > 0:09:36Well, the Japanese gaming giant has now created a host of rather unusual

0:09:36 > 0:09:39new peripherals which wildly alter how the machine is used.

0:09:39 > 0:09:55And Marc Cieslak has been getting all bent out of shape over it.

0:09:55 > 0:09:56HE PLAYS A MUSICAL SCALE.

0:09:57 > 0:10:00You may be forgiven for thinking that this cardboard is the packaging

0:10:00 > 0:10:02for the new peripherals for the Nintendo Switch console.

0:10:02 > 0:10:04However, the cardboard pieces are the peripherals themselves.

0:10:04 > 0:10:08Called Labo, it's a range of devices which includes things like a piano,

0:10:08 > 0:10:10motorbike handlebars, a fishing rod and even a robot suit.

0:10:10 > 0:10:16Straps on the shoes...

0:10:16 > 0:10:20I might look like I'm stomping around in a slightly weird way.

0:10:20 > 0:10:24But this game asks you to really get into the character of a giant robot.

0:10:24 > 0:10:26And, if I pull out my visor...

0:10:26 > 0:10:27I activate first-person mode.

0:10:27 > 0:10:39For precision destruction!

0:10:39 > 0:10:41Called Toy-Cons, they are all constructed from folded cardboard.

0:10:41 > 0:10:44Some use elastic bands and all use the Switch's motion

0:10:44 > 0:10:48sensing controllers.

0:10:48 > 0:10:51I think Labo is a big deal for the Nintendo Switch,

0:10:51 > 0:10:54just because it proves that Nintendo is capable of continuing to innovate

0:10:54 > 0:10:56on an already innovative product.

0:10:56 > 0:10:59The fact that it is made out of cardboard and your existing

0:10:59 > 0:11:02controllers fit in, I think that will blow parents' minds and,

0:11:02 > 0:11:05more importantly, blow children's minds as well.

0:11:05 > 0:11:08But before you can play with your Toy-Con, you've

0:11:08 > 0:11:11got to build it first, something that you might worry

0:11:11 > 0:11:14requires the prowess of an origami expert crossed with the advanced

0:11:14 > 0:11:16flatpack furniture building skills of a self-assembly sensei.

0:11:16 > 0:11:18Building these devices takes varying lengths of time.

0:11:18 > 0:11:20More complicated Toy-Cons, like the robot suit,

0:11:20 > 0:11:26can take up to eight hours to complete.

0:11:26 > 0:11:29But that's part of the appeal of Labo, taking pleasure

0:11:29 > 0:11:32from the building of the devices that you are about to

0:11:32 > 0:11:33use, and understanding how they go together.

0:11:34 > 0:11:43A little bit of patience and some deft folding results in this.

0:11:43 > 0:11:45Nintendo reckons this is a radio controlled car.

0:11:45 > 0:11:51Last time I looked, cars had wheels.

0:11:52 > 0:11:54This is my completed Toy-Con, which I can make move around,

0:11:54 > 0:11:57because the Switch controllers have got HD rumble and it

0:11:57 > 0:11:59means that you can have different levels of rumble,

0:11:59 > 0:12:01allowing this particular Toy-Con to move about.

0:12:01 > 0:12:05Now, this only took me about ten minutes to build, all in all.

0:12:05 > 0:12:08And because the Switch controllers have an IR camera in them,

0:12:08 > 0:12:10you can see on the Switch itself where

0:12:10 > 0:12:25the Toy-Con is going.

0:12:25 > 0:12:27Each one of the Toy-Cons comes with a game.

0:12:27 > 0:12:30Some are more complicated than others, but all require

0:12:30 > 0:12:32an element of physical control, which comes courtesy

0:12:32 > 0:12:40of the folded cardboard.

0:12:40 > 0:12:41The games themselves are more like mini-games.

0:12:42 > 0:12:43But that's not the point.

0:12:43 > 0:12:46This is more about creativity and making something than it is

0:12:46 > 0:12:47a hard-core gaming experience.

0:12:47 > 0:12:57But I do question the durability of cardboard peripherals.

0:12:57 > 0:12:58How does that go back in there?

0:12:59 > 0:13:01Not very, based on my time with them.

0:13:01 > 0:13:04We've managed to have a pit stop with our very own

0:13:04 > 0:13:04cardboard mechanic.

0:13:05 > 0:13:05So, while I managed to damage my cardboard motorcycle,

0:13:05 > 0:13:07repairs are really quite easy.

0:13:07 > 0:13:09There are two different offerings so far, the Variety Pack,

0:13:09 > 0:13:12which includes five different Toy-Cons, priced at £59.99, and

0:13:12 > 0:13:13the Robosuit, which costs £69.99.

0:13:13 > 0:13:16That seems like a lot of money for cardboard toys

0:13:16 > 0:13:19with bits of string for guts.

0:13:20 > 0:13:22Nintendo hasn't yet said whether they are going to give

0:13:22 > 0:13:26you replacement parts for that, or whether you are going to have to

0:13:26 > 0:13:28scavenge cardboard from supermarkets or things like that.

0:13:28 > 0:13:31So it's going to be interesting to see how much Nintendo

0:13:31 > 0:13:41are expecting you to spend on top of the base game and cardboard kits.

0:13:41 > 0:13:43This week, Caterpillar announced the release of a new smartphone.

0:13:44 > 0:13:47You'd be forgiven for not even knowing they produced such a thing.

0:13:47 > 0:13:51These devices are specifically aimed at the construction industry.

0:13:51 > 0:13:55But this one has a few interesting features.

0:13:56 > 0:13:58An upgrade to their FLIR thermal imaging camera,

0:13:58 > 0:14:01the addition of a laser beam for measuring how far away

0:14:01 > 0:14:04something is, or room size, and the standout feature, a nose.

0:14:04 > 0:14:11Yes, it can smell.

0:14:11 > 0:14:15Or, more specifically, it has an indoor air quality sensor

0:14:15 > 0:14:17which aims to alert users if there are high levels

0:14:18 > 0:14:26of volatile organic compounds - or VOCs - in the air,

0:14:26 > 0:14:28something commonly found in paint, solvents and cleaning products.
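
The alert logic itself is straightforward: poll the sensor and warn when the reading crosses a threshold. In the sketch below, read_voc_ppb() and the 500 ppb limit are invented for illustration, since the handset's real firmware and thresholds are not public.

    # Minimal sketch of a threshold alert like the one described for the phone's
    # air-quality sensor. The sensor read and the limit are invented.
    import random
    import time

    VOC_ALERT_PPB = 500   # hypothetical "high" level, in parts per billion

    def read_voc_ppb():
        # Stand-in for the hardware sensor read.
        return random.randint(50, 900)

    def monitor(samples=10, interval_s=1):
        for _ in range(samples):
            level = read_voc_ppb()
            if level > VOC_ALERT_PPB:
                print(f"Warning: VOC level {level} ppb, ventilate the room")
            else:
                print(f"VOC level {level} ppb, OK")
            time.sleep(interval_s)

    monitor()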

0:14:28 > 0:14:34Sound a bit niche?

0:14:34 > 0:14:36Well, its creators don't think so.

0:14:36 > 0:14:37Builders, plumbers, electricians, carpenters, farmers.

0:14:37 > 0:14:40These type of people kind of generally get overlooked

0:14:40 > 0:14:41by the everyday phone vendors.

0:14:41 > 0:14:44So, what we are doing is understanding the technology

0:14:44 > 0:14:46that we can integrate into our products that really

0:14:46 > 0:14:53makes their lives better.

0:14:53 > 0:14:57Next week on the show, we'll be bringing you all of the latest news

0:14:57 > 0:14:58and releases from MWC in Barcelona.

0:14:58 > 0:15:04That was Lara.

0:15:06 > 0:15:09Now, earlier we looked at how one face can be

0:15:09 > 0:15:15transplanted onto another.

0:15:15 > 0:15:18Next, it's time to meet the people who took this man's face

0:15:18 > 0:15:24and body and turned him into a completely different species.

0:15:24 > 0:15:28The team behind War For The Planet Of The Apes is hoping to win

0:15:28 > 0:15:30the Best Visual Effects Oscar next weekend.

0:15:30 > 0:15:38If it does, it will be the second Oscar in a row for Dan Lemmon.

0:15:38 > 0:15:39Bad place.

0:15:39 > 0:15:39Human zoo.

0:15:40 > 0:15:42Bad human.

0:15:42 > 0:15:43Bad humans.

0:15:43 > 0:15:45Soldiers.

0:15:45 > 0:15:48Soldiers...

0:15:48 > 0:15:52War For The Planet Of The Apes represents the effort of over 500

0:15:52 > 0:16:00people in visual effects.

0:16:00 > 0:16:03Some of us worked on the movie for, like, two years.

0:16:03 > 0:16:06The movie runs about two hours and 20 minutes,

0:16:06 > 0:16:09and all but 15 shots in the movie had visual effects.

0:16:09 > 0:16:12Many of the heaviest shots in the movie took over 1400 hours

0:16:12 > 0:16:16per frame to render.

0:16:16 > 0:16:18So, that's core hours.

0:16:18 > 0:16:21So we have a lot of those processors working at the same time.

0:16:21 > 0:16:24Maybe the frames wouldn't come back for two days, three days.
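
That figure is in core-hours, so the wall-clock wait depends on how many cores chew on a frame at once. A quick back-of-the-envelope calculation, assuming a few dozen cores per frame (the 24 here is an assumption, not the studio's actual render-farm setup), lands in the same two-to-three-day range:

    # Back-of-the-envelope: 1,400 core-hours per frame only becomes "two or three
    # days" of waiting if a frame is spread over a few dozen cores at once.
    core_hours_per_frame = 1400
    cores_per_frame = 24                      # assumed, for illustration

    wall_clock_hours = core_hours_per_frame / cores_per_frame
    print(f"{wall_clock_hours:.0f} hours, about {wall_clock_hours / 24:.1f} days per frame")
    # -> roughly 58 hours, about 2.4 days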

0:16:24 > 0:16:25Bad place.

0:16:25 > 0:16:28I find long time ago, after zoo...

0:16:28 > 0:16:31So, the razor's edge that we walk is figuring out how

0:16:31 > 0:16:34to take our apes and adjust them, and make the performance readable

0:16:34 > 0:16:37to a human audience, so it is legible as something that

0:16:37 > 0:16:44you and I can connect with.

0:16:44 > 0:16:47But also, not to take it so far that suddenly it doesn't

0:16:47 > 0:16:48feel ape-like any more.

0:16:48 > 0:16:49Somehow it feels too human.

0:16:49 > 0:16:56That is the fine line that we ride and that

0:16:56 > 0:16:57requires our animators to really...

0:16:58 > 0:17:00You know, it's their kind of skill and experience that

0:17:00 > 0:17:03comes into play there.

0:17:04 > 0:17:07This particular movie plays as sort of this epic Western, you know?

0:17:07 > 0:17:10Where they are travelling from one location to the next and you get

0:17:10 > 0:17:16to see a lot of different environments, and spaces.

0:17:16 > 0:17:18The big environment was the prison camp.

0:17:18 > 0:17:22That is where most of the second half of the movie takes place.

0:17:22 > 0:17:30The prison camp, we actually built it near the Vancouver Airport

0:17:30 > 0:17:31in British Columbia.

0:17:31 > 0:17:33It was a 70,000 square feet set.

0:17:33 > 0:17:35So it was a really big set.

0:17:35 > 0:17:38But it was surrounded by over 100,000 square feet of green screen.

0:17:38 > 0:17:41We needed all that screen because we had big wide shots

0:17:41 > 0:17:45that were set quite low.

0:17:45 > 0:17:49And beyond this set, we needed to put the Sierra Nevada mountains.

0:17:49 > 0:17:52We used a process called photogrammetry, where we went up

0:17:52 > 0:17:55in a helicopter and shot reference up and down the Sierra Nevadas

0:17:55 > 0:17:57in that kind of region.

0:17:57 > 0:18:04And then we turned those still photographs into

0:18:04 > 0:18:06three-dimensional geometry using the photogrammetry process.

0:18:06 > 0:18:19And we were able to use those pieces of geometry sort of like a kit set.

0:18:19 > 0:18:23And in the kind of traditional model making sense, we were able to take

0:18:23 > 0:18:25geometry from those mountains, cut them up and then put

0:18:25 > 0:18:28them together into a way where we could achieve specific

0:18:28 > 0:18:31compositions that matched what the director was after for that set.
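
The core of photogrammetry, turning overlapping photographs into 3D geometry, can be sketched at its simplest with two views: match feature points, estimate the relative camera pose, and triangulate. The snippet below assumes two image files (view1.jpg, view2.jpg) and a rough focal-length guess; production pipelines use hundreds of images and dedicated tools rather than this two-view toy.

    # Compact two-view sketch of photogrammetry with OpenCV: match features,
    # recover relative camera pose, triangulate a sparse 3D point cloud.
    import cv2
    import numpy as np

    img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

    # 1. Find and match feature points between the two photos.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # 2. Estimate relative camera geometry (assumed, rough intrinsics).
    h, w = img1.shape
    K = np.array([[1.2 * w, 0, w / 2], [0, 1.2 * w, h / 2], [0, 0, 1]])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # 3. Triangulate matched points into 3D geometry.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    pts3d = (pts4d[:3] / pts4d[3]).T
    print(pts3d.shape, "reconstructed points")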

0:18:31 > 0:18:34We developed this new piece of software called Totara.

0:18:34 > 0:18:35Totara is an ecosystem simulator.

0:18:35 > 0:18:38So, we built the terrain and we sort of map resources.

0:18:38 > 0:18:42We said, this is where the good soil is, this is where the bad soil is.

0:18:42 > 0:18:45And then we let the plants grow from these seeds.

0:18:45 > 0:18:48We ran this simulation over the equivalent of 80 years

0:18:48 > 0:18:48inside the computer.

0:18:49 > 0:18:51And it allowed the trees to grow up.

0:18:51 > 0:18:53And as the trees grew, they competed with one

0:18:53 > 0:18:54another for resources.

0:18:54 > 0:18:57So, they would kind of try to reach higher and higher.

0:18:57 > 0:18:59And what happened was a lot of complexity emerged

0:18:59 > 0:19:00from that simulation.

0:19:00 > 0:19:04It looked a lot more realistic, and we got a lot more realistic

0:19:04 > 0:19:05variation and naturalism from that simulation.
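
A toy version of that kind of ecosystem simulation is easy to write down: scatter seeds on a grid of soil quality, grow the trees year by year, and slow down anything overshadowed by a taller neighbour. Totara itself is proprietary, so every number below (grid size, growth rates, competition penalty) is invented purely to show how variation emerges.

    # Toy ecosystem simulation: map resources, scatter seeds, grow and compete.
    import random

    SIZE, YEARS = 30, 80
    random.seed(1)

    # "Map resources": soil quality between 0 (bad) and 1 (good).
    soil = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]
    # Scatter seeds: height 0 means no tree in that cell yet.
    height = [[0.1 if random.random() < 0.15 else 0.0 for _ in range(SIZE)]
              for _ in range(SIZE)]

    def tallest_neighbour(x, y):
        best = 0.0
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx or dy) and 0 <= x + dx < SIZE and 0 <= y + dy < SIZE:
                    best = max(best, height[x + dx][y + dy])
        return best

    for year in range(YEARS):
        new = [row[:] for row in height]
        for x in range(SIZE):
            for y in range(SIZE):
                h = height[x][y]
                if h <= 0:
                    continue
                growth = 0.3 * soil[x][y]
                # Competition: being overshadowed by a taller neighbour slows growth.
                if tallest_neighbour(x, y) > h:
                    growth *= 0.4
                new[x][y] = h + growth
        height = new

    heights = [h for row in height for h in row if h > 0]
    print(f"{len(heights)} trees, heights {min(heights):.1f}-{max(heights):.1f} after {YEARS} years")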

0:19:05 > 0:19:09One of the things that are so great about our job is not knowing

0:19:09 > 0:19:11what the next thing is.

0:19:11 > 0:19:14And that, for us, is the thing that is so much fun,

0:19:14 > 0:19:17to be able to be, like, OK, here is the creative challenge.

0:19:17 > 0:19:21How can we take our technical tools and bend them to tell this story,

0:19:21 > 0:19:24or what can we invent, what can we make up to be

0:19:24 > 0:19:25able to tell the story?

0:19:26 > 0:19:28No, no, no...

0:19:28 > 0:19:30OK...

0:19:31 > 0:19:34You might remember that late last year I had a brilliant chat

0:19:34 > 0:19:38with Andy Serkis about his role as Caesar in War For The Planet

0:19:38 > 0:19:41Of The Apes, and the art of motion capture in general.

0:19:41 > 0:19:48And you can watch that again right now on our YouTube channel.

0:19:48 > 0:19:51And next week we will get exclusive, behind-the-scenes access to another

0:19:51 > 0:19:55visual effects Oscar nominee, Star Wars: The Last Jedi.

0:19:55 > 0:19:59Now, as it happens, the visual effects company that powers Star Wars

0:19:59 > 0:20:03has also been involved in something rather different recently.

0:20:04 > 0:20:11Just in time for Fashion Week season, the London College

0:20:11 > 0:20:13of Fashion has teamed up with Industrial Light & Magic to put

0:20:13 > 0:20:19on a show with a difference.

0:20:19 > 0:20:22Two years in the making, ILMxLAB is debuting its live CGX

0:20:22 > 0:20:25technology in an augmented reality catwalk experience.

0:20:25 > 0:20:32Real-life models are joined by a virtual avatar

0:20:32 > 0:20:35that is controlled by a human, wearing ILM's signature performance

0:20:35 > 0:20:36capture gear backstage.

0:20:36 > 0:20:38As the performer walks around, a specially designed rig,

0:20:38 > 0:20:42motion capture and depth sensing cameras track her every move.

0:20:42 > 0:20:49The avatar mirrors her physicality in real-time.

0:20:49 > 0:20:52Ahead of the show, we got an exclusive sneaky

0:20:52 > 0:20:53peek behind the scenes.

0:20:53 > 0:20:55So, when we are calibrating, we are solving your skeleton

0:20:55 > 0:20:58from all the markers, so that you know, OK,

0:20:58 > 0:21:01that is how your wrist moves, that is how your hip moves.

0:21:01 > 0:21:04It gets the right proportions and everything.
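
In its simplest form, "solving the skeleton from the markers" means estimating fixed quantities, such as a bone length, from noisy marker positions recorded over a range of motion. The sketch below does that for a single forearm with synthetic data; real calibration solves many joints, offsets and proportions at once.

    # Estimate a fixed bone length from noisy marker positions over a range of
    # motion. The capture data here is synthetic, purely for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    TRUE_FOREARM = 0.27   # metres

    # Synthetic capture: elbow and wrist markers over 200 frames of arm movement.
    angles = rng.uniform(0, 2 * np.pi, size=(200, 2))
    elbow = rng.normal(0, 0.002, size=(200, 3))            # marker noise only
    direction = np.stack([np.cos(angles[:, 0]) * np.sin(angles[:, 1]),
                          np.sin(angles[:, 0]) * np.sin(angles[:, 1]),
                          np.cos(angles[:, 1])], axis=1)
    wrist = elbow + TRUE_FOREARM * direction + rng.normal(0, 0.002, size=(200, 3))

    # Calibration: the best single bone length is the mean marker separation.
    estimated = np.linalg.norm(wrist - elbow, axis=1).mean()
    print(f"estimated forearm length: {estimated:.3f} m (true {TRUE_FOREARM} m)")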

0:21:04 > 0:21:07Having a live event, where we are using the technology

0:21:07 > 0:21:10that doesn't require a headset to be put on, and it is also

0:21:10 > 0:21:13not Star Wars-based, that is quite a two-part

0:21:13 > 0:21:13departure for us.

0:21:13 > 0:21:18So, you know, we are really excited to go into this new foray of what it

0:21:18 > 0:21:21means to collaborate on a live event, number one, and, number two,

0:21:21 > 0:21:29just being out here in London.

0:21:30 > 0:21:32It was really interesting to design with the consideration

0:21:32 > 0:21:34that it would be the Jedis.

0:21:34 > 0:21:36So, the textures and materials were chosen consciously

0:21:36 > 0:21:48to make sure it reflects what you guys can create.

0:21:49 > 0:21:51No catwalk is complete, of course, without a wardrobe change.

0:21:51 > 0:21:54And so, during the show, the virtual model's clothing

0:21:54 > 0:21:56transforms via the real-life performer taking a bow.

0:21:56 > 0:22:00The gesture triggers the change live.
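
The trigger itself can be as simple as watching a streamed joint angle and firing once it crosses a threshold. In the sketch below, the 45-degree bow threshold and the simulated pitch values are assumptions; the real system reads live motion-capture data.

    # Sketch of the gesture trigger: fire the costume change once the performer
    # bows past a threshold, then re-arm after she straightens up again.
    BOW_THRESHOLD_DEG = 45

    def costume_change():
        print("Trigger: swap the avatar's outfit")

    def watch_for_bow(pitch_stream):
        triggered = False
        for pitch in pitch_stream:
            if not triggered and pitch > BOW_THRESHOLD_DEG:
                costume_change()
                triggered = True           # fire once per bow
            elif pitch < 10:
                triggered = False          # performer has straightened up
        return triggered

    # Simulated walk, bow, and recovery (torso pitch in degrees, 0 = upright).
    watch_for_bow([2, 3, 5, 20, 40, 55, 60, 30, 8, 2])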

0:22:01 > 0:22:03To view the transformation, though, attending fashionistas have to look

0:22:03 > 0:22:08at a large five by three metre LED screen.

0:22:08 > 0:22:12It shows a live video feed of the stage taken from the balcony.

0:22:12 > 0:22:15The room around them comes to life, too, with overlaid graphics,

0:22:15 > 0:22:21turning this traditional interior into a Macau inspired street scene.

0:22:21 > 0:22:23The fashion industry is based on emotion.

0:22:23 > 0:22:25There is enormous desire for newness.

0:22:25 > 0:22:38So integrating technology and fashion allows us

0:22:38 > 0:22:40to create new experiences, new garments, in this

0:22:40 > 0:22:42case, for consumers to enjoy and experience.

0:22:42 > 0:22:45So, the last five minutes, all of the models will stand in position.

0:22:45 > 0:22:48Seeing how the fabric drapes and falls, and how that mimics

0:22:48 > 0:22:50into digital format, that makes me really excited

0:22:50 > 0:22:53because of the potential of what you can do to fabric.

0:22:53 > 0:22:56So, that kind of control is something that I guess,

0:22:56 > 0:22:58as designers, we don't really normally have.

0:22:58 > 0:23:02And that's what we really wanted to kind of push for.

0:23:02 > 0:23:05Eventually, the aim is to shrink this elaborate setup down to a pair

0:23:06 > 0:23:07of wearable goggles.

0:23:07 > 0:23:10Let's hope they are suitably en vogue!

0:23:14 > 0:23:18Bravo!

0:23:18 > 0:23:21And that's it from us for this week.

0:23:21 > 0:23:24Don't forget we live on Twitter @BBCClick and on Facebook, too.

0:23:24 > 0:23:27You can find us there, hanging around every day of every week.

0:23:27 > 0:23:29Thanks for watching and we will see you soon.