The first of two shows highlighting the best bits from 2015, including the robots that can build stuff, understand us and help us with daily tasks.
Now on BBC News - Click.
Robots build a table, cockroaches go cyborg,
and I go a little bit crazy.
This is the best of Click, 2015.
It's the end of the year and time to look back on what we
have learned in the past 12 months.
And above everything else that has happened in 2015, there is one thing
that we all agree has been a thing.
2015 has seen the rise of the machines.
Yeah, they may not be quite ready to take over just yet but I genuinely
believe we are starting to see the beginnings of a robot revolution.
Machines are starting to understand the world around them,
they are starting to understand what we are talking about,
and they are starting to be able to build things on their own.
Welcome to MIT, where these guys are doing something that all humans hope
we won't have to do in the future.
They're building furniture.
Really slowly, but it is doing it.
It has the screw in, which is better than me for a start.
The grip is just four rubber bands but as it twists, it manages to
grip the table leg properly.
Each piece of the furniture has a unique
pattern of reflective balls on it.
There is a whole array of infrared sensors around the room.
The computer system running this demo knows where everything is.
The Computer Science and Artificial Intelligence Laboratory is the
largest research lab here at MIT and it is also the weirdest looking.
Looks like Gaudi has had a go at that one.
Anyway, it is here that we enrol on our journey.
The Distributed Robotics Lab looks like this.
I have no idea what that is.
This is Baxter, a very famous robot.
And here is a robotic garden, full of programmable moving LED flowers,
designed to illustrate some less visually interesting
but nevertheless essential computer science techniques.
It is difficult to get young students, particularly girls,
interested in computer science.
The garden teaches fundamental algorithms that every computer scientist
needs to know, such as how to find the shortest path
from A to B, demonstrated here by the flowers changing colour.
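The shortest-path idea the flowers demonstrate can be sketched in a few lines. This is a toy illustration only: the garden layout and flower names below are invented, not the lab's actual code.

```python
from collections import deque

# Each "flower" is a node; edges connect neighbouring flowers.
# The garden graph here is made up for illustration.
garden = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def shortest_path(graph, start, goal):
    """Breadth-first search: returns a path with the fewest hops."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# The flowers along this path are the ones that would change colour.
print(shortest_path(garden, "A", "E"))  # ['A', 'B', 'D', 'E']
```

Breadth-first search explores the garden one hop at a time, so the first time it reaches the goal flower it has necessarily found a fewest-hops route.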
One of the main missions of the lab in particular
is to develop robots that can think for themselves
and work together to solve increasingly complex problems.
But to create robots that can do anything,
you first have to understand how we and other animals use our brains.
Back in March, we visited researchers at Sheffield
University, who were working to map out the brain of a bee.
As you might have guessed, this is not the easiest thing to do,
which is why they have started with one part of the brain,
the part that lets the bee see.
Now, the scientists have plugged this
simulated bee brain into a drone.
A computer simulation of a bee's brain is flying this aircraft.
The bee brain simulation is made up of thousands of virtual neurons,
each represented by one of these coloured spheres.
The way they are laid out and wired up is copied directly
from a real bee and just like with a real bee brain,
when what the camera sees is filtered through these simulated
neurons, this is what happens.
If you look closely, you can see the chessboard pattern forming.
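The filtering step can be pictured with a heavily simplified model: each virtual neuron watches a small patch of camera pixels and fires when it sees contrast. Everything below, including the frame and the 2x2 patch wiring, is invented for illustration; the real Sheffield simulation has thousands of neurons whose wiring is copied from bee anatomy.

```python
# A tiny 4x4 "camera image" (0 = dark, 1 = bright): a chessboard.
frame = [
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
]

def neuron_response(frame, row, col):
    """One neuron watches a 2x2 patch and fires on contrast."""
    patch = [frame[row][col], frame[row][col + 1],
             frame[row + 1][col], frame[row + 1][col + 1]]
    contrast = max(patch) - min(patch)
    return 1 if contrast > 0 else 0  # fires when the patch is not uniform

# Sweep the neurons across the frame: the pattern "forms" in their output.
responses = [[neuron_response(frame, r, c) for c in range(3)]
             for r in range(3)]
print(responses)  # every patch of a chessboard has contrast, so all fire
```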
Lots of time has been spent training honeybees to fly down tunnels
and our model reproduces all of the behaviours that real bees show.
And you can manipulate the flight behaviour of the model
in the same way that you can manipulate the flight behaviour
of a real honeybee that has been trained to fly down a corridor.
The team here are not the only researchers looking to bees for inspiration.
One team has tried to replicate a bee's sense of smell.
And across the globe, researchers at Harvard University are trying to
create tiny bee-sized robots, which they hope could eventually be
used to pollinate our crops.
In a tiny basement room at a university in Texas live hundreds of Central
American giant cave cockroaches.
The school is famous for adapting robots for disaster
zones, but these cockroaches are destined to be cyborgs designed to
operate in areas difficult for humans to reach, like nuclear
disaster zones or earthquakes.
They chose this cockroach for its natural tendency to seek out
dark spaces and for its size.
The cockroaches are gassed with carbon dioxide before being
brought over to be operated on.
He is fully asleep and he will stay asleep for at least ten minutes.
The idea is to work pretty quickly on this.
Why are you using the whiteout?
Cockroaches have a waxy surface.
It creates a light adhesive.
Little hairy legs!
These acupuncture needles are then set
into the cockroach's ganglia, an area of neurons responsible
for involuntary movement.
It is kind of deceptive.
And that is the finished product.
Is that hurting him?
No, it is just startling because you've picked him up.
Some of our viewers might think it is cruel
to put wires into their brain.
I don't think the cockroaches have any feeling
for that kind of problem.
They don't have a big brain to start with.
They are happy, I have no doubt about that.
We're not really hurting them in any way, we're not really causing pain.
The final step in the process is attaching the battery
so it can work with the controller.
This is a simple remote that we modified.
I'm going to try and make the cyborg cockroach go.
There he goes.
Once we have experimented with the cockroaches,
we put them back in the box.
The important thing is we don't test them again.
Once we do the test, they get retired.
These cyborg cockroaches will be getting ready for field tests
and the researchers here are already looking
at other insects they could use.
Silicon Valley, the centre of the tech world.
San Francisco and its satellite towns have spawned
thousands of technology companies over the years, but few have had
as much impact as this one.
From its enormous campus in Palo Alto,
its tentacles now reach everywhere.
Welcome to the Googleplex.
Google dominates web search these days.
Although on this lazy afternoon in the sun, it does appear to be
taking it easy.
Well, maybe after years of work that started as just a few
geeks in a garage, this massive empire feels the need for a break.
The job of search has been done.
The web has been indexed.
But in another sense, there is a whole new job to do
and that is to understand it.
After building up a collection of trillions of words,
Google, amongst others, is trying to connect them all up in meaningful
ways, maybe even in ways similar to the brains in our heads.
And this will help Google to work out more precisely what
we really need to know.
Here is the Twitter account from BBC Sport.
They just tweeted that Gareth Edwards has been knighted.
I wonder how old he is.
OK, Google, how old is he?
Gareth Edwards is 67 years old.
And it has understood the most important thing in that string of
text and it knows what is the "he."
It understands the context.
This is a demo of a function called Now On Tap, which is coming to the
new version of the Android operating system when it is released.
It's an extension of Google Now and it offers more information
on the things that you read about.
That sounds simple but it requires more understanding
than you might think.
If I were to say to you Michelangelo was my favourite Renaissance
painter, your brain would instantly do loads of things.
You would know I was talking about Michelangelo the artist,
not the turtle.
You would know that the Renaissance was a period of time.
And you would know there were other artists around then as well,
including sculptors and musicians.
But to a computer, that sentence is just a collection of words.
It doesn't actually mean anything.
The aim is to make computers understand that these words are
actually people, places and other things, and crucially,
that they all interconnect.
This is what Google calls the Knowledge Graph.
Think of this as Google's understanding of the world
and all the things.
It can be all sorts of things.
Movies, places, restaurants, cocktail recipes.
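One common way to picture a knowledge graph is as a set of (subject, relation, object) facts that can be pattern-matched. The handful of entities below are picked to echo the Michelangelo example; Google's actual Knowledge Graph holds billions of such facts and is not public code.

```python
# Toy knowledge graph: facts as (subject, relation, object) triples.
triples = [
    ("Michelangelo", "is_a", "Renaissance artist"),
    ("Michelangelo", "created", "David"),
    ("Renaissance", "is_a", "historical period"),
    ("Raphael", "is_a", "Renaissance artist"),
]

def query(subject=None, relation=None, obj=None):
    """Return every triple matching the fields that were given."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (relation is None or t[1] == relation)
            and (obj is None or t[2] == obj)]

# "Who was a Renaissance artist?" becomes a pattern match, which is how
# the machine can know Michelangelo-the-artist connects to Raphael.
print(query(relation="is_a", obj="Renaissance artist"))
```

The crucial point is that "Michelangelo" is no longer just a string: it is a node whose edges connect it to a period, a profession and other people.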
But understanding words is only one part of the equation.
For a robot to be able to function in the real world, it also needs to
interpret the deluge of information it gets from its cameras.
In other words, it needs to understand what it sees.
Computers find this task incredibly hard because the real world is not
easily represented by pure data.
Researchers are working on computer vision.
For it to be successful, the computer needs to be able to
distinguish items in a scene, identify what it is looking at,
and develop an understanding of its circumstances
so it can complete its task.
The researchers are working on a neural network
that can identify 20 objects at a time.
That does not sound like many but if they get this right,
they can apply the same method to millions of objects.
The network is fed manually separated images.
As it scans the features of an object, it develops
an understanding, learning from its mistakes and getting better
at recognising other instances.
Most importantly, it gets more efficient at it every time.
But for it to be of any use, it needs to get it right as often
as humans do and very quickly, no matter how tricky the scene.
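The "learning from its mistakes" loop can be sketched with the simplest possible learner, a perceptron, which nudges its weights only when it misclassifies a labelled example. The data and feature names below are invented; the real recognisers are deep neural networks with millions of weights, but the error-driven update is the same basic idea.

```python
# (features, label) pairs: say [roundness, leggy-ness] -> 1 = "chair".
examples = [
    ([0.1, 0.9], 1), ([0.2, 0.8], 1),
    ([0.9, 0.1], 0), ([0.8, 0.2], 0),  # 0 = "ball"
]

weights, bias = [0.0, 0.0], 0.0
for _ in range(20):  # several passes over the labelled images
    for features, label in examples:
        score = sum(w * x for w, x in zip(weights, features)) + bias
        predicted = 1 if score > 0 else 0
        error = label - predicted  # non-zero only on a mistake
        weights = [w + 0.1 * error * x for w, x in zip(weights, features)]
        bias += 0.1 * error

def predict(features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

print(predict([0.15, 0.85]))  # a new leggy object -> classed as a chair (1)
```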
This is where some human help can come in handy.
Here is a room with some objects inside.
I'm using a 3D infrared camera to scan my surroundings.
Now, I'm going to hand the camera to Stuart while I label the scene.
And you do that like this.
As I go around touching the items, I'm quickly identifying
the different classes of objects in my environment.
One day, we may all be able to help our machines to recognise our
stuff no matter how unique it is.
The point of this research is that someday we will have robots
that can perform lots of tasks to help us in our daily lives.
But we're already seeing this technology being used
out in the real world,
whether it is to help nurses assist surgeons
or to find a cure for cancer.
I was working as a currency trader.
I got a call one day from my mum,
saying that my dad was having trouble finishing his sentences.
They did an MRI and they found three unidentified masses on his brain,
which turned out to be glioblastoma multiforme,
which is the most common and aggressive brain tumour in adults.
Matt Da Silva's story is in many ways very similar to anyone who
has lost someone to cancer but it becomes extraordinary when you hear
about his ambition to revolutionise the way that we treat the disease.
In his laboratory in San Francisco, he is looking to develop
a treatment method that could be custom made for each patient.
The idea is to combine off the shelf already approved
medicines to create a drug therapy regime that results
in shrinking tumours and hopefully complete recovery.
The problem is that there are far too many approved drugs
on the market, containing many different chemical compounds.
To test all of the possible combinations in a lab is impossible.
This is where artificial intelligence comes to the rescue.
Notable Labs has partnered up with Atomwise, a company that has
developed an intelligent algorithm that can simulate how an illness
attacks the human body, and more crucially, test chemical compounds
artificially to see which treatments would be most
effective in blocking its progress.
If you tried to, as a human, consider all of the possible
factors that relate to each of these interactions,
it could take a lifetime.
Hundreds of thousands of concurrent factors that interact
in highly non-linear ways.
The algorithm narrows down the possible combinations
from millions to just a few hundred.
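The narrowing step amounts to enumerating combinations, scoring each one, and keeping only the most promising few. The drug names and scores below are entirely made up; Atomwise's real scoring comes from a trained model of drug-target interactions, not a lookup table.

```python
from itertools import combinations

drugs = ["drugA", "drugB", "drugC", "drugD", "drugE"]
predicted_effect = {  # stand-in scores; a real model would compute these
    ("drugA", "drugB"): 0.91, ("drugA", "drugC"): 0.12,
    ("drugA", "drugD"): 0.55, ("drugA", "drugE"): 0.30,
    ("drugB", "drugC"): 0.84, ("drugB", "drugD"): 0.20,
    ("drugB", "drugE"): 0.47, ("drugC", "drugD"): 0.66,
    ("drugC", "drugE"): 0.09, ("drugD", "drugE"): 0.73,
}

# Enumerate every pair, rank by predicted effectiveness, keep the best few.
candidates = sorted(combinations(drugs, 2),
                    key=lambda pair: predicted_effect[pair],
                    reverse=True)
shortlist = candidates[:3]  # only these go on to real cancer cells
print(shortlist)  # [('drugA', 'drugB'), ('drugB', 'drugC'), ('drugD', 'drugE')]
```

With five drugs there are only ten pairs, but the same filter applied to thousands of approved drugs and larger combinations is what cuts millions of possibilities down to a lab-testable few hundred.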
Back at the lab, these combinations are tested on real cancer cells that
have been taken from patients.
This is a patient that had surgery in San Francisco three weeks ago.
We're waiting for their cells to grow and form spheres.
The reason we want those cells to form spheres is because we want
them to be like miniature tumours.
When we test it with drugs, we want to make sure that what
happens here will translate back to the patient themselves.
Matt is hoping to certify his method within a year
so he can treat large numbers of people.
And if it really does work, we could start treating some cancers with
medications that are already sitting on a shelf
and also massively cut the costs of those treatments.
It is one of the leading cancer research hospitals in the world,
with a reputation and a name to live up to.
Three years ago, to mark its centenary, the doctors invited
patients and their families to write messages and tie them to the trees.
They have stayed there ever since.
But inside they are not pinning the future of beating brain cancer
on hope alone.
This is one of the first places in the world to get some new kit
that uses robotics.
In most cases, neurosurgeons also try to remove
as much of the brain tumour as possible if it is safe to do so.
And crucially, that means avoiding damaging healthy tissue.
Through a tiny hole made in the skull, a tube, which houses
a laser, can be fed to the exact spot, using an MRI scanner.
The laser is twisted towards the direction of the cancerous
tissue, while the healthy tissue on the other side is left untouched.
This is one of the first patients to use the system.
The initial results appear positive.
But the man in charge of brain cancer research here
doesn't want to stop there.
He is going beyond stem and T-cell treatments to help develop
nanoparticles that attack cancer growth.
He's adapted new equipment used to help deliver them,
straight to the front line.
By removing the remaining burnt tumour after the treatment, space is
left inside the brain for the nano particles to then be delivered.
Either drugs or these designer cells then go to work fighting
any remaining cancer threat.
But tumours re-emerge often after treatment.
So the doctor's team wants to direct these
special fighter cells once they are inside the brain.
By attaching microscopic magnets to the particles, he hopes to move the
treatment to any area of the brain.
Simply by using a magnet.
Magnet-guided treatments are attracting serious attention.
Three months ago Google X, the scientific research arm of Google,
got to work on a similar idea.
Both teams expect new treatments in five years' time.
Now, you have heard of tug boats, well let me introduce you to
the TUG robots.
Press go to continue.
25 of them roaming up and down the hospital halls, ferrying meals,
trash and pharmaceutical supplies, the latter being securely locked
in so when they arrive at their destination only people with the PIN
code or fingerprint authentication can open them up.
With the equivalent of 14 football fields to navigate,
they rely on built-in maps together with Wi-Fi to get their bearings.
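One simple way Wi-Fi can give a robot its bearings is fingerprinting: compare the signal strengths currently seen against a pre-recorded map and pick the closest match. The locations, access points and dBm values below are invented, and real hospital robots fuse this kind of estimate with their floor maps and wheel odometry.

```python
# location -> recorded signal strength (dBm) per access point
fingerprint_map = {
    "pharmacy": {"ap1": -40, "ap2": -70, "ap3": -60},
    "kitchen":  {"ap1": -75, "ap2": -45, "ap3": -65},
    "ward_3":   {"ap1": -60, "ap2": -60, "ap3": -40},
}

def locate(reading):
    """Return the mapped location whose fingerprint best matches."""
    def distance(fingerprint):
        return sum((fingerprint[ap] - reading[ap]) ** 2 for ap in reading)
    return min(fingerprint_map, key=lambda loc: distance(fingerprint_map[loc]))

# A live reading close to the pharmacy's recorded fingerprint:
print(locate({"ap1": -42, "ap2": -68, "ap3": -62}))  # pharmacy
```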
This robot has summoned the elevator and now
after making sure there is no-one in it, he-she-it is going to take the
supplies up where they need to go.
Hold that lift!
This is Hubo, and it is about to embark on the
toughest test known to robot-kind.
Next weekend it's the DARPA Robotics Challenge, where teams
from around the world will show up in California with their bots.
The mission - to complete a series of human tasks with minimal human intervention.
Oh, my gosh!
Tomorrow the team pack up and fly out, which means today is
the last day of practice around their practice course.
Which is unbelievably tough!
He has to find and close a gas valve, use a freaking drill to cut a
hole, pull a handle, push a button, and fight through rough terrain.
The aim is to complete the course in the fastest time, and anything under
35 minutes puts them in the running to win the $2 million prize.
So that's how it drives.
One hand on the robot...
One hand on the steering wheel.
This is a robot driving a car using controls that were made for humans.
This is going to be the coolest exit from a car since
the Dukes of Hazzard got in one.
Once out, he reveals he has wheels of his own.
In roll mode he can travel further faster.
He has the handle!
Handling the drill is an even bigger test.
Once it's been identified by the team, it's up to
the sensors in his hand to feel it, find the button and apply
the correct pressure to cut a hole.
The DARPA challenge will contain one task which the teams won't know about in advance.
The robot will need to analyse the scene, relay the 3D information
back to the humans and they will need to workshop a solution.
Once they have done that in virtual space, they will upload
the instructions back to the bot.
In this case, it's pushing a button, which I have to say is no match
for this brilliant, butch block of silicon!
Is it wrong to say I am ever so slightly in love?
What do you mean, too excited?
It was amazing!
And Hubo ended up winning the DARPA challenge too.
I have to say, though, during our travels this year, we
have seen some robots which didn't match up with our expectations.
Do you speak English?
Well, let's try the next receptionist.
Who turns out to be...
And he does speak my language.
Welcome to the hotel.
LAUGHTER Thank you for your visitors.
Your name on the room card on top of the fill in the phone number, please
put us to the bottom of the post.
Please press to proceed with...
Did you get that?
LAUGHTER I think I did!
Please move to the right touch panel and check in.
Do you wish to use facial recognition for entrance?
Thank you so much.
It was not quite a real robot experience throughout, but staying at
the Henn na Hotel in Japan was a real blast.
That's the end of the first of our two shows looking back at 2015.
There is another one next week.
Thanks for watching.
We will see you then.