BBC Click transcript, 10/12/2016.
Now on BBC News it is time for Click.
This week, mapping the poorest parts of the world.
And, hands up who is not flying the plane?
There are things happening in artificial intelligence right
now that will fundamentally change our world.
Soon, machines will learn to do our jobs.
And at that point, things get very interesting.
We'll talk more about the consequences of an automated
society in a few minutes but, after thinking, walking and driving,
have you ever wondered how hard it would be for a computer to fly?
I'm not talking about drones that can fly between points or follow a pre-programmed route.
I'm talking about aircraft that could intelligently decide on a flight plan, just as a human would.
And that is what Mark has been hunting down.
Here at BAE Systems at Warton in Lancashire, they are used to fast jets.
They have built and tested Eurofighter Typhoons here.
Today, however, I am going to take a flight in an aircraft that is much more sedate.
This is a Jetstream 31, a small passenger aircraft.
It's a design from the 1980s, but it's currently used by BAE Systems as a flying test-bed for technology which could lead to pilotless aircraft.
Maureen McCue is head of research here.
It's a very well flown and well understood aircraft from the outside, but on the inside, it's filled with the latest technology.
That technology will eventually allow this aircraft to fly itself.
Today, they are testing the plane's ability to detect and avoid
clouds as well as testing its satellite communication systems.
But take-off and landing will still be handled by human pilots, and the plane will be remotely controlled from the ground.
How does this fit into the autonomous equation?
At the moment, it's effectively a remotely controlled aircraft.
It is, and really, with autonomous operations, you need to progressively expand the boundary.
You can't start with a big bang right out at the full range of autonomy.
This humble-looking outbuilding houses the ground station.
Here, a pilot will remotely fly the plane, and he can ensure it will react to instructions from air traffic control.
I would expect to see a joystick and images coming through from the cockpit, but you're not going to be flying it like that?
No, everything is done through the numbers that you can see there.
These flights are taking place in uncongested airspace.
Today, we will be flying over the Irish Sea.
To help fly itself, this aircraft uses data from satellites as well as identifying radio signals broadcast by other aeroplanes, so it knows where they are.
It is also fitted with a camera that can see other air users, even if they are not broadcasting warning signals.
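Detect-and-avoid logic of this kind boils down to projecting each broadcast position and velocity forward in time and checking how close the tracks get. As a hedged illustration only (not BAE's actual algorithm), here is a minimal closest-point-of-approach check in Python, using a flat-earth approximation and invented track values:

```python
from dataclasses import dataclass

@dataclass
class Track:
    """Position (metres, flat-earth approximation) and velocity (m/s)."""
    x: float
    y: float
    vx: float
    vy: float

def closest_point_of_approach(own: Track, other: Track):
    """Return (time_s, distance_m) of closest approach, assuming
    both aircraft hold their current course and speed."""
    # Relative position and velocity of the other aircraft.
    rx, ry = other.x - own.x, other.y - own.y
    vx, vy = other.vx - own.vx, other.vy - own.vy
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                     # same velocity: separation never changes
        return 0.0, (rx * rx + ry * ry) ** 0.5
    # Minimise |r + v*t|  ->  t = -(r . v) / |v|^2, clamped to the future.
    t = max(0.0, -(rx * vx + ry * vy) / v2)
    cx, cy = rx + vx * t, ry + vy * t
    return t, (cx * cx + cy * cy) ** 0.5

# Own aircraft flying east; traffic converging from the south-east.
own = Track(0, 0, 100, 0)
other = Track(5000, -5000, 0, 100)
t, d = closest_point_of_approach(own, other)
print(f"closest approach in {t:.0f}s at {d:.0f}m")  # → closest approach in 50s at 0m
```

A real system would of course work in three dimensions, account for uncertainty in the broadcasts, and fuse in the camera track as well; this only shows the geometric core of the idea.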
So right now, the pilots aren't actually flying the aircraft.
It's being flown from that 15-year-old Dell laptop, which is probably running Windows XP.
At this point, the aircraft is flying autonomously, with a human pilot monitoring it.
I'm handing control over to the autopilot in the back and, once established on the route, I can hand command over to Clive on the ground via the satellite link.
So from that little shed-like building we were in earlier, Clive, sitting in front of the computer, is now flying the aircraft.
Over the course of the testing of this aircraft, it's going to have
to perform a variety of different, complex tasks.
For instance, it's going to have to recognise and avoid bad weather.
Not just weather, but other aeroplanes, too.
It will eventually be able to select a safe landing spot in an emergency.
Today, though, we can't really test its weather detection abilities because, unusually for the UK, there's barely a cloud in the sky.
BAE suggests that autonomous aircraft could be used
to perform dirty, dangerous or repetitive tasks.
But could this technology be introduced into commercial passenger flights?
At the moment all commercial aircraft have a set number of crew.
There are programmes in existence looking at how you reduce crew, either planned from the outset or, in the case of an emergency, with the autonomous system as a fallback, so that you can still have perhaps a two-crew aircraft on a certain length of flight, but one of the crew happens to be an autonomous helper.
But what happens when things go wrong?
While aerospace manufacturers are exploring the possibilities
of fewer cockpit crew, what do commercial pilots think?
To find out, we paid the British Airline Pilots Association a visit.
Many decades of looking at aviation have brought us to the conclusion that it's best to have two pilots in the cockpit, because if you reduce that to one, the problem you've got then is that you've got no one to cross-check your decisions.
Take, for example, the Miracle on the Hudson.
When the aircraft lost both its engines, the pilots had to have a discussion, and they decided their only course of action was to ditch in the river.
No computer can be programmed to do that.
The flight testing of autonomous aircraft continues, but the debate about regulating them and how they are going to be used has only just begun.
That was Mark, and this is Tim Harford, columnist for the Financial Times. Tim, you've written a book about how the systems that we now rely on can sometimes backfire.
What do you make of the idea of planes that might only need one pilot?
And of course, autopilots have made planes safer, but what worries me is, what happens when they fail?
No system is perfect, including a system where the computer does the flying.
I guess when it goes wrong, it has to hand back control.
The autopilot hands back to the human in the cockpit, but then what?
The human is out of practice, the human is not used to flying the plane and, because the autopilot has failed, it's probably a tricky situation to begin with.
There is a worrying example of this from a few years ago: the Air France crash over the Atlantic Ocean.
The plane was flying quite high above a storm.
The autopilot disconnected and the pilots just weren't used to flying the plane manually at that altitude.
They were only used to operating the plane on take-off and landing, and they flew a perfectly good plane into the Atlantic Ocean because they were confused about what was happening.
They killed everybody on board, an absolute tragedy, and this is the paradox of automation.
The autopilots are normally so safe, so reliable, that when they fail, the pilots find themselves caught out.
I guess the next question is, what about autonomous cars?
We have been talking about how they will blissfully drive us around while we sit back and relax.
I suppose, for the foreseeable future, they won't be good enough to manage without us.
I guess they'll never be 100% reliable, but the model where, if it's confused, it hands back to the human, doesn't work.
You are there with your bagel, your coffee, your newspaper.
You look up, there's a bus coming towards you, and the car goes, "autopilot disengaged, human take control", and that's clearly not going to end well.
What makes more sense is for the human to be driving and for the computer to be watching out for a dangerous situation, ready to take over if there's a problem.
Humans get bored, get distracted, lose their skills.
None of these things happens to computers.
I guess we are in an extended period before that far future arrives and computers drive and fly us everywhere.
We've got possibly decades, if not a century, of being in this interim period where, if there is a problem, we are going to end up blaming the computers for some really unusual, weird crash that a human wouldn't have made.
Yes, and I think a glimpse of that is where we are asking the computers to make a decision not about planes or cars but about, for example, who gets a promotion, who gets a special deal in a shop, or who gets arrested for shoplifting because the computer has flagged them as suspicious.
We are already asking computers to make this sort of decision, and the lesson of the paradox of automation is that we need to be much more savvy about the fact that computers do make mistakes too.
Hello and welcome to The Week In Tech.
It was the week that inventor Haiyan Zhang developed a smart wristband to help people with Parkinson's disease write again.
The device's in-built motors vibrate to distract the brain from creating tremors.
It was also the week that we discovered queueing
at the shops and using those beyond infuriating self-service
checkouts could soon be a thing of the past.
Amazon has unveiled a sci-fi store in Seattle that uses your smartphone and advanced technologies like deep learning, computer vision and sensor fusion to automatically detect when products are taken from or returned to the shelves.
When you're done, you can simply trot off and then wait for that gargantuan virtual receipt to follow.
And if you feel like you're forever stuck in traffic,
Audi has rolled out an update to make every second count.
Its new traffic light information feature tells drivers exactly how long they'll have to wait behind a red signal before it turns green, and the length of time it will stay green.
It works by connecting directly to the city's central traffic management system.
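Once the car knows the signal's repeating cycle plan, the remaining red time is simple arithmetic. This is a toy sketch of that calculation, not Audi's implementation; the cycle-plan format and function name are invented for illustration:

```python
def seconds_until_green(now_s: float, cycle: list[tuple[str, float]]) -> float:
    """Given the current offset into a signal cycle and the cycle plan as
    (phase, duration_s) pairs, return seconds until the next green phase
    begins (0.0 if the light is already green)."""
    total = sum(d for _, d in cycle)
    t = now_s % total                  # position within the repeating cycle
    elapsed = 0.0
    # Walk the cycle twice so we can wrap past the end to the next green.
    for phase, duration in cycle * 2:
        if elapsed <= t < elapsed + duration:
            if phase == "green":
                return 0.0             # already green right now
        elif elapsed > t and phase == "green":
            return elapsed - t         # wait until this green begins
        elapsed += duration
    return 0.0

# A 60-second cycle: 25s red, 5s amber, 30s green.
plan = [("red", 25), ("amber", 5), ("green", 30)]
print(seconds_until_green(10, plan))   # → 20.0 (10s into the red phase)
```

The real feature feeds this countdown from live signal-timing data rather than a fixed plan, but the arithmetic the driver sees is essentially this.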
And finally, robotic research has reached new heights, literally.
This hopping mad bot, developed at UC Berkeley, can not only jump a metre off the ground but can then spring off objects to jump again and reach even greater heights.
Inspired by the agility of bushbabies, researchers hope it can one day identify jumping spots for itself.
Now, you'd hope that the news you read is true, which these days isn't guaranteed.
In the run-up to the US election, for example, the Speaker of the House of Representatives did not get naked, the Pope did not endorse Donald Trump, and Trump did not win the popular vote, but these stories, from websites posing as real news sites, spread widely anyway.
Of course, it doesn't help that in 2016, the real news often sounds just as far-fetched.
But anyway, it's made events like the Trust Hack here in London feel all the more timely.
Here, journalists and technologists from large news organisations are workshopping ways to help readers tell the difference between well-researched journalism, propaganda, advertising and outright fabrication.
The thought is to provide signals, like icons or back-up materials, that the public could see connected to a piece of news, and then it would send a signal back to the news distribution platform, like Google or Twitter, so that they can sort the quality news from the fake news.
There are already projects afoot to try to flag up stories on sites known to generate fake news, like this plug-in, but the ideas here are not about blacklisting sites or producing automated truth rulings.
Both would be massive undertakings and would themselves provoke cries of censorship.
This is more about letting news organisations prove to their readers that they can be trusted.
I'm a journalist at the Washington Post, and I work with some amazing people.
We produce really great stuff and they remain really committed to getting it right.
The tools that we are building here are just a way for us
to communicate that we are putting in the effort, where our stuff
is coming from, who we are talking to.
We are trying to create something that would easily allow audiences
to verify for themselves what sources we have used.
You'd be able to click and see who we talked to, and you'd see what they told us.
Readers want to feel like journalists are being held accountable to them.
Other ideas here include ways to fight information bias by surfacing articles that support the opposite side of an argument, or to gauge the likelihood of truth by finding similar articles from other outlets.
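A common way to find "similar articles" of this kind is to score texts by cosine similarity over their word counts. As a rough sketch of the idea in Python (not any hackathon team's actual code, and with invented example texts):

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts, in [0, 1]."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)              # shared-word overlap
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

story = "autonomous aircraft tested over the irish sea"
candidates = [
    "bae systems tests autonomous aircraft over the irish sea",
    "new lipstick shades launched for christmas",
]
# Pick the candidate article most similar to the story being checked.
best = max(candidates, key=lambda c: cosine_similarity(story, c))
print(best)  # → "bae systems tests autonomous aircraft over the irish sea"
```

Production systems would use stemming, stop-word removal and learned embeddings rather than raw word counts, but the matching principle is the same.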
Many think the reputation of the journalists themselves plays
a big part in the trustworthiness of reporting.
Italian newspaper La Stampa is suggesting a system where an author is assigned a unique identifier that links each piece to their track record.
The best ideas won a small prize at the end of the day, but it's the longer term that matters.
The hope is that this part-Google-funded initiative may lead to a system that helps news outlets' stories rank high up in search results.
That said, I can't help wondering if that is actually something that
platforms like Google and Facebook really want.
Do you think they care what it is they serve to us or do you think, really, as long as you click on it, that's all that matters?
How do we know the motivations of any company?
These companies make their money through people clicking on the adverts, so do you think any of the large companies care what's actually true?
Based on my conversations with them, I think they do.
The argument would be that if they start being perceived as not caring about the information that's sent out there, people will lose trust in them.
Yes, it may be that the truth will out, not because of a desire for the facts, but because everyone, readers and news aggregators alike, has a stake in weeding out the fakes.
More and more people are shopping online but still, at this time of year, the high street seems pretty chaotic and the retailers are all fighting for our attention.
So they are trying to create some more engaging experiences.
But do they help us or are they just a distraction?
Here in London's Covent Garden, 140 shops and restaurants are taking
part in creating one huge augmented reality experience.
With the help of AR app Blippar, things come to life.
It may not have created the personalised shopping experience I'd dreamt up, but there were some promotional offers presented as virtual Christmas presents dotted around the tree, a reindeer hunt, and a giant reindeer you can take a selfie with.
I have to say, it wasn't quite as cutting-edge as I'd hoped
but I suppose it's a bit of light-hearted fun.
Rather more purposefully, the signs in windows can be scanned using image recognition, taking you to online content, partly the sort of stuff you'd be able to look at from your sofa anyway.
Then came our trip to London's Westfield, where augmented reality is being put to work.
We've seen technology like this before, but now it's actually on the shop floor, here at Charlotte Tilbury.
This is what's known as the Magic Mirror, and this is how it works.
You choose a lipstick and, in real time, you will see it on your own lips.
Bright red lips, although it doesn't seem to have put any colour around the edges.
I've tried the Rimmel app that does something similar on your phone
and you can buy things through it, but here you can do it in the store
with assistants all around and a whole shop of products that you can test and smell, and after you see what your face looks like on here, you might want to have a go with the real thing to check that it matches.
Meanwhile, here at this eBay event, they are taking things a step further.
The data on this screen represents what are apparently visitors' emotional reactions.
Using what they call facial coding, the camera looks for the reactions which these guys reckon you have when you shop online.
Nice but don't know who I'd give it to.
And I've been told to overact my reactions.
Whilst my results bore absolutely no correlation to what I'd liked, maybe they were simply the ones I contorted my face at the most.
Maybe it would have worked better if I'd reacted more naturally,
although I struggle to imagine that my face would have shown anything.
It left me wondering whether, if it worked, eBay could develop this into something more permanent to assess our feelings when shopping online.
Either way, don't expect me to look too excited about it.
Not that I'm sure the tech would have even noticed.
Now, in 2015, members of the United Nations adopted a set of global goals for sustainable development.
Number one on that list is to end poverty and, to achieve that goal, you first need to work out where poverty exists and how severe it is.
We met up with some scientists at Stanford who have been working on exactly that.
Marshall Burke is a professor of Earth system science at Stanford University, but he spends much of his time in Africa gathering household survey data.
The way this is done is to elicit from the household a listing
of everything they've consumed in the last week,
So literally everything they've consumed.
Every single thing and the value of that item and then you add up
all these items for every single person in the household.
This can take hours and hours just for one single household.
Then you have to do this for thousands of households to get a representative picture.
It's painstaking work, but Burke has teamed up with computer scientists who are using machine learning to predict poverty from satellite data.
But to find out whether the people living in those areas are rich or poor, the researchers used a process called transfer learning and this image of the Earth at night.
The parts of the world that are lit up are typically the wealthier parts
So basically, we use the lower-resolution night-time images to help us figure out what in the really high-resolution daytime images we should be using, and then we use that to predict poverty.
Between 300,000 and 400,000 images were used to train the algorithm.
The algorithm will figure out what's important in the imagery.
So some of the things it finds are things that you or I would recognise, things like roads and urban areas.
Based on those features, the algorithm can predict household assets: things like refrigerators, cars, the sum of all those assets.
It can also be used to predict incomes.
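The two-step idea, learning which image features explain the plentiful night-light data and then tying those same features to scarce survey data, can be caricatured with linear models standing in for the team's convolutional network. Everything below (the synthetic features, the ridge regression, the sample sizes) is an invented sketch of the pipeline's shape, not the Stanford code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for CNN features extracted from daytime satellite images.
n, d = 400, 8
features = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
# Night-light intensity: a plentiful, noisy proxy label for wealth.
night_lights = features @ true_w + rng.normal(scale=0.1, size=n)

def ridge_fit(X, y, lam=1e-3):
    """Ridge regression via the normal equations."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

# Step 1: learn which features explain night lights (transfer step).
w_lights = ridge_fit(features, night_lights)

# Step 2: a small "survey" of 40 locations links those learned features
# to consumption; the fitted model then predicts everywhere else.
consumption = 2.0 * night_lights + 5.0 + rng.normal(scale=0.2, size=n)
survey_idx = rng.choice(n, size=40, replace=False)
X_survey = np.column_stack([features[survey_idx] @ w_lights,
                            np.ones(len(survey_idx))])
w_cons = ridge_fit(X_survey, consumption[survey_idx])

pred = np.column_stack([features @ w_lights, np.ones(n)]) @ w_cons
corr = np.corrcoef(pred, consumption)[0, 1]
print(f"correlation with consumption outside the survey: {corr:.2f}")
```

The real system learns the features themselves from imagery rather than being handed them, which is where the deep network and the hundreds of thousands of training images come in.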
These poverty maps show the team's findings.
In areas marked red, people spend very little each day; in green regions, like Uganda's capital Kampala, they spend far more.
We are providing a very cheap and scalable alternative to traditional means of data collection.
Traditionally, you have to send people out into the field with clipboards, and the surveys aren't always accurate; lots of governments, for example, have little incentive to report where they are underperforming.
All we need to make our predictions are satellite images.
But can you draw conclusions about the economic well-being of communities in Africa when you're thousands of miles away, sitting at a laptop in an office at Stanford?
We actually have really good survey information in a few locations.
We can use the satellite imagery to make a prediction about poverty and then we can compare that to what the survey says was actually there.
So we used a couple of the really good surveys we had to validate our predictions.
To be a truly useful tool though, the algorithm needs an upgrade.
We would also like to use historical imagery, so maybe we can figure out how poverty dynamics work over time and even give us the chance of predicting what's going to happen in the future.
If you can pinpoint poverty on a map, aid could be distributed
more evenly, policies could be more effective.
A picture may be worth a thousand words but combining that picture
with artificial intelligence could make a world of difference.
That was Sumi and that's it for this week.
You can follow us on Twitter @BBCClick for backstage fun
and photos and extra technology news throughout the week.
Thanks for watching and we'll see you soon.