Our future cities (and how we get there)
Ask the young people in your classroom or club what they think a future city might look like. Delivery robots? Driverless cars? Flying pods? They might not be far off.
The technological changes now approaching will influence all of our lives. Future mobility means smart cities: cities with intelligent traffic lights, electric vehicles and connected systems that work autonomously. These technologies will provide the means for improved mobility, better infrastructure and cleaner air.
Right now, technology firms are rapidly evolving the capabilities of the robotics and autonomous systems (RAS) that will be used in our future cities. One of the cornerstones of future mobility is the self-driving car, whether as a private car or a shared mobility service.
Here we take a look at some of the technology needed to enable their safe introduction and adoption.
Sensors and cameras
The autonomous cars being developed today have sophisticated sensing abilities. They are fitted with a range of cameras, radars and lidars to help them see, understand and map the environment around them.
Cameras detect lines on roads and perceive objects around the car. Radars show how far away those objects are, and the speed at which they are travelling. Lidars create a real-time 3D view of the static and moving world around the car.
These sensors talk to the car’s central computer to build up a detailed picture of where the car is, what is surrounding it, and where potential hazards may be.
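To make the idea concrete, here is a deliberately simplified sketch of that picture-building step. The `Detection` type, the labels and the distances are all invented for illustration; a real vehicle fuses far richer sensor data than this.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # what the object appears to be, e.g. "pedestrian"
    distance_m: float   # range to the object, as a radar might report it
    speed_mps: float    # how fast the object is moving

# Labels the planner treats as potential hazards (illustrative only)
HAZARD_LABELS = frozenset({"pedestrian", "cyclist", "vehicle"})

def nearest_hazard(detections):
    """Pick out the closest detection that counts as a hazard, if any."""
    hazards = [d for d in detections if d.label in HAZARD_LABELS]
    return min(hazards, key=lambda d: d.distance_m) if hazards else None

# A toy scene: a lamp post is nearer than the vehicle, but it is not a hazard
scene = [
    Detection("lamp post", 12.0, 0.0),
    Detection("pedestrian", 8.5, 1.2),
    Detection("vehicle", 30.0, 2.0),
]
print(nearest_hazard(scene).label)  # prints "pedestrian"
```

The point of the sketch is the filtering step: the central computer does not just measure distances, it decides which objects matter before planning an action.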
Artificial Intelligence (AI)
The car’s central computer uses AI to analyse the inputs from the sensors, cameras and lidars on board and to make a decision about what action to take. For example, where to steer or whether or not to brake.
This type of AI is called narrow AI, as it is focused on a specific task. For example, a car's sensors may detect a pedestrian crossing the road, and the computer takes the decision to brake to avoid colliding with them.
This type of AI should ultimately enable autonomous vehicles to do these specific tasks more reliably than a human, such as keeping within the white lines on a road, or braking to avoid an obstacle.
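The braking example above can be sketched as a single decision rule. This is a hypothetical illustration using simple physics (stopping distance under constant deceleration), not how any production vehicle actually decides; the deceleration and safety margin values are assumptions.

```python
def should_brake(distance_m, speed_mps, decel_mps2=6.0, margin_m=5.0):
    """Brake if the hazard is closer than our stopping distance plus a margin.

    Stopping distance under constant deceleration a from speed v is v**2 / (2*a).
    """
    stopping_m = speed_mps ** 2 / (2 * decel_mps2)
    return distance_m < stopping_m + margin_m

# At roughly 30 mph (13.4 m/s), stopping distance is about 15 m
print(should_brake(18.0, 13.4))  # True: too close, brake
print(should_brake(40.0, 13.4))  # False: plenty of room
```

Even this toy rule shows why the narrow task can be done reliably: it is a well-defined calculation the computer can repeat identically every time, which is exactly where machines tend to outperform humans.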
Many of the tasks that humans perform when driving a car are actually hugely complex. Think about what is involved in safely navigating a busy roundabout with numerous vehicles, pedestrians and cyclists. Think about how much harder that is to do safely at night.
This becomes a challenge when designing a self-driving car, as we can’t necessarily describe precisely how the car must behave in every possible situation it might encounter. Therefore, a better approach is often to get the vehicle to learn how to behave through training it (much like how a human driver learns to drive safely by taking driving lessons). This is known as machine learning.
The cars can be trained either by allowing them to drive on the roads whilst being carefully supervised by a safety driver, or by using a simulation. Normally a combination of real-world driving and simulation is used to provide training data. Once a self-driving car has been trained well enough, it should then be able to cope safely with the situations it encounters.
One of the key challenges with machine learning is ensuring that the self-driving car has enough training data to learn how to behave safely on the roads. Going back to the example above, the car’s training data has to include information about what a pedestrian looks like and acts like so that when the sensors detect an object they can distinguish it as a pedestrian (rather than, for example, a lamp post) and take the action to avoid hitting them.
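Distinguishing a pedestrian from a lamp post is, at heart, a classification problem learned from examples. Here is a minimal sketch using a nearest-neighbour rule: the features (height and movement speed) and the training examples are entirely made up, and real systems learn from millions of labelled sensor frames rather than four hand-written rows.

```python
import math

# Hypothetical training data: (height in metres, speed in m/s) -> label
training = [
    ((1.7, 1.4), "pedestrian"),
    ((1.8, 1.1), "pedestrian"),
    ((4.0, 0.0), "lamp post"),
    ((4.5, 0.0), "lamp post"),
]

def classify(features):
    """1-nearest-neighbour: label an object like its closest training example."""
    nearest = min(training, key=lambda ex: math.dist(ex[0], features))
    return nearest[1]

print(classify((1.6, 1.3)))  # "pedestrian": short and moving
print(classify((4.2, 0.0)))  # "lamp post": tall and stationary
```

The sketch also shows why training data coverage matters: an object unlike anything in the training set, say a child on a scooter, would still be forced into one of the known labels, possibly the wrong one.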
The future of self-driving cars
Some of the newest cars on our roads feature a range of the technologies mentioned above. Cars can already brake to keep a set distance from the car in front. They can keep within the white lines on the road. It is already possible to stand outside some cars and use software on your mobile phone to park the vehicle. This technology is relatively simple, but these cars will eventually outperform humans in these basic tasks.
We are still some way from fully autonomous vehicles. UK developers of autonomous cars are road testing vehicles in a handful of UK towns and cities, with the initial aim of introducing autonomous taxi services.
However, while the technology is developing rapidly, the assurance and regulation of these cars' safety are not yet established. The Law Commission is currently reviewing the legal framework for automated vehicles, and research is under way to review and develop assurance frameworks for the cars. We must ensure that the safety of the vehicles is assured and regulated before fully introducing them to our roads.
Engaging young people with the Grand Challenges
STEM Learning is working in conjunction with the Department for Business, Energy and Industrial Strategy to bring you an inspiring STEM enrichment programme, centred around these four key Grand Challenge themes. The Grand Challenges - Our Futures programme offers free resources and enrichment opportunities to engage young people with these Grand Challenges.
About the author
Professor John McDermid OBE FREng is the Programme Director of the Assuring Autonomy International Programme. John became Professor of Software Engineering at the University of York in 1987. His research covers a broad range of issues in systems, software and safety engineering. He became Director of the Lloyd’s Register Foundation-funded Assuring Autonomy International Programme in January 2018, focusing on the safety of robotics and autonomous systems.
He acts as an advisor to government and industry, including FiveAI, the UK MoD and Rolls-Royce. He has been actively involved in standards development, including work on safety and software standards for civilian and defence applications. He is author or editor of six books and has published about 400 papers. He is a Visiting Professor at Beijing Jiaotong University. He became a Fellow of the Royal Academy of Engineering in 2002 and was awarded an OBE in 2010.