The ghost in the machine

Posted: 16 September 2016 | Mark Cartwright, Managing Director, RTIG


There has been a lot of discussion recently about self-driving vehicles in the mainstream media. RTIG Managing Director, Mark Cartwright, explores this area and asks if the human element in public transportation systems should be completely ruled out of future systems.


Automation is going to be a feature of public transport in cities before it spreads into the private vehicle market. The reasons are clear: public transport fleets are regulated and generally well-managed; they run on routes that are defined in advance; and sometimes they even have their own infrastructure. But despite trials such as the European-funded CityMobil2 project, there is no clear definition of how an autonomous bus would operate.

Brave new worlds

The main justifications for autonomous vehicles are well rehearsed: safety will be enhanced through more complete information and quicker response times; travel time will be minimised through the (safe) reduction of gaps between vehicles, increasing effective network capacity; emissions will be reduced through smoother driving patterns; accessibility will be improved by reducing or removing the barriers to vehicle usage; and so on.

The key assumption is that at the core of operation – the decision-and-control point – computerised systems will perform better than people. In many respects, this is true. At the sensory end, systems can be built with much greater ‘visual’ acuity and sensitivity than people, allowing them to detect external activity much earlier. At the motor end, computers have a much faster reaction time than people: a system could, for instance, activate a brake within microseconds, which could make the difference between stopping short and killing someone.

At higher levels of processing the advantages are less clear-cut. It is fairly straightforward for a visual system to recognise a traffic sign, but quite a lot harder to recognise a person, or to estimate the depth of a new pothole. It is even harder to recognise the nuances of behaviour: is that person about to cross the road?

Video analytics is good at the technical level but not so good at interpretation. You would struggle to find a system that could, when shown an episode of Game of Thrones, suggest what might happen in the following episode.

But driving also involves judgment. For instance: suppose you are driving along a road where the speed limit is 80km/h (about 50mph), but it’s dark and raining. What speed do you adopt and how do you slow down as you approach bends? Does this change if there is no separate path for pedestrians? If you are in a rural area does it matter if it is wooded, because of the risk of deer or fallen branches? For a bus, how is this affected by whether or not you are running late?

When things are going well, there are some good computer science answers – a combination of an on-board database of situations, some heuristics, some machine learning, etc. But suppose there is an accident. How is it investigated? If blame is attributable, how is that decided? You can interview a human driver, under oath if appropriate, but you can’t interview a set of software heuristics.
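The kind of rule-based judgment described above can be sketched in a few lines. This is a minimal, hypothetical illustration of a speed heuristic for the dark-and-raining scenario; the factor names and reduction values are assumptions made for the example, not taken from any real autonomous-driving system.

```python
def advisory_speed(limit_kmh: float,
                   dark: bool = False,
                   raining: bool = False,
                   no_footpath: bool = False,
                   wooded_rural: bool = False,
                   approaching_bend: bool = False) -> float:
    """Scale the legal limit down for each risk factor present.

    All multipliers are illustrative assumptions.
    """
    factors = [
        (dark, 0.85),             # reduced visibility
        (raining, 0.80),          # longer stopping distances
        (no_footpath, 0.85),      # pedestrians may be on the carriageway
        (wooded_rural, 0.90),     # risk of deer or fallen branches
        (approaching_bend, 0.75), # limited sightline
    ]
    speed = limit_kmh
    for active, multiplier in factors:
        if active:
            speed *= multiplier
    return round(speed, 1)

# A dark, wet 80 km/h road with no separate footpath:
print(advisory_speed(80, dark=True, raining=True, no_footpath=True))  # prints 46.2
```

The point of the sketch is also its weakness: every rule and every multiplier is an explicit design decision that someone must justify after an accident, which is exactly the investigability problem raised above.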

Erring on the safe side

Doubtless, a legal form of words could be found, something like making ‘reasonable provision for the avoidance of injury or damage’; also, case law will start to build up some assumptions about how autonomous vehicles behave.

Indeed we are just – sadly – embarking on the first step in this journey, as a result of the recent fatal Tesla Model S accident while it was under the control of Tesla’s Autopilot lane-keeping software [1]. It is far too soon to interpret anything about the course of either regulation or jurisprudence from this one example. However, I offer two hypotheses which seem to me to be reasonable.

The first is that makers of autonomous vehicle systems will focus on making their systems safe, as far as possible. The effect of this would be to bring the industry disciplines and assumptions of the rail sector onto the roads. How far that will affect either engineering or regulation – for instance, giving the infrastructure primary control over vehicle movements, or declaring that no vehicles are allowed on the roads unless they have appropriate autonomous systems – is not clear, but these are huge changes and it is not easy to see them happening quickly.

The second hypothesis is that people’s behaviour will change to take account of the new vehicle behaviour and not all of this will be good. A few drivers will abuse the systems, if they can, by executing foolhardy manoeuvres and letting the car recover. A few pedestrians will do the same, deliberately stepping in front of vehicles to make them stop. With luck these problems will be very rare, but they will be there.

To understand this, take a look at an online video [2] and ask yourself whether the participants would really trust each other so much that the cars go through the stream of cyclists and pedestrians without slowing, and vice versa.

Tesla, to its credit, takes great care to emphasise that the driver of a Tesla car is always responsible. That position makes a lot of sense: the system is doing some good new things, making the driver’s life easier. But it’s a position that actually sets its face against autonomy, rather than being a step towards it.

The human element of transport has been important ever since the wheel was invented. Is there really a good reason to rule it out of future systems completely? Would it not be better, at least for the foreseeable future, to work on transport systems that combine the quite different strengths of computers and people?


  1. Since this article was submitted there has been another Tesla crash:

Mark Cartwright is Managing Director of the public transport community RTIG where he has led operations since 2004. For the past 20 years Mark’s main focus has been Intelligent Transport Systems and standards, specifically in the management of national initiatives. Mark began his professional life in the academic world where he taught mathematics at the Universities of Oxford and Nottingham. He has previously worked as a consultant for clients in defence, telecoms, broadcasting, finance and energy sectors, at European, national and local levels. Mark joined Intelligent Transport’s Editorial Board in January 2014.
