The Trolley Problem is not a synonym for autonomous cars


Published: 25 Sep 2016

The Trolley Problem is an interesting philosophical construct for showing how humans respond to different ethical circumstances. It can even be used to interrogate both Kantian and utilitarian ethical views by slightly changing the position of the participants or the character of the people in danger. As described in many recent articles1 concerning autonomous vehicles:

The Trolley Problem consists of a train (trolley) moving down its track towards five people working on the track. There is no way to slow the train or warn them; they are certain to perish. However, as a person on the train, you are able to switch the train onto a second track. This second track, though, has one person working on it: what do you do?2

You may or may not have concluded that the best overall outcome would be to change tracks, but in doing so would you become implicated in a death yourself, and how should you feel about that? There are, of course, many variations of this scenario: imagine the one person is a member of your family, or the president, or that in a different version you must literally push someone onto the tracks, or not, in order to save five others. This is all academic, and ethicists use it to illustrate how morals shape the way we act in the world. In relation to automated vehicles it can show how these vehicles may face similar situations in the future, and how such problems are sometimes unsolvable or simply morally complex.

This, perhaps, is where the usefulness of the comparison ends.

The problem was intended to interrogate responses, not necessarily to encode them. I believe the Trolley Problem is so widely used in discussions of the ethics of automated vehicles that it is not just misleading but actively harming the way people think about the technology, and, more importantly, this is showing in the way legislators talk and write about it.

Here are some of my reasons:

1. In the Trolley Problem there are only two options: switch the track or don’t (depending on which version of the problem you are looking at). A car, like most other vehicles on today’s roads, can move independently of any track, giving it a huge amount of two-dimensional freedom – a car can simply move in more ways than a train.

2. The Trolley Problem plays out as if in a vacuum: the workers on the track never move out of the way, the train never slows, and no other people or events can intervene. In our physical world we don’t have that luxury.

3. Other people and vehicles have their own agendas and take their own independent actions. Imagine, for instance, a vehicle moving towards five oncoming cyclists. This is obviously a nightmare situation, but in a Trolley Problem style vacuum we can say that the vehicle continues in a straight line, the cyclists ride towards it also in a straight line, and there is just enough time to move to the left or the right of them. Now imagine the same situation in real life: if you have ever played chicken, perhaps in a bumper car, you know it is very unlikely that none of the five cyclists would panic and suddenly change course, swerve, fall over or otherwise react. The onboard computers in these cars are simply not going to be able to read the situation continuously and simulate the best outcome – it is like counting sand on a beach. Every single action counts as new input and requires the simulation to be rerun: sum up all the ways the car could react, try (if at all possible) to take into account how the other five will react, and then choose a new action. This would waste precious time, as it would probably take longer to compute than the time the vehicle has to react (the sketch after this list makes the arithmetic concrete).

4. As with the simulation argument above, current vehicles aren’t able to sense the world in enough detail to even start making assumptions about the next second (let alone other simulated futures); they have a hard enough time simply following a road or identifying a human amidst other objects.

5. Lastly, how are we to hard-code a reaction or “guarantee” a certain ethical decision within this world of infinite possibilities? Let’s say, by some miracle, the computer inside the vehicle was able to process a series of possible outcomes – now what? Does it store them onboard, or send them off to the cloud somewhere (and wait again), so that some database can look up the particular circumstance the vehicle is in, find the entry that all of humanity has verified as the right one, or at least the best in the situation, and then still have time to put it into action and hope the other people don’t get in the way of the vehicle’s avoidance tactics…?
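To make the arithmetic in point 3 concrete, here is a rough, purely illustrative Python sketch. The action sets, the half-second reaction budget and the per-rollout cost are my own assumptions, not measurements from any real vehicle or planner; the point is only that even a tiny menu of moves multiplies into thousands of joint futures, each of which would need re-simulating the moment anyone wobbles.

```python
# Purely illustrative numbers -- assumptions for this sketch, not data
# from any real vehicle or planner.
REACTION_BUDGET_S = 0.5      # assumed time available before the car must act
COST_PER_ROLLOUT_S = 0.01    # assumed cost of simulating one joint future

CAR_ACTIONS = ["brake", "swerve_left", "swerve_right", "continue"]
CYCLIST_ACTIONS = ["hold_line", "swerve_left", "swerve_right", "stop", "fall"]


def rollouts_needed(n_cyclists: int) -> int:
    """Every combination of what the car and each cyclist might do next
    is a separate future the planner would have to simulate."""
    return len(CAR_ACTIONS) * len(CYCLIST_ACTIONS) ** n_cyclists


def can_replan_in_time(n_cyclists: int) -> bool:
    """Can all those futures be simulated within the reaction budget?"""
    return rollouts_needed(n_cyclists) * COST_PER_ROLLOUT_S <= REACTION_BUDGET_S


if __name__ == "__main__":
    for n in range(1, 6):
        print(f"{n} cyclist(s): {rollouts_needed(n):>6} futures to simulate, "
              f"fits in {REACTION_BUDGET_S}s budget: {can_replan_in_time(n)}")
    # With 5 cyclists: 4 * 5**5 = 12,500 futures, roughly 125s of simulation
    # against a 0.5s budget -- and every swerve or wobble forces the loop again.
```

And this toy version ignores the car’s continuous steering and braking options, sensor noise, and the fact that the whole calculation would have to be repeated many times per second as the scene changes.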

Hopefully you can see how this has got massively out of hand – the technology is simply not good enough to “see” everything, let alone “simulate” everything, or, even worse, to ask us to make a decision about the millions of potential situations that would need our ethical guidance.

So where does this leave us?

Well, firstly, we should probably stop using the Trolley Problem as an analogy for automated cars – it isn’t one; it’s a thought experiment. Next, we could look at lots of real-world situations that automated vehicles may find themselves in and simulate them along with the kind of data the vehicles would be getting onboard. At that point we can try out lots of scenarios and adjust the vehicle and its technology, the systems, the environment (e.g. the roads), and the general public accordingly.

Let’s go back to my example of the five cyclists. From looking at that example and simulating it a few times (and maybe even trying it in real life), we might find that some useful design improvements can be made. For example, the vehicle could better warn people via a loudspeaker and some lights, indicating its intention to manoeuvre to the right of them, while simultaneously bracing the passenger within the vehicle; and everyone involved could be educated beforehand about these new abilities so as to reassure them.

Of course, this is a brief look at some solutions to a specific case, which will need more time and effort to get right. However, I do believe hard-and-fast ethical implementations (hard-coded rules) are not possible for automated cars – thoughtful design, observation and education, on the other hand, will help us enormously.


1.

2.



Illustrations by Nicholas Willsher

Part 2 of my series on autonomous cars
