From Pauline Grant, Beaconsfield, Buckinghamshire, UK
Clare Wilson reports a test of people's responses to the philosophical “trolley problem”, now prominent as a way to probe opinions on who a self-driving car should save (19 May, p 14). There are reasons to be cautious about any interpretation of the results.
As Wilson notes, a problem with this test – which offered the choice of giving five mice an electric shock or acting to divert it to one mouse – is that some subjects did not believe the mice would be hurt. Anyone who has been moved by a dramatic scene on TV will know that strong emotions can be evoked even when we know what we are witnessing is not real. With ethics in science now much debated, the time when one could assume a subject believed a false story is surely past.
So the data can be viewed only in terms of what people think they would do in a hypothetical situation. Similarly, seeing railway workers in danger of being mown down by a trolley car falls into the realm of fantasy. By contrast, seeing a child or dog running into the road is something most of us could imagine happening, and we might say that we would run to save the child yet leave the dog to its fate.
This study involved telling subjects what was about to happen, purposely giving them time to think. Real and sudden emergencies elicit an immediate, instinctive response. A 2014 experiment using a virtual reality version of the trolley problem showed an impulse to do something rather than nothing. Post-hoc rationalisations of impulsive behaviour, such as “I decided to sacrifice one person to save many”, can of course be discounted. I wonder what would happen in an experiment where taking action would put a greater number at risk.
Individuals also have a hierarchy of care, so someone might favour a single kitten over several mice. Given all this, does the study really tell us anything that would help to program a self-driving car?
