Viewers of regional news in the West Midlands will no doubt have seen the very recent story regarding the deployment of driverless buses to ferry passengers to and fro at Birmingham International Airport.  The inaugural journey attracted much publicity, with the system’s developers, together with local politicians, enjoying the ride.

Driverless vehicles are not new, but they remain extremely limited in number and scope.  Were that not the case, we would not have had this event as a “news” story!  The reasons behind their relative rarity are twofold.  First, on a technical level, there are many requirements such vehicles must meet to be considered safe and roadworthy – typically involving sensors (including infrared for night-time travel) and computer back-up should there be a malfunction.  Second, there is a confidence or perception consideration.  Observers will point out that we readily place our safety in the hands of a stranger when we board a train, bus, or taxi, and yet we are, statistically, far more reluctant to submit our welfare to an unseen but incredibly accurate and stress-tested microprocessor.  In this context, it is arguable that the technical issues are more easily surmountable than the psychological ones.

You are probably familiar with the quotation that “children must be taught how to think, not what to think” (Margaret Mead).  With driverless vehicles, we understand that there is primarily a “what to think” approach to programming.  Driverless cars are programmed with the rules of the road – speed limits, signage, rights of way etc. – and also with the technical specifications of the vehicle, and how those may be affected by differing driving conditions such as icy roads or cold brake discs.  In this respect, the computer controlling the vehicle has far more knowledge than any bus or taxi driver, or any other driver whose vehicle we may enter.  I think it is pretty safe to state that we all “get” that.
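To make the “what to think” idea concrete, here is a deliberately toy sketch (not any real vehicle’s software – the road types, speed limits, and condition multipliers are illustrative assumptions) showing how such rules might be supplied as fixed data that the program simply looks up and applies:

```python
# Toy illustration of rule-based "what to think" programming.
# All figures below are assumptions for illustration only.

SPEED_LIMITS_MPH = {"motorway": 70, "single_carriageway": 60, "urban": 30}

# Hypothetical multipliers: stopping distances lengthen in poor conditions.
CONDITION_FACTOR = {"dry": 1.0, "wet": 2.0, "icy": 10.0}

def safe_speed(road_type: str, condition: str) -> float:
    """Return a target speed (mph), reduced for the driving conditions."""
    limit = SPEED_LIMITS_MPH[road_type]
    # Crude heuristic: scale speed down by the square root of the
    # stopping-distance factor, never exceeding the legal limit.
    return round(limit / CONDITION_FACTOR[condition] ** 0.5, 1)

print(safe_speed("urban", "dry"))  # 30.0
print(safe_speed("urban", "icy"))  # 9.5
```

The point is not the arithmetic but the shape of the program: every rule is encoded in advance by someone else, which is exactly the “what to think” character of the approach.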

Where driverless cars struggle to convince us is at the psychological level.  After all, we can often see the taxi or bus driver and make a quick, superficial appraisal of their fitness to drive us.  Crucially, we also acknowledge that they have a “stake in the game” too.  Their own personal safety is a sign of their commitment regarding ours, as is their personal accountability should something happen.  Here, the driverless vehicle is, by definition, conspicuous by its absence.  Maybe that’s why the term “autonomous” (suggesting a positive quality) rather than “driverless” (suggesting a deficit) is becoming the preferred term?

Finally on this, I read Yuval Noah Harari’s work “Sapiens” – in which he refers to the challenges of programming such vehicles.  The problems do not relate to the Highway Code or braking distances.  Relatively speaking, those ‘knowledge-rich’ aspects of learning are easy for the computer.  Far more challenging, and for me more interesting, are the philosophical and ethical challenges associated with coding an ‘autonomous’ vehicle (which, of course, is not truly ‘autonomous’ if it is programmed!).  Harari describes going into a car retailer showroom to peruse the latest models.  “Would sir want the Renault Egoist (which would, in the event of an impending accident, save the owner at whatever cost to other road users and pedestrians) or the Renault Altruist (which would make decisions to preserve whichever lives it could calculate most likely to be saved, whether that included the owner or not)?”  Which car would you buy?  If you bought the “Altruist” would you feel safe?  And if you bought the “Egoist” would you feel guilty?
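What makes Harari’s thought experiment so unsettling is how little code separates the two policies.  A toy sketch (the scenario, names, and numbers are invented for illustration – no manufacturer ships anything like this) might look as follows:

```python
# Toy sketch of Harari's hypothetical "Egoist" vs "Altruist" settings.
# Entirely illustrative: the policies differ by one line of code.

from dataclasses import dataclass

@dataclass
class Outcome:
    owner_survives: bool
    total_lives_saved: int

def choose(outcomes: list[Outcome], policy: str) -> Outcome:
    """Pick an outcome according to the configured ethical policy."""
    if policy == "egoist":
        # Save the owner first, at whatever cost to others.
        return max(outcomes, key=lambda o: (o.owner_survives, o.total_lives_saved))
    if policy == "altruist":
        # Maximise total lives saved, whether that includes the owner or not.
        return max(outcomes, key=lambda o: o.total_lives_saved)
    raise ValueError(f"unknown policy: {policy}")

swerve = Outcome(owner_survives=False, total_lives_saved=3)
brake = Outcome(owner_survives=True, total_lives_saved=1)

print(choose([swerve, brake], "egoist").owner_survives)       # True
print(choose([swerve, brake], "altruist").total_lives_saved)  # 3
```

The hard part is plainly not the programming: it is deciding which `key` function a society is prepared to live with.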

We live in a world increasingly controlled by technology, and that technology is programmed with default settings – whether it is the number of rings on our mobile phones before they switch to voicemail, or the channel settings on the TV, or the preferred route calculation methodology on a SatNav.  Very few of us have the inclination or technical prowess to change them.  We stick with the default – where someone we do not know has decided which configurations to programme into our devices.  That is why we are seeing a rise in the prominence of coding and programming in our schools and society of course, but also why we are seeing a heightening of interest in ethics and philosophy.  The philosophy of “trolley dilemmas” is fascinating, and well worth a look if you are unfamiliar.  Their study has produced a field in its own right: trolley-ology!  (The Wikipedia article on the trolley problem is a good starting point.)

Our young people therefore need not only to know how to code, but what to code.  These decisions have far-reaching consequences.  Computer Science studies should always include a compulsory module on moral philosophy!

As always, thanks for reading.