My maternal grandmother was born in 1901 and died in 1995. When our three sons were small, we used her lifespan as an anchor for twentieth-century history and to illustrate how much had changed over the course of one life: from a world where transport was dominated by horses to one of ubiquitous cars; from being born years before the Wright brothers made their first successful flight to travelling across the world in a plane whose fuselage could encompass that entire first flight. She lived through two world wars as well, and saw the Cold War take men to the moon, on the back of a direct descendant of the rockets that rained down on London.
I used to say that no single lifetime had ever encompassed so much change before, and that it seemed unlikely to happen again. Now I'm not so sure.
Three stories struck me from The Times and the Today programme on Radio 4 on the day of writing this blog. The first was the likelihood that the first commercial autonomous cars would have a switch on the dashboard (oh, how archaic that language sounds) allowing the owner to select a range of behaviours for the car: from "egotistical", where in extremis the car would protect its occupant at almost any cost (the example given was ploughing into a group of pedestrians rather than risking harm to its occupant), through to "altruistic", where the vehicle would avoid doing harm to others in every way open to it. My first thought was that your insurance company might offer very different cover, and premiums, for the two extremes; my second was that it is a neat solution to the long-discussed problem of the "moral" behaviour to be adopted by self-driving cars. At a stroke the problem goes away, by letting the human choose. We are used to taking moral decisions, and usually do so in a fuzzy, analogue way, not least by never imagining that the worst will actually happen to us anyway.
The second story was about AlphaGo, an AI that learns strategies to win the game of Go having been taught only the simple rules. The interviewer on Today asked how long it had taken for the AI to learn strategies that now made it unbeatable. The project's spokesperson said that the time was in many senses meaningless, but that the software had played Go 39 million times to acquire its insight. It emerged that this had taken a matter of a week or so. No human player will ever accumulate so much experience, nor be so well equipped to learn from it.
The change my grandmother saw was largely in the physical world: a world of transport, of home appliances being invented and then becoming better and cheaper. The change we will see over the next few years is very different. It is moral in character, driven by AI and by insights from data. Someone has to program a car to behave in a certain way in a situation where someone will get hurt. Who gets hurt, how badly, and how many people? These are very difficult questions for humans to answer, and the economically least damaging solution (which could perhaps be computed immediately before the crash) is unlikely to fit within our current moral framework.
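The point can be made uncomfortably concrete in code: the "dashboard switch" reduces to a single number weighting other people's harm against the occupant's. A minimal sketch, in which every name and harm score is hypothetical and drawn from no real vehicle system:

```python
from enum import Enum

class Ethics(Enum):
    # Hypothetical dashboard settings: the weight given to harm to others.
    EGOTISTICAL = 0.0   # occupant's safety dominates
    ALTRUISTIC = 1.0    # others' harm counts equally with the occupant's

def choose_manoeuvre(options, setting):
    """Pick the manoeuvre with the lowest weighted harm score.

    Each option is (name, harm_to_occupant, harm_to_others),
    with harm on an arbitrary 0-10 scale.
    """
    w = setting.value
    return min(options, key=lambda o: o[1] + w * o[2])[0]

options = [
    ("swerve into barrier", 6, 0),  # hurts the occupant, spares others
    ("brake straight on", 2, 8),    # protects the occupant, endangers others
]

print(choose_manoeuvre(options, Ethics.EGOTISTICAL))  # brake straight on
print(choose_manoeuvre(options, Ethics.ALTRUISTIC))   # swerve into barrier
```

Even this toy version shows why an insurer might price the two settings so differently: the whole moral dilemma has been compressed into the choice of one weight.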
Isaac Asimov formulated his Three Laws of Robotics in a few dozen words. The US Constitution runs to a few thousand words. The EU treaties we are currently trying to negotiate our way out of run to a few million words. Software runs to tens of millions of lines of code. As we move from an analogue world to a digital one, we need to handle every possible case and know what the right thing to do is in each. This is far from trivial, and will need AI, tempered by humanity. Can we expect a successor to AlphaGo to learn humanity?
The final input today concerned Jo Johnson, the Universities Minister and a speaker at a previous CIO Connect event. He has stated that UK universities, in order to be recognised as such, must guarantee free speech in their constitutions and outlaw the "safe space" mentality that has "no-platformed" worthy people who dare to challenge the current orthodoxy. Perhaps we might hope to reclaim language too.
In my opinion this is critical if we are properly to get to grips with the moral issues surrounding AI. Whether the detractors or the proponents are right, AI will change things; it is up to us to ensure it is for the good, not the bad. That debate and discussion need the finest minds of our times to engage without "political correctness" interfering with clarity of thought and of expression. These questions are too important to be left to technologists to decide alone. We need politicians and poets of all persuasions to be involved too. And not all of the thoughts that emerge will be palatable.
The changes that come about in society as a result of AI will make the 20th century seem slow and stagnant in comparison.