Artificial Intelligence is something I have lived with since childhood, at least in stories, films and television programmes: the ability of a human-made, artificial machine to find answers and make decisions independently of its creators.

From Orac in Blake’s 7 and K9 in Doctor Who to HAL in 2001: A Space Odyssey, these imaginings of self-thinking machines were interesting, amusing and, at times, creepy and threatening.

As AI enters a new realm in which artificial systems can make independent decisions, attention has turned as much to the guiding purpose, ethics and morals of such systems as to their technological capability.

In the Ethics Unit of my Business Module, I present and discuss data published in Nature in 2018. Through a global online survey, a team sought to discover who people would prioritise saving in a life-or-death situation. Was it the young, the old, the fit, the lawful, humans over animals? The aim was to investigate the moral decisions embedded in autonomous cars facing dangerous situations. The results are interesting in that they showed, unsurprisingly, that different individuals and cultures gave different responses. Whose responses do we use? These decisions have real-life implications.

‘We had better be quite sure that the purpose put into the machine is the purpose which we really desire,’ Norbert Wiener stated in 1960.

In 2021, Stuart Russell OBE (Professor of Computer Science, University of California, Berkeley) used his Reith Lectures to set out principles he had co-developed for making the development of AI positive and safe.

  • Altruism – AI is there solely to improve human outcomes and purpose
  • Uncertainty – the objectives, purpose and outcomes will remain uncertain
  • Learning from humans – there needs to be a dynamic process, constantly checking back and developing understanding

Why is this important? Russell asks you to imagine that you have a robot looking after your children. You give it the clear purpose of keeping the children safe and fed with home-cooked food. All sounds great. The robot goes to the cupboard and fridge and sees that there is no food in the house. Then it notices the family dog…

These ideas and discussions are vital to the safe development of AI in pursuit of the betterment of all people and the planet, but they hold equally true for human organisations.

In our Design to Value book, the idea of the reductionist “brief” is challenged. The book poses questions to the designer: do we really spend the time to look carefully at the purpose? Are we working to find and deliver the client’s purpose, and do we accept that narrowing down on that outcome can only come from constant iteration and an evolving understanding? The idea that the purpose and objective can be captured in a document and then simply delivered by a design team (human or intelligent machine) is as frightening as the robot childminder.

For Design to Value the rules should be:

  • Altruism – the aim should be to maximise the value to the client, understanding the underlying purpose
  • Uncertainty – the problem statement and value drivers should remain uncertain
  • Iteration – there needs to be a dynamic process: try, analyse, evaluate

Professor John Dyson spent more than 25 years at GlaxoSmithKline, eventually ending his career as VP, Head of Capital Strategy and Design, where he focussed on developing a long-term strategic approach to asset management.
While there, he engaged Bryden Wood and together they developed the Front End Factory, a collaborative endeavour to explore how to turn purpose and strategy into the right projects – which paved the way for Design to Value. He is committed to the betterment of lives through individual and collective endeavours.
As well as his business and pharmaceutical experience, Dyson is Professor of Human Enterprise at the University of Birmingham, focussing on project management, business strategy and collaboration.
Additionally, he is a qualified counsellor with a private practice and looks to bring the understanding of human behaviour into business and projects.
To learn more about our Design to Value philosophy, read Design to Value: The architecture of holistic design and creative technology by Professor John Dyson, Mark Bryden, Jaimie Johnston MBE and Martin Wood. Available to purchase at RIBA Books.