Artificial and *Human Intelligence* \\The Dyson blog

[Image: Abstract illustration of a luminous red grid receding to a glowing horizon, representing the intersection of artificial and human intelligence.]

In this blog, Professor John Dyson traces his lifelong relationship with artificial intelligence – from K9 in Doctor Who to agentic systems that make decisions without human input – and asks what the principles guiding AI development can tell us about how we should run organisations and design projects.


The purpose you put into the machine

Artificial Intelligence is something I have lived with since childhood, at least in stories, films and television programmes: the ability of a human-made machine to find answers and make decisions independently of its creators.

From Orac in Blake’s 7 and K9 in Doctor Who to HAL in 2001: A Space Odyssey, these imagined self-thinking machines were interesting, amusing, and at times creepy and threatening.

As AI enters a new realm in which artificial systems can make independent decisions, attention has turned as much to the guiding purpose, ethics and morals of such systems as to their technological capability.

In the Ethics Unit of my Business Module, I present and discuss data from a 2018 study published in Nature (the ‘Moral Machine’ experiment). Through surveys conducted worldwide, a team sought to discover who people would prioritise saving in a life-or-death situation: the young, the old, the fit, the lawful, humans over animals? The aim was to investigate the moral decisions that should be embedded in autonomous cars facing dangerous situations. The results showed, unsurprisingly, that different individuals and cultures gave different responses. Whose values do we use? These decisions have real-life implications.

‘We had better be quite sure that the purpose put into the machine is the purpose which we really desire,’ Norbert Wiener wrote in 1960.

In 2021, Stuart Russell OBE (Professor of Computer Science, University of California, Berkeley) used his Reith Lectures to set out principles he had co-developed to make the development of AI positive and safe:

  • Altruism – AI is there solely to improve human outcomes and purpose

  • Uncertainty – the objectives, purpose and outcomes will remain uncertain 

  • Learning from humans – there needs to be a dynamic process, constantly checking back and developing understanding

Why is this important? Russell asks you to imagine that you have a robot looking after your children. You give it the clear purpose of keeping the children safe and fed with home-cooked food. All sounds great. The robot goes to the cupboard and fridge and sees that there is no food in the house. Then it notices the family dog…

These ideas and discussions are vital to the safe development of AI in pursuit of the betterment of all people and the planet, but they hold equally true for human organisations.

In our Design to Value book, the idea of the reductionist ‘brief’ is challenged. The book poses questions to the designer: Do we really spend the time to look carefully at the purpose? Are we working to find and deliver the client’s purpose? And do we accept that narrowing down on that outcome can only come through constant iteration and an evolving understanding? The idea that the purpose and objective can be captured in a document, which a design team (a human intelligent machine) then simply delivers, is as frightening as the robot childminder.

For Design to Value, the rules should be:

  • Altruism – the aim should be to maximise the value to the client, understanding the underlying purpose

  • Uncertainty – the problem statement and value drivers should remain uncertain 

  • Iteration – there needs to be a dynamic process: try, analyse, evaluate

Professor John Dyson

John Dyson spent more than 25 years at GlaxoSmithKline, eventually ending his career as VP, Head of Capital Strategy and Design, where he focused on developing a long-term strategic approach to asset management.

While there, he engaged Bryden Wood and together they developed the Front End Factory, a collaborative endeavour to explore how to turn purpose and strategy into the right projects, which paved the way for Design to Value. He is committed to the betterment of lives through individual and collective endeavours.

As well as his business and pharmaceutical experience, Dyson is Professor of Human Enterprise at the University of Birmingham, focussing on project management, business strategy and collaboration.

Additionally, he is a qualified counsellor and executive coach who looks to bring the understanding of human behaviour into business and projects.
