Artificial Intelligence is something I have lived with since childhood, at least in stories, films and television programmes: the ability of a human-made, artificial machine to find answers and make decisions independently of its creators. From Orac in Blake’s 7 and K9 in Doctor Who to HAL in 2001: A Space Odyssey, these imaginings of self-thinking machines were interesting, amusing, and at times creepy and threatening.
As AI enters a new realm in which artificial systems can make independent decisions, a light has been thrown as much on the guiding purpose, ethics and morals of such systems as on their technological capability.
In the Ethics Unit of my Business Module, I present and discuss data published in Nature in 2018. Through a global survey, a team sought to discover who people would prioritise saving in a life-or-death situation: the young or the old, the fit, the lawful, humans over animals? The aim was to investigate the moral decisions embedded in autonomous cars facing dangerous situations. The results are interesting in that they showed, unsurprisingly, that different individuals and cultures gave different responses. Whose responses do we use? These decisions have real-life implications.
‘We better be sure that the purpose put into the machine is the purpose we really desire,’ Norbert Wiener stated in 1960.
In his 2021 Reith Lectures, Stuart Russell OBE (Professor of Computer Science, University of California, Berkeley) set out principles he had co-developed to make the development of AI positive and safe.
Why is this important? Russell asks you to imagine that you have a robot looking after your children. You give it the clear purpose of keeping the children safe and fed with home-cooked food. It all sounds great. The robot goes to the cupboard and fridge and sees that there is no food in the house. Then it notices the family dog…
These ideas and discussions are vital to the safe development of AI in pursuit of the betterment of all people and the planet, but they hold just as true for human organisations.
In our Design to Value book, the idea of the reductionist “brief” is challenged. The book poses questions to the designer: Do we really spend the time to look carefully at the purpose? Are we working to find and deliver the client’s purpose, and do we accept that narrowing in on that outcome can only come from constant iteration and an evolving understanding? The idea that the purpose and objective can be captured in a document and then simply delivered by a design team (a human intelligent machine) is as frightening as the robot childminder.
For Design to Value the rules should be: