Artificial Intelligence in Science and Fiction

How can culture shape the data-driven future of artificial intelligence (AI) and its impact on the human world? AI, by definition, can make decisions autonomously, without direct human control. People can already be affected by AI decisions in areas as varied as education, finance, law and medicine, as well as in digital spaces such as social media and chatbots. Recent scholarship in AI ethics argues that socio-cultural values must be embedded in AI reasoning from the design stage and identifies a need for new methodologies to “elicit the values held by all stakeholders, and make them explicit.” Public consultations, such as those recently conducted by the CSIRO and Standards Australia, seek input from experts, industry, and government bodies, but have typically attracted little input from the general public. Popular culture, particularly science fiction, offers a largely untapped living archive of social attitudes towards and beliefs about AI. This project aims to explore how that archive might be used to promote value-driven design (VDD) in AI by piloting a new methodology that incorporates an ethics-in-literature approach to VDD.

The Team: Dr Helen Young (lead CI), Dr Leonard Hoon, Dr Evie Kendal, Dr Thao Phan, Professor Sean Redmond

A collaboration between the School of Communication and Creative Arts, the School of Medicine, the Alfred Deakin Institute, and the Applied Artificial Intelligence Institute