Interdisciplinary Collaboration in Action at SSN


By Radhika Gorur (PhD)
Associate Professor, School of Education, Deakin University,
and co-theme leader, Data Cultures, SSN

Artificial intelligence (AI) has become the new frontier for global economic competition. The rapid growth of AI and its intimate – often intrusive – relationship with ordinary citizens has confronted us with new legal and ethical situations for which current guidelines are inadequate. Furthermore, publicly available AI building blocks that can be weaponised by civilians are increasingly ubiquitous. Accordingly, experts the world over are developing new ethical guidelines for the development and use of AI. In Australia, CSIRO’s Data61 developed a Discussion Paper, Artificial Intelligence: Australia’s Ethics Framework, for the Department of Industry, Innovation and Science, and invited comment on the Framework. Some of us at the Science and Society Network’s Data Cultures strand were keen to respond to this draft. Being champions of interdisciplinarity, we naturally assembled researchers (listed at the bottom) from a range of disciplines – applied AI, business, communication, criminology, education, health informatics, Indigenous studies, information technology, media & communications studies, medicine, science studies, policy, and political studies. We were tasked by the Deputy Vice-Chancellor Research (DVCR) to develop a draft ‘Deakin Response’ to the Discussion Paper that would form the basis for a Deakin submission to the Government.

Despite our differing disciplinary backgrounds, we soon got to work – nothing like a deadline and the imperative to produce a result by the end of the day to focus the energy! With Leonard Hoon expertly synthesising our discussions in real time, the day proved extremely productive. The morning’s brainstorming and the post-coffee ‘post-it exercise’, in which we each wrote the key points we wanted to raise and posted them under the appropriate categories (thank you, Paul Cooper), ensured that we avoided the HIPPO (Highest Paid Person’s Opinion) syndrome. Finally, the paired work on different sections of the same Google Doc, based on the morning’s deliberations, produced a response that was nearly ready to go. All that remained was for Thao, Leonard and me to clean up the document, reduce the dozen or so pages to the required two (we ended up sending seven), and send it on to Emma Kowal at SSN and thence to others in A2I2 and the DVCR’s and VC’s offices. The Deakin Response was sent out by Vice-Chancellor Jane den Hollander AO.

Our response focused on two key questions:

  1. Are the principles put forward in the Discussion Paper the right ones? Is anything missing?
  2. Do the principles put forward in the Discussion Paper sufficiently reflect the values of the Australian public?

The key points of our response rested on our belief that AI is inherently different from other technologies, and that the ethical, legal and regulatory framework it requires is therefore not just an extension of what is currently in place but a new approach altogether. We also argued that various types of harm minimisation should be actively incorporated by design – through research and development, and in the application and governance of AI.

In particular, we felt the commercial orientation of the framework, centred on ‘cost-benefit analysis’ and ‘net benefits’, was flawed. We argued that sometimes the right thing to do might be costly, but it still ought to be done. Moreover, cost-benefit analyses can mask the fact that costs and benefits are not distributed equitably – some groups might bear the costs so that others can benefit.

Another key issue was that, with AI, ‘opting out’ is not always possible. For example, if driverless cars appear on our roads, no one can opt out of contact with them – even if we abandoned driving altogether and chose to walk, we could still be affected by them. Similarly, ordinary citizens cannot be expected to provide informed consent or to make informed choices with regard to AI – so we argued that the government should bear more of the legal and regulatory burden to ensure that citizens are protected.

The discussions were rich and the issues many – too many to fit into this space. You might find it interesting to read the draft Framework proposed by Data61 and the Deakin response sent out by the Vice-Chancellor.

The interdisciplinary team that worked on the response to the Data61 Framework included the following:

A/Prof Mohamed Abdelrazek, School of Information Technology

Dr Toija Cinque, SCCA Arts & Ed

Dr Paul Cooper, Faculty of Health

Dr Antonio Giardina, Applied Artificial Intelligence Institute

A/Prof Radhika Gorur, REDI, Arts & Ed, SSN

Dr Diarmaid Harkin, Alfred Deakin Institute

Dr Luke Heemsbergen, SCCA Arts & Ed

Dr Leonard Hoon, Applied Artificial Intelligence Institute

Megan Kelleher, Indigenous Pre-Doctoral Fellow, RMIT University

Prof Emma Kowal, Alfred Deakin Institute

Damien Manuel, Director, Centre for Cyber Security

Prof Kon Mouzakis, Applied Artificial Intelligence Institute

Dr Simon Parker, Applied Artificial Intelligence Institute

Thao Phan, Alfred Deakin Institute

A/Prof Sandeep Reddy, School of Medicine

Dr Jeffrey Rotman, Deakin Business School

Prof Rajesh Vasa, Applied Artificial Intelligence Institute

Dr Tyson Yunkaporta, Institute for Koorie Education