Empathy and Debiasing Through Generative AI: Navigating Cognitive Biases, Fake News, and Sociotechnical Agency in the Digital Media Landscape
In an era steeped in digital data and information, societies are inundated with myriad narratives, some of which propagate biased perspectives, fake news, and toxic ideologies, perpetuating filter bubbles and echo chambers (Pariser, 2011; Allcott & Gentzkow, 2017). The ubiquity of social media and digital news platforms not only shapes collective cognition and public opinion but also intensifies the diffusion of cognitive biases (Sunstein, 2017; Bruns, 2019; Zollmann et al., 2021). The surge in hyper-industrialisation and Human-Machine Interaction (HMI) further complicates the dynamics of this informational environment (Brynjolfsson & McAfee, 2014). The agency exerted by AI, particularly Large Language Models (LLMs), and by digital media platforms in shaping and modulating these dynamics is pivotal yet inadequately understood (Caliskan, Bryson & Narayanan, 2017).

This project seeks to bridge the gap between technologically driven information dissemination and cognitive de-biasing, shedding light on the potential and pitfalls of employing LLMs to navigate the complex digital media environment. It aims to provide empirical insights that could inform ethical guidelines, regulatory policies, and developmental frameworks for leveraging AI to mitigate cognitive biases, foster empathy, and ensure veracity in digital information landscapes. Furthermore, it endeavours to illuminate the interplay between AI agency, human cognition, and digital media platforms, thereby contributing to discourse on ethical AI use, sociotechnical systems, and information integrity in the digital age.
The Team: Associate Professor of Communication Toija Cinque of the School of Creative Arts has teamed up with Allan Jones, the General Manager for Software Engineering at Deakin’s Applied Artificial Intelligence Institute (A2I2).