
2021 Convening: Futures


PROTOTYPING ETHICAL FUTURES SERIES

JUST AI’s Prototyping Ethical Futures Series created an interdisciplinary space of public engagement between stakeholders in the field of data and AI ethics. The week-long programme took place in June 2021 and featured a mix of online and in-person events. The series highlighted different parts of JUST AI’s work and offered a chance to build connections across networks.

OUR PROGRAMME

In the first event of the week-long series, Alison hosted an online conversation with Gina Neff (Oxford), Edward Harcourt (AHRC), and Hetan Shah (British Academy). The panel introduced the JUST AI network map, followed by a discussion of areas and points of potential interconnection that remain underdeveloped or missing, especially from the perspectives of social justice, inclusion, governance, and design, including why intervention is critical to the future development of ethical AI.

In the next virtual panel, Teresa led Jaya Chakrabarti (Vana Project) and Jennifer Gabrys (Cambridge; Smart Forest project) in a discussion of the paradoxical relationships of data and AI technologies to climate and the environment. The JUST AI Racial Justice Fellows from Squirrel Nation acted as respondents.

Alison then led one of two in-person events on the LSE campus using her data walking methodology in which groups of participants explored the concept of ‘climate data’ and its relation to shared matters of concern. 

Paula Crutchlow led a satellite event in Exeter that built on the collaborative practices established in the art and geography research project Museum of Contemporary Commodities to speculate on the potential for trade and environmental justice in pandemic-affected city centres.

Learn more about the Museum of Contemporary Commodities at http://www.moccguide.net/

The data walking research process is documented at https://www.datawalking.uk/

Louise and Alison ran an in-person workshop that used the visualisation produced by the Reflection Prototype to detail the reflective networking process. Participants thought about, mapped, and re-imagined relationships between different types of work in the data and AI ethics space.

The RAR working group streamed a virtual conversation between Dr Alex Taylor (City University), Crystal Lee (MIT), and Mara Mills (NYU) about the automation of accessibility. Each participant drew on themes of refusal when reflecting on the dynamics between refusal and access. Mara offered examples of refusal, including from the Womyn’s Braille Press lesbian separatist Talking Book collection. Alex questioned the underlying logic of a technology-centred view of access, specifically the logic of pairing problems with solutions. Crystal analysed the narratives that tech tells about disability in order to understand the world cultivated through accessible designs.

The Racial Justice Fellows presented their work and discussed their strategies for advocating for racial justice in the field of data and AI ethics, with particular attention to the ongoing challenges of mobilising and promoting social change in the UK and EU. The online panel addressed the need to create intersectional spaces that foster and sustain a network of solidarity, and contemplated the stakes involved in realising and supporting structural change.

The final conversation reflected on the roles and responsibilities of actors such as universities, startup accelerators, and companies in shaping AI and data ethics in practice. Imre presented on the pivotal moments in a technology’s development where ethically relevant choices can lead to different outcomes, and on how future impacts can be anticipated and addressed from an ethical perspective.


ALMOST AI FUTURES SALON

Artificial intelligence and data-driven technologies permeate all aspects of our lives. Their proliferation increasingly leads to encounters with ‘mutant algorithms’, ‘biased machine learning’, and ‘racist AIs’ that sometimes make familiar forms of near-future fiction pale in comparison. In these examples, AI and machine learning tools inscribe a particular future based on predictions from past observations, foreclosing a multitude of other possible futures. Faced with this potential to limit and constrain what might be, JUST AI asked whether fiction and narrative could offer alternatives for how AI could and should be.

We supported fiction writing in our effort to prototype ethical futures. We introduced science-fiction authors to researchers identified through our bibliometric analysis maps and commissioned short stories and essays inspired by their conversations. We presented the genesis of those early-stage encounters at a public event that provided the authors with feedback. Several months later, we held public readings of the final stories, which will be published with a further essay and alongside excerpts from the broader public discussions. The aim of this creative project was to show how public discussion responds to, and creates, different narratives about data and AI futures.
