AI Forum
The AI Forum provides a space open to all members of the department (students and staff) for discussion of generative AI use and non-use, including the ethics, political economy, and micro-social lived experiences of AI technologies. We are especially focused on considering the role of AI in academic settings, aiming to discuss how we envisage AI being part of our current and future professional lives, though we will also discuss the personal impacts of generative AI and how we envision longer term futures and broader social changes in relation to these technologies. No advance reading is required or expected—instead, each session will begin with an introduction to the specific topic and then open up for discussion. We will eventually upload further reading suggestions to a Moodle Page—participants are welcome to make suggestions to include in this list.
Convenors: Luke Hawksbee (lh372@cam.ac.uk) and Isabelle Higgins (irth2@cam.ac.uk)
Lent Term dates: 22nd January, 5th February, 19th February, 12th March
Time and location: 12.30–2pm, Board Room (Sociology Department)
Michaelmas term
Introductory topics
1. AI is in everything, for better or worse
In this session we will explore the wide range of locations in which generative AI technologies and services now intersect with academic and personal life for both staff and students. These locations include but are not limited to: Google search and translate, Microsoft Windows and the Office suite of programs, Adobe software (including Acrobat’s AI summary but also AI editing on the Creative Cloud suite of programs), Meta AI chatbots, research tools like Semantic Scholar, and data analysis software packages. We’ll also look briefly at other sectors in which generative AI is being used differently—its potential use in HR and hiring, customer service, and healthcare, for example. As well as delineating where generative AI is available for use, we’ll ask what the implications of such use are for our own academic practice. We will therefore consider not only the locations of AI technologies, but the functions of these technologies in the context of our everyday lives.
2. AI ‘generation’: considering output quality
In this session we’ll consider the ‘generative’ nature of AI technologies, exploring how they work in practice, and how the ideas of both ‘generation’ and ‘artificial intelligence’ are complicated discursive constructs. We will discuss the difficulty of understanding what occurs within the ‘black box’ (e.g. are ‘reasoning models’ actually reasoning?), as well as looking to empirical work which critiques the ‘black box’ as a metaphor that obscures technological function, and interrogate the purpose and limitations of Turing tests and CAPTCHAs. We’ll also explore the technical realities of generative AI—‘training’ and ‘prompting’ large language models is a process that raises ethical and normative questions, and we’ll focus particularly on three of these, which pertain to data privacy, intellectual property (in both inputs and outputs), and bias. As the design and use of generative AI raises these fundamental challenges, we’ll ask how and whether the functions of these technologies contradict or complement the aims of academic practice and particular standards and expectations within higher education, including citational practices, research ethics, and the processes of peer review and examination.
3. Truth and falsehood in a world of AI
In this session we will think about how AI affects our sense of truth and falsehood in various ways, including the increased ease of spreading misinformation, the ‘liar’s dividend’, and the impact of industry PR efforts to create ‘AI hype’ (both positive and negative). We will also consider the dangers of ‘AI psychosis’ and the assumed authoritative status of AI (especially ChatGPT and Grok). With this in mind, we’ll ask what we might realistically hope to use generative AI for in our academic work, and which other claims or ideas about the productive potential of these technologies we might wish to critically interrogate. We will also consider how students and researchers should navigate the broader epistemic context created by the widespread use and awareness of generative AI, and how our (in)actions might have the potential to contribute to wider discussions and public sphere understandings of these technologies.
4. Personal and professional relationships in a world of AI
In this session we’ll consider the micro-social interactions being shaped by generative AI, and the impact of these interactions on our teaching and learning. We’ll highlight a wide range of ways that generative AI ‘chatbots’ can be used in personal settings, including discussion of Character.ai/Replika and the use of ChatGPT to interpret/review/plan/draft messages or conversational scripts. We’ll also consider how other forms of social interaction might be shaped by both the idea of—or the actual use of—generative AI. This could include concern over Copilot screenshots of private messages, transcribing of conversations, technologies such as Humane/Friend which are designed to listen and respond to auditory environments, AI glasses which record and/or shape wearers’ sight, and the relative ease of creating deepfakes, including non-consensual pornographic content, of others that a user has photographic and video footage of. How do all these technological affordances and potential use cases shape the environment in which we work with one another? What might we do to respond to these shifts in our everyday working lives?
Lent term
A deeper dive into the realities of AI, and what is rendered invisible in many of these discussions
1. The AI bubble: too big to fail, too limited to succeed?
In this session we will consider the economic context in which generative AI development is taking place. As generative AI technologies become normalised and commonplace in everyday academic life, it is worth considering the political and economic structures upon which they rely. We will discuss VC funding subsidising costs, the exposure of the US stock market (and in particular the largest companies: Microsoft, Alphabet, Nvidia, etc) to AI, AI’s failure to generate transformative change in most industries, the severe limitations of the technology (including agentic AI) in performing many tasks, the failure of AI firms to generate profit, the diminishing returns observed on technological improvement, the rising costs of running models, and the way AI firms have begun wedding themselves to state bureaucratic infrastructure (e.g. through the memorandum signed by the UK government). With this in mind, we’ll discuss whether ‘future proofing’ academia means training ourselves and each other for careers in AI-dominated industries, or whether we should be sceptical that the generative AI sector will continue to grow and transform higher education and professional spheres.
2. AI and the environment
In this session we will consider the realities of using generative AI in relation to a range of environmental implications, including energy use, cooling, creation of fixed capital, the impact of data centres on local communities (including water access and noise pollution), and e-waste disposal. In particular, we will address estimates of the scale of emissions (and disputes over this). We’ll consider too the comparisons made in attempts to contextualise and justify this environmental impact, and the types of methodologies that are often used to make these claims. With this wide range of environmental impacts in mind, we’ll ask what ‘responsible’ AI use might look like in educational settings, especially in higher education contexts where climate commitments are institutionally made.
3. AI and labour
In this session we will turn to consider the many communities implicated in making AI ‘work’, with particular attention paid to those often rendered invisible in public-sphere discourses of the technology. We’ll consider here the chains of labour that make the technologies useable, from manual manufacturing work, to low-paid ‘data training’ roles, to workers who imitate AI at a distance so that it is usable, to the wide range of professionals (from those with computational skills to those with sales and marketing roles) working in ‘big tech’ firms, to those in other professional settings in charge of acquiring such technologies and making them palatable and usable in specific employment contexts. Many—and often all—of these stages precede the labour of users, whose everyday interactions with the technology can also be read as a form of labour, in terms of their provision of data and training input to the technological services they engage with. Considering these labour-based assemblages in relation to educational settings raises a range of questions. Teachers and learners are choosing to engage with and use AI technologies and services to reduce their own labour, but does this choice appear different when such labour reduction efforts are placed in the context of the labour of others? And how is such labour differentially distributed with regard to geographic location, social class, gender, racial identity, disability and histories of colonial resource extraction? We’ll also consider the arguments made by those who believe that students should now be learning to labour with AI in order to prepare themselves for a workforce in which AI skills are not only desired, but required, and ask whether this belief changes the nature of academic labour itself.
4. AI and the production of knowledge and culture
In this session we will turn to consider how generative AI outputs—which can be textual, verbal, musical, visual, photorealistic and video-based—raise questions about both knowledge production and cultural production. These capacities of generative AI technologies—to create ‘new’ outputs based on user inputs—raise particular questions for industries concerned with labouring to create cultural or knowledge products, which often are physically intangible but play an important role in reflecting, strengthening, critiquing and/or challenging particular societal fields, forces and relations. Does the goal of ‘knowledge production’ now change if generative AI technologies are able to probabilistically generate outputs in fractions of a second? How are the concepts of ‘expertise’ and ‘creativity’ challenged by the capacities of these technologies? As part of this discussion, we will consider the importance of understanding the meaning of terms, concepts and debates, and contrastingly, the importance of sensing and experiencing aspects of the physical world. We will discuss whether generative AI technologies, which have no access to these aspects of human experience, can be viewed as the same type of knowledge or cultural creator.
Easter term
Placing AI into political, ideological and historic context
1. The ideology and politics of AI
In this session we will consider the underpinning ideologies of Silicon Valley and techno-utopians, and the political affiliations/leanings of AI leaders (such as Sam Altman, Mark Zuckerberg and Elon Musk) and their involvement in lobbying. We will discuss how ideological positions can become ‘encoded’ into technologies and consider whether engagement with these ideologies helps us to further understand how and why the technologies function in the manner that they do. Such ideologies have long histories, and we will look, therefore, to the twentieth century to trace the development of what is now known as ‘AI’ over time, and in context. We’ll also discuss the variety of regulatory, civil society and academic industries that have grown around the AI sector. How are different groups seeking to regulate, enable or challenge such technologies? How do we see our responsibility and roles within the academy as aligning with or separate from any of these groups and their motivations?
2. A brave new world of AI?
In this session we’ll explore some of the similarities and differences between AI and prior technological revolutions (the industrial revolution, electrification, personal computing, the internet, social media, crypto, etc), as well as some of the ways that communities have responded to these changes, sometimes through refusal (e.g. Luddism) and sometimes by embracing the technologies in question. A key focus of this session will be asking how far these historical parallels are accurate and whether they might obscure important differences between generative AI and prior technological ‘revolutions’. Exploring and making sense of the accuracy of these parallels matters because an uncritical reliance upon them might shape how we make sense of, accept, or reject the incorporation of generative AI into academia.