Academics’ uses of GenAI – what is appropriate … and what is not?

[Image: a Lego minifig with a red shirt and blue pants and an open head; inside the open head is a smaller mini-minifig, controlling the big minifig. Image from Lance Chang | unsplash.com]

Although universities remain keen to tackle inappropriate uses of generative AI (GenAI) by students, it is becoming increasingly clear that we need to talk much more openly about academics’ (mis)use of these technologies.

Many Research Whisperer readers might have begun to hear stories of GenAI being used to write academic papers and research proposals, to analyse research data, to write peer reviews and many other key tasks that university researchers and teachers find themselves involved in.

To date there has been very little open discussion about this fast-developing aspect of academic work. Aside from sporadic social media posts expressing dismay at fake references appearing in publications, the full extent of academics’ reliance on GenAI remains largely unspoken about in public – an uneasy matter of ‘don’t ask, don’t tell’.

Yet such furtiveness is beginning to undermine confidence in some of the fundamental tenets of academic work. Clearly, growing numbers of academics are now turning to GenAI to help them complete work that, only a few years ago, they would not have dreamt of outsourcing to someone else.

For the vast majority, GenAI undoubtedly offers a way of coping with ever-growing workloads, pressures to publish, mounting piles of marking, and the overall stress of contemporary academic work.

Yet this creeping use of GenAI clearly comes at a cost. By turning to GenAI to manage workloads, are we simply covering up structural problems with contemporary academic work that need to be properly addressed? Conversely, how are these surreptitious uses of GenAI beginning to alter the moral and ethical basis of academic work?

These are all things that the academic community needs to start talking about openly. When does GenAI use cross the line from being a way of ‘working smarter, not harder’ to undermining academic integrity and professional standards? What academic tasks might we agree are wholly appropriate to outsource to GenAI? What guidelines should exist around the co-creation of academic content with GenAI? What aspects of our work do we collectively consider must remain the preserve of human expert academics?

In an effort to get a collective conversation going, we are running a very short survey asking academics to tell us about their GenAI ‘grey areas’ – i.e. how you have been using GenAI in ways that you feel unsure about … or perhaps suspect that some of your colleagues might not wholly approve of.

Inspired by the tech journalist Brian Merchant’s work canvassing testimonies from different professions about how GenAI is ruining their work, we are running a totally anonymous and very short survey where colleagues can share ‘grey area’ examples of what they have been doing with GenAI and what has motivated such uses.

If you are able to share your own examples, we’d be really grateful – the more openly this fast-changing aspect of academic work can be talked about, the sooner we can get on with the hard work of collectively working out some ground rules and shared expectations.

Here is a link to the survey – we promise that it won’t take more than a few minutes to complete!

GenAI is not going to disappear from academia for some time yet – it is up to us to work out together how we want to make best use of the technology in ways that strengthen (rather than compromise) our academic work.

——————————

Neil Selwyn has been researching and writing about digital education since the mid-1990s. He is currently a professor at Monash University, Melbourne. Recent books include: ‘Digital Degrowth: radically rethinking our digital futures‘ (Polity 2025), ‘Critical Data Literacies’ (MIT Press 2023, w/ Luci Pangrazio), and ‘Facial Recognition’ (Polity 2022, w/ Mark Andrejevic). 

Bluesky:  @neilselwyn.bsky.social
Mastodon: @neil_selwyn@mstdn.social
