The role of GenAI in academic interactions

Introducing two recent white papers about the use of GenAI in academia


In January 2025, I posted two white papers about the use of Generative Artificial Intelligence (GenAI) on my website. I also commented on its use in our CYGNA end of 2024 celebration, where some groups used it to create academic Christmas carols.

Most of the discussion about GenAI in academia to date has focused on the whys and hows of (not) using it for an increasingly wide range of teaching, research, and service/leadership tasks. However, in these three posts I focus primarily on the (mis)use of GenAI in human interaction.

Academic etiquette & service in the age of GenAI

The first post is a new instalment in my series of blogposts about academic etiquette. In it, I first discuss the etiquette involved when asking someone to do “academic service work” for you. Like many senior academics with a public profile, I am overwhelmed with requests for this type of work. I am often baffled by how complete strangers feel free to ask me to do things for them that – if I accepted them all – would easily fill my entire working week.

Personally, I would only ever ask these favours of people with whom I had built up a relationship over several years, or where I was able to demonstrate clearly how the activity would benefit them too. Where possible, I would offer to do something for them in return, however symbolic it might be. Remember you are asking them a favour!

Recently, however, there seems to be something darker going on with these academic service requests. They are starting to sound a bit “off” and leave me with an uncomfortable and “icky” feeling. I very strongly suspect the senders have asked ChatGPT to “write an email flattering a famous professor into [doing something for me]”.

In the past, I have complained about not getting any response when helping others (see: Thank You: The most underused words in academia?). But now I am starting to wonder what's worse: knowing that someone doesn't value you enough to give you a response, or knowing that they are happy to trick you into thinking they do. The former is just rude; the latter is deceptive and rude!


Using GenAI in social media engagement

The second post analyses the 2024 LinkedIn Rewind materials I received from coauthor.studio. This service offers to help you post more, save time, and turn insights into influential content. But that's never been what my LinkedIn engagement has been about. I post when I have something to say. Writing posts so that they are more influential would take all the joy out of LinkedIn engagement and would make it a job. No thanks!

Even if you accept this premise, my analysis shows that there is good reason to be sceptical about the materials that are created. They are glib, generic, and superficial, and often factually incorrect, or at least ambiguous. They also make you sound like a bit of a prat who is rather full of themselves.

I may know what is incorrect in the LinkedIn Rewind text, but other academics who use AI to automatically summarize my work do not. Hence, the use of AI will only aggravate the problem of inaccurate referencing that I signalled nearly 25 years ago in Are referencing errors undermining our scholarship and credibility? Go through this cycle of AI interpretation after AI interpretation a few times and we’ll end up with an avalanche of Chinese whispers.

Finally, there appears to be a worrying tendency to create a self-centred narrative that erases anyone but the creator, even though most posts include co-creators or are in fact about someone else. This makes me worry that the use of GenAI reinforces competition, celebrating the individual hero genius scientist (see also my discussion about the need for team builders here: Supporting Early Career Researchers).

ChatGPT and Christmas carols

Finally, in the last post, I reflect on the use of ChatGPT in a fun interaction task we were given in our CYGNA end-of-year celebration: creating academic Christmas carols based on perennial favourite Christmas songs. We were offered a choice of "Twelve Days of Christmas", "All I Want for Christmas", and "Last Christmas" by Wham!.

Several of the groups used ChatGPT to write (part of) the carol. This definitely generated the most elaborate carols. My instant gut reaction, though, was "how sad". We are given a task that is meant to help us get to know each other, and we turn it into a chore to be efficiently fulfilled by a machine. Knowing my CYGNA sisters, I am sure these groups still had a lot of fun. But I was happy being part of a group that used a human process emphasising the collective nature of our wishes, changing the "I (don't) want" to "We (don't) want".

Finally, another group spent most of their time getting to know each other and then created their super-short carol in the last few minutes. I think they got their priorities right 😁.

🎵12 paper rejections, 11 revise and resubmits, 10 PhD Theses to evaluate, 9 committee meetings, 8 research grant applications, 7 hours of lectures, 6 weeks of marking, 5 student references, 4 papers for reviewing, 3 hours of sleeping, 2 student meltdowns, and a day to rest ourselves!! 🎵
