15 April 2024

An open discussion was held recently between PhD researchers and staff from the EDC, the Doctoral School, the library, and CeDAS. The conversation, organised and chaired by Dr Laura Christie, aimed to learn how researchers currently use AI in their research, to consider research integrity and the ethical application of AI, and to inform wider discussions happening across the university around AI use and guidance. Dr Christie began by introducing the aims of the conversation, reviewing the university's current guidance on the use of AI in assessment, and reflecting on the survey sent to participants ahead of the discussion. Of those who completed the survey, 80% were using some form of AI in their research, which underlines the need for these conversations and for guidance that reflects the complexity and nuance of this rapidly evolving technology.

The discussion itself ranged over many diverse and fascinating topics relating to the use of AI in research. We considered the large-scale changes that recent advances in AI have prompted across many sectors, and reflected on similar historical shifts in educational tools and how society has adapted to incorporate, and even rely on, new technology. We discussed the ever-increasing variety and proliferation of AI tools, and wondered whether these would eventually coalesce, with a few key tools emerging for different aspects of research.

Many researchers shared their experiences of using AI tools to aid planning, reading, writing, translation, editing, and other aspects of research. Some raised valid points about the skills gained through PhD research and the need to critically assess any outputs that AI had generated or contributed to. Others noted that relying on AI to save time, or to generate these aspects of research outright, may neglect key skills needed for future research or employment. Part of the discussion framed the use of AI as a partnership or collaboration, noting that for such a partnership to succeed, the user must have a good understanding of the tools alongside robust prior knowledge of the subject with which to judge the validity of material suggested or generated. Alongside this, good writing skills and creativity are essential to use these tools to their full advantage. This prompted an interesting reflection on the role of the creator in this partnership and how collaborating with AI can shift the balance, making the creator more of an editor.

The many advantages of using AI tools for some aspects of research were also highlighted throughout the discussion. One potential use is summarising published research to triage vast amounts of reading and identify the papers and chapters most useful for deeper analysis. It was noted, however, that such AI-generated summaries are often quite bland, and that outsourcing this stage of the research process can mean essential points from the literature are missed. AI tools were also highlighted as valuable for translation and editing, especially for neurodiverse researchers or those working in an additional or second language. This led to an interesting discussion about the effects of such use on language more broadly, given the inherent biases in LLMs and the tendency of AI language tools to omit already marginalised forms of language from the suggestions they generate.

The conversation also considered data security and the ethical use of AI, and a key concern was raised about how to cite material generated by, or produced in collaboration with, AI. Some noted that they were required to provide full citations in this situation, as with any other source, while others described the scavenging culture of research within their discipline and felt that these referencing regulations were less applicable in their contexts. Sheffield University’s ‘Acknowledge, Describe, Evidence’ template for declaring the use of AI in assessment provided an interesting example of how other universities have tackled this issue, and could be a good model for RHUL’s guidance in future. We also considered how varying levels of access to research repositories across different AI tools affect the information presented to users, especially in the PhD context, where a diverse, holistic appraisal of the research field is needed.

The conversation concluded with a discussion of AI in teaching, and the RHUL guidance on spotting plagiarism in student work was praised for its in-depth advice and helpful tips. This led to a conversation about teaching AI skills for the future, and how such classes should be embedded into the curriculum at all levels rather than provided as optional, isolated courses. Overall, the discussion was incredibly useful and insightful, showing how PhD researchers currently use AI, the concerns and issues surrounding this use, and the potential to improve research through the correct, ethical use of such tools in future. The contributions of those who attended and shared their experiences, ideas, and questions were greatly appreciated, and we look forward to holding similar events to inform the development of the university’s guidance on this topic.