By Earle Abrahamson
Artificial Intelligence and Academic Integrity, both abbreviated to AI, have recently become synonymous with modern higher education learning. The emergence of ChatGPT as a tool to generate text creatively has sent the HE sector into a search for solutions. ChatGPT and AI are not new concepts but rather repurposed ones. This blog explores the obstacles and opportunities in using AI and presents dilemmas to provoke different conversations within the wider SoTL community.
The title for this blog purposefully uses the term generation. Whilst this may be seen by many as mere semantics, it serves to explain a key difference between written and generated text. The former relies on human intuition, insight, experience, reflection, and critique, whilst the latter refers to a machine’s ability to generate text through programming. This blog does not dispute the use of AI but rather exposes some of the challenges the HE sector faces in finding solutions to pedagogical issues. The content of this blog builds upon the ideas and use of AI software as a pedagogical tool, as outlined in Chen and Teh’s blog, Learning Design Reconsidered with AI Educational Companionship. In their blog, Chen and Teh foreground generative AI as a useful and necessary inclusion in the pedagogic toolkit. They position AI within the practice of SoTL. Whilst acknowledging SoTL practice, I submit that there are greater questions for SoTL, as a movement, to consider. I relate some of these questions within this blog piece.
The global concern around the effective and responsible use of AI is evident. The increased attention afforded to rethinking regulations on academic integrity, supporting students, and understanding the value of AI in learning is now taking centre stage in many universities. Some have opted to ban the use of AI in academic work, others have embraced it as a support for learning, whilst others remain undecided on which direction to follow. What is common is that AI is here to stay and provides insight into what is humanly possible. Knowing that AI is being used in multiple ways presents a challenge for the future of SoTL. In this sense, I return to the words of Gary Poole, co-founding editor of Teaching and Learning Inquiry (TLI), who questioned who gets to do SoTL. At the time of asking, the question reflected the cultural and social inequalities that exist within the wider SoTL community. Taken in a modern context, it provokes a more covert analysis of who authenticates SoTL outputs. Connecting this idea with Felten’s (2013) pillars of good SoTL practice, we need to revisit what “in partnership” means. Felten made clear he was referring to students; however, advances in technology and AI disrupt this idea and provide new insights into partnership with machines. AI prides itself on being able to search text, stitch text patterns together, and present outputs that mirror the instructions received. It does not think, critically analyse, or claim to be human in the design of the text. It can, if commanded, build relatively useful coherence within the text.
What is the problem?
If we accept that AI is capable of generating text, we must prepare for the future of scholarship. SoTL is predicated on sharing inquiry appropriately and encourages critical analysis of what works within different contexts. If, in the future, the author of a SoTL publication is a machine that, to the untrained eye, navigates the hurdles to publication or conference presentation, we arrive at a very different level of inquiry. Is enough being done to detect AI in written text? How do SoTL journals combat the perceived threat around authenticity in submissions? What software will journals use to detect machine-generated text? What future-proofing or instructional support does the SoTL community need to develop in order to engage with the SoTL of the future? Imagine a world where the review process for publication or abstract acceptance is designed and generated by a computer that acts on prompts within the text. Imagine a future that is void of human analysis but primed through human programming of machine code. AI may serve to present solutions and prompts for writing if used properly. As we embark on a journey into the unknown and collectively explore the boundaries of AI in different HE contexts, so too should we explore the questions that accompany the future lived and learned experience of developing SoTL outputs. Although this blog has attempted to disrupt some imagination and critical thought within written text formats, AI is equally effective in designing and generating poetry, podcasts, and a number of other artefacts. In fact, many of you may be silently wondering whether this blog post was generated by a machine. I assure you I am human, but I use insight, intuition, and personal passion to direct future SoTL focus.
Will the next SoTL generation begin to challenge visions of the possible and design a SoTL that evolves to direct inquiry into what should matter most in evolving HE contexts? How could generative AI platforms become useful prompts to fuel SoTL inquiry at micro, macro, and meta levels? Perhaps the real challenge is to consider the role of integrity and responsibility in using AI within educational settings.
Felten, P. (2013). Principles of good practice in SoTL. Teaching and Learning Inquiry, 1(1), 121–125.