
Referencing GenAI use in Project Work – I Changed my Mind!

This blog offers a quick reflection on the possible benefits of integrating GenAI into project work, based on two successive implementations. It also discusses the practicality of having students reference where and how GenAI was used.


My first GenAI study into assessment integrity revealed that project work was a great opportunity for educational integration. Based on this analysis, I was one of the first to put my money where my mouth is, applying it to a multidisciplinary, work-integrated-learning project subject. It was a great experience and was central to my decision to lead the AAIEEC project work cluster, in which sixteen members collaborated to develop a comprehensive framework. I look forward to sharing more on that when our work is published.


While the full details of my first implementation are available in Chapter 13 of the book ‘Artificial Intelligence Applications in Higher Education’, my goal here is to discuss the recommendations I made. A book takes much longer to publish than a journal paper, so in the meantime I have been able to build my GenAI expertise and run the subject again. Reading the chapter after such a long delay has helped me reflect.


In the chapter, I make a few recommendations that I continue to agree with:

1. Open-ended project work is definitely a great opportunity for educational integration. Students develop strong critical thinking skills and build their AI literacy, gaining an understanding that GenAI is a co-intelligence rather than a tool that can do all the work for them. Most importantly, they start to understand the limitations of the technology and position themselves amongst all the hype. If they are determined to embrace it, they can produce better project outcomes by using it in combination with traditional approaches.

2. As there is no single correct solution, students do eventually develop an understanding of the need for evaluative judgment, even if they learn the lesson the hard way.

3. It provides an opportunity to develop an ethical understanding of GenAI use in real-world applications.

4. To use the technology effectively, students need guidance and training. Even if they already use the technology, they generally don’t know how to use it in the best way possible for a given task. As GenAI is relatively new, this may change over time.



Referencing: I have changed my mind!


However, there is one recommendation I devoted some time to in the chapter, and that is referencing. I held the common view that, even though GenAI use was permitted, students should reference where and when it was used. The central issue is that once students start using it, you don’t know where AI use ends and student work begins. Students struggled to reference it, markers struggled to tell the difference, and it had me in a reference-focussed spin. I spent some time writing about how I would try to get students to reference GenAI use better.


As a note: this discussion of referencing relates entirely to the written component of the assessment for this project subject. I am not referring to referencing in every scenario, just the one described here or those that resemble my implementation.


Between writing that chapter and now, my daily use of GenAI has grown, including how I use it in writing tasks. Working with Grammarly and ChatGPT, and seeing their power to enhance ideas and writing quality, I realised that the notion of referencing all use is a rather messy thing to do.


Students are producing a large document, and if they use GenAI to improve their writing and ideas the way I want them to, my original idea is simply infeasible. As the assessment guidelines state that GenAI use is expected, I have started to realise that, in practical terms, there is no need to distinguish the student and AI components. At the end of the day, it is the evaluative judgment that is important, and that becomes the focus of validity. Regarding the solution, what matters is the students’ evaluative judgment that it is correct and the best possible, and that all generated ideas are backed with strong evidence from traditional sources. It is this referencing of traditional sources, which gives meaning to all the writing regardless of whether it came from the student or the AI, that is most important. Well, at least for now, until GenAI can do that task more competently than it currently does.

Additionally, in terms of writing, what matters more to me now is the student’s evaluative judgment in the editing process. In almost all cases, AI-generated content needs editing, and that editing becomes an assessable competency.


Our updated academic integrity paper showed that any unsupervised assessment needs to assume that AI will be used, in ways that you may or may not deem acceptable. This aligns with the University of Sydney assessment strategy, which I agree is the most practical way forward. In a nutshell, the strategy has two lanes: one for secured assessments and another for unsecured assessments, where AI use must be treated as a given. Given that this assessment is not supervised and is therefore unsecured, when I ran the subject again this year I simply assumed everything could have been sourced from GenAI, requiring students to acknowledge its use but to focus on referencing traditional sources.


This experience shows that what works on paper, or in one’s mind, may not be the best practical approach. It underlines the importance of reflection and being prepared to adapt one’s approaches and thinking.


Sasha Nikolic

University of Wollongong

31/10/2024


To gain full insight into my first implementation and to discover insights from others, you may consider the book ‘Artificial Intelligence Applications in Higher Education’, which has just been released.





