I did not think I would witness more doubt and despair about the work of teaching than I saw during the pandemic, but I now believe the arrival of large language models has surpassed that period in triggering negative emotions among those working directly with students.
My chief response is sympathy. The feeling that you are putting great effort into delivering experiences that should be—and have been—meaningful to students, but which students are bypassing almost entirely by employing ChatGPT or its ilk, is simply awful.
In speaking with teachers and professors and reading many comments online, I’m hearing not so much anger at students for making these choices as frustration over a situation that makes these choices understandable from the student’s point of view and impossible to respond to from the instructor’s. Failing everything that appears to be GPT-generated won’t work, particularly given that some of that work would objectively pass the assessment criteria.
Entirely retooling one’s assignments and assessment practices on the fly is not possible. I know this because I did retool these things prior to ChatGPT, and it took me at least eight semesters. Consider, too, the additional challenge: the shifting capabilities of large language models make for a constantly moving target.
What has been done in the past doesn’t work, and there is not enough time to arrive at the future.
There are other complicating factors. For one, I sense many instructors feel that they are facing this challenge largely alone. They may be seeking out resources and insights from others, or have received some guidance from their institution or disciplinary organizations, but tangible resources are tough to come by.
At the same time, I hear a lot of frustration among instructors who view their institutions as pushing hard toward an AI-mediated future while this technology is ruining the present. Many are justifiably suspicious of the AI-mediated future and believe that their reluctance to embrace the technology is looked at unkindly at administrative levels above them.
One current instructor told me that their administration seems to believe that AI will someday soon be used in all aspects of student and faculty work, so why should they bother policing students who use ChatGPT to do their work for them? If this is the end that the institution supports, why resist?
Of course, this instructor vented these frustrations because they firmly believe it is vital for students to have genuine learning experiences, and ChatGPT was interfering with that. I had been helping them think through changes to both their assignments and their assessment practices, and the instructor reported that while this seemed to help, the AI-generated work was still coming in and it was “killing” their spirit.
One of the reasons I’ve always found teaching so fulfilling is that the work feels deeply personal, but I tried to remind this instructor that it is important not to take the work too personally.
This has always been true. I have lost count of the number of times I have read a piece of student work, seen a failure to sup from the bounty I thought I’d provided, and lamented something along the lines of “Why did they do this to me?”
Students have always fallen short of our well-meaning but sometimes outsized expectations.
The biggest difference now is the increased opportunity and lowered cost of treating schoolwork as a purely transactional exchange, a product—whatever its provenance—for a grade.
Feeling some measure of anger and hurt over this is understandable and natural.
Having exhausted my store of advice for this instructor, I had only one more thing to offer, the truth: “It’s not your fault.”
That truth does not solve the problem at hand, but that doesn’t make it any less true.
To understandable and natural, let me add a third descriptor for that anger and hurt … necessary. These feelings are signs of an unacceptable status quo, and of the knowledge that there is no past to return to that will provide an escape from the present challenges.
Not knowing what the answer would be, I asked the instructor I was working with to go back to student work from before the arrival of ChatGPT and see if they could establish a kind of “disengagement index,” that is, to quantify, as best they could, the percentage of students who were basically going through the motions before ChatGPT, and then compare that number to the ChatGPT era.
I also asked them to try to quantify how many students were doing truly excellent work both before and after ChatGPT. This instructor had done a lot of work in changing their assessment criteria to account for ChatGPT’s abilities, and I wanted them to compare the best work students were doing before and after this change.
I truly had no idea what this little experiment would reveal, but in our next conversation, the instructor came back to me and said that, all things considered, the disengagement index was largely unchanged over time. It was the obviousness of the ChatGPT-produced work that proved so frustrating, rather than the overall volume. Going through the motions used to require at least some work. Now even that wasn’t the case.
The news on excellence was, well … excellent. The instructor said, “The best work is better,” by which they meant more original, more interesting, more clearly the byproduct of rigorous engagement. It seemed clear that changing the assessment criteria had spurred some students to deeper learning.
A different lens on the evidence showed that, at least in the case of this one instructor, they had adapted to the changed landscape quite successfully. They just hadn’t seen it because of those understandable frustrations with the ChatGPT-produced work some students were turning in.
“It’s just such a waste,” they said, and it is, but progress is being made, and progress will continue.