My professor, the robot
We all know someone, or may even be that someone, who uses ChatGPT for every assignment. Cheating by college students is one thing; when professors rely on generative AI such as ChatGPT, it's a bigger issue.
Last week, a second-year student (who wishes to remain anonymous) was shocked to see small text in the corner of an instruction sheet for a major assignment: “This work was created through the assistance of ChatGPT.” From there, nothing else she read held up. The instructions were unclear, there was no information on how the assignment would be graded, and the statistical information provided to support the work was outdated. Yet her professor felt comfortable assigning it to students, the same professor whose syllabus outlines a zero-tolerance policy for AI cheating.
Examples of hypocrisy like this are not rare at the collegiate level. Just as more students use generative AI to complete assignments, more professors are using it to grade them, and to create new ones.
To be clear, nobody should be using generative AI for coursework, whether student or professor. But for a professor to use it to create an entire assignment, adding to their students' workload, piling on stress during a busy season, and not even respecting their own job enough to put effort into the work that determines a student's grade? It's lazy, and there's no excuse for it.
Beyond individual professors and classes, the wider problem with AI in a college setting, apart from its use as a cheating tool, is that it risks legitimizing generative AI in academics. A professor is supposed to be an example of academic integrity, someone students look to for guidance on how to fit into the academic world. They're supposed to be the people students want to show their best work to, someone a student wants to impress. But if students see that their professor is comfortable outsourcing their thinking, over time they will come to see that behavior as normal, and the laziness will be passed down through generations. Given the structural flaws and factual inaccuracies of generative AI tools like ChatGPT, that pattern doesn't bode well for the future of academia.
Professors engaging in this behavior can also influence students in a much more personal way. Hearing the story above, a reasonable first thought might be, “Well, if my professor can use AI in this class, why can't I?” It's a valid line of thinking. If I received such a blatant sign that a professor doesn't care about what they're teaching, why should I care either?
It's futile to work on an assignment your professor didn't care enough about to make themselves, and extremely discouraging to realize that if they've been creating assignments with AI, they've probably been grading your work with AI too. Why should students put effort into the work if professors won't put effort into grading it?
If we are not careful, soon enough ChatGPT will be acting as student and professor, grading itself based on its own standards.