An Earth science instructor says tools like ChatGPT make it easier to submit AI-written work, pushing teachers to spend more time checking for cheating.
In short: Some college instructors say tools like ChatGPT are making it harder to tell what students actually learned, especially in online classes.
A part-time college Earth science instructor writing in Ars Technica says generative AI tools, such as ChatGPT, have become the most demoralizing problem he has faced as a teacher. He says the job has shifted from teaching to spending large amounts of time investigating whether assignments were written by a student or by an AI tool.
This is especially hard in asynchronous online courses, which use recorded lessons instead of live class meetings. Without in-person supervision, students can submit work that looks legitimate even if they did not write it. The author points to a College Board survey of 600 high school students in which 84 percent said they had used generative AI for schoolwork.
He describes a change in student answers over time. For one question meant to test thinking skills, he reviewed 279 responses dating back to 2019. Before ChatGPT, about one in three students reached the key idea. Over the last two years, more than half did, and the author says the wording now often matches what ChatGPT produces, suggesting the improvement reflects AI use rather than better learning.
The author argues this undermines “formative assessments,” the low-stakes assignments used to spot confusion early, because they only work if the answers reflect the student’s own thinking. He compares AI-written work to using a forklift in a weight room: the weights move, but you do not get stronger. He also says many anti-cheating fixes, such as oral exams or supervised handwritten tests, are not practical for asynchronous online classes.
Schools may respond by cutting back on take-home writing and shifting to more supervised testing, which could make online programs less flexible for working students and students with disabilities. Another open question is whether colleges will standardize rules for acceptable AI use, since there is no reliable test that proves a student used an AI tool.
Source: Ars Technica