The best way to prevent AI cheating in schools

This balanced approach will uphold the values of academia and better prepare our students for a future where AI is an integral part of life

Artificial intelligence (AI) has ushered in a new era of possibilities and challenges across many sectors, education among them. While AI offers innovative tools for learning and research, it also poses a serious challenge to academic integrity.

The traditional methods of preventing cheating are becoming obsolete, and the AI detectors designed to catch these new forms of academic dishonesty are far from foolproof. This calls for a comprehensive reevaluation of how we approach academic integrity in the age of AI.

AI detectors, such as Turnitin's new software, have been marketed as the next frontier in combating academic dishonesty. However, these tools are far from infallible. They operate on predictive algorithms that can yield false positives, flagging innocent students and casting doubt on their integrity. This is not just a theoretical concern: students have already been wrongly accused on the strength of these algorithms. Such incidents tarnish the academic records of innocent students and erode the trust between educators and students. Even OpenAI, the company behind ChatGPT, has acknowledged in its published tips for educators that AI writing detectors do not work reliably.

Moreover, AI technology is evolving faster than the tools built to detect it. As AI-generated content grows more sophisticated, detection becomes less effective. The two are locked in an arms race, where each advance on one side demands a countermeasure on the other, an escalating cycle with no clear resolution.

The use of AI detection tools also raises ethical questions. These tools can disproportionately affect students who speak English as a second language, flagging their work as suspicious even when no cheating has occurred. This adds a further layer of complexity and unfairness to an already fraught issue. Given these limitations and ethical concerns, there is a compelling case for rethinking our approach.
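
To see concretely why false positives are baked in, consider a deliberately simplified sketch of a perplexity-style detector. Commercial tools are proprietary and far more sophisticated, but many detectors lean on a similar signal: how statistically predictable the writing is. Every word table, probability, and threshold below is invented for illustration; this is a toy, not anyone's actual product.

```python
import math

# Toy unigram probabilities standing in for a language model.
# All numbers here are invented for illustration.
WORD_PROB = {
    "the": 0.05, "of": 0.035, "is": 0.03, "a": 0.03, "and": 0.028,
    "in": 0.02, "very": 0.006, "important": 0.004, "good": 0.004,
    "school": 0.002, "food": 0.002,
}
UNSEEN_PROB = 0.0001   # probability assigned to any word not in the table
AI_THRESHOLD = 7.0     # arbitrary cutoff: "too predictable" gets flagged

def avg_surprisal(text: str) -> float:
    """Average negative log2-probability per word; lower means more predictable."""
    words = text.lower().split()
    total = sum(-math.log2(WORD_PROB.get(w, UNSEEN_PROB)) for w in words)
    return total / max(len(words), 1)

def flagged_as_ai(text: str) -> bool:
    # The detector's only evidence is predictability. Plain, formulaic
    # phrasing, typical of early second-language writing, scores just
    # like machine output. That is the false-positive mechanism.
    return avg_surprisal(text) < AI_THRESHOLD

# Simple, honest prose built from common words gets flagged...
print(flagged_as_ai("the food in the school is very good and very important"))  # True
# ...while unusual wording passes, no matter who (or what) wrote it.
print(flagged_as_ai("cafeteria cuisine oscillates between bland and baroque"))  # False
```

Nothing in that score measures honesty; it measures vocabulary and predictability, which is why a non-native speaker's straightforward sentence and a chatbot's output can look identical to the detector.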

The best way forward is to assume that students are using AI, and to challenge them in new ways.

Educators could assign essays to be written at home, perhaps even encouraging the use of AI tools such as ChatGPT. Students could then be asked to improve, critique, or defend their work in a controlled classroom environment without AI assistance. This approach not only levels the playing field but also builds critical thinking, because students must genuinely understand the content they generated with AI's help.

Educators could also assign a project that encourages students to use AI for data analysis or content generation, followed by an in-class presentation in which students explain their methodology, the AI's role, and how they verified or modified the AI's output. This would test not only their understanding but also their ability to collaborate with AI responsibly.

Students could be tasked with using an AI tool to generate arguments for a debate topic and then crafting counter-arguments themselves. In class, they would defend their human-generated arguments against the AI-generated ones, demonstrating a deep understanding of the topic.

After using AI to draft essays at home, students could participate in a real-time, in-class peer review session. They would exchange papers and critique each other's work, focusing on how well the AI-generated content was integrated and whether it was critically analyzed and improved upon by the human author.

Just as math students must show their work to receive full credit, humanities students could be asked to provide "track changes" documentation or a reflective essay detailing how they modified or improved upon AI-generated content. This would offer insight into their thought process and ensure they engaged critically with the material.

These are not new concepts; we have seen similar shifts in other disciplines. The introduction of calculators changed how mathematics is taught and assessed: simple arithmetic took a backseat to complex problem-solving and conceptual understanding, which a calculator alone cannot supply. Likewise, integrating AI into the academic landscape should prompt a reevaluation of the skills and knowledge we value and assess in students.

The limitations of AI detectors and the potential for innovative assessment strategies underscore the need for comprehensive policies that are informed by the capabilities and limitations of AI. Educational institutions should develop guidelines that clearly outline the acceptable use of AI in academic work and the procedures for verifying the integrity of such work. These policies should be developed in consultation with educational technology experts, ethicists, and legal advisors to ensure they are both effective and equitable.

As we stand at the intersection of AI and education, it's clear that our traditional approaches to academic integrity are due for an overhaul. Rather than relying on imperfect AI detectors, we should embrace the technology's potential to enrich our educational systems while developing robust methods and policies to ensure academic integrity.

This balanced approach will uphold the values of academia and better prepare our students for a future where AI is an integral part of life. By doing so, we can navigate the complexities of this new frontier with the nuance and sophistication it demands.
