TUCSON, Ariz. (13 News) – The University of Arizona considers unauthorized use of artificial intelligence to be cheating, but the school has disabled the AI detection features in its plagiarism software because of reliability concerns and false positives.

The policy creates a challenge for students and educators navigating learning and grading in an era where AI tools are increasingly accessible.

The university says every instructor needs an AI policy of their own, communicated clearly to students, but achieving that clarity remains difficult.


Associate Professor Steve Bethard, an expert in machine learning at the UA College of Information Science, said AI has changed how educators assess student effort.

“It has reduced our ability to judge effort. I can no longer say, ‘Hey, write me an essay,’ and then say, ‘Oh, someone who’s written this put a bunch of effort in, so now I can grade them based on that.’ No, because we know there are now models that can do it super cheap,” Bethard said.

Bethard is developing ways both to use AI in his classrooms and to grade around it. For some assignments, he prohibits AI tools and designs the tasks so that unauthorized use would be obvious.

“There are some assignments where I’ll say don’t use these tools, and typically I design these assignments in such a way that if they do use those tools, it will be obvious to me that they have,” he said.

Creative detection methods emerge

Bethard uses techniques like embedding hidden prompts in assignments that would trigger AI responses with specific characteristics, such as turtle metaphors, making unauthorized AI use detectable when students copy responses directly.

In his courses, students are often encouraged to use AI for coding tasks as long as they cite the assistance, since AI skills will benefit their future careers.

High stakes for academic dishonesty

Unauthorized AI use at the University of Arizona and other schools, including area high schools, is treated as academic dishonesty and could result in failing grades, suspension, or expulsion.

Current AI detection software lacks the accuracy needed to serve as a standalone proof of cheating.

“You would need to provide evidence, and if your evidence was, ‘I ran one of these online checkers and it said there’s an 80 percent chance this is ChatGPT,’ that’s not very good evidence,” Bethard said.

The detection tools cannot provide definitive results about whether students used AI assistance.

“They’re not 100% ‘Yes, this student used it’ or 100% ‘No, they didn’t,’ so any decision based on them has to recognize that,” he said.

Students seek ways to prove original work

Students can run AI detection tools on their own work to demonstrate authenticity, Bethard suggested.

If multiple detection tools give conflicting results on a student’s work, that disagreement could itself serve as evidence that the work is original, since the tools would be more likely to agree if AI had generated it.

Academic integrity cases still rely on traditional methods of evaluation, with AI detection having limited influence on final judgments.

“It’s a new world, and it’s exciting because they can do a lot of stuff, but it’s also scary because now we have to double-check everything,” Bethard said.

Universities are exploring contracts with AI companies to provide standardized models for all students, similar to how schools provide email accounts.


