
We often talk about the need for students to have voice and choice in their learning. We want them to engage in meaningful, productive struggle as they do projects and engage in problem-solving. But where does AI fit into this? In past articles, we’ve explored how AI can function as a co-creation tool. We looked at what it means to use AI in PBL. But what about the issue of academic integrity? In this week’s article, we explore what it means to promote academic integrity in an age of generative AI.

What does academic integrity look like in an age of AI?



The Challenge to Academic Integrity

One of the most common questions I hear about generative AI is, “How do we keep students from using it to cheat?” I’ve heard this question from teachers all around the U.S. in the keynotes and workshops I conduct. I see it arise with the preservice teachers in my cohort who want students to develop as writers. I’ve seen TikTok, Instagram, and YouTube videos from frustrated teachers who have had enough of copy-and-pasted ChatGPT work. I’ve even seen hacks where teachers hide instructions in white text as an AI trap: the student never reads the hidden directions, but the AI picks them up and includes a random, seemingly unrelated element that gives the cheating away.

Collectively, educators are frustrated. We want students to develop as writers but we also want them to engage in learning through writing and we know that an entirely AI-generated text will short-circuit that learning process. Some teachers have responded by going old school and bringing back the blue books. Some have moved away from writing and shifted to audio or video responses. Some teachers have embraced AI as an integrated part of the writing process.

Generative AI has introduced new challenges to academic integrity. It’s easier than ever to create original-looking content with minimal human effort. Simply type in a well-crafted prompt and you end up with a text that can pass for student work. Meanwhile, it’s harder than ever to detect AI-generated content, and educational institutions are scrambling to respond to this new threat to academic integrity. And yet . . . it’s not an entirely new threat. Sometimes it helps to look backward at the larger issue of plagiarism and consider how things are similar to and different from the way they were in the past.


Academic Integrity Has Always Been an Issue

However, this isn’t a new phenomenon. Certain students have always leveraged whatever available technology they could acquire to cheat in school. When I was in the fifth grade, I fell behind on a research report, so I copied an entire section straight from the encyclopedia. I reworded a few sentences to try to pass it off as my own voice, but my teacher caught me and I learned a valuable lesson. We didn’t have the internet. My computer interaction was limited to avoiding dysentery on the Oregon Trail when we visited the school’s computer lab and met in groups of four in front of a single Apple II.

When I was in college, I knew of certain students who would buy entire papers off the internet. The technology had changed but the impulse to cheat was the same. As a new teacher, I had students who would copy and paste entire paragraphs from websites and try to pass them off as their own work. When earlier forms of AI came out (like PhotoMath), students used these tools to complete entire math homework assignments. We scrambled as a department to figure out how to address it.

However, it’s easy to miss just how often cheating happens without any technological aids at all. In my Pre-Calculus class, we regularly traded math homework and copied problems from one another, making slight changes so the teacher wouldn’t catch us. I’m not proud of this, but I share it because it’s a reminder that cheating isn’t limited to technology. When a student doesn’t do any work within a group project but still earns the same grade, that’s also a form of cheating. When a student peeks at another student’s test, that’s tech-free cheating. Cheating doesn’t depend on technology, but technology does change the way students cheat.


Technology Changes the Way Students Cheat

Technology does not lead to cheating. But it does change the way that students cheat on their work. In terms of generative AI, cheating has become faster and harder to detect. For over a decade, technologists honed plagiarism detectors to a high level of accuracy, and teachers often integrated these tools into their learning management systems. With generative AI, that approach no longer works.

Over the last year, we have seen a proliferation of AI-detection programs that analyze a piece of writing and estimate whether it was generated by a human or a machine. Within a minute, you receive a score describing how much of the work appears to be AI-generated. Oddly enough, these programs are themselves a form of AI. Complex algorithms look at a series of factors to determine if something was AI-generated. So, we are using AI to catch AI. Sounds a bit like Blade Runner.

AI-detection programs look for patterns in our writing. Human thought tends to be more logical but also contains random digressions. In other words, we tend to take random rabbit trails. Human writers tend to have distinct styles and tones that are shaped by their experiences, personality, and background, whereas AI-generated writing may be more generic and lacking in personality. We also use more colloquial language, like the aforementioned rabbit trails. We tend to change verb tenses more often as well.
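As a toy illustration of what “looking for patterns” can mean, here is a sketch of one of the simplest stylistic signals: variation in sentence length (sometimes called burstiness). This is a simplification for illustration only; real detectors use far more sophisticated models, and the function and example texts here are my own.

```python
import re
from statistics import mean, pstdev

def burstiness(text: str) -> float:
    """Toy stylistic signal: how much sentence length varies.

    Human prose tends to mix short and long sentences; flat,
    uniform sentence lengths are one (weak) hint of machine text.
    Illustration only -- not a real AI detector.
    """
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Coefficient of variation: standard deviation relative to mean length.
    return pstdev(lengths) / mean(lengths)

human_like = ("I stopped. Then, out of nowhere, the whole argument about "
              "detectors collapsed into a single question about trust.")
flat = ("The essay discusses the topic clearly. The essay presents the "
        "main ideas simply. The essay concludes with a summary statement.")

print(burstiness(human_like) > burstiness(flat))  # True
```

Even this toy metric hints at the problem: a human writer with an unusually even style would score “machine-like,” which is exactly how false positives happen.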

I’ve tested dozens of these programs with abysmal results. I used unpublished writing of my own, a series of student pieces (with permission), and a bunch of texts generated by ChatGPT. I then added some pieces that were a hybrid of both. In each case, I found that the algorithms struggled to identify the AI-generated passages when the text was a human-AI hybrid. But more alarming, there were many false positives. The detectors kept identifying unpublished human work as AI-generated.

To put it in perspective, imagine a teacher with six class periods of 30 students each. Over a semester, each student submits five essays — 900 essays in all. Even if that teacher uses AI-detection software with a 94% success rate, up to 54 of those essays could be misclassified, meaning students either falsely accused of cheating or getting away with it.
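The arithmetic behind that estimate is worth spelling out (the class sizes and the 94% figure are the hypothetical numbers from the scenario above):

```python
# Back-of-the-envelope math for the AI-detector scenario above.
classes = 6
students_per_class = 30
essays_per_student = 5

total_essays = classes * students_per_class * essays_per_student
error_rate = 1 - 0.94  # a detector that is right 94% of the time

misclassified = round(total_essays * error_rate)
print(total_essays, misclassified)  # 900 54
```

Scale that across an entire school, and the number of misclassified essays grows quickly.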

When we focus accountability on “catching cheaters,” we entrust advanced algorithms to judge the academic integrity of our students. Imagine being a student who wrote something entirely from scratch, only to fail a class and face academic probation because a robot sucks at determining what is human. So, where do we go from here?


A Proactive Approach to Academic Integrity

One solution is to focus on a proactive approach to academic integrity. We can start by teaching students what it means to use AI as a co-creation tool in a way that avoids cognitive atrophy. With older students, we can integrate AI into the writing process in a way that requires students to do more than merely copy and paste text. When I think back to the time I cheated on my fifth-grade research report, I legitimately thought I had changed enough of the text to make it my own. I didn’t see it as wrong, and I was shocked when I was called up to the front of the class after school one day.

Similarly, many of my students who copied and pasted full paragraphs into their blog posts assumed that if they changed a few words, they were good. It was a chance for me to model the research process and help them learn to paraphrase.

Model the Co-Creation Process in Writing

Part of how we can combat this is by modeling how to use AI ethically throughout the entire writing process. Here’s an example:


During prewriting, students can create sketchnotes, drawings and text by hand in a journal. These low-tech options focus on writing as a way of “making learning visible.” Afterward, students might use AI to clarify misconceptions. Here, they use a chatbot as a Q&A tool to ask questions and get follow-up answers. You might use a process like the FACTS Cycle to help students learn how to engage in prompt engineering.

Prompt engineering in generative AI involves designing and fine-tuning the questions or instructions you provide to the AI system to get specific, desired responses. It’s like crafting the right query to get the best answer. By carefully constructing prompts, you can guide the AI to generate content that meets your needs, whether it’s writing, problem-solving, or other tasks, making it a valuable tool for various applications.

Phase 1: Formulate the Question

In this first phase, students formulate their initial prompt, often written in the form of a question. As a teacher, you might provide some sample questions for students to use. Or you might use sentence frames. The following are some sentence frames students might use when asking questions about 19th-century imperialism:

  • How did 19th-century imperialism impact _________?
  • What were the motivations behind _________?
  • Can you explain the key events that led to _________?
  • In what ways did 19th-century imperialism shape __________?
  • How did 19th-century imperialism contribute to ____________?
  • How did _________ respond to the imperialist movements of the 19th century?

Notice that these are largely question-based. But consider the initial prompt formula you might use in a computer programming course. The following is a set of prompts that I had ChatGPT create (unlike the previous examples):

  • I’m working on a program that aims to [describe the program’s goal], and I’m wondering if someone could take a look at my code to see if there are any improvements I can make with ____________?
  • I’ve encountered an issue in my code where [describe the problem or error], and I’m not sure how to resolve it. Could someone please provide guidance or suggestions?
  • I’m trying to optimize my code for [mention the specific optimization goal], but I’m not sure if I’m following best practices. Could someone review my code and offer tips for optimization in ________?
  • I’ve implemented a new feature in my code that involves [describe the feature], and I’d appreciate feedback on whether it’s well-structured and efficient.
  • I’m working on a project that involves [briefly explain the project], and I’d like a second pair of eyes to review my code for readability and adherence to coding conventions. Any feedback would be greatly appreciated.
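Both sets of sentence frames are essentially templates with a slot for the student to fill in. As a hypothetical sketch (the frame texts come from the history examples above; the function and variable names are my own), they could even be expressed in code:

```python
# Hypothetical sketch: sentence frames as reusable prompt templates.
FRAMES = [
    "How did 19th-century imperialism impact {topic}?",
    "What were the motivations behind {topic}?",
    "In what ways did 19th-century imperialism shape {topic}?",
]

def build_prompts(topic: str) -> list[str]:
    """Fill each sentence frame with the student's chosen topic."""
    return [frame.format(topic=topic) for frame in FRAMES]

for prompt in build_prompts("the Scramble for Africa"):
    print(prompt)
```

The point isn’t that students should code their prompts; it’s that a good frame separates the reusable structure of a question from the specific topic a student cares about.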

Phase 2: Acquire the AI Tool

If students feel satisfied with their prompt, they can consider which AI platforms they want to use. Some students might use the same prompt on multiple platforms and compare and contrast the results. Others might stick to one platform and use it for the entire process. Here, students consider the pros and cons of each AI platform to see which one fits their prompt best. As mentioned before, I love using the Consensus App because it pulls from a library of peer-reviewed articles and is less likely to fall into the trap of hallucinations (where AI makes things up entirely).

Phase 3: Create Context

Chatbots tend to struggle with context due to inherent limitations in their natural language processing capabilities. Their initial training data is essentially everything they’ve been allowed to process, but they fail to retain a true memory of past interactions, so each conversation starts from scratch. It’s a bit like interacting with someone who has no memory of ever having met you. That’s why it helps for students to create context up front: telling the chatbot who they are, what they’re working on, and what kind of response they need.

Phase 4: Type the Prompt

After prepping the chatbot with the initial context, type up the prompt from Phase One and submit it. This might be a time to do a last revision or rewrite. I’ve noticed that sometimes students struggle with citing sample text. In other words, they might have a prompt and want to include a section of text to be analyzed. When they copy and paste it, they don’t always provide an explanation of what they are citing. So, again, that context piece becomes critical.

Phase 5: Scrutinize the Answer

In this phase, students learn how to critique the answers from the generative AI. The following are a few areas that students might scrutinize:

  • Bias Analysis:
    • Check for any favoritism, prejudice, or discrimination in the response.
    • Identify instances where the chatbot may exhibit biases toward specific groups or perspectives.
    • Look for loaded language and word choice.
  • Relevance Assessment:
    • Determine if the response directly addresses your initial prompt.
    • Assess whether the content is related to the topic at hand or if it contains irrelevant information.
    • Check to see if the chatbot truly understood the context or if it needs to be revised.
  • Factual Accuracy Check:
    • Verify the accuracy of information provided in the response by cross-referencing with reliable sources.
    • Highlight any inaccuracies or misleading statements.
    • If you are doing scholarly research, look for peer-reviewed sources. If it’s more informal, look up the information online. Talk to experts (if possible) and compare it to your own prior knowledge. While AI “hallucinations” have grown less frequent, they still happen.
  • Text Construction Evaluation:
    • Examine the clarity, coherence, and readability of the response.
    • Look for grammatical errors, awkward phrasing, or structural issues that hinder comprehension.
    • Consider the style of the text organization. Is it linear? Is it chronological? Does it move logically? Or is it more connective? Is it missing something?
  • Tone and Language Analysis:
    • Assess the tone and language used in the response to ensure it aligns with the context you have given it. Pay close attention to your ultimate audience and application.
    • Identify any instances of disrespectful or offensive language and evaluate overall professionalism.


During research, students might generate a list of research questions but then use AI to get feedback on the questions they have crafted. They might ask for some sample questions or even sentence frames to help with the research process. As they read various articles, they might go back to the Consensus App to clarify information. Or they might use AI to summarize a key concept that they are struggling to understand. They might use AI to change the reading level of a text they find online (especially public domain primary sources) or use it to clarify vocabulary words.


Students might start with a hand-drawn web or note cards that they manipulate in a physical format. Or they might create their own outline but use AI to expand it. One of my favorite AI prompts is, “Look at this outline and tell me what I might be missing” or “Is there a different way I might organize this essay?”

You might also have students create their own outline first and then have a chatbot create a second outline and have students compare and contrast both approaches.


During drafting, students start with their own text but then use AI to autocomplete certain passages or to offer in-the-moment feedback on grammar. Other times, you might have students start with an AI-generated text and heavily edit what is there. I’ll be sharing a color-coded system later in this article.


Students might use generative AI to give feedback on their writing. They might use a non-generative feedback tool (like Grammarly) to get specific suggestions for grammar, tone, and style. The idea is to use AI as a form of instant, practical feedback while students retain their voice along the way.

If we want students to demonstrate academic integrity, they need to know what it looks like to use generative AI ethically. Over time, this grows into a mindset and a habit. But even then there will be times when students still use AI to cheat — which is why we need to get to the heart of why cheating occurs.


Getting to the Heart of Why Students Cheat

As mentioned before, sometimes students are cheating without even realizing it. But other times, it’s intentional. If we want to be proactive, it helps to ask the question, “Why are students cheating?” The following are some of the questions I ask myself as a professor:

  • Is it a lack of scaffolds and supports? What types of supports could I make available for students?
  • Is it a lack of clarity on the directions? One solution that always helps is to have students annotate assignment directions through Google Docs so that I can answer questions as we go.
  • Is the issue motivation? Are students bored with the assignment? Do they see it as meaningless busy work?
  • Are students struggling with productive struggle? Did they give up too easily and they are now using AI to do the work for them so they don’t have to struggle?
  • If it’s homework, are they busy with extra activities and struggle with time management?
  • Is it an issue of self-efficacy? Are students worried about getting a bad grade?

These questions don’t excuse cheating. It’s always wrong for a student to cheat. But by getting to the heart of why they cheat, I can then tackle some of these issues by changing my course systems and structures. But this proactive approach will never truly eradicate cheating.

While we can focus on crafting meaningful and engaging assignments, it’s naive to think that we, as educators, can prevent students from trying to cheat. Students are going to look for shortcuts. This was true a century ago and it will be true as we move through the age of generative AI.

There are no easy answers in terms of fixing the academic integrity issue. Our best bet as educators is to lean into both trust and transparency. Trust without transparency leads to naive optimism. This is what happens when a teacher says, “If this is an engaging lesson, I trust that students won’t cheat.” Unfortunately, some students will cheat. That’s a reality. Transparency without trust leads to a punitive form of surveillance. This is what happens when teachers focus on catching cheaters by using algorithms and programs. This often leads to a disastrous loss of trust, where students are accused of cheating and have to prove their innocence instead of receiving due process.

So, in the case of writing, a trust-based transparency might mean having students do all of their writing in a Google Document and color-code any text that was generated by AI. This creates an immediate visual showing how one uses generative AI in writing, and the document’s version history time stamps add an extra layer of transparency. Here’s how it works:

  • Blue: AI-generated text
  • Green: AI-generated but revised by a human
  • Pink: Human-generated but edited by AI (think Grammarly or spell check)
  • Black: Human-generated (with no modifications)

As a professor, I can look at an assignment and see, in a clearly visual way, the interplay between AI and human. I can see the way an AI-generated idea sparked an entirely new line of thinking that then led to something fully human. I can also see where students made significant modifications to their work.


Sparking a Dialogue About Academic Integrity

Ultimately, students will need to demonstrate academic integrity based on a well-developed set of ethics. We can provide tight policies and guidelines, but we also need students to internalize an ethical standard around AI and academic integrity. One way we can do this is by setting up Socratic Dialogues about AI and academic integrity.

You might anchor the dialogue in a text about an academic cheating scandal or a recent article about AI and the challenge of cheating. Students might also read a fictional treatment of the subject, such as “The Great Automatic Grammatizator” by Roald Dahl. In this short story, an author chooses the machine, not for the money, but because it creates something better and faster in her own style. This could be a great jumping-off place for a conversation about academic integrity, automation, and the need for human authenticity.

You might start with open-ended questions and let the conversation go:

  1. How do you define academic integrity?
  2. What is the difference between getting help and cheating?
  3. How do you think AI is currently impacting academic integrity in schools and universities?
  4. Can you provide examples of how AI technologies, such as text-generating algorithms or plagiarism detection software, can be both beneficial and detrimental to academic integrity?
  5. What ethical considerations should educators and students take into account when using AI-powered tools in the classroom?
  6. How might the widespread use of AI in education affect traditional notions of academic honesty and originality?
  7. How can students responsibly navigate the use of AI tools to enhance their learning experiences without compromising academic integrity?
  8. What role do you think policy makers, educational institutions, and technology developers should play in addressing the challenges posed by AI to academic integrity?
  9. How have you used AI as a learning tool?
  10. What is the danger in allowing AI to do too much of the thinking for you?


Ultimately, cheating has been around forever and it’s not going away any time soon. Meanwhile, AI will continue to evolve and change the potential approaches to issues like plagiarism. There is no quick-fix solution. But by teaching students how to use AI ethically and focusing on proactive approaches that incorporate trust and transparency, we can help students embrace academic integrity as a habit and a mindset.


Get the FREE eBook!

With the arrival of ChatGPT, it feels like the AI revolution is finally here. But what does that mean, exactly? In this FREE eBook, I explain the basics of AI and explore how schools might react to it. I share how AI is transforming creativity, differentiation, personalized learning, and assessment. I also provide practical ideas for how you can take a human-centered approach to artificial intelligence. This eBook is highly visual. I know, shocking, right? I put a ton of my sketches in it! But my hope is you find this book to be practical and quick to read. Subscribe to my newsletter and get A Beginner’s Guide to Artificial Intelligence in Education. You can also check out other articles, videos, and podcasts in my AI for Education Hub.



John Spencer

My goal is simple. I want to make something each day. Sometimes I make things. Sometimes I make a difference. On a good day, I get to do both.

