
Back in December, I showed ChatGPT to a friend of mine who is also a professor.

“I’m not worried about AI in my humanities courses,” she said.

“Not at all?”

She shook her head. “I know of colleagues who are going back to the blue books and banning devices. Or they’re looking into programs that can detect ChatGPT in an essay. But I’m just wondering how we might need to transform the essay.”

We then talked about Socrates and his concerns about writing.

Chief among them was Socrates’ belief that writing would cause people to rely too much on the written word rather than their own memories and understanding. He worried that people who read a text would only be able to interpret it in the way the author intended, rather than engaging in a dialogue with the ideas presented and coming to their own conclusions. Moreover, Socrates was concerned that writing could be used to spread false ideas and opinions, and to manipulate people.

Sound familiar? These are many of the same concerns people have with AI.

“I’ve been through this before,” she adds. “When I realized students could just download whole essays, I started requiring students to do pre-writing that they turned in. I changed to high-interest prompts that you couldn’t find online. Now I see that ChatGPT can generate responses to those high-interest prompts, and I’m going to think hard about how to treat AI as a tool.”

Together, we planned out a solution that would include blending together AI-generated and student-generated text. It was similar to what I describe later in this article. The essay isn’t dead but it is changing. It will continue to evolve in the upcoming years. For now, the use of AI is forcing us to ask, “When is AI a learning tool and when is it cheating?”

[Video: How will AI change the way we write?]

When Is It Cheating?

When I was a new middle school teacher, I had several teachers warn me not to have my students use spellcheck. If we let students use spellcheck, students would grow dependent on the tool and they would become awful spellers. I had similar concerns as well. If we relied too heavily on technology to fix spelling mistakes, would students ever bother to use correct spelling?

That semester, I had students submit a writing sample. I counted the words and the number of spelling errors to find each student’s rate of spelling mistakes. At the end of the semester, I had students complete a handwritten assessment. Comparing the initial samples to the end-of-semester samples, there was a significant decrease in spelling mistakes. It turned out this tool for “cheating” was actually providing students with immediate feedback on their spelling. Instead of mindlessly clicking on the spellcheck suggestions, they were internalizing the feedback.
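
To make the comparison concrete, here’s a minimal sketch of the error-rate calculation described above. The word and error counts are hypothetical examples; in practice, the misspellings were tallied by hand.

```python
# A minimal sketch of the spelling-error-rate comparison.
# The counts below are hypothetical examples, not actual data.

def error_rate(word_count: int, misspellings: int) -> float:
    """Spelling errors per 100 words."""
    return misspellings / word_count * 100

initial = error_rate(word_count=250, misspellings=20)  # start-of-semester sample
final = error_rate(word_count=300, misspellings=9)     # end-of-semester sample

print(f"Initial: {initial:.1f} errors per 100 words")  # 8.0
print(f"Final:   {final:.1f} errors per 100 words")    # 3.0
```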

We now use spell check all the time. What was once a tool for “cheating” is now a tool we use for writing.

The truth is, students are already using AI in their writing. We don’t tend to think of spell check as AI, but it is a primitive example of a smart algorithm. While spell check software is not as advanced as the newer generations of AI, it still relies on machine learning and pattern recognition to improve its accuracy over time. Some spell check software also uses natural language processing techniques to detect contextual errors, such as correctly spelled but misused words. If it seems as though the spelling and grammar checks in Word and Google Docs have improved over the years, it’s because they have.
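
If you’re curious how even a basic spell checker relies on pattern recognition, here’s a toy sketch in the spirit of classic frequency-based correctors: suggest the most common known word within one edit of the typo. The tiny corpus is a stand-in; real checkers train on millions of words and layer contextual models on top.

```python
# A toy frequency-based spell checker: suggest the most common known
# word within one edit (delete, swap, replace, or insert) of the typo.
from collections import Counter

# Stand-in corpus; a real checker learns frequencies from millions of words.
corpus = "the quick brown fox jumps over the lazy dog the fox".split()
freq = Counter(corpus)

def edits1(word):
    """All strings one edit away from the given word."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def correct(word):
    """Return the most frequent known candidate, or the word unchanged."""
    if word in freq:
        return word
    candidates = [w for w in edits1(word) if w in freq]
    return max(candidates, key=freq.get) if candidates else word

print(correct("teh"))  # -> "the"
```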

Students are already using more advanced AI in every phase of the writing process. When doing research, the auto-fill option in Google narrows down the search for students. When typing in a Google Document, the auto-fill option will often complete sentences for students. As students edit their work, the grammar check offers suggestions for what needs to change. Certain students might even use Grammarly to polish their writing in the editing phase. The AI here is so subtle that we sometimes miss it. But machine learning is already fueling aspects of the student writing process.

Note that all of these tools have been considered cheating at some point. The same is true for calculators in math and for spreadsheets in statistics. Every technological advancement has been considered a form of cheating at first. However, eventually, these tools become essential elements to the learning and creative processes.

Somehow, ChatGPT feels different. As a newer generation of AI, it is built on deep learning, which relies on algorithms designed to mirror the human brain. That’s part of why ChatGPT feels so human. Deep learning models learn from massive data sets and engage in pattern recognition in a way that’s not explicitly programmed. In other words, the algorithm is learning and can now make predictions and generate entirely new ideas. The term “deep” in deep learning refers to the use of multiple layers in a neural network, allowing the system to learn and represent increasingly complex features at each layer. If a spell check is one layer deep, ChatGPT is multilayered.
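
To make “layers” a little less abstract, here’s a bare-bones sketch of data flowing through a stack of layers. Real models like ChatGPT use attention mechanisms rather than plain dense layers, and their weights are learned rather than random; this only illustrates the stacking idea.

```python
# A toy "multilayered" network: each layer transforms its input and
# feeds the next, representing more complex features at each step.
# Weights are random here; real models learn them from data.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, in_dim, out_dim):
    """One dense layer with a ReLU nonlinearity."""
    w = rng.normal(size=(in_dim, out_dim))
    return np.maximum(0, x @ w)

x = rng.normal(size=(1, 8))   # a toy input vector
h1 = layer(x, 8, 16)          # layer 1: simple features
h2 = layer(h1, 16, 16)        # layer 2: combinations of those features
out = layer(h2, 16, 4)        # deeper layers: increasingly complex patterns
print(out.shape)              # (1, 4)
```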

So if it feels like ChatGPT is more akin to cheating than previous AI, it’s because it functions in a way that more closely mirrors human thinking. Clippy was cute and even acted a bit human in its tone, but current chatbots can feel as though you are actually talking to a person.

So where does that leave us with cheating? When is AI simply a tool to enhance learning, and when is it co-opting and replacing a vital part of the learning process? It can help to think of it on a continuum. I love the way Matt Miller, from Ditch That Textbook, conceptualizes it:

As Miller describes, “We’re going to have to draw a line — as educators, as schools, even as school districts — to determine what we’re going to allow and what we aren’t.” I love the last question about how students might use AI in the future because it might vary from task to task. In writing blog posts, I might consult ChatGPT for ideas or even use it to explain a definition (where I then modify and re-write it). However, I wouldn’t want ChatGPT to write this. I want it to be my own voice. On the other hand, I could see the appeal of AI to answer my emails or even create a first draft of technical writing after I generate an outline. The truth is we are all going to use AI in a blended way.

 

A Blended Approach to Essay Writing

This blended approach moves away from the either/or options of embracing Artificial Intelligence or blocking it entirely. Instead, it focuses on using AI wisely to enhance learning while also embracing the human elements.

[Image: Venn diagram with Human on the left, Machine on the right, and Blended in the middle]

A blended approach might include a mix of handwritten and AI-generated writing. Students can create sketchnotes and blend together drawings and text in an interactive notebook or journal. These low-tech options focus on writing as a way of “making learning visible.” Here, students choose old-school tools because the simplicity provides more flexibility for deeper thinking.

But these same students might also use a chatbot to generate new ideas or use an AI-generated response that they then modify and amplify. They might use AI as the starting place and add their own unique voice to the finished product. This could happen in the same class period. Students in a blended mode can shift back and forth between the human and artificial intelligence.

Part of learning how to use AI wisely is finding the best contexts for using it. This starts by recognizing the strengths of both human and machine intelligence. Machine learning does well with synthesizing information from multiple sources and generating text that’s clear and readable. It can also help with editing and ideation. But if machine learning offers artificial intelligence, humans offer authentic intelligence. We do well with voice and tone (especially humor). We understand context and can write with empathy. While AI does well with creativity, we are better divergent thinkers.

The following is a blended approach that combines human thinking with artificial intelligence.

 

Initial Conceptual Understanding

I sometimes think about this as the pre-pre-writing. It’s the phase where students access their prior knowledge before asking questions and engaging in research. A blended approach might start with a K-W-L chart on a shared document. Students jot down everything they know about the topic. Next, they write down everything they want to know about the topic. At this point, instead of moving into research, students would use a chatbot (like ChatGPT) to ask questions and learn about the topic. The dynamic nature of a chatbot allows for clarifying questions that are tailored to a student’s understanding.

For this example, I’ll use a social studies essay asking, “How susceptible are we to having a World War III?” as part of a unit on World War I.

So a student might ask, “What were the causes of World War I?”

The answer ChatGPT gives is:

  1. Militarism: The arms race between the major powers in Europe, particularly Germany and Britain, had been escalating for many years before the war. The belief in the power of military might led to an increased emphasis on building up national armed forces.
  2. Alliances: A system of alliances had developed between the major powers of Europe, which meant that if one country went to war, its allies would also be drawn in. The main alliances were the Triple Entente (France, Russia, and Britain) and the Triple Alliance (Germany, Austria-Hungary, and Italy).
  3. Imperialism: The competition for colonies and territories around the world led to tensions between European powers, particularly Germany and Britain.
  4. Nationalism: The belief in national superiority and the desire for self-determination led to tensions and conflicts between ethnic and national groups, particularly in the Balkans.
  5. Assassination of Archduke Franz Ferdinand: The assassination of the heir to the Austro-Hungarian throne, Archduke Franz Ferdinand, by a Serbian nationalist in June 1914, provided the trigger for the outbreak of war.

The student might ask ChatGPT to simplify it with a prompt like, “Explain it to someone in the eighth grade.”

  1. Militarism: This means that the countries in Europe were always trying to build up their armies and make them as strong as possible. It was kind of like a competition to see who could have the biggest, strongest military.
  2. Alliances: Some countries in Europe had made agreements with each other that if one of them went to war, the others would help them out. It was like a team sport, where if one player gets into trouble, the others come to their aid.
  3. Imperialism: This means that some countries wanted to take over other parts of the world, like colonies in Africa or Asia. This caused tension because different countries wanted the same territories, and they were willing to fight for them.
  4. Nationalism: This is when people feel very strongly about their own country and think it’s better than any other. This led to conflict between different groups of people, especially in the Balkans, where there were many different nationalities and ethnic groups.
  5. Assassination of Archduke Franz Ferdinand: This was the event that triggered the start of the war. Archduke Franz Ferdinand was the heir to the throne of Austria-Hungary, and he was assassinated by a group of Serbian nationalists. This set off a chain of events that eventually led to war.

At this point, the student might ask follow-up questions, like “Why would an assassination lead to a whole world at war?” Or perhaps this person is confused by militarism and asks, “Can you give me examples of militarism?” Or maybe they focus on the U.S. and say, “Did Americans care about World War I?” It could be a random question like, “What did they call World War I before World War II happened?”
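
This kind of back-and-forth can also be scripted. Here’s a minimal sketch using the pre-1.0 openai Python library’s chat endpoint; the model name and API details are assumptions, and in a classroom students would simply use the chat interface.

```python
# A sketch of a clarifying back-and-forth with a chatbot. Assumes the
# pre-1.0 openai library; in class, students would use the chat UI.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

messages = [{"role": "user", "content": "What were the causes of World War I?"}]
reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
messages.append(reply.choices[0].message)

# Follow-up questions stay in context, so answers build on earlier ones.
messages.append({
    "role": "user",
    "content": "Why would an assassination lead to a whole world at war?",
})
reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(reply.choices[0].message["content"])
```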

At this point, students can add to their KWL chart. They might engage in some initial research and view videos or read articles. The process would be dynamic.

Research

In this phase, students would engage in online research by generating questions and finding answers. They might start with an initial set of questions but then amplify it by asking a chatbot to come up with research questions for them. During the online research, they would read articles and add to their research chart (with questions, answers, and citations) but also use the chatbot to clarify misunderstandings. Students might work in teams to compare their information and even go entirely human with a Socratic Seminar. In the end, students would create a hand-drawn sketchnote showing the connections between ideas. In other words, how did nationalism relate to imperialism?

Organizing Ideas and Outlining

In this phase, students would brainstorm ideas and organize them into a coherent outline. They might do a mind map or organize their ideas with sticky notes. At some point, students would create an initial outline for their essay. For the sake of transparency, they would screenshot the initial outline and then ask the chatbot to create an outline. Then, after comparing the outlines, they would modify their own outline. Students might even generate multiple outlines using the “Regenerate response” button on ChatGPT.
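
For anyone scripting this step, the chat API can return several candidate outlines in a single call, mirroring the “Regenerate response” button. This is a sketch under the same assumptions as before (pre-1.0 openai library; the n parameter requests multiple completions).

```python
# Generate three independent outlines to compare, in one API call.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    n=3,  # three separate completions of the same prompt
    messages=[{
        "role": "user",
        "content": "Outline a five-paragraph essay answering: "
                   "How susceptible are we to having a World War III?",
    }],
)
for i, choice in enumerate(response.choices, 1):
    print(f"--- Outline {i} ---\n{choice.message['content']}\n")
```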

Writing

In this phase, students could take their initial outline and ask the chatbot to generate the actual text. They would take an initial screenshot with a time stamp and then copy and paste the text into a shared document (such as a Google Doc). From here, students would modify the text to add their own voice. They would need to add additional sentences and perhaps even break up paragraphs. Using their research chart, students would add facts and citations that they then explain. The initial chatbot text would be black, but the human text would be in a color of the student’s choice.
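
The color-coding could even be checked automatically. Here’s a rough sketch that diffs an AI draft against the final draft and flags the human additions; the sample sentences are invented for illustration.

```python
# Diff the AI draft against the final draft to flag human additions.
import difflib

ai_draft = "Militarism fueled an arms race. Alliances pulled nations in."
final_draft = ("Militarism fueled an arms race between Germany and Britain. "
               "Alliances pulled nations in, like teammates rushing to a fight.")

matcher = difflib.SequenceMatcher(None, ai_draft.split(), final_draft.split())
for op, i1, i2, j1, j2 in matcher.get_opcodes():
    words = " ".join(final_draft.split()[j1:j2])
    if op in ("insert", "replace"):
        print(f"[HUMAN] {words}")  # would render in the student's chosen color
    elif op == "equal":
        print(f"[AI]    {words}")
```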

Editing and Revision

As students move toward revision, they could engage in a 20-minute peer feedback process.

A key aspect of editing and revision is asking, “How is this being received?” or “How do actual humans respond to this piece?” Most of the feedback could be the type that humans do well, such as voice, engagement, tone, and clarity. But students could also ask for specific feedback from the chatbot. It might be something like, “How can I make my argumentation better?” or “What are some changes I could make so the essay flows more smoothly?” Students might engage in a one-on-one writing conference with the teacher but then move back to the AI for additional targeted feedback.

Adding Multimedia

If students want to transform their essay, they could add a human touch by doing a video or audio essay. You can give students examples of video essays like those of the Nerdwriter YouTube channel. Here, they combine images, video, and text with their distinctly human voice. They might sketch a few slides to illustrate key points or even animate these in the style of Common Craft videos. Again, this approach blends together technology with the human touch. But students can use AI as a tool to generate images based on command prompts. They might also ask a chatbot to come up with ideas for images or videos to use alongside their voice.
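
If students generate images with AI, the same pre-1.0 openai library exposed an image endpoint; the parameters below are assumptions for illustration, and any image-generation tool would work just as well.

```python
# A sketch of generating an illustration from a text prompt.
# Assumes the pre-1.0 openai library's image endpoint.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

result = openai.Image.create(
    prompt="Hand-drawn sketch of tangled European alliances before World War I",
    n=1,
    size="512x512",
)
print(result["data"][0]["url"])  # link to the generated image
```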

 

What About Accountability?

Notice that this approach shifts accountability away from surveillance and punishment and toward trust and transparency. Students use AI-generated text, but then they screenshot it (which includes the time stamp) and copy and paste it into a Google Doc. They then modify the AI-generated text with a color-coded process that makes it easy to visualize how much of the text is human-generated. In using this process, I’ve found that students have rearranged paragraphs, added entirely new paragraphs, and amplified their writing far beyond the initial AI-generated text.

I mention this because I’ve already had several people reach out to me asking if I would test their AI-detection software. These programs promise to detect cheating by analyzing a piece of writing and determining whether or not it was human-generated. Within a minute, you receive a score describing how much of the work has been generated by AI. Think of it as Turnitin on steroids. Oddly enough, these programs are themselves a form of AI. The complex algorithms look at a series of factors to determine if something was AI-generated.

It starts by examining semantic coherence. Human thought tends to be more logical but also contains random digressions. In other words, we tend to take random rabbit trails. It also looks at tone and style. Human writers tend to have distinct styles and tones that are shaped by their experiences, personality, and background, whereas AI-generated writing may be more generic and lacking in personality. We also use more colloquial language, like the aforementioned rabbit trails. We tend to change verb tenses more often as well. Finally, these detection programs look at text complexity. Human language tends to be more complex and varied than AI-generated language, which may be more formulaic or repetitive. An AI detector may analyze factors such as sentence length, vocabulary, and syntax to determine if the writing is consistent with human language.
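
Two of the simpler signals, sentence-length variation and vocabulary variety, are easy to compute naively. Real detectors combine many richer features (including model-based measures like perplexity), so treat this only as a sketch of the idea.

```python
# Naive versions of two detector features: sentence-length variation
# ("burstiness") and vocabulary variety (type-token ratio).
import re
import statistics

def features(text):
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    lengths = [len(s.split()) for s in sentences]
    words = text.lower().split()
    return {
        # Human writing tends to vary sentence length more.
        "sentence_length_stdev": statistics.stdev(lengths) if len(lengths) > 1 else 0.0,
        # Ratio of unique words to total words.
        "type_token_ratio": len(set(words)) / len(words) if words else 0.0,
    }

sample = ("The war began in 1914. It lasted four brutal years, reshaping "
          "borders, toppling empires, and scarring a generation. Millions died.")
print(features(sample))
```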

I’ve tested out three of these programs with abysmal results. I used unpublished writing of my own, a series of student pieces, and a batch of responses generated by ChatGPT. I then used some pieces that contained a hybrid of both. In each case, I found that these algorithms struggled to flag the AI-generated text when it was part of a human-AI hybrid. But more alarming, there were many false positives. The detectors kept identifying unpublished human work as AI-generated.

This is a disturbing trend as we think about “catching cheaters” in an age of AI. We are essentially entrusting advanced algorithms to judge the academic integrity of our students. Imagine being a student who wrote something entirely from scratch only to find that you failed a class and faced academic probation because the algorithm sucks at determining what is human. This approach relies on surveillance, detection, and punishment. Even as the algorithms improve in detecting AI-generated text, I’m not sure this is the direction schools should emphasize.

Fortunately, there’s a more human approach to accountability. It’s the trust and transparency approach that my professor friend brought up when she first heard about ChatGPT. Instead of panicking and moving into a lockdown approach, she asked, “How can we have students use the tools and make their thinking visible?”

[Image: Rethinking accountability, with an arrow from surveillance and punishments to trust and transparency]

Cautions for Students Using AI

If you log into ChatGPT, the home screen makes it clear what AI does well and what it does poorly. I love the fact that the technology makes it clear, from the start, what some of its limitations might be. However, there are a few more limitations about ChatGPT that students should consider.

  • ChatGPT is often dated. Its neural network relies on information that stops at 2021, which means ChatGPT lacks understanding of emerging knowledge. For example, when I asked a prompt about Russia and Ukraine, the response lacked any information about the ongoing Russian invasion of Ukraine.
  • ChatGPT can be inaccurate. It will make things up to fill in the gaps. I was recently talking to someone who works at MIT and she described some of the inaccurate responses she’s gotten from ChatGPT. This could be due to misinformation in the vast data set it pulls from. But it might also be an unintended consequence of the inherent creativity in A.I. When a tool has the potential to generate new content, there is always the potential that the new content might contain misinformation.
  • ChatGPT may contain biased content. Like all machine learning models, ChatGPT may reflect the biases in its training data. This means that it may give responses that reflect societal biases, such as gender or racial biases, even if unintentionally. Back in 2016, Microsoft introduced an AI bot named Tay. Within hours, Tay began posting sexist and racist rants on Twitter. So, what happened? It turns out the machine learning began to learn what it means to be human based on interactions with people on Twitter. As trolls and bots spammed Tay with offensive content, the AI learned to be racist and sexist. While this is an extreme example, deep learning machines will always contain biases. There’s no such thing as a “neutral” AI because it pulls its data from the larger culture. Many AI systems used the Enron data files as initial language-training data. The emails, which were in the public domain, contained a more authentic form of speech. But it was also a form of speech that skewed conservative and male because Enron was a Texas-based energy company.
  • ChatGPT lacks contextual knowledge. While ChatGPT can analyze the words in a given sentence or paragraph, it may not always understand the context in which those words are used. This can lead to responses that are technically correct but don’t make sense in the larger conversation. If a student writes a personal narrative, they know the context better than any AI could possibly understand. When writing about local issues for a school newspaper or blog, the AI won’t have the local knowledge that a student journalism team demonstrates. This is why it’s critical that students learn how to contextualize knowledge.
  • ChatGPT requires an understanding of command prompts. This sounds simple, but it’s easy to miss. ChatGPT isn’t a mind reader, so if students use it to answer questions, they need to become really good at designing their command prompts (see the sketch after this list).
  • ChatGPT lacks empathy. ChatGPT may not be able to understand or recognize the emotional context of a conversation. This can lead to inappropriate or insensitive responses. So, it might give insensitive feedback when a student uses it for the revision process. It might also lack awareness and empathy when students ask questions and engage in research.
  • ChatGPT lacks common sense. I’m not sure how else to describe this, but some of the answers I’ve gotten from ChatGPT seem silly and nonsensical. ChatGPT’s responses are based solely on the patterns and associations it has learned from text data. It may not always have the common sense or practical knowledge to understand the context of a conversation or provide accurate responses.
  • ChatGPT might not be eco-friendly. Deep learning requires an immense amount of processing power. As AI becomes more pervasive, there’s the potential it could accelerate climate change. Wired Magazine described it this way, “deep learning inherently requires huge swathes of data, and though innovations in chips mean we can do that faster and more efficiently than ever, there’s no question that AI research churns through energy.” On the other hand, certain technologists have looked toward AI as a potential solution for making power grids more efficient and reducing the amount of energy we collectively consume.
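
On the command-prompt point above, here’s a small sketch contrasting a vague prompt with a structured one. The wording is only an example; the point is that specifying a role, an audience, a format, and a follow-up tends to produce far more usable responses.

```python
# Two ways to ask for the same help; the structured prompt usually wins.
vague_prompt = "Tell me about World War I."

structured_prompt = (
    "Act as a history tutor for an eighth grader. "
    "Explain the causes of World War I in five short bullet points, "
    "then ask me one question to check my understanding."
)

print(vague_prompt)
print(structured_prompt)
```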

We can’t predict what writing will look like in a world dominated by Artificial Intelligence. Deeper learning machines, such as ChatGPT, are still in their earliest phases of development. Machine learning will grow more advanced and complex in the upcoming decades. For now, many AI tools can’t be used in a K-12 setting. ChatGPT, for example, requires users to be 18 or older to sign up. But we do know that AI is growing rapidly and many of the tools currently used outside of school will have an educational equivalent that is both CIPA and COPPA compliant.

 

Get the FREE eBook!

With the arrival of ChatGPT, it feels like the AI revolution is finally here. But what does that mean, exactly? In this FREE eBook, I explain the basics of AI and explore how schools might react to it. I share how AI is transforming creativity, differentiation, personalized learning, and assessment. I also provide practical ideas for how you can take a human-centered approach to artificial intelligence. This eBook is highly visual. I know, shocking, right? I put a ton of my sketches in it! But my hope is you find this book to be practical and quick to read. Subscribe to my newsletter and get A Beginner’s Guide to Artificial Intelligence in Education. You can also check out other articles, videos, and podcasts in my AI for Education Hub.


John Spencer

My goal is simple. I want to make something each day. Sometimes I make things. Sometimes I make a difference. On a good day, I get to do both.

22 Comments

  • Nathalie says:

    Hi John, thank you for your article. I’m very fascinated by AI and try to show the different issues to my students. Reading your article I was thinking that maybe in writing at school we should go for interviews, as AI can help us but we have to develop questions the way we can use the answers, and in the interaction with the person interviewed we will have to use our sense of humour and have to be flexible in our reactions and maybe new questions. Or experience reports. But looking at my kids I adore their special ways of formulating and wouldn’t want to stop this development only because there is an AI. Writing fiction essays they’re expressing themselves. And I should have used AI to correct this text before posting as English isn’t my mother language but I stand by it and maybe faults will be the future charm of texts…

    • John Spencer says:

      I think a blended approach means there will be times they use AI and other times when they use just paper and pencil or a Google Doc. Just because AI *can* automate elements of writing doesn’t mean it always *should* do so.

  • Judy Jester says:

    I think it has great potential for use with students, but what about COPPA when its terms of use stipulate that you have to be 18 to create an account?

    • John Spencer says:

      At the close of the blog post, I wrote, “ChatGPT, for example, requires users to be 18 or older to sign up. But we do know that AI is growing rapidly and many of the tools currently used outside of school will have an educational equivalent that is both CIPA and COPPA compliant.” For now, it can’t be used formally in a school setting without parental permission. I think the parent permission aspect would be the biggest feature. At the same time, we also know that students at all levels will use it, so we need to be aware of that. My guess is that a version of ChatGPT will emerge in the near future with specific K12 guardrails.

  • Will Mayall says:

    ChatGPT is only a glimpse into the future of AI. Instead of focusing on its current abilities, it is important to consider its impact on education and society as a whole. The transformative effects it will have are undeniable.

    The Q&A format you showed is a fundamental way of learning, which is currently lacking in traditional education. Imagine having a conversational tool that could both ask and answer questions and guide learning, customized to your individual progress. This would allow teachers to spend more time engaging with students, rather than on the technical aspects of the current educational system.

    While AI is in its infancy, it is advancing rapidly. The potential for this technology to revolutionize education is exciting.

  • Vinay Mehta says:

    “Intelligence plus character – that is the goal of true education,” said Martin Luther King. I believe it is time for teachers and coaches at schools or educational institutes to put more emphasis on character-building through various activities like essay writing, rather than plainly scoring the essays based on their content. AI technology can help us put the right perspective on the education system – to make better humans!

  • HM says:

    People always point to the panic of the past to somehow undermine the concerns of the present, but that’s the wrong approach. Instead of pointing to Socrates’ concerns and the calculator as examples of panic, why not ask if those concerns had merit? It’s all a cost-benefit analysis. Writing probably did deteriorate our memory (my father, who never learned to write in his own language or English, has a MUCH better memory than me or my siblings) but obviously, it has many more benefits than disadvantages. Calculators did make students less adept at math. It is well known today that U.S. students are woefully inept and cannot hold a candle to the students of 50 years ago. Writing, reading, and math skills are atrocious, and grade inflation is a fact, not an opinion. Try to spin AI however you want, but it’s just another step in outsourcing the thinking that students should be doing. As usual, though, academia won’t take steps to do the right thing (find ways around AI). Instead, they’ll throw their hands up in the air and capitulate/contribute to the rapidly deteriorating standards.

    • John Spencer says:

      I think it’s more complicated. While I think Socrates had a great point, I wouldn’t say that students are woefully inept in math and ELA. There’s a great article on this topic relating to what “A Nation at Risk” got wrong: https://www.npr.org/sections/ed/2018/04/29/604986823/what-a-nation-at-risk-got-wrong-and-right-about-u-s-schools

      I can tell you that I see more students taking, passing, and deeply understanding statistics and calculus than ever before. I contrast this to my parents’ generation where they did Trigonometry as seniors. Meanwhile, there’s no evidence, causal or correlational, that calculators made students worse at math. Perhaps worse at computational fluency but short mental math exercises have been proven to help in that area far more than handwritten long division. Math is about more than mere computation.

      I’m not trying to spin AI as good or bad. If you look at my recent article on the topic, I specifically mention that there are always tradeoffs, and Postman was right in his assessment that we can’t predict how technology will change our world.

      As for Socrates, I’m not a fan of him. He advocated for authoritarian government despite benefiting from the freedoms of democracy. He spoke about the need for hard work but then lived as a member of the privileged class with slaves. Plus, he was a pedophile. I can respect the Socratic Method (though it wasn’t invented by him) and I enjoy his dialogues. But I think he was a hypocrite and I have no respect for him.

  • Richard says:

    This is sensational, John. If it’s fine with you I’ll feature this article in my LinkedIn newsletter, including the article link and link to the (exceptional) video on YT. Good promo for you.
    Cheers.
