
The explosion of generative AI has created significant challenges and sparked new opportunities for our students. So, how do we decide when students should and should not use AI? In this article and podcast, I explore seven key areas you might consider as you craft the policies and design the systems within your school.

Listen to the Podcast

If you enjoy this blog but you’d like to listen to it on the go, just click on the audio below or subscribe via iTunes/Apple Podcasts (ideal for iOS users) or Spotify.

 

Avoiding the Two Dead Ends

We are a solid year and a half into this big cultural moment where we are recognizing the power of Artificial Intelligence. We see it in the prevalence of chatbots like ChatGPT and Bard. We notice it with the AI image generators. We see it in schools with tools like Curipod and Magic School AI. And in nearly every app we use, we see the sparkle symbol that’s become synonymous with AI.

On a side note, that symbol speaks volumes. Right now, AI feels like magic. It can do amazing things we couldn’t have imagined. But we also can’t predict how it will change our world. For some, it feels like a revolutionary tool that will make life better. For others, the magic feels like dark magic. But regardless of our feelings and thoughts, there is a collective agreement that AI is changing our world.

The bad news is that the rules have changed. The exciting news is that our students will get to rewrite the rules. In many cases, schools have slipped into two opposing dead ends: the Techno-Futurism approach and the Lock It and Block It approach. For what it’s worth, I’ve fallen into both traps as well (like the time I tried to “go paperless”).




 

The answer is neither the Lock It and Block It approach nor the Techno-Futurism approach. Instead, it’s a blended approach that focuses on the overlap of AI and the human voice.

The best creators are going to know how to use A.I. in a way that still allows them to retain their humanity. This feels like a daunting task, but I’m inspired by a phenomenon in competitive chess. An A.I. will nearly always beat a human. But when humans and machines compete in teams, the fully automated A.I. teams rarely win. Neither do the all-human teams. The winning teams are nearly always a combination of A.I. and human. If that’s true of an isolated system like chess, how much more true will it be in a complicated world where the systems are constantly evolving?

Our students will need to use AI wisely. Consider a programmer. She might outsource the easier code to A.I. and focus on the most challenging code herself. She might double-check her work with A.I. She might ask the A.I. for help with certain questions or ideas. She might even start with the A.I. code and then edit it to make it more efficient or to take it in a new direction.
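To make that last workflow concrete, here’s a minimal, hypothetical sketch in Python. The “AI draft” and the function names are invented for illustration; the point is simply the edit-for-efficiency move described above.

```python
# Hypothetical "AI draft": correct, but it compares every pair of items,
# so it runs in O(n^2) time.
def has_duplicates_ai_draft(items):
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


# Human revision: same behavior, but tracking seen items in a set
# brings the work down to O(n) time.
def has_duplicates_revised(items):
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False


print(has_duplicates_ai_draft([1, 2, 3, 2]))    # True
print(has_duplicates_revised(["a", "b", "c"]))  # False
```

The programmer keeps the AI’s structure where it helps and rewrites the part she can clearly improve. That’s the blended approach in miniature.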

So, what does that mean for K-12 institutions? How do we define Acceptable Use? I don’t have any easy, clear-cut answers. But I do have seven things we might want to consider.

 

1. Start with the Learning Targets

Over the last year and a half, I’ve had the honor of delivering keynotes and conducting workshops on AI and education. One of the questions I get asked most often is, “When is it okay for students to use AI?”

Often, the goal here is to nail down a specific school-wide policy that every teacher can adhere to. Some schools have even developed a table or chart with “acceptable help” and “unacceptable help” for AI usage. While I love these charts for the clarity and intentionality they offer, I wonder if it might work best to use that type of chart on individual assignments or projects rather than as a singular school-wide policy.

In the long run, we want students to learn how to use AI ethically and wisely. But this requires students to think critically about the context of the task at hand. If you’re teaching a coding class, you might want to be strict about students using generative AI to create any kind of code. You might want students to learn how to code by hand first and then, after mastering the language, use AI-generated code as a time-saving device.

By contrast, if you’re teaching a health class where a student develops an app, you might not care if they use generative AI to help write the code. Instead, your focus is on helping students design a health campaign based on healthy habits. You might not have time to teach students to code by hand. You might not care about coding by hand. The app is merely a way for students to demonstrate their understanding of a health standard.

If you’re teaching an art class, you might not want AI-generated images, but you might embrace them in a history class where students make infographics to demonstrate their understanding of macroeconomic principles.

It might feel like cheating for a student in a film class to use AI for video editing, but AI-generated jump cuts might save loads of time in a science class where students demonstrate their learning in a video. In a film class, it’s critical for students to learn how to edit by hand in order to tell a story. In science, AI-generated jump cuts allow students to create videos quickly so they can focus on the science content.

This isn’t new. Technology has always helped us save time and money in doing creative work.

Technology makes the creative process faster and cheaper.

As an eighth grader, when I made a slide presentation, I had to find all the pictures in books and magazines, take photos of those pictures with a camera, and take the film to Thrifty’s Drugstore to get my slides for the carousel. I don’t miss that. Okay, I do miss the cylindrical ice cream scoops from Thrifty’s. It was a big thing in California. But I now make slides using paper, pens, an Apple Pencil, and Photoshop. It’s way easier and faster.

The danger with automation is that the AI can do so much of the work that students miss out on the learning. This is why it’s still vital for students to take notes by hand or do prototyping with cardboard and duct tape.

For this reason, the first AI question I ask is, “What is the learning outcome and how does AI fit within that?” If my goal is for students to learn how to write original code in a programming class, I might say, “We will use AI to give feedback on the code” or “We will use AI to create exemplars.”

If I’m teaching a history lesson, I might want students to use AI as a question and answer tool to build background knowledge. But if I’m teaching a History Mystery lesson where students have to make predictions and test their answers, I might go entirely tech-free and embrace confusion and productive struggle.

So, what does this mean for schools crafting universal policies for all students? I’m on an AI committee at my university. We are rewriting our university policy, which we will include in our syllabi. I don’t have any easy answers for crafting an airtight policy that also allows for contextual flexibility. However, I think there are a few things we can often agree on:

  • Give educators leeway in how their students use AI
  • Research how AI is used in different disciplines, domains, and industries and allow students to learn how to use it wisely
  • Make sure educators give clear expectations for how students can use AI within a given assignment
  • Require students to share when and how they have used AI

The challenge is in creating a policy that is universal for an entire school but allows for flexibility given the context and learning targets of specific lessons. A simple statement might be, “Generative AI may only be used on graded assignments when the teacher has granted explicit permission.”

 

2. Be Cognizant of the Policies and Regulations

After thinking through the learning targets and the use of AI, we need to consider the policies that govern any kind of technology integration. Here in the U.S., we need to consider a few key policies:

  1. Family Educational Rights and Privacy Act (FERPA): FERPA protects the privacy of student education records. It grants parents rights to their children’s education records, which transfer to the student, or “eligible student,” at age 18 or upon entering a postsecondary institution at any age. When using AI tools that process student data, we, as educators, need to ensure these tools comply with FERPA. This has big implications for using AI for things like creating IEPs, giving feedback on student work, or writing a letter of recommendation.
  2. Children’s Online Privacy Protection Act (COPPA): COPPA imposes requirements on operators of websites or online services directed to children under 13 years of age, and on operators of other websites or online services that have actual knowledge that they are collecting personal information online from a child under 13 years of age. We need to ensure that any AI tool used in class is COPPA compliant, especially when these tools collect data from students. It’s important that we pay close attention to the Terms of Service and the age limits of different AI apps.
  3. Children’s Internet Protection Act (CIPA): CIPA requires K-12 schools and libraries in the U.S. to use internet filters and implement policies to protect children from harmful online content as a condition of receiving federal funding. If AI tools are used to access internet resources or incorporate online research, teachers need to ensure that these tools do not bypass the school’s internet filters. AI applications should be vetted for their ability to filter and block access to inappropriate content.
  4. District Policies and Acceptable Use Policies (AUP): School districts often have their own set of policies regarding technology use, including acceptable use policies (AUPs) that outline what is considered appropriate use of school technology and internet access. Teachers should review their district’s AUP to understand limitations and guidelines for AI tool use. This review helps ensure that the integration of AI into teaching and learning aligns with district standards for ethical and responsible technology use.
  5. Americans with Disabilities Act (ADA) Compliance: Legislation such as the Americans with Disabilities Act (ADA) and Section 508 of the Rehabilitation Act requires that educational materials and technologies are accessible to all students, including those with disabilities. When selecting AI tools, teachers need to ensure these technologies are accessible to students with disabilities, supporting a range of learning styles and needs. AI tools should not create barriers to learning but should enhance accessibility and inclusivity.

The policy level is the baseline level of compliance. But beyond policies like FERPA and COPPA, there are broader concerns about data security and privacy with the use of technology in education. We need to be cognizant of how these AI tools use and store data.

We need students to examine the bias, fairness, and impact of AI-generated information (which we’ll address in point number six). As educators, we need to be aware of the biases in AI tools and strive to use AI in ways that promote fairness and equity. But we can invite students into the conversation as well.

 

3. Bring Students into the Conversation

Ben Farrell is the Assistant Head of School / Director of the Upper School at the New England Innovation Academy. In early December of 2022, when students began using ChatGPT, he didn’t create a schoolwide ban. He didn’t accuse students of cheating. Instead, he asked questions and invited students into a dialogue about what it might mean to use A.I. in a way that’s ethical and responsible.

As Farrell describes it in a podcast interview from last year, “It’s crucial to empower them to have open discussions, and I feel strongly about that. In the conversation we had, I think a lot of wisdom emerged. We observed a range of opinions, like bumpers in a bowling alley. One student remarked that this could be the ‘death of original thought,’ expressing concern for the impact on creativity. On the other hand, some students wondered if we could use this technology for everything, questioning the need for traditional written papers. So, there’s a spectrum of viewpoints to consider.”

From there, they worked together to define an actual policy for generative A.I. In the upcoming months, they plan to have more conversations and revise their policy as the technology evolves and the context changes.

While so many schools rushed to ban ChatGPT, Farrell asked students, “What does this mean for the future of your work? How does this impact your creative process?” Then, he listened.

As he describes it, “Students will inevitably discuss the topic elsewhere. If we can’t facilitate such discussions in a school setting, I believe we’re missing out on a valuable opportunity. Of course, different schools and school systems have their unique perspectives on this matter. However, if we can encourage and participate in these conversations, that’s what’s truly exciting and beneficial for everyone involved.”

Note that this process is messy and chaotic. It’s easier to try to craft a clear-cut policy for A.I. within a school. But a dialogue is more human – and ultimately more practical. If we can invite students into a conversation and listen with open minds, we are more likely to craft a solution that fits the needs of our students. The result is an adaptable policy that schools can modify as they learn more about the effects of A.I. on learning.

One of the best ways to engage in these conversations is through the use of a Socratic Seminar.




A Socratic Seminar doesn’t have to begin with a non-fiction anchor text. Fiction can be a great starting place for these kinds of conversations. It would be fascinating to do a Socratic Seminar using “The Great Automatic Grammatizator” by Roald Dahl as an anchor text and see what themes emerge about automation, art, AI, and commerce.

There is one part in the short story where the author chose the machine, not for the money, but because the machine created something better and faster in her own style. To be honest, I have felt that with illustrations. I am not capable of the quality or the range of artistic styles of the AI. As someone who loves to draw, this is hard for me.

 

4. Consider Human Development and Age Appropriateness

When I was earning my bachelor’s degree, I read Neil Postman’s book The Disappearance of Childhood. Postman argued that the rise of electronic media, especially television, had eroded the boundaries between childhood and adulthood, as children became increasingly exposed to the same content as adults. To be clear, this wasn’t ideologically conservative or liberal. It was about being developmentally appropriate.

While Postman didn’t live to see the era of smartphones and algorithms, his critique is still relevant today. What does it mean for a 12-year-old to use a device and a series of apps aimed at adults? In what ways have the distinctions between childhood and adulthood been eroded by our technology? And what does that mean as we shift toward more advanced forms of A.I.?

Most AI tools have been designed by grown-ups for grown-ups. So, as we use AI tools, we need to ask, “What does a child this age need, and has this tool been developed in a developmentally appropriate way?”

Consider the role of AI tutors. We need to know exactly how the tool adapts to the developmental needs of students. If you hired a human tutor, you would likely ask what experience that person had working with children of a certain age range. The same is true of A.I. tutors. As educators, we need to ask how the machine learning algorithms have been trained to engage with children of various age levels. We need to know what safeguards have been put in place to make sure that the content is age-appropriate.

In this respect, I’ve been encouraged by Khan Academy. In interviewing Salman Khan on my podcast, I was struck by the intentionality they had with issues of bias, human development, and aligning the A.I. to learning theory as they developed Khanmigo.

 

5. Focus on Trust and Transparency

A blended approach shifts accountability away from surveillance and punishments and toward trust and transparency. The focus here is on students showing exactly how they are using AI in their creative work.

For example, students might use AI-generated text, but it is timestamped in a shared document (like a Google Doc). They then modify the A.I.-generated text with a color-coded process that makes it easy to visualize how much of the text is human-generated. In using this process, I’ve found that students have rearranged paragraphs, added entirely new paragraphs, and amplified their writing far beyond the initial A.I.-generated text.

I contrast this with AI detection software, which involves an algorithm testing a text to see if it is AI-generated. In other words, it uses AI to catch AI – not unlike Blade Runner.

Unfortunately, these tools are never 100% accurate. Let’s take a detection tool that is 94% accurate. Seems pretty good, right? Now let’s assume a teacher has a typical secondary load of 180 students and that, throughout a semester, each student submits five essays. That’s 900 essays, and a 6% error rate on 900 essays means up to 54 students might be either falsely accused of cheating or getting away with cheating.
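For anyone who wants to see that arithmetic spelled out, here’s a quick back-of-the-envelope calculation in Python. The 180-student load is an assumption (a common full load for a secondary teacher); the five essays and the 94% accuracy come from the scenario above.

```python
# Back-of-the-envelope math for the AI-detection scenario above.
students = 180             # assumption: a common secondary teaching load
essays_per_student = 5     # essays per student over the semester
detector_accuracy = 0.94   # the detector is right 94% of the time

total_essays = students * essays_per_student   # 900 essays in all
error_rate = 1 - detector_accuracy             # 6% of judgments are wrong
expected_errors = total_essays * error_rate    # 900 * 0.06 = 54

print(f"Essays likely misjudged: {expected_errors:.0f}")  # -> 54
```

And that’s the expected number of errors, not a ceiling; with a different class load, the count scales accordingly.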

When we focus accountability on “catching cheaters,” we entrust advanced algorithms to judge the academic integrity of our students. Imagine being a student who wrote something entirely from scratch only to find that you failed a class and faced academic probation because a robot sucks at determining what is human. Even as the algorithms improve in detecting AI-generated text, this approach leans heavily into the Lock It and Block It approach.

When we start with trust and transparency, we ask students to show their work and we treat the mistakes they make as learning opportunities. Along the way, they discover how to use AI ethically and appropriately.

 

6. Model the Process

Over the last year, I’ve had the opportunity to work with educators who are incorporating generative A.I. into their classroom practice. Whether it’s with the pre-service teachers at my university or with current teachers in the workshops and professional development I lead, I have been inspired by the creative ways teachers are using generative A.I. for student learning. I’m struck by the intentionality and creativity inherent in the process. But I’ve also noticed a surprising trend: students don’t always know how to craft relevant prompts for the A.I. chatbots.

We can model acceptable use of AI by incorporating something like the FACTS Prompt Engineering Cycle:




We can also model acceptable use of AI in our writing assignments or by integrating AI into project-based learning.

 

7. Emphasize the Human Element

Earlier I mentioned my frustration with AI-generated imagery. It feels disheartening to think that I will never have the artistic range of an AI.

And yet . . . I am careful to use the term “AI-generated images” rather than “AI art.” Art is about empathy, curiosity, and our own unique voice. And this uniqueness is often the result of our imperfections. In other words, imperfections are what make art great. It’s why live shows can sound better than studio albums and live drummers can sound better than drum machines. It’s why the quirky illustration style of Peter Reynolds is so iconic. And it’s part of why a homemade pie with a few imperfect crimped edges beats a frozen pie from a factory. In the end, it’s the defiant choice of an artist, fully aware of all imperfections, choosing to push forward, that I find so compelling. And it is within those very imperfections that we find a work authentic.

In the future, our students will need to become really good at what AI can’t do (empathy, contextual understanding, curiosity) and really different at what AI can do (divergent thinking, individual voice).

I’ve used the metaphor of ice cream to describe how students will need to take the vanilla and create their own flavor.




Recently, I worked with a group of educators from the Oregon Writing Project. Together, we rewrote common writing prompts to emphasize empathy, contextual understanding, curiosity, divergent thinking, and voice (or personal perspectives). Our goal wasn’t to create an AI-proof prompt but to design prompts that required the human element.

But it goes deeper than this. Over the next decade, we will need to engage in hard questions around what aspects of learning should be timeless and what areas should evolve. We’ll need to focus on the overlap of “next practices” and “best practices.” We will need to help students embrace a blended approach that emphasizes the human element while using the technology wisely.

A great starting place is exploring your Graduate Profile and asking, “What do we want students to know when they leave our institution?”

We need to ask the question, “Who do we want students to be?” before asking, “How should our students use AI?” It’s at this moment that we can determine how AI fits into the picture. To be clear, no one has the one right answer for acceptable use. Our approaches will vary as the time, context, and tools change. It’s never going to be perfect, but that’s okay. If we keep the human element at the forefront, we will be well on our way to using AI wisely.

Get the FREE eBook!

With the arrival of ChatGPT, it feels like the AI revolution is finally here. But what does that mean, exactly? In this FREE eBook, I explain the basics of AI and explore how schools might react to it. I share how AI is transforming creativity, differentiation, personalized learning, and assessment. I also provide practical ideas for how you can take a human-centered approach to artificial intelligence. This eBook is highly visual. I know, shocking, right? I put a ton of my sketches in it! But my hope is you find this book to be practical and quick to read. Subscribe to my newsletter and get A Beginner’s Guide to Artificial Intelligence in Education. You can also check out other articles, videos, and podcasts in my AI for Education Hub.

Fill out the form below to access the FREE eBook:

 

John Spencer

My goal is simple. I want to make something each day. Sometimes I make things. Sometimes I make a difference. On a good day, I get to do both.
