
The Artificial Intelligence revolution is here. That might sound like hyperbole. After all, the world looks the same. The revolution didn’t arrive with Skynet and robots or with Blade Runner cyborgs. It’s been subtle. Auto-correct here. A Grammarly suggestion there. An auto-fill option in Gmail and in Google searches. A small chat window in the bottom right-hand corner.

And most recently with ChatGPT, an artificial intelligence chatbot that answers questions. If you’re imagining Siri or Alexa or even Clippy (Rest in Peace, Clippy), it’s so much more than that. ChatGPT can write original essays, compose song lyrics, create stories, and generate original lines of computer code. AI can make up a recipe for apple cobbler and then a workout plan to help you burn off the calories from that cobbler.

For a deeper dive into this reality, check out the sketch video I created:




 

But it’s not just a single chatbot. This AI Revolution is happening all around us. However, it often feels invisible. Artificial intelligence fuels our vehicles’ navigation systems and the smart devices in our homes. It’s personalizing how we experience social media and music apps on our smartphones, how we interact with online stores, and how we play video games. But it’s also quietly shaping our financial institutions, the supply chains in our restaurants, our city planning, and the ways our products are made.

How Will Schools Respond?

Coming out of the winter break, many teachers and school leaders are wrestling with this new AI Revolution. I’ve already had several messages about upcoming keynotes specifically wondering if I would address the question of how AI might transform education. The short answer is, “Yes, I’ll lead the discussion. I’ll share some ideas. But, no, I can’t predict how it will disrupt education.” I’m not sure any of us can.

As I do walk-throughs and coaching, I’m noticing that some educators are excited about the possibilities. Others are skeptical. Still others are worried. But collectively, we are all wrestling with big questions.

  • What is the difference between cheating and using AI?
  • What happens to student voice?
  • What does this mean for information literacy?
  • What does the future of education look like in a world of AI?

There are no easy answers to these questions. But here are three different approaches schools might take. I’ve written about it in-depth here.

Approach #1: Techno Futurism

The first trap is Techno Futurism. It’s an uncritical embrace of AI to transform education. We’ve already seen bold predictions about AI destroying the essay or even replacing teachers.  This approach seems to have an almost giddy delight in the disruption that AI might cause.

Techno futurism asks, “What learning tasks can Artificial Intelligence replace?” But I think this misses the point. Just because technology can replace a task doesn’t mean it should or even will. I wrote an article last month describing why we might want students to still do some of the tasks that AI might replace.

When the pandemic hit, I asked my students to do a show and tell activity about a healthy way they were handling the social isolation. They talked of gardening, cooking, painting pictures, journaling, sewing clothes, playing music, computer coding, and doing word puzzles. Every one of these things could have been automated by machines.

And yet . . .  these activities were lifelines during a global pandemic.

On a more academic level, we know that certain low-tech strategies are still valuable for deeper learning. An AI can produce a nearly perfect essay. But writing is still necessary as a tool for making meaning. Often, we learn through writing.

A hand-drawn sketch note helps create the synaptic connections needed to move information from short-term to long-term memory. In other words, we need lo-fi tools and hands-on learning in a high-tech world.

Techno-Futurism’s uncritical embrace of AI fails to grapple with the hard questions of the harm AI might cause to our social, civic, and human systems. It also tends to ignore some of the equity issues around access to AI as well as the policy issues around the platforms. For example, ChatGPT has a rule that “You must be 18 years or older and able to form a binding contract with OpenAI to use the Services. If you use the Services on behalf of another person or entity, you must have the authority to accept the Terms on their behalf.” Here in the U.S., we need to explore how ChatGPT connects to COPPA and CIPA.

It’s easy, then, to fall into the second trap of Lock It and Block It.

Approach #2: Lock It and Block It

The second trap is the Lock It and Block It approach. This happens every time a school says, “We’ll just block that site from the network.” I’ve seen this approach happen in schools where they block sites like YouTube, online gaming, and social media. In terms of ChatGPT, I’m already seeing people say, “I’ll just make students handwrite all their essays in class.”

But actually, AI has some great potential uses.

In building background knowledge, students can ask specific questions and get personalized answers. They can ask follow-up questions and engage in a dialogue. Look at this example of p-values. It starts with a general question, then asks for something more simplified. It then gets into questions about real-world applications.

In writing, there might be times when students start out with AI and then modify and rework it to have their own unique voice and style.  They can use AI to help create an outline or revise their work to be more grammatically correct.

AI has the power to help synthesize certain information, clarify misconceptions, and provide tutorials for skill practice. It can help us get unstuck during writer’s block. In other words, it’s a tool. The Lock It and Block It trap leads to heavy-handed surveillance and reactive discipline practices. Meanwhile, students fail to learn how to use a powerful tool wisely.

But there is a third way. It’s what I call the Vintage Innovation approach.

Approach #3: Vintage Innovation

Vintage innovation is a shift away from the flashy and new and over to different and better. It’s the counterintuitive idea that the best way to prepare students for the future is by empowering them in the present. Our students need to develop the soft skills that machines lack, like collaboration and empathy. In a world of constant change, our students need to be divergent thinkers. In an era of automation, our students might just need lo-fi tools. In a world of artificial intelligence, our students need to think philosophically. In a sea of instant information, our students need to slow down to be critical thinkers and curators. Vintage innovation is not a nostalgic call for the good old days. Nor is it a reactionary rejection of all things tech-related. Instead, it’s a both/and mindset. It’s the overlap of the “tried and true” and the “never tried.” It’s a mash-up of cutting-edge tech and old-school tools.

It’s the overlap of timeless skills in new contexts. Vintage innovation is what happens when engineers use origami to design new spacecraft and robotics, or study nature for innovative designs. As a teacher, it’s what happens when you do sketchnote videos mashing up hand-drawn sketches with digital tools or blend Socratic seminars with podcasting. It’s that service learning project that combines deeply human, hands-on learning with photojournalism. It’s the old idea of commonplace books used to explore new, relevant research. It’s a design project that includes duct tape and cardboard and sticky notes and markers but also digital modeling.

With vintage innovation, we can avoid both the techno-futurism trap and the lock it and block it trap by asking, “What does it mean for us to use AI wisely?” and “How do we think critically about AI as a tool?”

If you are a teacher, you are an innovator. You’re an experimenter trying new things. You’re the architect designing new learning opportunities. Apps change. Gadgets break. Technology grows obsolete. But one thing remains. Teachers always change the world. Artificial intelligence isn’t going to change that.

For a deeper dive into vintage innovation, check out the book I wrote or the online course I developed on this topic.

Developing Human Skills in an AI World

This vintage innovation approach focuses on the essential human skills that students will need in a world of AI. In Human Work in an Age of Smart Machines, Jamie Merisotis argues that we should begin with the question, “What is it that we, as humans, can do that machines can’t do?” It might mean developing empathy, thinking divergently, cultivating curiosity, finding your own unique lens, or coming up with innovative solutions.


If you play around with ChatGPT, you’ll notice that it tends to struggle with context, with empathy, with humor, and with divergent thinking. On the other hand, it is great at synthesizing and organizing. It reminds me of Data from Star Trek: The Next Generation. The android is precisely who you want managing information and sharing updates. He’s amazing at providing critical details. But he’s not the one you want leading the entire crew. That requires Picard. In other words, Data should be informing but not driving decisions.

Be data-informed, not data-driven.

If I lost you on the TNG reference, I’m sorry. The point is that we need to cultivate essential human skills in an age of smart machines. Many schools are already developing Graduate Profiles. These are usually in the form of a document that outlines the knowledge, skills, and competencies that students should possess upon completing their K-12 education. They often serve as a blueprint for curriculum design, assessment, and instruction. In other words, these graduate profiles are looking at the human skills that students will need in an age of AI and machine learning.

However, we don’t develop these skills by simply learning about them. They require practice, and project-based learning is a great way for students to develop these essential skills.

PBL Can Help Students Develop These Human Skills

The goal of project-based learning is to engage students in meaningful and authentic learning experiences that are relevant to their lives and interests. This sense of relevance helps them move away from the temptation to cheat with a chatbot. But the design also centers on human skills that AI can’t do. PBL also helps students develop critical social-emotional learning skills. It’s an idea Mike Kaechele and Matinga Ragatz explored in-depth in Pulse of PBL.

Project-based learning is a teaching method that focuses on active, experiential learning through the completion of real-world projects. Students are given a problem or challenge to solve, and they work collaboratively to design, create, and present their solution. When coupled with design thinking, they develop deeper empathy as well. Check out the video below for more information:




Think of it this way. PBL is the pedagogical framework, where students are learning through the project. Design thinking is the creative framework, where students move through a deliberate and iterative process that focuses on empathy and problem-solving. When you combine design thinking and project-based learning, students engage in authentic projects that lead to deeper learning and ultimately help students develop the human skills they’ll need in an unpredictable world of AI.

How Would You Use Artificial Intelligence in a Project?

If you’re not familiar with PBL, check out the video below:




 

So, as we think about student projects, we will likely anchor student work on human skills like collaboration, creativity, empathy, and divergent thinking. However, we want to avoid the techno-futurism trap of outsourcing too much of the thinking to the AI.

Notice that the previously mentioned question “What can we do that machines can’t do?” is similar to the techno-futurist question of “What learning tasks can machines replace?” But there’s a subtle difference. The focus here is on deeply human skills first, and then the recognition that we still need to engage in some learning tasks that can be automated but shouldn’t always be.

In other words, students will still come up with questions, create content, generate ideas, sketch out plans, and give feedback even if it’s something that an AI could do. These things are both fun and necessary. At the same time, we can leverage the power of AI in projects. Here’s a sample of ideas:

  • Generating additional questions: Toward the beginning of a project, a student might start with a list of research questions they have. They can then go to AI to get a list of additional questions. Or they could use AI to refine their questions to be more specific. If they’re asking interview questions, they could ask the AI to refine their questions to be more open-ended or convey more critical thinking. Notice how they’re not outsourcing the inquiry but they are using AI as a tool.
  • Clarifying misconceptions during research: Sometimes students struggle with conceptual understanding. AI can function in a similar way to Wikipedia, in that it’s not the best source but it is a great starting place when students are trying to develop a schema.
  • Restating research in simpler terms: If students are doing text-based research, they might see a website with great research. They’ve looked at the reliability of the source and explored the bias. Unfortunately, the source contains technical language and dense grammatical structures. Students can use AI to simplify the language.
  • Navigating ideas: After students have engaged in a deep dive brainstorm, they can go to AI and ask for additional ideas. Students can then analyze these ideas and incorporate them into their design.
  • Generating project plans: ChatGPT is really good at taking a larger task and breaking it down into smaller tasks. After they have navigated ideas, students can use AI as a starting place for a project plan with dates and deadlines. They can then modify this based on their skill level, group dynamics, etc.
  • Prototyping: If students are writing code, they might start with AI and then modify the code to make it better. They could mash up two examples. In this way, the AI functions like an exemplar within a project. The critical idea is that it should occur after students have engaged in ideation.
  • Coming up with group roles: Students can use AI as a starting place for group roles and then modify them to fit the group. Afterward, they can negotiate norms and consequences for breaking norms. The group can then use AI to create group contracts with norms, roles, and consequences.
  • Project management: Students can take the tasks and the progress they’ve made and use AI to help them determine what to do next and what they might need to change to stay on schedule.
  • Receiving feedback: I’ve been surprised at how well AI does in giving quality feedback. While peer feedback should remain a student-to-student endeavor, groups sometimes fall victim to groupthink. AI is a great tool for helping groups avoid groupthink.
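As a concrete illustration of the prototyping idea above, here’s a hypothetical example of what that revision process might look like for a student writing code. The function names and the quiz-score scenario are my own invention, not something from a real classroom: the point is that the AI’s first draft works for the happy path, and the student’s job is to notice the edge case and improve it.

```python
# Hypothetical sketch: a student starts with an AI-drafted function,
# then reworks it, the way they might rework an exemplar.

# The AI's first draft: averages a list of quiz scores, but it
# crashes with a ZeroDivisionError if the list is empty.
def average_draft(scores):
    return sum(scores) / len(scores)

# The student's revision: same idea, but it handles the empty-list
# edge case and rounds the result for readability.
def average_revised(scores):
    if not scores:
        return 0.0
    return round(sum(scores) / len(scores), 2)

print(average_revised([88, 92, 79]))  # a typical set of quiz scores
print(average_revised([]))            # the edge case the draft missed
```

The learning happens in the gap between the two versions: the student has to read the draft closely enough to spot what it gets wrong, which is exactly the kind of critical analysis the ideation-first sequencing is meant to protect.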

These are just a few ideas and they’re based largely on how I might use AI within PBL. Like I mentioned before, we need to be cognizant of the policies and permissions in using AI with students. But I think it’s important that we ask, “How will students someday use AI in different industries and how can we anticipate it as we design PBL units?”

This is why it’s also important that we lean into the expertise of technology coaches and specialists who can help us think critically about the medium as well as the librarians who will help define media literacy and information literacy in an age of AI.

 

 

John Spencer

My goal is simple. I want to make something each day. Sometimes I make things. Sometimes I make a difference. On a good day, I get to do both.
