Three years ago, I stood in a high-tech lab at a technical university and stared at the giant flat screen monitor. The professor asked me, “Can you tell me which of these samples was generated via Artificial Intelligence?”
At the top, a student had asked a complex question related to an application of calculus for calculating fluid flow and heat transfer. Below, a graduate student and an AI chatbot had both answered the question.
I studied both answers over and over again. I looked for clues in the syntax. I tried to find the humanity within the answers. Finally, I shook my head. I couldn’t tell the difference. Both options were clear and concise with just a touch of colloquial friendliness while still remaining slightly cold and objective.
“Are they both the AI?” I asked.
“It’s easy to think that. The grad student is an engineer and we tend to be a little robotic,” he answered.
The correct answer was B. I stood there in the lab baffled by the progression of machine learning. This wasn’t merely “close enough.” This was it. Artificial Intelligence had arrived and I felt unnerved.
He then continued, “The students can’t tell the difference either. To them, it’s just a chat with an expert.” My head swirled with questions about the Turing Test, deep fakes, and the future of humanity. I’m pretty sure an image from Blade Runner popped in there as well. But more than anything else, I had a nagging question: “Will AI replace me?” After all, assessment tends to be a significant aspect of what I do as an educator. But this bot could provide feedback in seconds. And it was good. Really good.
Later, after completing a workshop on student voice and choice, I started wondering if AI would replace assessment or merely amplify and transform it. Maybe I could spend less time writing out feedback and more time doing one-on-one conferences with that human, dynamic element. Or maybe my role as a professor would change entirely. Perhaps I would vet the AI feedback to make sure it’s on-target. Or maybe I would shift toward being more of a course architect and designer. The university would save money and provide a cheaper college experience with even less of a human element.
The truth is I can’t predict how AI and machine learning will change education. None of us can. Decades ago, Neil Postman reminded us that while it’s important to think critically about technology, we can rarely predict its impact on society. Nobody knew the printing press would lead to the rise of the nation-state. Nobody knew the Industrial Revolution would spark climate change. In my own lifetime, I saw bold predictions about “stranger danger” in social media but nobody could have predicted the role of filter bubbles, echo chambers, and the effect these would have on democracy around the world.
We often overreact about certain elements of a new technology and fail to understand the larger social implications of other areas. Right now, I’ve seen bold predictions about the death of journalism, English Language Arts classes, and software engineers. I’m not so sure. None of us are. All of these fields will change but often in ways that surprise us.
The AI Revolution Is Already Here
Right now, we are having a cultural moment where we are recognizing the power of Artificial Intelligence. It’s been a slow build. Auto-correct here. A Grammarly suggestion there. An auto-fill option in Gmail and in Google Searches. A small chat window in the bottom left-hand corner with an AI that will help you get that refund on the flight you purchased from Expedia. But it always felt small and in the margins. Then ChatGPT came along and suddenly we are left asking, “What does this mean for the future of work? What does it mean for teaching and learning?”
The revolution is happening fast. In just over a week, ChatGPT surpassed a million users. To put that in perspective, it took Instagram two years to reach that benchmark. And this is just the beginning.
I played around with it for hours this week. I started with a typical critical thinking writing prompt I used to ask my students to complete in our social studies class. Could the AI actually compare and contrast 19th century imperialism to 21st century globalization? Absolutely. The answer was creative and exhibited strong critical thinking. I added a parameter. “Explain it to someone who is 14.” This was the answer:
Imperialism and globalization are similar in that they both involve the expansion of a country’s power and influence over other countries. However, there are some key differences between the two.
19th century imperialism was a form of aggressive expansionism in which powerful countries, such as Britain, France, and Germany, used their military and economic might to conquer and control other nations. This often involved the exploitation of natural resources and the exploitation of local people.
On the other hand, 21st century globalization is a more complex and interconnected process in which countries and businesses around the world are increasingly linked through trade, technology, and other forms of exchange. Globalization has brought many benefits, such as increased economic growth and access to new markets, but it has also sparked debates about inequality and the impact on local cultures.
In summary, imperialism was a one-sided process in which powerful countries imposed their will on others, while globalization is a more complex and interconnected process that involves both cooperation and competition between countries.
If I read this, I would think it was from a high-achieving 8th grader. What would this look like if I asked this question as a warm-up? Would students simply plug it into ChatGPT so that everyone gets pretty much the same answer written in a different way? What would we miss along the way?
I’m also an avid illustrator, so I decided to play around with the AI options in creating pictures. I used an app that generated a picture of me as a Viking from the past. I wasn’t sure if they would go with historical Viking or the football team. The AI went historical. This was the result. I think it’s a little too flattering. Doesn’t really capture the forehead wrinkles or bags under my eyes. But still, pretty cool.
Again, I started wondering about what this means for the future. It always felt like creativity — and visual creativity in particular — was a distinctly human endeavor. But I have to admit that the AI did a pretty good job with the portrait.
I’ve been following AI fairly closely for a few years. I’ve seen how it’s wreaking havoc in math classes where students use Photo Math for homework. I’ve paid attention to some of the applications in engineering. But this week, I kept thinking, “What does all of this mean for the future of education? And more importantly, how will we respond?”
I’d like to share two different routes that both lead to a dead end.
Dead End #1: The Golden Promise of Futurism
When I was in middle school, one of my teachers called the entire class up to the front of the room. He held up a shiny golden disc in wild excitement.
“This will change education forever,” he said, eyes gleaming. “Someday, you’ll be able to pick up one of these discs and learn exactly what you need to learn. No more taking notes from a teacher. You won’t have to learn from someone like me.”
I felt uneasy about learning from a golden disc instead of a human. I liked my teacher. I liked how he would change his explanations on the spot just by reading our body language. I liked the way he made us laugh. I enjoyed the inefficient way he got off-topic and we randomly learned things that weren’t in the textbook. Why would I want to replace him with a golden disc?
“This is the future of education, kids,” he said with a grin. “This will change things forever.”
But it did not change education forever. I haven’t seen a laser disc in decades and I feel pretty confident that my own kids’ teachers will not be replaced by golden discs. Not now. Not ever.
And yet, this is the same sentiment I have seen in many iterations. I’ve watched as one-to-one devices, laptops, and adaptive learning programs have all promised to replace teachers. Now, it’s the promise of AI. But I don’t buy it. Teaching will always be a deeply relational endeavor filled with mistakes and missteps and false starts. That’s the beauty of it.
Dead End #2: The Lock It and Block It Approach
“Mr. Spencer, I can’t talk to my group members,” a student called out.
“What do you mean?” I asked.
“Look, the chat’s gone,” Carlos pointed to his computer.
“Mine, too,” another student chimed in.
“Yeah, it’s not showing up for me, either.” I was met with a chorus of “mine won’t work as well.”
“No worries, just send an email.”
This time, their emails were rejected. Nothing seemed to work.
“I got it to work on my phone,” another student said. “I just had to use my personal G-mail.”
That afternoon, I met with tech support and learned that they had disabled chat and limited emails back and forth due to bullying that had occurred in another classroom. My students were working on a global collaboration project and suddenly couldn’t engage in it. This is an example of the second route: blocking the tech.
I’ve seen this approach happen in schools where they block sites like YouTube, online gaming, and social media. The process here focuses on surveillance, accountability measures, and blocking all new tech. In terms of ChatGPT, I’m already seeing English teachers saying things like, “I’ll just make students handwrite all their essays in class.” But as Mary Beth Hertz brought up on Twitter, this can be ableist. Some students simply need to type their work.
A Third Way — The Human Approach
The danger of futurism is that it fixates on novelty rather than sustainability. It also ignores the shadow sides of technology and the harmful disruptions that happen in society. The danger of the lock and block approach is that it ignores the external context and can lead to a place of irrelevance and heavy-handed surveillance. So, where does that leave us?
There’s a third option.
This last February, I read a book called Human Work in an Age of Smart Machines. In this groundbreaking work, the author argues that we should begin with the question, “What is it that we, as humans, can do that machines can’t do?” It might mean developing empathy, thinking divergently, cultivating curiosity, finding your own unique lens, or coming up with innovative solutions.
Here’s the good news. This is precisely what schools all around the world are doing by developing Graduate Profiles. These are usually in the form of a document that outlines the knowledge, skills, and competencies that students should possess upon completing their K-12 education. They often serve as a blueprint for curriculum design, assessment, and instruction.
In other words, these graduate profiles are looking at the human skills that students will need in an age of AI and machine learning. But it can’t simply remain a document that schools have on a website or a poster on a classroom wall. If we’re going to develop these deeply human skills, we will need to rethink the type of learning that students engage in. We’ll need to de-emphasize (and perhaps even scrap) standardized testing and heavy-handed accountability measures.
I explored this idea in-depth in my book Vintage Innovation. I started out with the idea that AJ and I had shared in Empower that the ladder is now a maze. For years, we learned the formula that we should work hard in school, graduate from a university, and climb a corporate ladder. This was never the reality for everyone. Some folks found success in trade schools. Others as entrepreneurs. Meanwhile, systemic racism, gender bias, and injustice created barriers for millions of marginalized people. Still, it was the formula we were taught. But with the changes in automation, machine learning, globalization, and AI, the ladder is now a maze.
Pundits have said, “We need to prepare students for the jobs that don’t exist yet.” But we’ve already seen how quickly this falls into the trap of futurism. A decade ago, it was all about teaching every student how to code. Now ChatGPT can produce seemingly flawless code in seconds.
We can’t predict what the maze will look like. So, where do we go if we can’t predict the future?
The answer can be found in taking a vintage innovation approach. With vintage innovation, we avoid the extremes of a reactionary “just block it all” approach as well as the naivety of futurism. Here, we mash up the old school and the new tech. We overlay best and next practices. We ask, “What do students need in a world of AI? What does it mean to do human work in a tech-centric world?”
Vintage innovation is the process of taking old, established ideas and concepts and updating them in a new and innovative way. This can involve reusing, repurposing, or adapting older technologies or designs in order to create something new and improved. Vintage innovation often draws on the past for inspiration and incorporates elements of nostalgia, while also incorporating modern technology and design principles. An example would be blending together the older craft of origami with digital modeling and emerging research in math.
Note: that last paragraph was written by ChatGPT in answer to the question, “What is vintage innovation?” I promise that if I ever use the app, I’ll let you know.
What Does Vintage Innovation Look Like with AI?
What if we use ChatGPT as a tool? I know that sounds strange but all new technology comes across as “cheating” in the beginning. I remember teachers telling me to disable spell check because it was cheating. When I was a kid, I had teachers who said that word processing would make it too easy to modify text. They preferred typewriters instead. I still meet teachers who say that starting with Wikipedia is a bad idea. They view that as cheating.
In truth, I can see how creators might use ChatGPT as a part of the writing process in a way that’s not all that different from going to Wikipedia first. They’ll take the initial 300 words and rework it, add to it, find their own ideas, and ultimately make it their own. In that sense, writing will be more like collage art — which, in many respects, it already is. Your single, solitary voice is distinctly yours but it’s also part of a chorus shaped by what you read and listen to and who you talk to and all sorts of elements of your culture. When you write, you are never writing alone.
Yesterday, I took one of my most popular writing prompts and plugged it into the AI. The answer I got was boring. It was solid but boring.
So, I changed it up and asked for it to be funnier. The result was . . . meh.
But what if I used this as a starting place? I could add my own ideas to make it more creative and perhaps even slightly funnier. My revisions are in bold.
- Take over the world. But maybe start out small. Perhaps an exoplanet? Or just take over Fresno. Yeah, start out small with Fresno and then go big.
- Steal the moon. I mean, not our moon, of course. I need the moon if I’m going to keep surfing. I’m thinking maybe Titan or Io? Perhaps Callisto? Nobody ever pays attention to Callisto.
- Create a shrink ray but one that only makes clothes shrink so that everyone in Fresno thinks they gained ten pounds overnight.
- Build a giant robot navy. All the villains do an army. We’re going with a solid robot navy.
- Train an army of moderately sized genetically-engineered hamsters.
- Hijack Santa’s sleigh and replace all the presents with leftover AOL CDs.
- Create a secret underground lair with a moat full of that weird Midwestern Jell-O Salad that your grandma used to make with the coconut and walnuts. While we are at it, let’s replace the carpet with hardwood floors. Maybe the Property Brothers have some ideas?
- Come up with a ridiculous and over-the-top villainous name like Kyle.
- Brainwash all the puppies in the world (they make great henchmen) so that they act like cats and their owners can experience the rejection normally dished out by their feline companions
- Build a time machine and go back in time to raise baby Batman to be a healthy, well-adjusted adult without any chip on his shoulder. Then attack Gotham City. They’ll be defenseless without the Caped Crusader.
So I look at this new list and it’s still heavily shaped by AI. But that actually led to a creative constraint that forced me to think divergently. In other words, I had to think outside the box by thinking inside the box:
While I began with AI, I quickly moved into the human element and focused on the things I can do: adding my own voice, having a touch of my quirky humor, and making a personal reference to where I grew up (Fresno). This is a small example of choosing a vintage innovation approach. But beyond this, I think the bigger shift needs to be toward the learning experiences that help students develop those deeply human skills.
Empowering Students for the Future
We often talk about what it means to move from compliance to engagement. It’s the idea of creating an environment where students want to learn rather than have to learn. This is the top level of engagement, where students are highly committed and highly focused. But I wonder how often we stop there, with committed and focused students who aren’t getting the chance to own their learning. If we want students to be creative, self-directed learners, we need to go beyond engagement and into empowerment. Students need to shift from mere compliance or engagement into owning their learning.
In other words, the best way to prepare students for the future is by empowering them in the present. When this occurs, students develop those critical human skills that they will need as they navigate an uncertain future.
This is why I love project-based learning. Project-based learning is a teaching method that focuses on active, experiential learning through the completion of real-world projects. Students are given a problem or challenge to solve, and they work collaboratively to design, create, and present their solution. When coupled with design thinking, they develop deeper empathy as well.
The goal of project-based learning is to engage students in meaningful and authentic learning experiences that are relevant to their lives and interests. This sense of relevance helps them move away from the temptation to cheat with a chatbot. But the design also centers on human skills that the AI can’t do. PBL also helps students develop critical social-emotional learning skills. It’s an idea Mike Kaechele and Matinga Ragatz explored in-depth in Pulse of PBL.
Project-based learning is different from traditional teaching methods that rely solely on lectures, textbooks, and tests. Instead, it focuses on students taking an active role in their own learning, with the teacher serving as a facilitator and mentor. This approach can be applied to a wide range of subjects and age levels, and can be customized to meet the needs and goals of individual learners.
If we want students to be curious, we can engage in inquiry-based learning. It might be something as simple as a Wonder Day or Wonder Week project.
We can ask, “What is it that students can learn that can’t be replaced by AI?” And the result might be the deeply human connection of a Socratic Seminar, where they learn how to communicate, listen, and engage in nuanced conversation.
The Artificial Intelligence Revolution is just beginning to happen. We’ll need to re-imagine what education looks like in the future. My hope is that educational leaders invite librarians into these discussions. For years, they’ve been thinking hard about issues like media literacy, information architecture, accessibility, copyright, creativity, surveillance, and what makes us human. Talk to them.
In the upcoming years, we’ll need to think about how we can help students develop these critical human skills. It might be inquiry-based learning or project-based learning. But it might also be game-based learning. It might be a lo-fi makerspace. It might be an epic, face-to-face science lab or a sketchnote video, or an interview with community members. Notice that none of these ideas are new. These are the things teachers are already doing when they empower their students with voice and choice. So am I nervous about AI? Absolutely. But am I hopeful? Most definitely. Because I know that teachers will always be at the heart of innovation.
Check Out Vintage Innovation
You can find more about this idea in my book Vintage Innovation.