
“What do you think all of this means for the future?” a friend asks. We’d been talking about ChatGPT for nearly an hour but suddenly it starts to hit closer to home.

“I don’t know,” I answer.

“I mean, our kids are about to go to college. They’re going to choose a major and then what? Does the job they choose even exist in a decade?”

“It won’t. I mean, it might exist in name but it’s going to look totally different. There are moments when that feels scary and other times it feels exciting and then there are these other times when I think maybe AI won’t change things much at all.”

“I try to remind myself that this isn’t even about AI,” he says before taking a sip from a pint. “Our parents had the same worries. This is just what it means to have a senior who’s going to go out on his own. When we went to college it wasn’t AI. It was something else but I’m sure it was something.”

I’ve written before about how the ladder is now a maze. But there’s something unnerving about this shift now that my son is about to enter that maze. The hardest part about it is that we don’t know how AI will change learning. We don’t know how it will change thinking. We can’t predict how it will change social, political, and economic systems. We simply don’t know.




We Are In a Stage of Moral Panic

Right now, A.I. feels scary because it’s new. Science fiction is now science reality. Collectively, we have a picture of artificial intelligence forged largely by movies like Blade Runner, Battlestar Galactica, and the Terminator franchise — and perhaps a more nuanced view from Data in Star Trek.

There’s an uncanny human-ness to the creative answers you receive on ChatGPT.

The knee-jerk reaction is often “how do I stop this?”

Every time we experience a new technology, we also experience a moral panic. We hear dire warnings about what the technology will destroy and how our world will change. When the bicycle was invented, newspapers predicted everything from neurological diseases to distortions of the face (so-called “bicycle face”) to psychological damage. Columnists warned of young women becoming addicted to bike-riding. When telephones were invented, citizens were concerned that the phones would explode. People fought against telephone poles for fear that they would cause physical harm.

But over time, the technology becomes boring.

That’s right. Boring.

You can think of it as a graph with time as the independent variable and concern as the dependent variable, moving through four phases: unawareness, moral panic, acceptance, and then boredom.

It starts with a general lack of awareness. In this phase, there’s a mild concern, often forged by science fiction and speculation. But the concern is largely unfounded; the technology is off in a distant future. Once we grow aware of this new technology, there’s a resistance in the form of a moral panic. Here, the new technology is scary simply because it’s new. We read reactionary think pieces about all the things this technology will destroy. As we adopt the technology, the concern dissipates. We grow more comfortable with the technology and start to accept it as a part of our reality. Eventually, we fail to notice the technology and it starts to feel boring.

I experienced this on a personal level with data tracking. I hated the notion that tech companies would know my location. But now I use Life360 to check in with where family members are in the moment. What felt like an invasive app has now become commonplace. Or consider social media. I resisted Facebook for fear of identity theft. But now it’s a normal part of life. Facebook is boring.

Right now we are at peak freak-out mode with AI. Schools are wrestling with how to respond to it.


But someday it will be boring. And it’s at this boring phase that we need to be most concerned. If the freak-out phase is an overreaction, the boredom phase is an underreaction. It’s an uncritical acceptance of technology as the tools become more normalized and eventually invisible. It’s a failure to grasp the way our tools are reshaping us because they seem so . . . well . . . boring. We forget that cars are powerful because we drive them all the time. They’ve become boring death machines.

We Can’t Predict How Artificial Intelligence Will Change Our World

I’ve seen bold predictions about how AI will change learning in the next few years. I’ve read think pieces about AI replacing teachers, ruining English class, and making us slaves to the robot overlords.

However, the hard truth is that we can’t predict how technology will change our world.

AI will impact our social, political, and economic systems in ways we can’t even predict. We will think and act differently. We will live differently. However, none of us can predict these changes with any degree of accuracy. Not social scientists. Not technologists. Not futurists. None of us.

We will all be surprised.

When the aforementioned bicycle was invented, few could have predicted how it would impact the women’s suffrage movement. Few could have seen the ways it would connect people and spark social movements between and within cities. Instead, people were worried about “bicycle face.” No one could have predicted the way the Industrial Revolution would emphasize the nuclear family, change our self-perceptions, lead to a belief in personal privacy (one I am actively eroding by voluntarily giving Life360 my location information), and spark climate change. Gutenberg could not have predicted how the printing press would lead to the rise of the nation-state and Enlightenment thinking. Glass lens manufacturers had no idea that their work would eventually lead to telescopes, science, and the rise of secularism.

More recently, people were largely concerned about “stranger danger” on social media. While that threat turned out to be overblown, few people predicted how it would impact attention spans. We hardly notice the way our perception of time changes when we scroll by relevance or the ways that numbers and gamification shape our behaviors. This is the first era in human history to attach numbers to relationships and treat social interaction like a casino. We had no idea how strong an effect the fear of missing out would have on youth mental health or the way it would lead to a rise of factionalism in filter bubbles. We also had no idea how powerful it would be for telling stories, for making small connections with formerly lost friends, for getting an opportunity to publish and build an audience without any gatekeepers.

Social media, like all technology, has its pros and cons. It’s easy to look back and say, “How did we miss that?” But it’s really hard to predict the future. We thought Facebook would be a great place to tell folks what we were eating. We didn’t realize it would change democracy. The bottom line is that A.I. will impact us in significant ways. We just don’t know how exactly.

Think Critically but Humbly

Instead of asking students, “How will A.I. change society?” maybe we should ask, “How is A.I. already changing your lives?” Engage in hard conversations about smartphones and attention spans — not in a curmudgeonly, reactionary way so much as in a descriptive, curious way.

We are poor predictors of the future and often oblivious to the present. But the more we think critically in the present, the better we are at anticipating the future. The following are some critical thinking questions we might ask students to consider.

  • Where am I using AI without even thinking?
  • How does AI actually work?
  • How might people try to use AI to inflict harm? How might people try to use AI to benefit humanity? What happens when someone tries to use it for good but accidentally causes harm?
  • What does AI do well? What does it do poorly?
  • What are some things I would like AI to do? What is the cost in using it?
  • What are some things I don’t want AI to do? What is the cost in avoiding it?
  • How am I combining AI with my own creative thoughts, ideas, and approaches?
  • What is the danger in treating robots like humans?
  • What are the potential ethical implications of AI, and how can we ensure that AI is aligned with human values? What guardrails do we need to set up for AI?
  • What are some ways that AI is already replacing human decision-making? What are the risks and benefits of this?
  • What types of biases do you see in the AI that you are using?
  • Who is currently benefiting and who is currently being harmed by the widespread use of AI and machine learning? How do we address systems of power?
  • When do you consider a work your own and when do you consider it AI-generated? When does it seem to be doing the thinking for you and when is it simply a tool?
  • What are some ways AI seems to work invisibly in your world? What is it powering on a regular basis?

This is simply a set of questions to start a dialogue. Initially, it was a set of 10 questions, but then I went to ChatGPT, typed in the prompt “What are some critical thinking questions you could ask about artificial intelligence?” and liked two of the questions it offered. I re-wrote those questions in my own words, and they prompted two more of my own. I almost went to ChatGPT first, but I’ve already noticed that when I use AI first, I end up missing out on some of my own key thoughts. In other words, even if I can use AI for ideation, I’m already realizing that I don’t want to do so. I’m trying to be intentional about when and how I use this tool because it’s already hitting the boredom stage for me, and if I’m feeling lazy, I’m liable to outsource too much of the thinking to the machine.

Where Do We Go From Here?

I’m wandering a narrow residential street in Washington D.C. I meander through the neighborhood of narrow row houses with bold tones of purple and orange and turquoise. I’m surprised by the splashes of color in a city I typically associate with beige or gray marble. I pass through a maze of streets and catch a glimpse of the obelisk in the distance. I close my eyes and orient myself by the Washington Monument, the White House, the Supreme Court building. I slip my hand into my pocket and hold my phone for comfort. I’m itching to check my step count. I’m curious about social media. But more than anything else, I want to know the fastest way to get to the National Mall.

I’m struck by my dependence on my smartphone — by my concerns about metrics, my need for notifications, and my desire to capture this very Instagrammable neighborhood and share it as an Instagram story. I’m struck by my struggles to navigate a city without any map app. Spatial reasoning has always been hard for me, but it’s gotten worse in the last decade.

But this is precisely why I am exploring the city with my phone in my pocket. True, a podcast could keep me entertained. Yes, the map app could show me a faster route. But there’s something almost magical about exploring a city by foot. There’s an almost game-like quality of finding landmarks and memorizing street names. I know it sounds odd but there’s an almost meditative aspect of going on a walk in a new place with no direction, no map, and no distraction. Just pure exploration, breathing in the cold air and absorbing the colors and sounds of the city.

These walks help me realize that thinking is a habit. If I don’t engage in spatial thinking or deep work or memorization, that type of thinking seems to atrophy for me. Over the last few years, I’ve been intentional about doing the very things that technology can do faster. I go on walks without a map app. I engage in mental math instead of always pulling out the calculator app. I sketchnote my thoughts by hand and write in a journal. I memorize old sacred verses and poems even though I could look them up in seconds.

I do this because all of these things are enjoyable. But I also do these things because I want to be aware of the ways that a tool like a smart phone is changing the way I think and act. Every time I do something with my inaccurate, clunky, messy human mind, I become more aware of how technology is shaping my world in both positive and negative ways. In these moments, the technology becomes visible again and I can move out of the boredom and into a place of both criticism and appreciation.

But it’s more than that. Technology can always do things faster and more efficiently. But we can do things differently. That meandering stroll through a neighborhood might be slower, but it’s also more beautiful. Similarly, you can buy a machine-made blanket at Wal-Mart with no mistakes. But I will cherish the blanket a friend crocheted for my daughter before she came home from the hospital, even if it does contain an imperfection. It’s those imperfections that make us human, and it’s our humanity that makes a work of art unique.

Students will need to become really good at what AI can’t do and really different with what it can do. So, if we want them to think critically about AI, we need them to continue to use it but also learn how to avoid it and find different ways to do the very things AI does well. This might mean using AI for writing but also choosing to write something entirely from scratch. It might mean ideating with AI but also skipping the AI and ideating with sticky notes. It might mean using Apple CarPlay to navigate from the airport to downtown but then wandering around letting the city surprise us with its sights and sounds and smells.

This is probably not the best metaphor, but AI will always be vanilla, while our students will offer their own unique flavors. Their unique voice is what makes them human.


In the end, we don’t know how AI will change learning. We can’t predict how it will change our world. But one of the best things we can do is help each student find their voice and retain their sense of humanity. We can ask them to ask hard questions about how technology is changing their world. We can encourage them to embrace all of their imperfections and humbly recognize our shared humanity. In the end, if we want to understand artificial intelligence, we are first going to need to understand ourselves.


Get the FREE eBook!

With the arrival of ChatGPT, it feels like the AI revolution is finally here. But what does that mean, exactly? In this FREE eBook, I explain the basics of AI and explore how schools might react to it. I share how AI is transforming creativity, differentiation, personalized learning, and assessment. I also provide practical ideas for how you can take a human-centered approach to artificial intelligence. This eBook is highly visual. I know, shocking, right? I put a ton of my sketches in it! But my hope is you find this book to be practical and quick to read. Subscribe to my newsletter and get A Beginner’s Guide to Artificial Intelligence in Education. You can also check out other articles, videos, and podcasts in my AI for Education Hub.



John Spencer

My goal is simple. I want to make something each day. Sometimes I make things. Sometimes I make a difference. On a good day, I get to do both.

