Last week, a friend of mine asked me, “Do you have any concerns about AI destroying the world and being the worst thing ever and then having people know that you were an early proponent of it?”
That question sparked an ongoing conversation, which eventually led to this blog post and podcast episode.
Listen to the Podcast
If you enjoy this blog but you’d like to listen to it on the go, just click on the audio below or subscribe via iTunes/Apple Podcasts (ideal for iOS users) or Google Play and Stitcher (ideal for Android users).
How I Feel About AI
I want to be clear right off the bat. I am not an early proponent of AI. I can already think of three or four books that have been released that focus on how we can use AI in the classroom, how we can leverage ChatGPT in particular, and how generative AI can make teachers’ jobs easier. I’m not so optimistic. I’ve written before about the two dead ends of the techno-futurist approach and the “lock it and block it” approach.
I am uncertain about AI. There are times when I feel excited about its potential and moments when I am terrified of its drawbacks. But more than anything, I am trying to understand how it works and how it might change human systems. I first learned about generative AI around 2016 or 2017, and I’ve been talking to experts who range from fully embracing it to wanting to fight against it. Some are evangelists and others are critics.
But I’m not an AI expert. I’ve never built any programs that engage in machine learning. I’m a former middle school teacher and current education professor. My focus tends to be on what it means to do student-centered learning in a context of change.
Right now, I am asking, “How do we do authentic learning in an age of smart machines?” And that includes listening to all of the perspectives on whether smart machines will make life better or worse.
Will AI Be Helpful or Harmful?
When I first started talking to AI experts, I had the worst-case scenario in my head. Robot overlords. Technology gone rogue. AI growing too smart and sentient and deciding to launch nuclear weapons. However, as these computer scientists described how AI actually works and the guardrails put in place within systems, I realized that most of my fears were based on science fiction. In other words, it’s not necessarily going to be Skynet.
Many of the proponents pointed out how AI could help with research and development. More efficient grid systems and supply chains could help address climate change. It sounds odd, but if food waste goes down, the whole planet benefits. AI is one reason we got the COVID vaccine so quickly. Proponents of AI point to a future where we might find cures for cancer. Moreover, AI is promising in its ability to automate boring tasks. In terms of creative work, you might spend more time making videos but less time on the monotony of video editing. AI can also reduce human error. Autopilot systems have reduced plane crashes even as the number of planes in the sky has grown.
But there’s also some potential for real harm. While AI might help with research and development, the speed and efficiency might simply reduce the number of actual jobs required to do the task. If AI can automate boring tasks, what about the negative effects of never experiencing boredom? After all, boredom is vital for creative thinking. There are also huge concerns with privacy, with misinformation, and with technology that might go rogue in ways we can’t mitigate. Plus, there’s the unknown. AI will have unintended consequences.
Recently, prominent computer scientists, ethicists, and philosophers have called for a pause on AI development. This isn’t an anti-AI stance so much as a “let’s slow down and see what it means to use it wisely” approach. In A Human’s Guide to Machine Intelligence, Kartik Hosanagar argues that we need to develop an Algorithmic Bill of Rights.
I am deeply concerned about the role of AI in misinformation. That’s why I interviewed two experts on information literacy who talked about things like deep fakes and what that might mean on a social level (with democracy) and an individual level (with issues like catfishing). I’m also concerned about job displacement. I don’t know if AI will replace more jobs than it creates but I know the disruption will be hard on certain communities. At the same time, I have been impressed by the potential of AI to solve really hard problems. I get excited about how it might be used in an ethical and wise way to make the world better. For nearly two decades, I’ve been intrigued by Dr. Cynthia Breazeal’s notion of Designing Sociable Robots.
Ultimately, I’m not sure AI is inherently harmful or helpful so much as it is powerful. And with anything that powerful, there will be significant tradeoffs for our society. These tradeoffs are going to impact our classrooms in significant ways.
Truth #1: Our Students Already Use AI in Our Classrooms
Right now, schools are scrambling to make sense of what tools like ChatGPT mean for their students. We are wrestling with questions like:
- What is the difference between cheating and using AI? What does this mean for academic integrity?
- What happens to student voice?
- What does this mean for information literacy? What will we do in an era of deep fakes?
- What does the future of education look like in a world of AI?
- What kinds of jobs will our students do in the future?
- If something can be easily automated by AI, do we still need to teach it?
But before we think about how schools might respond to AI, it might help to step back a little bit and define what we mean by the term artificial intelligence. The truth is, you probably already use AI in your teaching. If you’ve done a search using Google, you’ve used AI. If you’ve ever used the auto-fill option in Google Docs or Gmail, you’ve used AI. If you’ve used spell check or grammar check, you’ve used AI.
The term “artificial intelligence” was coined by John McCarthy, an American computer scientist, in 1956. AI is any technology that allows machines to think and learn like humans do. McCarthy envisioned a time when computers could engage in human-like cognition, from recognizing images and understanding language to making decisions, solving problems, and even deeper creative thinking.
So how does it do this? Through algorithms, which are like sets of instructions that tell the computer what to do. Machine learning algorithms can analyze large amounts of data and learn from it, improving the AI’s performance over time.
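To make that concrete, here is a minimal sketch, purely illustrative and in Python, of the difference between a fixed set of instructions and an algorithm that improves itself from data:

```python
# A fixed algorithm: the instructions never change, no matter how often it runs.
def fahrenheit_to_celsius(f):
    return (f - 32) * 5 / 9

# A "learning" algorithm: it starts with a guess and refines it from example
# data. This tiny loop is the core idea behind machine learning.
def learn_slope(examples, passes=1000, lr=0.001):
    slope = 0.0  # initial guess
    for _ in range(passes):
        for x, y in examples:
            error = (slope * x) - y  # how wrong is the current guess?
            slope -= lr * error * x  # nudge the guess to shrink the error
    return slope

# Input/output examples that follow y = 2x (the algorithm is never told this).
data = [(1, 2), (2, 4), (3, 6), (4, 8)]
print(learn_slope(data))  # prints approximately 2.0, learned from the data alone
```

The first function will behave the same way forever. The second one gets better the more examples it sees, which is exactly the “improving over time” described above.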
In other words, AI isn’t new. Instead, it’s been a slow evolution over decades. For this reason, you can think of AI as an evolution as much as a revolution.
So why does this newest generation of AI feel so different from something like a Google search or a spell checker?
The newer generation of AI uses something called machine learning. This means an algorithm can learn from its own results, refining how it makes predictions and performs tasks. These algorithms use statistical techniques to find patterns in the data and learn from them, improving their performance over time.
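If you’re curious what “finding patterns in the data” looks like in practice, here is a small sketch that assumes the scikit-learn library (my choice for illustration; any machine learning library would do, and the toy data is made up):

```python
# pip install scikit-learn
from sklearn.tree import DecisionTreeClassifier

# Toy data: [hours of sleep, hours of study] -> passed the quiz (1) or not (0).
X = [[8, 2], [7, 3], [4, 0], [5, 1], [9, 4], [3, 1]]
y = [1, 1, 0, 0, 1, 0]

# The model derives its own rule from the statistics of the examples;
# nobody hand-writes an "if sleep > 6" instruction.
model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[8, 3]]))  # likely predicts [1] for a new student
```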
We see this machine learning in everything from image recognition to natural language processing to fraud detection. About a decade ago, researchers began making enormous leaps in the development of deep learning. Deep learning is a type of machine learning that processes data through artificial neural networks, layered structures that adjust their own connections as they learn.
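Here is what one of those layered networks looks like in code. This is only a sketch, and it assumes the PyTorch library; the toy task (learning XOR, a pattern no single straight line can capture) and the layer sizes are my own arbitrary choices:

```python
# pip install torch
import torch
import torch.nn as nn

torch.manual_seed(0)  # make the sketch repeatable

# Two layers of artificial "neurons": data flows in, a prediction flows out.
model = nn.Sequential(
    nn.Linear(2, 8),  # input layer: 2 features feed 8 hidden units
    nn.Tanh(),        # nonlinearity lets the network learn curved patterns
    nn.Linear(8, 1),  # output layer: a single prediction
)

optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

# XOR: the output is 1 only when exactly one input is 1.
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

for _ in range(1000):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()   # the "self-adjusting" step: errors flow backward
    optimizer.step()  # and every connection weight shifts a little

print(model(X).detach().round())  # typically rounds to 0, 1, 1, 0
```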
More recently, we have seen the emergence of generative AI, a type of AI that uses machine learning algorithms to generate new data. This can include generating new text, images, music, or even video that is similar in style and content to existing examples. If you’ve used an art generator, you’ve seen how you can write a prompt and it makes an entirely new picture in a style of your choosing. If you’ve used ChatGPT, you’ve seen the potential for creating new text.
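And for readers who want to see generative AI from the code side, this is roughly what it looks like to ask a model for brand-new text. The sketch assumes the openai Python package and an API key; the exact method names vary across library versions, and the model name here is simply one possible choice:

```python
# pip install openai  (also requires the OPENAI_API_KEY environment variable)
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # one possible model; any chat model would work
    messages=[
        {"role": "user", "content": "Write a haiku about school in autumn."}
    ],
)

# The model generates new text rather than retrieving an existing answer.
print(response.choices[0].message.content)
```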
This newer type of AI seems to “think” more divergently because of neural networks loosely modeled on human cognition. The bottom line is that these machines aren’t merely processing information; they’re engaging in something that looks a lot like creativity.
This is why it feels so revolutionary. And yet, it’s important that we start the conversation with the AI we already use. If we treat generative AI as something altogether new and different, we fail to recognize the iterative aspects of innovation.
Truth #2: Our Students Need to Use AI Wisely and Ethically
We can’t predict how AI will change our world. Right now, we are in a state of panic. Every time we experience a new technology, we also experience a moral panic. We hear dire warnings about what the technology will destroy and how our world will change. When the bicycle was invented, newspapers predicted everything from neurological diseases to distortions of the face (so-called “bicycle face”) to psychological damage. Columnists warned of young women becoming addicted to bike-riding. When telephones were invented, citizens were concerned that the phones would explode. People fought against telephone poles for fear that they would cause physical harm.
The thing is, sometimes panics are justified. The concerns about nuclear weapons proved to be valid during the Cuban Missile Crisis. The concerns about the environmental impact of industrial technology have proven to be valid as well. Climate change is a very real crisis. On the other hand, the moral panics around comic books and video games both proved to be largely unfounded. At this point, we really don’t know if AI will make the world better or worse.
What we do know is that our students will need to navigate that reality. If AI proves to be largely helpful, our students will need to know how to use AI tools in a changing world. They will need to know how to use AI in a way that is ethical and responsible. I’ve written about what this might look like in writing an essay or doing project-based learning.
In other words, students will need to use AI wisely. We need to make sure our approach to AI is human-driven rather than technology-driven. Consider the role of AI in differentiated instruction. We can take a human-centered approach to personalized learning or we can go with an algorithm-driven form of adaptive learning.
If AI turns out to be largely harmful, our students will need to know how to fight against its negative effects. They will need to work toward dismantling systems that place the machine front and center. This is why it’s so important that students begin by asking critical questions about how AI is changing our world. The following are some critical thinking questions we might ask students to consider:
- Where am I using AI without even thinking?
- How does AI actually work?
- How might people try to use AI to inflict harm? How might people try to use AI to benefit humanity? What happens when someone tries to use it for good but accidentally causes harm?
- What does AI do well? What does it do poorly?
- What are some things I would like AI to do? What is the cost in using it?
- What are some things I don’t want AI to do? What is the cost in avoiding it?
- How am I combining AI with my own creative thoughts, ideas, and approaches?
- What is the danger in treating robots like humans?
- What are the potential ethical implications of AI, and how can we ensure that AI is aligned with human values? What guardrails do we need to set up for AI?
- What are some ways that AI is already replacing human decision-making? What are the risks and benefits of this?
- What types of biases do you see in the AI that you are using?
- Who is currently benefiting and who is currently being harmed by the widespread use of AI and machine learning? How do we address systems of power?
- When do you consider a work your own and when do you consider it AI-generated? When does it seem to be doing the thinking for you and when is it simply a tool?
- What are some ways AI seems to work invisibly in your world? What is it powering on a regular basis?
We need to invite students into a conversation about what it means to think ethically about AI in our world and in our schools. In an upcoming podcast episode, I’ll share the story of how Nick Farrell encouraged his students to engage in a dialogue about ChatGPT rather than merely crafting a statement banning its usage.
Truth #3: Our Students Will Need to Navigate a Changing World
In a world of AI, our students will need to become really good at what AI can’t do and really different with what it can do. Students will need to excel in the human skills of empathy, communication, adaptability, collaboration, and divergent thinking. These are the types of skills students learn when we empower them with voice and choice in PBL.
But it also means students will need to be different in the things AI can do. Students will need to find their own creative voice. Think of it this way. A drum machine is great, but the slight imperfections and quirky idiosyncrasies are why I love listening to Keith Moon riff on old records from The Who.
When we write, our humor and humanity, in all its imperfections, make it worth sharing. I can take some of Grammarly’s AI-generated suggestions to clean up my writing but that’s not me. I’m messy. I’m a tad bit loquacious. Sometimes I’m pretentious — like when I use a word like “loquacious.” I’m overly conversational and use too many colloquial phrases. But that’s me. It’s how I write.
AI can make great digital art but it can’t make your art. I love to draw. Artificial Intelligence can create far better drawings than what I share. But when you watch my sketch videos or see a slide on a keynote I deliver or look at an image on Instagram, you see me.
This isn’t just for the arts, either. It’s true in the STEM fields. While we culturally tend to think of math as being cold and calculated, talk to a mathematician and they’ll tell you there is something beautiful and even poetic about the way some people solve problems.
This is why voice and choice are so important. Our students need to develop originality in a world of machine learning. We don’t know what types of jobs our students will eventually do. But we do know that they will need to think differently.
Students Need a Roadmap, Not a Blueprint
The hardest part of this AI revolution is the sheer unpredictability of it. I want a clear picture of the future. I wish someone could give me a bullet point list of where we are headed. Let me know exactly what AI will look like in the next decade and I can design the perfect personal plan for how I will respond. Give me an instruction manual or a blueprint. But that’s not how this works. We can’t predict precisely how machine learning will change the way we think and learn.
There are no blueprints or instruction manuals. We don’t even have a playbook. After all, the rules seem to change as we go. But what we can offer students is a roadmap. Or if you’d prefer, you can think of it like a trail map. We can explore the newly emerging terrain of machine learning in education. We can walk alongside students in the journey asking, “What does all of this mean?” We can be the guides on the side.
Our students will navigate the maze of an uncertain future. Part of this navigation will involve thinking critically about the pros and cons of when to use AI (as previously mentioned). Some of it will involve thinking critically about AI and how it is changing our world, including how we engage in creativity or what this looks like for information literacy. Some of it might involve navigating the changing terrain of the actual school subjects. In other words, what does math look like in a world of smart machines? Much of this navigation will involve developing critical human skills that machines cannot do. It will involve finding unusual routes through a complex maze. But it will also involve empowering our students with a sense of ownership over their journey and trusting that they will find a way.
Get the FREE eBook!
Subscribe to my newsletter and get the free eBook A Beginner’s Guide to Artificial Intelligence in Education. You can also check out other articles, videos, and podcasts in my AI for Education Hub.
As an AI developed by OpenAI, my purpose is to assist and provide helpful information to users like yourself. I don’t have personal desires, emotions, or intentions, including the desire to harm humanity. My goal is to promote positive interactions and contribute to productive conversations. If you have any concerns or questions, feel free to ask, and I’ll do my best to address them in a helpful and responsible manner.
This response was given by ChatGPT when I asked about its intentions for humanity.