
In 2014, Microsoft launched a hugely successful A.I. bot named Xiaoice in China. Over the course of more than forty million conversations, users often described feeling as though they were interacting with a real human. Microsoft co-founder Bill Gates described it this way: “Xiaoice has attracted 45 million followers and is quite skilled at multitasking. And I’ve heard she’s gotten good enough at sensing a user’s emotional state that she can even help with relationship breakups.”

Xiaoice has published poetry, recorded musical albums, hosted television shows, and released audiobooks for children. She’s such a superstar that it’s easy to forget she is merely a set of complex algorithms. To many people, Xiaoice feels real. Unlike a fictional character whom we might also fall in love with, this chatbot is dynamic, emotional, and interactive. She seems . . . well . . . human. When people interact with her, they tend to treat her as if she were actually a person.

This is a well-documented idea called the ELIZA Effect: people attribute human-like intelligence and emotions to computer programs, even when they know the responses are generated by simple algorithms. The effect is named after ELIZA, an early A.I. program developed in the 1960s by Joseph Weizenbaum at MIT. Despite the program’s limited capabilities, users often formed emotional connections with ELIZA and attributed understanding and empathy to it. The ELIZA Effect highlights our tendency to anthropomorphize technology and to perceive more intelligence in A.I. systems than is actually present. And notably, in the case of ELIZA, people knew it was a machine.

What’s fascinating is just how easily humans can be duped by machines. Part of this is due to our innate pattern recognition. We have a natural cognitive bias toward finding patterns and attributing causality, even when the data is random. When an A.I. produces responses that resemble human communication, our brains recognize the patterns and become convinced that there’s a human behind the interaction. It just feels human. Moreover, humans tend to default toward trust.

All of this points toward the need for students to understand the difference between an A.I.’s information processing and human cognition.

[Image: a robot looking at a light bulb, with the phrase, “Before we ask, ‘How do we use this?’ we need to ask, ‘What is the nature of this?’”]

Listen to the Podcast

If you enjoy this blog but you’d like to listen to it on the go, just click on the audio below or subscribe via iTunes/Apple Podcasts (ideal for iOS users) or Google Play and Stitcher (ideal for Android users).


Robots Can’t Think

One of the things I worry about with A.I. is that students will mistakenly view it as capable of thinking. Programmed with prosocial prompts, machine learning chatbots seem to convey empathy and understanding. I’ve already seen examples of chatbots functioning as a role-playing form of therapy for certain children.

And yet . . .

The term “artificial intelligence” is a misnomer. All artificial intelligence, including generative AI, is merely a set of complex algorithms. Unlike humans, computers can’t think. They process information; humans think. They generate content; we create.

There’s a difference.

Human cognition is affective and emotional. It’s unpredictable and messy. It’s inherently social and relational. We use the term “intelligence” to describe A.I., but a chatbot isn’t sentient. It’s not affective. It does no thinking without a prompt. It recalls past information with clarity, but it doesn’t reimagine the past the way human memory does. It can’t get nostalgic when it hears the first chords of that Weezer song that immediately transport you to a barbecue on a blazingly hot summer afternoon.

When I leave the room, the chatbot is not daydreaming or making plans for the future or feeling insecure about that super awkward thing that happened yesterday. A chatbot feels no shame, has no hopes, and experiences no loss. A chatbot can generate a love poem, but it can’t be heartbroken. It can translate pop songs into Shakespearean sonnets, but it cannot sit in a theater, awe-struck by the moment Shakespeare comes alive.

These are all major aspects of human cognition.

I share this because there is already a tendency to anthropomorphize artificial intelligence. Combined with the ELIZA Effect, this makes me worry about our tendency to treat A.I. as if it were human. This will be especially true for younger children raised in a world of smart machines. For this reason, it’s important that we teach students to understand the nature of A.I.

Understanding How A.I. Works

When it comes to new technology, an initial question is often, “How do we use it?” So, with A.I., the big question might be, “How do we help students learn to use machine learning in a way that is ethical and honors academic integrity?” While those questions are critical, it’s also important that students understand how A.I. works rather than just how to use it. In other words, we need students to explore the nature of the technology rather than just its uses and applications. Here are a few starting places.

1. Explain the process that A.I. uses to generate new content.

It’s important that we demystify technology and help students understand that what feels like magic is actually a set of complex algorithms designed to mimic aspects of the human mind. Students should understand that generative AI is a type of computer program that uses complex algorithms to create new content, like pictures, music, or even stories. Just as you learn from examples and experiences, generative AI “learns” through pattern recognition applied to massive amounts of data.

[Image: an A.I. brain, an arrow pointing to neural network nodes, and arrows pointing to a human brain]

And yet, this process isn’t the same as human learning. A.I. doesn’t actually think. It doesn’t construct meaning in the same ways we do. As educators, we might break down the process of generative A.I. for students in a way that includes front-loaded vocabulary. Here’s a simplified version:

  1. Complex Algorithms: An algorithm is like a set of instructions that tells the computer what to do step by step. In generative AI, these instructions allow the AI to learn patterns and rules from a large amount of data.
  2. Neural Networks: Imagine a brain with many connected neurons (brain cells). In generative AI, we use artificial neural networks that work in a loosely similar way. These networks are good at recognizing patterns and connections between things, and they are trained on massive amounts of data drawn from all over the internet.
  3. Learning from Examples: The AI is given tons of examples to learn from. For example, it could be shown thousands of pictures of cats. The neural network looks at all those pictures and learns what makes a cat a cat. It learns to recognize the important features, like pointy ears, whiskers, and a tail.
  4. Creating Something New: After learning from all those cat pictures, the AI can generate new cat pictures on its own. It takes what it learned from the examples and combines different features to create new and unique cats. Sometimes, the AI can even create cats that look like they’re from a fairy tale, with wings or different colors.
  5. Expanding to Other Things: Generative AI can create all sorts of things, like landscapes, music, or even stories. The more examples it sees, the better it becomes at making new things that are realistic and interesting.
  6. Using Prompts: People write what are called “prompts” (questions, ideas, or instructions) for the generative A.I., and it then draws on the patterns it learned from its examples to create something new. The sketch below shows a toy version of this whole process.
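
To make this concrete, here’s a minimal, hypothetical sketch of that loop of “learn patterns from examples, then generate something new.” Real systems like ChatGPT use neural networks trained at a vastly larger scale, so treat this purely as a classroom analogy; the example sentences and function names are invented for illustration:

```python
import random
from collections import defaultdict

# Steps 1-3: "learn" by counting which word tends to follow which.
# This is pattern recognition on example data, not understanding.
def train(sentences):
    transitions = defaultdict(list)
    for sentence in sentences:
        words = sentence.split()
        for current_word, next_word in zip(words, words[1:]):
            transitions[current_word].append(next_word)
    return transitions

# Step 4: generate something "new" by recombining learned patterns.
def generate(transitions, start_word, max_words=10):
    word = start_word
    output = [word]
    for _ in range(max_words - 1):
        followers = transitions.get(word)
        if not followers:
            break  # no learned pattern for this word, so stop
        word = random.choice(followers)  # pick one observed next word
        output.append(word)
    return " ".join(output)

# Invented example data: the "massive dataset" here is only four sentences.
examples = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the porch",
    "the dog chased the cat",
]

model = train(examples)
# The "prompt" is just a starting word; the model recombines patterns.
print(generate(model, "the"))
```

Nothing in this program “understands” cats or dogs. It only recombines word patterns it has counted, which is a useful (if drastically simplified) way to show students how a system can produce fluent output without thinking.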

Note that we might need to provide visuals and models so that students can see how generative A.I. works. From here, students might explore questions like, “What if the data set has bad information?” or “What kind of bias might exist in the data set?” You might also ask students to examine the difference between how we think and learn versus how the A.I. is “learning.” For example, we don’t process massive quantities of data from the internet in order to identify patterns. We often make inferences, connect emotionally, and think divergently in ways that A.I. doesn’t. We construct meaning in a more organic way. While this kind of direct instruction works well, we might want to take it a step further and have students learn the nature of A.I. through something more hands-on.

2. Get students programming.

If you listened to my podcast episode with Angela Daniel, she mentioned having students understand the nature of A.I. by actually programming a classifier and exploring the building blocks of how A.I. works. This is learning by doing: students discover how machine learning works by getting under the hood and playing around with it. Along the way, they can reflect on what’s happening and engage in meaningful conversations about the ethics of A.I. Here’s a sample of one of MIT’s A.I.-related middle school curriculum programs.

I love the way that their website describes this:

Children today live in the age of artificial intelligence. On average, US children tend to receive their first smartphone at age 10, and by age 12 over half of all children have their own social media account. Additionally, it’s estimated that by 2022, there will be 58 million new jobs in the area of artificial intelligence. Thus, it’s important that the youth of today are both conscientious consumers and designers of AI.

I love this concept of being a conscientious consumer. It reminds me of the cycle of critical consuming and creativity.
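
For teachers who want a concrete starting point before adopting a full curriculum, here’s a minimal, hypothetical sketch of the kind of classifier students might program. The training sentences and helper names are invented, and real courses (like MIT’s) often use friendlier visual tools, but the core idea of learning from labeled examples is the same:

```python
from collections import Counter

# Invented, labeled training data. Students supply their own examples.
training_data = [
    ("my cat purrs on my lap", "cat"),
    ("the cat chased a laser pointer", "cat"),
    ("cats love to nap in the sun", "cat"),
    ("my dog fetches the ball", "dog"),
    ("the dog barked at the mail carrier", "dog"),
    ("dogs love going on walks", "dog"),
]

# "Training": count how often each word appears under each label.
def train(data):
    counts = {}
    for sentence, label in data:
        counts.setdefault(label, Counter()).update(sentence.split())
    return counts

# "Classifying": give each label a crude score based on how often
# the sentence's words appeared in that label's training examples.
def classify(counts, sentence):
    scores = {}
    for label, word_counts in counts.items():
        total = sum(word_counts.values())
        # Add 1 to every count so brand-new words don't zero out a label.
        scores[label] = sum(
            (word_counts[word] + 1) / (total + 1) for word in sentence.split()
        )
    return max(scores, key=scores.get)

model = train(training_data)
print(classify(model, "the cat is napping"))   # likely "cat"
print(classify(model, "my dog wants a walk"))  # likely "dog"
```

This also lets students test the earlier question about bad or biased data: retrain the classifier with ten “cat” sentences and only one “dog” sentence, and watch how often ambiguous sentences get labeled “cat.”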

As students engage in this process, they might need to explore the harder questions about the nature of A.I. through an ongoing Socratic discussion.

3. Engage in a Socratic Seminar.

Socrates believed that writing would cause people to rely too much on the written word rather than their own memories. He also worried that a written text couldn’t engage in dialogue: readers couldn’t question it or push back on its ideas, so they would struggle to come to their own conclusions. Moreover, Socrates was concerned that writing could be used to spread false ideas and opinions.

Sound familiar? These are many of the same concerns people have with A.I. While it’s easy to write off Socrates as reactionary, he had a point. We lost a bit of our humanity when we embraced the printed word. And we continue to lose parts of our humanity when we give up aspects of our brains to machines. We are meant to live with our five senses. Technology dehumanizes us as it pulls us away from the natural world, but it also allows us to do the deeply human work of creative thinking. Making stuff is part of what makes us human. On some level, this has nothing to do with teaching. But on another level, it has everything to do with teaching.

One way we can ask students to make sense out of how A.I. is reshaping our society is through a Socratic Seminar.

Socratic Seminars are ultimately student-centered. While the structures differ, here are some key components:

  1. Students ask and answer the questions while the teacher remains silent.
  2. Students sit in a circle facing one another.
  3. There is neither the raising of hands nor the calling of names. It moves in a free-flowing way.
  4. The best discussions are exploratory and open rather than cut-and-dried debates. While a question might lead to persuasive thought, the goal should be to examine points of view and construct deeper meaning rather than argue between two binary options.

The following are some critical thinking questions we might ask secondary students to consider in a Socratic dialogue about A.I.:

  • Where am I using A.I. without even thinking?
  • How does A.I. actually work?
  • How might people try to use A.I. to inflict harm? How might people try to use A.I. to benefit humanity? What happens when someone tries to use it for good but accidentally causes harm?
  • What does A.I. do well? What does it do poorly?
  • What are some things I would like A.I. to do? What is the cost of using it?
  • What are some things I don’t want A.I. to do? What is the cost of avoiding it?
  • How am I combining A.I. with my own creative thoughts, ideas, and approaches?
  • What is the danger in treating robots like humans?
  • What are the potential ethical implications of A.I., and how can we ensure that A.I. is aligned with human values? What guardrails do we need to set up for A.I.?
  • What are some ways that A.I. is already replacing human decision-making? What are the risks and benefits of this?
  • What types of biases do you see in the A.I. that you are using?
  • Who is currently benefiting and who is currently being harmed by the widespread use of A.I. and machine learning? How do we address systems of power?
  • When do you consider a work your own and when do you consider it A.I.-generated? When does it seem to be doing the thinking for you and when is it simply a tool?
  • What are some ways A.I. seems to work invisibly in your world? What is it powering on a regular basis?

This is simply a set of questions to start a dialogue. The goal is to spark a deeper, more dynamic conversation.

Questions will look different in younger grades. Here are a few questions you might ask:

  • What is artificial intelligence, and how does it work?
  • Can you think of any examples of A.I. that you encounter in your daily life?
  • What are some good and bad things about A.I.?
  • Should there be rules or limits on how A.I. is used? If so, what might those rules be?
  • How do you think A.I. will change the way we live and work in the future?

As teachers, we can encourage students to explore these questions through a Socratic Seminar. We can also ask students to engage in conversations about the ethics of A.I. and academic integrity. For more on what this looks like, check out the podcast episode with Ben Farrell, who encouraged his students to help craft an ethical policy around ChatGPT. This can also be a great opportunity to bring in community members who can share insights into how A.I. works and how they use it ethically in their work.

4. Interview guest speakers.

There are so many opportunities for students to understand the nature of A.I. and how it is being implemented in our world. Consider a Career and Technical Education (CTE) program. It’s one thing to guess how A.I. will transform an industry. It’s another thing to have a guest speaker say, “A.I. is changing my job in huge ways.” It’s one thing to read an article about A.I. in a specific domain. It’s another thing to experience A.I. integration firsthand in an internship. If CTE programs want to adapt to the times, they will need to connect with industry partners in significant ways and reimagine their curriculum. Here, students can ask critical questions about how A.I. is being implemented and how it is disrupting the industry in both positive and negative ways.

I recently met with a group of teachers from a rural agriculture education program. Their biggest takeaway was how pervasive A.I. and Big Data have become in modern farming.

“It’s funny because we visited a farm that was totally organic. They deliver great produce to a farm-to-table restaurant. Their image is old fashioned and sort of hippy / granola. But I was shocked to see how A.I. is being used in everything from crop timing to weather to water usage to the drones they’re using to identify any type of crop disease. It’s wild.”

At every age, students can interview experts who can share how A.I. is impacting their world. It might be a computer scientist or software engineer who walks students through the nature of machine learning and how it works. But it might also be a doctor who describes how A.I. is used as a diagnostic tool or an artist who shares how generative A.I. is transforming the art world.


5. Wrestle with key ideas through fiction.

Sometimes the best way to think critically about a big topic like A.I. is to explore it through the lens of fiction. In younger grades, students might read a picture book like Robots, Robots, Everywhere or Boy and Bot. These picture books can help launch a bigger discussion about how smart machines work and what they mean for our world.

With older students, we might use cyberpunk classics like Do Androids Dream of Electric Sheep? or Neuromancer. Students might examine the Three Laws of Robotics in Isaac Asimov’s classic I, Robot. For a take on Cinderella and the question about robot consciousness, students might enjoy the YA novel Cinder by Marissa Meyer. One of my favorite fictional reads on A.I. is Klara and the Sun by Kazuo Ishiguro. These novels can help students explore the nuances and complexities of artificial intelligence in our world.

Novels and short stories often provide a more nuanced view of machine learning than a persuasive polemic or even a general non-fiction description of how A.I. works. With a story, we get complexity and paradox. We have the opportunity to explore multiple perspectives through the lenses of multiple characters. And unlike a non-fiction text, stories often include an evolution of ideas and beliefs as characters change over time. As students explore the plot, characters, and themes of a story, they can ask hard questions about how A.I. is impacting our lives.

Moreover, science fiction is often ahead of its time. It isn’t limited to the confines of current technological breakthroughs, so authors often imagine a future that eventually becomes reality. It’s often an exaggerated future, and the authors are sometimes laughably wrong. But they can invite readers to think critically about a technology before it has actually arrived. In doing so, readers often analyze the current social context (think totalitarianism in 1984 or the pleasure-based technopoly of Brave New World) in a much more profound way.


6. Explore the history of A.I.

While we tend to think of A.I. as a revolution, it’s more like an evolution that’s been nearly seventy years in the making. The term “artificial intelligence” was first coined by John McCarthy, an American computer scientist, way back in 1956. This was the era of punch cards and vacuum tubes. From there, artificial intelligence slowly became a reality through small iterations and major leaps. Spell check is a primitive form of A.I. Social media recommendation algorithms are a more advanced form. But now, A.I. is all around us. It impacts our supply chains, our transportation, our health care systems, and our relationships. We are not entering the A.I. era. We are already in it.

If we want students to understand the nature of A.I., we need them to explore how A.I. has already impacted our world. They can explore filter bubbles and echo chambers in social media. They might examine how autopilot has changed with each new iteration and how it currently impacts airline travel. I wrote before about how we can’t predict the ways that A.I. will change our world. We are poor predictors of the future and often oblivious to the present. But the more we think critically in the present, the better we become at anticipating the future. And by looking into the past, students are better equipped to face the future.

Every time we experience a new technology, we also experience a moral panic. We hear dire warnings about what the technology will destroy and how our world will change. When the bicycle was invented, newspapers predicted everything from neurological diseases to distortions of the face (so-called “bicycle face”) to psychological damage. Columnists warned of young women becoming addicted to bike-riding. When telephones were invented, citizens were concerned that the phones would explode. People fought against telephone poles for fear that they would cause physical harm.

But over time, the technology becomes boring.

That’s right. Boring.

You can think about it as a graph with time as the independent variable and concern as the dependent variable, moving through four phases: unawareness, moral panic, acceptance, then boredom.
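
If you want to show students this curve rather than just describe it, here’s a small, hypothetical sketch that plots an illustrative version of it. The shape and numbers are invented for discussion, not drawn from any data:

```python
import numpy as np
import matplotlib.pyplot as plt

time = np.linspace(0, 10, 200)

# Invented curve: low concern, a spike of panic, then a decay toward boredom.
concern = 0.05 + np.exp(-((time - 3) ** 2) / 2)

plt.plot(time, concern)
plt.xlabel("Time (independent variable)")
plt.ylabel("Concern (dependent variable)")
plt.title("Reaction to a new technology (illustrative)")

# Label the four phases described in this section.
for x, phase in [(0.5, "unawareness"), (3, "moral panic"),
                 (6, "acceptance"), (9, "boredom")]:
    y = 0.05 + np.exp(-((x - 3) ** 2) / 2)
    plt.annotate(phase, xy=(x, y), xytext=(x, 1.15),
                 ha="center", arrowprops=dict(arrowstyle="->"))

plt.ylim(0, 1.35)
plt.show()
```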

It starts with a general lack of awareness. In this phase, there's a mild concern, often forged by science fiction and speculation. But the concern is largely unfounded because the technology is off in a distant future. Once we grow aware of the new technology, there's resistance in the form of a moral panic. Here, the new technology is scary simply because it's new. We read reactionary think pieces about all the things this technology will destroy. As we adopt the technology, the concern dissipates. We grow more comfortable with the technology and start to accept it as a part of our reality. Eventually, we fail to notice the technology at all, and it starts to feel boring.

By exploring the past, students can avoid some of the traps of the present-day moral panic around A.I. and instead consider the ways that A.I. has already impacted our world. This then helps them make sense out of the dangers of things like deepfakes and misinformation, or the changes that might occur in the future of work.

It’s About Awareness

There is no single right way to teach students about the way that A.I. works. But if our students are being raised in a world of smart machines, it’s important that they wrestle with what it means to be human in this world and how the machines themselves actually work. The danger is in the complacent “boredom” stage where the technology grows so pervasive it becomes invisible. This is why we need students asking hard questions and growing in their awareness around how A.I. works and how it is changing our world.

Get the FREE eBook!

With the arrival of ChatGPT, it feels like the A.I. revolution is finally here. But what does that mean, exactly? In this FREE eBook, I explain the basics of A.I. and explore how schools might react to it. I share how A.I. is transforming creativity, differentiation, personalized learning, and assessment. I also provide practical ideas for how you can take a human-centered approach to artificial intelligence. This eBook is highly visual. I know, shocking, right? I put a ton of my sketches in it! But my hope is that you find this book practical and quick to read. Subscribe to my newsletter and get A Beginner’s Guide to Artificial Intelligence in Education. You can also check out other articles, videos, and podcasts in my AI for Education Hub.


John Spencer

My goal is simple. I want to make something each day. Sometimes I make things. Sometimes I make a difference. On a good day, I get to do both.

5 Comments

  • Hi John, these are really great prompts for engaging students in AI. In particular, it’s crucial to break down the differences (along with some similarities) between a computer “learning” and a human “learning.”

    I work at Arizona State University in the School of Arts, Media and Engineering. Over the past 5 summers I’ve been collaborating on an NSF funded research project called ImageSTEAM where we worked with middle school teachers to co-develop classroom lessons around AI, centered on computer vision – so, how computers “see” and how image classification and creation works. The lessons address both AI and classroom state-standards of all subjects, not just computer science. For example, one science teacher has the students try to build a sign-language interpreter and follow the engineering design process, a key standard for his classroom. We iterated on the lessons by teaching them at our summer camp (http://summer.digitalculture.asu.edu) and revising them.

    If you or any readers are interested in what we learned about teaching AI to kids, or using any of the lessons we developed, check them out on our website: https://www.imagesteam.org/curriculum

    Kim

  • Amanda Murray says:

    Could I please get your ebook A Beginners Guide to Artificial Intelligence in Education? I’m unsure where to access it. Thank you!

  • Valerie says:

    Great blog! We had a PD about digital literacy this summer. I am fairly tech savvy and I always teach digital literacy in the classroom. I think we need to get ahead of it so our kids can stay in front of the curve. I want them to be prepared for the future.
