
Cognitive atrophy happens any time we lose the ability to engage in a mental process due to inactivity. In a world of Artificial Intelligence, we need to be cognizant of the dangers of cognitive atrophy so that we can continue to engage in curiosity, creativity, and deeper learning. In my latest article and podcast episode, I explore what we can do to ensure that we mitigate cognitive atrophy when using AI.

How do you prevent AI from short-circuiting your thinking?

Listen to the Podcast

If you enjoy this blog but you’d like to listen to it on the go, just click on the audio below or subscribe via iTunes/Apple Podcasts (ideal for iOS users) or Spotify.

 

If We Off-Load Our Thinking to AI, What About Cognitive Atrophy?

When I lived in Phoenix, I had a solid mental layout of the city. I knew which streets were north and south and which were east and west. I knew that Cactus would turn to Shea for some reason when you went to the east side of town. I accidentally memorized the order of our first twenty presidents because of the sequence of presidential street names downtown. I knew where landmarks were located and I would use phrases like, “Where the old Denny’s was located before they moved it up the street” or “Right by where the mall used to be before they turned it into the middle school.”

In other words, I learned how to drive in an era where we had to give verbal directions that often shifted from landmark to cardinal direction to left and right and often included a mix of all of the above. I later navigated additional areas of town via MapQuest, which was a bit like being a pirate with a customized map. I often cringe at just how distracted a driver I was as a college student using a printout map to find my way to a friend’s apartment.

When I moved to Salem, I tried to orient myself in the city. While it’s significantly smaller, it is confusing. We don’t run on a tidy grid system like Phoenix. And figuring out if I’m going northbound is a struggle in a space with thick gray clouds all winter. And yet, my biggest challenge has been my dependency on Google Maps. I learned my way around town by following precise directions. While I could have theoretically listened to the command, “Go northbound,” it didn’t really register. I simply drove straight. I turned left. I went straight again. I did so thoughtlessly while I listened to podcasts or music.

Nearly a decade later, I still struggle to orient myself in this city. I don’t know which landmarks are by other landmarks. I never truly know what side of town I’m in. I haven’t developed a mental schema of the city. It’s not for lack of effort, either. I’ve somehow lost some of my ability to think spatially. I used to estimate how long it would take to move from one location to another. I would plan out my route. Now I just click in an address and it does that for me. I now find it a struggle to estimate how long various multi-step trips might take me. I’ve outsourced this process to my device.

I’m experiencing a phenomenon called cognitive atrophy. We typically think of cognitive atrophy as a phenomenon related to aging and to cognitive decline through dementia and Alzheimer’s disease. On a biological level, cognitive atrophy refers to the gradual decline in cognitive functions due to the degeneration of brain cells or a decrease in brain mass, not unlike a muscle that atrophies. This process can affect various aspects of cognitive ability, such as memory, decision-making, and problem-solving skills. But I see the same phenomenon when we allow technology to do an entire thinking process for us.

This isn’t a new phenomenon. Socrates was concerned about the advent of writing and its impact on memory and knowledge in the “Phaedrus.” Socrates argued that writing would lead to a decline in the mind’s ability to memorize and recall information because people would rely on external written sources instead of internal memory. This reliance on written texts would weaken the mind’s capacity to learn and remember, as individuals would no longer need to exercise their memory to recall information. The truth is, he was right. In a print-rich world, modern humanity has lost some of the ability to memorize large chunks of information.

As we think about AI, we need to be cognizant of the potential for cognitive atrophy. I love the question and answer nature of a chatbot but I worry about the lack of productive struggle it might cause. I worry about instant answers and the loss of things like boredom and confusion that are so necessary for the learning process. I love how AI can help with ideation but I never want it to be my default in brainstorming. I can see value in using AI throughout the creative process (especially within project-based learning) but I worry about outsourcing creative work to a machine. When that happens, students don’t become the makers and problem-solvers that they can be. In other words, I worry that we might become so dependent on AI that we lose the ability to engage in certain types of thinking.

 

Just Because AI Can Do It Doesn’t Mean We Shouldn’t

I have seen many AI experts suggest that we ask the question, “Can AI do this process?” If the answer is “yes,” then it’s time to transform the learning and focus solely on the areas that humans do better. While I get the sentiment here and the need for transformation, I’d like to push back on that.

First, I think it misses the fact that some tasks are simply fun to do, whether they can be mechanized or not. You can buy a blanket at Wal-Mart. It’s fast and cheap. Or you can crochet it and spend more time and money along the way. And yet, when a friend of ours gave us a crocheted blanket when we had a newborn, it became something we will cherish forever.

You can buy a pitching machine and never play catch with your kid but you’ll be missing out on one of the best parts of parenthood. You can use navigation to get around a city but you can also put your phone away and discover a new city by chasing your curiosity on foot. Sure, Google Maps can do a better job getting you there efficiently but is efficiency always the bottom line?

We know that AI can beat the best chess masters in the world. And yet, how many high schoolers fell in love with the game of chess two years ago when they watched The Queen’s Gambit? Chess is worth pursuing because it’s fun to do. When the pandemic hit, my students did a show and tell where they shared their positive coping mechanisms. They talked about gardening, writing, knitting, pickling (because Portland, right?), coding, drawing, painting, and cooking. Every activity was something that we could automate. And yet, these were lifelines for them. Part of what it means to be human is to do tasks that can be automated but to do them in an idiosyncratic way, where you put your own stamp on it.

On a more academic level, we often need lo-fi tools and old school strategies to learn more deeply. I worry about people using AI for writing and failing to understand that we learn to think deeply through writing. It’s not merely the way we demonstrate our learning. It’s often how we learn. We can use AI to summarize key information but handwritten notes allow us to retain more information from our learning.

A hand drawn sketch note helps create the synaptic connections needed to move the information from short term to long term memory. You become a better conceptual thinker when you don’t use AI for note-taking. If we look at this diagram of information processing, we need students to get information into their long-term memory:

Research has demonstrated that students retain more information when they take notes by hand rather than typing them. Similarly, students become better observers in science when they sketch out what they see. This seems odd at first. Is a photo more efficient? Absolutely. Is a photo more accurate? Most definitely. Do scientists use photographs out in the field? You bet. Then why bother sketching? The act of drawing teaches students how to observe. We don’t want to short-circuit that process. I want to see students ideating with sticky notes and sketches and webs rather than asking the AI to develop a fully formed project plan.

In terms of learning, we also need to engage in hands-on, minds-on, tech-minimal learning in order to master a skill. In other words, we shouldn’t use AI when we are first learning a new skill.

 

When Learning a Skill, Start with the Human Element First

I reached out to my friend Trevor Muir and asked him, “What would you recommend to tackle the problem of cognitive atrophy?” His response was, “I love this topic. I’ve been thinking about it in writing. I don’t think teachers should use AI with students in writing until students have mastered it first.”

If we ask students to use AI for writing, they need to know what good writing looks like. That takes time. And effort. And a whole bunch of mistakes. If we want students to edit an AI-generated text with their own voice, we need them to find their creative voice first. This is true of AI in writing but also AI in math. We don’t want students using AI to check their processes if they haven’t first learned the mathematical process. It’s true of computer coding, where we might start with a Scratch project, then hand-written code, then an AI and coding hybrid.

 

Be Deliberate About What You Off-Load to AI

A couple of months ago, I wrote about seven things we should consider when deciding to use AI. People often ask, “When is it okay to use AI?” The short answer is, “It depends on the learning task.” In using AI, we don’t want the machine to do the learning for us. This is why we should start with the learning tasks and then ask, “Does the AI help or hinder the learning in this situation?” The core idea here is that we need to use the learning targets to drive the AI and not the other way around.

If you’re teaching a coding class, you might want to be tight with students on using generative AI to create any kind of code. You might want students to learn how to code by hand first and then, after mastering the language, use AI-generated code as a time-saving device.

By contrast, if you’re teaching a health class where a student develops an app, you might not care if they use generative AI to help write the code. Instead, your focus is on helping students design a health campaign based on healthy habits. You might not have time to teach students to code by hand. You might not care about coding by hand. The app is merely a way for students to demonstrate their understanding of a health standard.

If you’re teaching an art class, you might not want AI-generated images but you might embrace AI-generated images in a history class where students work on making infographics to demonstrate their understanding of macroeconomics principles.

It might feel like cheating for a student in a film class to use AI for video editing but the AI-generated jump cuts might save loads of time in a science class where students demonstrate their learning in a video. In a film class, it’s critical for students to learn how to edit by hand in order to tell a story. In science, AI-generated jump cuts allow students to create videos quickly so they can focus on the science content.

I also want to recognize that some of what students learn can and will become obsolete. I’m pretty sure I didn’t actually need to memorize the state capitals, for example. Which leads me to the next question . . .

 

But What If We Don’t Need That Skill Anymore?

I grew up in an era where teachers were moving away from memorization. We still had to memorize math facts and, for some reason, state capitals. I’ve never visited a state and thought, “Man, I really need to see the capitol.” If I’m in Nevada, I’m not like, “Screw Vegas, I’m going to Carson City!”

But, for the most part, we had moved past memorization. We were now in a largely print-based culture and memorization just wasn’t too important anymore. For many people, this tradeoff is a good thing. Why memorize it if you can access the knowledge with technology? However, when I was in college, I decided to memorize key texts that I wanted with me at all times. I memorized Bible verses and Shakespearean stanzas. I memorized an ee cummings poem and a quote from bell hooks. I memorized Stoic passages and every word of the Bill of Rights.

Since then, I have forgotten many of the lines of poetry. But the act of memorizing text allowed me to slow down and think harder about the meaning of the text. Yes, I was memorizing it. But I was also meditating. Decades later, when I experience a high anxiety day, I will still recite back Philippians 4:6-7. In addition, learning how to memorize text also taught me how to remember conversations I had with people at greater length. It taught me to remember books I had read.

As students move away from our K-12 classrooms, they will need to decide which skills they want to continue to use even if AI can do it for them. Some might feel that coding / programming should be something AI does and therefore they won’t learn to code. Maybe that’s okay. After all, I don’t make my own clothing. I choose to outsource and automate it.

The key thing is that they learn how to think critically about when they use and don’t use AI. That requires students to move away from a state where AI is the default.

 

Be Careful About Using AI as the Default

Google Maps is a fantastic tool. If I am visiting a city for the first time on vacation, I definitely prefer using an automated map rather than trying to pick up a physical map, sketch out my route, and memorize it. The problem was when I shifted into using Google Maps as my default. I should have gotten “lost” in Salem for a day or two. I should have ridden my bike around Wallace Marine Park, up through Riverfront, and into downtown. I should have paid close attention to landmarks and said, “The Home Depot is on the way to the I-5.” I didn’t do any of that. I figured I would simply learn my way around the city after using my map app long enough. In other words, I allowed the technology to be my default.

So, let’s consider AI and writing. I’ve written an article about how AI might transform the essay and another article about AI and the future of writing. In those pieces, I described how we might integrate AI into each part of the writing process. We will need to pick and choose how we use AI within each writing piece we create.

We might start from a place where we are human-driven first and use AI to modify what we are doing. I write out my blog posts from scratch but I will use some auto-fill and some Grammarly feedback to improve it. I might even go to AI to help define a concept. But it is human-driven and AI informed.

[Image: our voice with a megaphone and an arrow pointing to AI to modify it, with a brain that has AI-like nodes]

Sometimes, though, I might want to start with an AI piece of writing that I then modify to make my own. Here’s an example of a time I began with AI and changed it to fit my voice. I began with a writing prompt of my own:




From there, I had the AI create a response. Here’s what it came up with.

Note that this isn’t bad but it is cliché. Parts of it feel derivative. But it also doesn’t fit my voice or personality. It’s too violent and even cynical. So, I modified it to make it my own. My parts are in bold.

  • Take over the world. But maybe start out small. Perhaps an exoplanet? Or just take over Fresno. Yeah, start out small with Fresno and then go big. 
  • Steal the moon. I mean, not our moon, of course. I need the moon if I’m going to keep surfing. I’m thinking maybe Titan or Io? Perhaps Callisto? Nobody ever pays attention to Callisto.
  • Create a shrink ray but one that only makes clothes shrink so that everyone in Fresno thinks they gained ten pounds overnight.
  • Build a giant robot navy. All the villains do an army. We’re going with a solid robot navy. Pretty sneaky, huh?
  • Train an army of moderately sized, genetically engineered hamsters.
  • Hijack Santa’s sleigh and replace all the presents with leftover AOL CDs.
  • Create a secret underground lair with a moat full of that weird Midwestern Jell-O Salad that your grandma used to make with the coconut and walnuts. While we are at it, let’s replace the carpet with hardwood floors. Maybe the Property Brothers have some ideas?
  • Come up with a ridiculous and over-the-top villainous name like Kyle.
  • Brainwash all the puppies in the world (they make great henchmen) so that they act like cats and their owners can experience the rejection normally dished out by their feline companions.
  • Build a time machine and go back in time to raise baby Batman to be a healthy, well-adjusted adult without any chip on his shoulder. Then attack Gotham City. They’ll be defenseless without the Caped Crusader.

Is it better? Not really. But it is distinctly mine.

So, if I can take AI-generated text and merely edit it, why don’t I do that more often? Simply put, I don’t want to make AI my default. I want to write because, well, I enjoy writing and it’s how I make sense out of my world. It’s how I wrestle with ideas. If I start with AI as my default, I might slowly lose my voice without even realizing it.

 

Do a Time Audit

The scariest thing about cognitive atrophy is that it’s hard to notice in the moment. With physical muscles, I can feel a difference. A year ago, I got busy and stopped lifting weights for two months. I felt a difference when I lifted a couch or when I was carrying a lot of groceries. I knew, in my daily life, that I needed to get back to the gym. But in the case of Google Maps, it happened without me realizing it at all. I thought I had retained my spatial reasoning skills because I had used them for two decades. But then, they evaporated and I didn’t notice it until it was too late.

As educators, we want to take a vintage innovation approach that embraces the overlap of old school tools and new technology. We want to embrace the overlap of best practices and next practices:




But this requires an intentionality that can be a challenge in a tech-infused world. We start using auto-fill in Google and don’t realize we’re using it. We go to ChatGPT to design some scaffolds and supports and we don’t realize that we have stopped thinking intentionally about it. One potential solution is to track how often we use AI. As educators, we can do a time audit where we track how often we use AI versus engaging in a fully human-centered approach. We can step away from the tech and ask if we are growing too dependent on machine learning. We might even choose deliberate times to be fully tech-free.

In the end, cognitive atrophy will be one of the most significant challenges of AI. As educators, we need to be intentional about how we use it professionally and with our students so that we don’t off-load the thinking to a machine. We need to draw students into these conversations as well so that they can learn to use AI wisely as they navigate an uncertain future.

 

Get the FREE eBook!

With the arrival of ChatGPT, it feels like the AI revolution is finally here. But what does that mean, exactly? In this FREE eBook, I explain the basics of AI and explore how schools might react to it. I share how AI is transforming creativity, differentiation, personalized learning, and assessment. I also provide practical ideas for how you can take a human-centered approach to artificial intelligence. This eBook is highly visual. I know, shocking, right? I put a ton of my sketches in it! But my hope is you find this book to be practical and quick to read. Subscribe to my newsletter and get A Beginner’s Guide to Artificial Intelligence in Education. You can also check out other articles, videos, and podcasts in my AI for Education Hub.

Fill out the form below to access the FREE eBook:

 

John Spencer

My goal is simple. I want to make something each day. Sometimes I make things. Sometimes I make a difference. On a good day, I get to do both.
