
Over the last year, I’ve had the opportunity to work with educators who are incorporating generative A.I. into their classroom practice. Whether it’s with the pre-service teachers at my university or with current teachers in the workshops and professional development I lead, I have been inspired by the creative ways teachers are using generative A.I. for student learning. I’m struck by the intentionality and creativity inherent in the process. But I’ve also noticed a surprising trend: students don’t always know how to craft relevant prompts for A.I. chatbots. So in today’s article and podcast, I share a framework for prompt engineering. Here’s a brief overview in a sketch video:

Listen to the Podcast

If you enjoy this blog but you’d like to listen to it on the go, just click on the audio below or subscribe via iTunes/Apple Podcasts (ideal for iOS users) or Spotify.

 

How Will Students Use Generative AI?

I’ve written before about how we might use A.I. in a blended approach to writing. This blended approach moves away from the either/or options of embracing Artificial Intelligence or blocking it entirely. Instead, it focuses on using AI wisely to enhance the learning while also embracing the human elements.

[Image: Blended Venn diagram, with Human on the left, Machine on the right, and Blended in the middle]

A blended approach might include a mix of hand-written and AI-generated writing. Students can create sketchnotes and blend together drawings and text in an interactive notebook or journal. These low-tech options focus on writing as a way of “making learning visible.” Here, students choose old school tools because the simplicity provides more flexibility for deeper thinking. But then these same students might also use a chatbot to generate new ideas or use an AI-generated response that they then modify and amplify. In this sense, they begin with something human generated and add the A.I. as a tool only when necessary. They might use it to get feedback or help with editing (think Grammarly).

But students might also use A.I. as the starting place for writing and add their own unique voice to the finished product. This could happen in the same class period. Students in a blended mode can shift back and forth between the human and artificial intelligence.

Often, students will take the “vanilla” of an A.I.-generated response and add their own unique flavor.

This is just one small example of how students might use generative A.I. Here are a few more ideas:

  • Creative Writing: Students can use generative A.I. for story starters, where the A.I. begins a story with a sentence or scenario and they take that starting point and amplify it. Or they might come up with a general outline for a story and the chatbot helps them clarify characters (including their appearance, backstory, and personality).
  • Project Management: Students will use generative A.I. for every aspect of project management. They will need to use it to set goals, chart their progress, come up with tasks, break tasks into sub-tasks, and problem-solve challenges.
  • Ideation: Students will use chatbots in the brainstorming sessions to come up with new ideas and to create mash-ups of their initial ideas.
  • Skill Practice: Generative A.I. can help students practice key skills by creating examples, problems, and informational texts that students can access as they wrestle with new information.
  • Tutor: Students will use generative A.I. as a way to ask questions and clarify misunderstandings. Here’s a video showing how I might use ChatGPT to clarify what a p-value is. They might ask the chatbot to come up with real-world examples in math. They might ask for scientific explanations. Often, students will use it as a tool to summarize information and even convert existing text into a version that fits their independent reading level. Here, the goal is personalized learning rather than adaptive learning. While we tend to think about this as a text-based process, generative A.I. can help students visualize challenging concepts.
  • Generating Scaffolds: I recently wrote about all the ways that we can empower students to generate scaffolds and supports so that they can access the learning. They might use it for skill practice or do a prompt such as “function as a set of flash cards for me.” They might use it to break down tasks, create checklists, or craft graphic organizers. Students might use a chatbot to define key vocabulary, create sentence frames, or translate text.
  • Assessment: Earlier, I wrote about how we can use generative A.I. as an assessment tool. Here, the chatbots can provide feedback, guide reflection, look for errors (as a diagnostic tool) and engage in role-playing.

These are just a few small examples. Students might also use these tools to create the initial set of code in programming or the first draft in writing. They might use it as a first draft in a digital artwork. There are so many possibilities. But this requires students to be creative and competent in prompt engineering.

Students Struggle with Prompt Engineering

I’m standing here in a middle school social studies classroom as students engage in a Wonder Day activity. This teacher has been using this short-term inquiry activity as a way for students to answer their own questions by finding answers online and ultimately creating their own podcast episodes. She has decided to integrate ChatGPT into the initial inquiry phase. Here, students start with their question and then use the chatbot to build background knowledge.

Unfortunately, it’s not working very well. Students are asking great questions but they’re leaving out important context. So, one student asks about “the world war” but doesn’t state whether it’s the First or Second World War. Another student asks, “What kinds of food did American soldiers eat during the war?” In both cases, students struggle because of a lack of clarity. Meanwhile, other students create super rigid prompts that lead to short answers and no room for elaboration.

It can be hard to strike the right balance between providing specific instructions and allowing room for creative responses. Super rigid prompts tend to limit the chatbot’s ability to generate new content but vague prompts lead to answers that are . . . well . . . vague. And sometimes those vague answers end up leading to misinformation.

In other cases, students seem to struggle with what exactly an A.I. can and can’t do. They tend to assume the chatbots have a certain contextual knowledge that they lack. Because of the ELIZA Effect, students often treat the A.I. as more human than it actually is and they forget that the seemingly intelligent A.I. sometimes lacks basic understandings around context clues and empathy.

As they move on, many students merely copy and paste the answers. They don’t engage in deeper analysis. They rarely engage in fine-tuning, where they adjust the prompts through tiny tweaks and experimentation. In some cases, students don’t even read the whole answer. Often, students get into a narrow focus and don’t ask the tangential questions that might lead to deeper learning. They also tend to get tunnel vision, where they only use one A.I. tool and don’t seek out variations on the answers they receive. In other words, they become rigid in their thinking.

Prompt engineering in generative A.I. involves designing and fine-tuning the questions or instructions you provide to the A.I. system to get specific, desired responses. It’s like crafting the right query to get the best answer. By carefully constructing prompts, you can guide the A.I. to generate content that meets your needs, whether it’s writing, problem-solving, or other tasks, making it a valuable tool for various applications.

The FACTS Cycle is an acronym that stands for Formulate a question, Acquire an A.I. tool, Create context, Type the prompt, and Scrutinize the results.

The FACTS Cycle of Prompt Engineering

The following is a five-step process for prompt engineering. I created this after talking to artists, engineers, entrepreneurs, and other industry experts to see what their workflow is like when using prompts. I then tested out a more intentional process with secondary students who attend schools I work with locally. Finally, I reached out to my friend Bill Selek, who is a deep thinker about all things ed tech. He shared a few ideas from Joe Marquez (another deep, intentional thinker about A.I.). In the end, I reshaped what had been seven phases into a shorter five-phase process. And I turned it into an acronym, which might turn off a few members of the Anti-Acronym Brigade. So, here is the current iteration.

Phase 1: Formulate the Question

In this first phase, students formulate their initial prompt, which is often written in the form of a question. As a teacher, you might provide some sample questions for students to use. Or you might use sentence frames. The following are some sentence frames students might use when asking questions about 19th-century imperialism:

  • How did 19th-century imperialism impact _________?
  • What were the motivations behind _________?
  • Can you explain the key events that led to _________?
  • In what ways did 19th-century imperialism shape __________?
  • How did 19th-century imperialism contribute to ____________?
  • How did _________ respond to the imperialist movements of the 19th century?

Notice that these are largely question-based. But consider the initial prompt formula you might use in a computer coding / programming course. The following is a set of prompts that I had ChatGPT create (unlike the previous examples). After the list, you’ll find an example of what one of these might look like filled in.

  • I’m working on a program that aims to [describe the program’s goal], and I’m wondering if someone could take a look at my code to see if there are any improvements I can make with ____________?
  • I’ve encountered an issue in my code where [describe the problem or error], and I’m not sure how to resolve it. Could someone please provide guidance or suggestions?
  • I’m trying to optimize my code for [mention the specific optimization goal], but I’m not sure if I’m following best practices. Could someone review my code and offer tips for optimization in ________?
  • I’ve implemented a new feature in my code that involves [describe the feature], and I’d appreciate feedback on whether it’s well-structured and efficient.
  • I’m working on a project that involves [briefly explain the project], and I’d like a second pair of eyes to review my code for readability and adherence to coding conventions. Any feedback would be greatly appreciated.
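To make the formula concrete, here is one possible filled-in version of the second template above. It is only a sketch: the function, the bug, and the wording are all invented for illustration.

```python
# A hypothetical filled-in version of the second prompt template above.
# The function, the bug, and the wording are invented purely for illustration.

code_sample = '''
def average(scores):
    return sum(scores) / len(scores)   # crashes when scores is an empty list
'''

prompt = (
    "I've encountered an issue in my code where my average() function raises "
    "a ZeroDivisionError when the list of scores is empty, and I'm not sure how "
    "to resolve it. Could someone please provide guidance or suggestions? "
    "Here is the function I'm referring to:\n"
    + code_sample
)

print(prompt)  # this is the text a student would paste into the chatbot
```

Notice that the student names what they are citing before pasting it in. That small piece of context becomes important again in Phase 4.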

In this phase, you might need to model the prompt creation process. This is a great opportunity to talk through your thinking process and ask students to do the same. You might even create an initial prompt and ask students to give feedback on how they might change the prompt. Here, you ask students questions like, “How might we tighten up the language so the A.I. response is clearer?” or “How do we change this prompt to allow for more divergent thinking in the A.I. response?”

The following is a rubric students can use to judge the quality of existing prompts:

  • Clarity and Conciseness
    • Proficient (4): Prompts are clear and mostly concise, with minimal ambiguity. They effectively communicate the task or request.
    • Satisfactory (3): Prompts are generally clear but may contain some ambiguity or extraneous information. They convey the task or request but with minor clarity issues.
    • Needs Improvement (2): Prompts lack clarity and conciseness, leading to some confusion. Ambiguity or irrelevant information hinders communication of the task or request.
    • Inadequate (1): Prompts are unclear, overly verbose, or entirely ambiguous, making it difficult to understand the task or request.
  • Specificity and Relevance
    • Proficient (4): Prompts are specific and relevant to the desired output. They guide the A.I. system effectively.
    • Satisfactory (3): Prompts are somewhat specific but may lack precision or relevance in some aspects. They guide the A.I. system but with room for improvement.
    • Needs Improvement (2): Prompts lack specificity and relevance in guiding the A.I. system, leading to partially relevant or off-topic responses.
    • Inadequate (1): Prompts are entirely vague or irrelevant, resulting in responses that do not align with the desired output.
  • Creativity and Innovation
    • Proficient (4): Prompts support creative thinking and innovation, but they may occasionally limit creative possibilities.
    • Satisfactory (3): Prompts offer limited room for creativity and innovation, potentially constraining the A.I.’s response.
    • Needs Improvement (2): Prompts stifle creativity and innovation, resulting in predictable or unoriginal responses.
    • Inadequate (1): Prompts entirely discourage creative thinking, leading to dull or repetitive responses.
  • Ethical Considerations
    • Proficient (4): Prompts show awareness of ethical concerns, with minimal potential for biased or inappropriate content.
    • Satisfactory (3): Prompts show some consideration of ethical issues but may have potential for bias or inappropriate content.
    • Needs Improvement (2): Prompts lack awareness of ethical concerns, potentially leading to biased or inappropriate content.
    • Inadequate (1): Prompts disregard ethical considerations entirely, resulting in highly biased or inappropriate content.

Once students have an understanding of what makes a quality prompt, they can craft their own. It can help to provide students with samples and sentence frames as a scaffold. Most students haven’t seen what great prompts look like and these supports make the process more explicit and visible. After crafting a few prompts, students can engage in a peer feedback process that cites one or more of the categories from the rubric.

Phase 2: Acquire the A.I. Tool

Students don’t always grasp the limitations and capabilities of the A.I. system they’re using. Unrealistic expectations or assumptions about what the A.I. can do may lead to anger, frustration, and disappointment. In this phase, students take their initial prompt and ask, “Is an A.I. capable of adequately answering this prompt?” If not, they might need to revise it.

If students feel satisfied with their prompt, they can consider which A.I. platforms they want to use. Some students might use the same prompt on multiple platforms and compare and contrast their results. Others might stick to one platform and use it for the entire process. Here, students consider the pros and cons of each A.I. platform to see which one fits their prompt the best.
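For older students (or teachers) who are comfortable with a bit of code, that compare-and-contrast step can even be scripted. The sketch below is a rough illustration that assumes the OpenAI Python client; the prompt and model names are placeholders, and in most classrooms the “platforms” will simply be two different browser tabs.

```python
# A rough sketch of "same prompt, multiple models," assuming the OpenAI Python client.
# The prompt and model names below are placeholders; swap in whatever you have access to.
from openai import OpenAI

client = OpenAI()  # expects an API key in the OPENAI_API_KEY environment variable

prompt = "What were the motivations behind the Scramble for Africa?"

for model in ["gpt-4o-mini", "gpt-4o"]:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```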

I was recently on an ISTE and ASCD webinar where Alyssa Moon shared an activity in which participants analyzed the personality of different A.I. platforms. It was fascinating to see the work that Dr. Gwen A. Tarbox had done comparing the personalities of ChatGPT, Claude 2.0, Bing, and Pi AI. It was a great schema to get students thinking about the approach, tone, and style that each chatbot might use. Culturally, we tend to think of A.I. as both a tool and an assistant. Asking about the “personality” of a chatbot reminds us that it is a little more complicated than merely selecting a tool.

Phase 3: Create Context

Chatbots tend to struggle with context due to inherent limitations in their natural language processing capabilities. Their initial training data is essentially everything they’ve been allowed to process. Beyond that, chatbots fail to retain a true memory of past interactions, so each new conversation is treated in isolation. It’s a bit like interacting with someone who has no memory of ever having interacted with you.

At a basic level, a chatbot has none of the contextual knowledge you take for granted. A chatbot can’t read body language, understand tone, or feel the emotions of a shared experience. It can’t “read the room.” So, the nuances and ambiguities present in human language can pose difficulties. While we, as humans, tend to process context quickly, a chatbot needs the human to provide more context. Think of an A.I. as a side character in Grease constantly imploring you, “Tell me more, tell me more.”

Complex conversations can confuse chatbots. They may lose track of the conversation thread or respond in a manner incongruent with the ongoing context. Moreover, the A.I.’s dependence on established patterns and pre-programmed responses can hinder adaptability to new or unforeseen contexts. Chatbots often struggle to make inferences. Emotional context, including elements such as sarcasm or humor, often eludes A.I. chatbots. It can feel like you’re interacting with a toddler — albeit one who has an encyclopedic knowledge of the world.

This is why it helps to throw your chatbot a RAFT. Just stick with me. I’m going somewhere. I promise. Back in 2004, I learned about the RAFT concept developed by Carol M. Santa, Lynn T. Havens, and Bonnie J. Valdes in their book Project CRISS. For well over a decade, I used this with my students to help them clarify the Role, Audience, Format, and Topic. I mean, yes, I tweaked it a bit. I taught middle school, so we changed it to FART and I had a sign that said, “FART Before You Write” but it was essentially their process.

What RAFT nails is the notion of context. It spells out the domains and dimensions of writing in advance. So, when asking generative A.I. to produce content, it can help to set those same parameters for the chatbot. The following is a modified version of RAFT for this purpose:

  • Role: What is the role of the chatbot? What are you telling it to do? What is your role? How will you explain it to the chatbot?
  • Audience: Who is the ultimate intended audience of what you are doing? Let the chatbot know the specifics (i.e. “Explain this in a way that a 12-year-old can understand”).
  • Format: What type of format does this need to be in? A table? A chart? A bullet point list?
  • Tone: While the original RAFT uses topic, I change it to Tone here so that the chatbot understands what you’re looking for. Is it formal or informal? Is it simple or complex? Is it approachable or authoritative? Let the chatbot know what to expect.

Students can set the tone in advance by making the RAFT as clear as possible. It becomes the first part of their prompt. Then, they can add the prompt itself.
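To see how the RAFT context and the prompt fit together in practice, here is a minimal sketch that assumes the OpenAI Python client. The role, audience, format, tone, question, and model name are all placeholders; any chatbot works the same way conceptually.

```python
# A minimal sketch of a RAFT-style prompt, assuming the OpenAI Python client.
# Every RAFT detail and the model name below are placeholders to swap out.
from openai import OpenAI

client = OpenAI()

raft_context = (
    "Role: You are a patient history tutor working with a middle school student.\n"
    "Audience: Explain things in a way that a 12-year-old can understand.\n"
    "Format: Respond with a short bullet-point list.\n"
    "Tone: Friendly and informal, but historically accurate."
)

question = "How did 19th-century imperialism impact trade routes in East Africa?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": raft_context},  # the RAFT context goes first
        {"role": "user", "content": question},        # then the prompt itself
    ],
)

print(response.choices[0].message.content)
```

In a plain chat window, students accomplish the same thing by typing the RAFT lines as the first part of their message and the question right after.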

Phase 4: Type the Prompt

After prepping the chatbot with the initial context, type up the prompt from Phase One and submit it. This might be a time to do a last revision or rewrite. I’ve noticed that sometimes students struggle with citing sample text. In other words, they might have a prompt and want to include a section of text to be analyzed. When they copy and paste it, they don’t always provide an explanation of what they are citing. So, again, that context piece becomes critical.

Phase 5: Scrutinize the Answer

There’s often a temptation to take what the chatbot produces at face value. In A Human’s Guide to Machine Intelligence, Kartik Hosanagar describes how most people tend to assume that A.I. is inherently more factual, less biased, and more objective than humans. We tend to give the chatbots far more trust than they deserve. I’ve seen this trend with students, who take the answers from the chatbot and apply them directly to their work.

This is where it helps to lean on the information literacy expertise of librarians and language arts teachers. It’s important to remember that tools like the CRAAP test and even lateral reading might not help in this moment. I love the work of Jennifer LaGarde and Darren Hudgins in Developing Digital Detectives, where they focus on aspects like your emotional reaction, the tone of the piece, and what kind of device you are using. This approach will be even more relevant as A.I. chatbots move from browsers toward mobile apps with more personalization and sophistication.

The following are a few areas that students might scrutinize:

  • Bias Analysis:
    • Check for any favoritism, prejudice, or discrimination in the response.
    • Identify instances where the chatbot may exhibit biases toward specific groups or perspectives.
    • Look for loaded language and word choice.
  • Relevance Assessment:
    • Determine if the response directly addresses your initial prompt.
    • Assess whether the content is related to the topic at hand or if it contains irrelevant information.
    • Check to see if the chatbot truly understood the context or if it needs to be revised.
  • Factual Accuracy Check:
    • Verify the accuracy of information provided in the response by cross-referencing with reliable sources.
    • Highlight any inaccuracies or misleading statements.
    • If you are doing scholarly research, look for peer-reviewed sources. If it’s more informal, look up the information online. Talk to experts (if possible) and compare it to your own prior knowledge. While A.I. “hallucinations” have become less frequent, they still happen.
  • Text Construction Evaluation:
    • Examine the clarity, coherence, and readability of the response.
    • Look for grammatical errors, awkward phrasing, or structural issues that hinder comprehension.
    • Consider the style of the text organization. Is it linear? Is it chronological? Does it move logically? Or is it more connective? Is it missing something?
  • Tone and Language Analysis:
    • Assess the tone and language used in the response to ensure it aligns with the context you have given it. Pay close attention to your ultimate audience and application.
    • Identify any instances of disrespectful or offensive language and evaluate overall professionalism.

At this point, some students are going to do a small revision of their answer. If they’re using ChatGPT, they might click the regenerate option to see if they can find a better answer. Here, they would scrutinize the answer again to see how it differs from the previous results. This is often a chance for students to provide feedback to the chatbot. If students find issues in any of the previous five domains (bias, relevance, accuracy, text construction, tone/language), encourage them to offer constructive feedback for improvement. This is a chance for the A.I. tool to adapt and learn.

The Cycle Continues

In some cases, students might be done with the chatbot. They might copy and paste the results into a document that they then edit and revise to add their own voice. They might follow the directions from a prompt to create something new. But in many cases, this will be the beginning. Students might take that initial answer and begin to formulate a new follow-up question. When that happens, they will move back to the first phase. Often, they’ll shift to the second phase and use the same A.I. platform. However, they might get a second pair of eyes by using a different chatbot. This allows students to compare and contrast the results from multiple platforms. They might choose to go a more social route and talk to a classmate about the results.

At some point, though, students will move through this cycle more quickly. With a clear sense of the context, students will fire off additional follow-up questions in the moment. It can move into a rapid-fire pace. While this is a natural fluency that occurs as skills shift to automaticity, there is a danger in students moving too quickly and failing to scrutinize information or provide additional context. It’s easy to get careless with prompt composition in follow-up questions. Our students might need to develop the mindset of slowing down while interacting with chatbots.

Ultimately, we want students to modify this process and make it their own. They might add a few steps or even change up the order slightly. And that’s okay. This isn’t meant to be a lockstep protocol or even a practical heuristic. Instead, the ultimate goal is to view prompt engineering as a mindset and a habit — one that students can use to think more intentionally, critically, and creatively about the way they interact with A.I.

 

Get the FREE eBook!

With the arrival of ChatGPT, it feels like the AI revolution is finally here. But what does that mean, exactly? In this FREE eBook, I explain the basics of AI and explore how schools might react to it. I share how AI is transforming creativity, differentiation, personalized learning, and assessment. I also provide practical ideas for how you can take a human-centered approach to artificial intelligence. This eBook is highly visual. I know, shocking, right? I put a ton of my sketches in it! But my hope is you find this book to be practical and quick to read. Subscribe to my newsletter and get A Beginner’s Guide to Artificial Intelligence in Education. You can also check out other articles, videos, and podcasts in my AI for Education Hub.

 

