Exploring the pedagogical uses of AI chatbots
You can use generative AI chatbots to support teaching and learning in many ways. Here we will guide you through exploring various use cases and examples. We also encourage you to access and use chatbots to complete some provided sample tasks.
Defining AI and chatbots
- Designers use a variety of machine learning and deep learning techniques to develop the large language models (LLMs) that provide the basis of generative AI chatbots.
Outcomes for this module
Here we guide you in exploring the capabilities and uses of chatbots for your discipline, courses, and teaching through hands-on tasks.
Upon completion of this module, you should be able to:
- Reflect on educational uses of AI chatbots.
- Access commonly used generative AI tools.
- Practice using a chatbot to complete a task.
- Describe some of the risks of using AI in teaching and learning contexts.
Warm-up with a metacognitive exercise
As you begin to explore, think about what you already know and the opinions you may already hold about the educational aspects of AI chatbots. This metacognitive exercise can help you identify what you want to explore and what you already understand. Making connections to what you already know can deepen your learning and support your engagement with these modules (Santascoy, 2021). Begin with the prompt, “What is one way you might use AI in education?” and respond to the poll below.
Example educational use cases for chatbots
Instructors and students can try out many ways to use a chatbot for teaching or learning. We organize the examples below according to seven approaches, originally proposed by Ethan Mollick and Lilach Mollick in June 2023. See their paper "Assigning AI: Seven Approaches for Students with Prompts" for details on these educational use cases.
AI as mentor
We know timely formative feedback can help students learn (Metcalfe, 2017). However, providing frequent quality feedback requires much time and effort from you and your teaching team. An AI chatbot might help you by giving students frequent, immediate, and adaptive feedback. For example, you might guide your students in using chatbots to get feedback on the structure of an essay or to find errors in a piece of programming code. Remember that you and your students should always critically examine feedback generated by chatbots.
AI as tutor
Tutoring, which focuses on skill-building in small groups or one-on-one settings, can benefit learning (Kraft, Schueler, Loeb, & Robinson, 2021). Effective tutors may use questioning techniques, collaborative problem-solving, and personalized instruction to support their students. While Stanford provides a range of tutoring services, not all students use them regularly; students might use AI chatbots as a supplement to these services. For example, you or your students can prompt chatbots to generate explanations and analogies for concepts based on your interests, or to ask open-ended questions that encourage further thinking. Chatbots may be better at tutoring certain subjects than others, so be sure to try them out first to assess the helpfulness of their responses.
AI as coach
Metacognitive skills can help students understand how learning works, increase awareness of gaps in their learning, and lead them to develop study techniques (Santascoy, 2021). Stanford has academic skills coaches who support students in developing metacognitive and other skills, but you might also integrate metacognitive activities into your courses with the assistance of an AI chatbot. For example, your students could use a chatbot to reflect on their experience working on a group project or on how to improve their study habits. We advise that you practice metacognitive routines first, before using a chatbot, so that you can compare results and use the chatbot most effectively. Keep in mind that the tone or style of coaching provided by chatbots may not suit everyone.
AI as teammate
A well-functioning team can leverage individual team members' skills, provide social support, and allow for different perspectives. This can lead to better performance and enhance the learning experience (Hackman, 2011). AI chatbots can play a number of roles within a team. For example, teams can use a chatbot to synthesize ideas, develop a timeline of action items, or provide differing perspectives or critiques of the team's ideas. Remember to take the lead when using chatbots for team projects, making your own choices while incorporating what is helpful and discarding what is not.
AI as student
The process of organizing your knowledge, teaching it to someone, and responding to that person reinforces your own learning on that topic (Carey, 2015). You can incorporate this technique into your course in a number of ways, and AI chatbots can serve as a convenient stand-in for a student. For example, you might prompt a chatbot to act as a novice learner and ask you questions about a topic. Try different prompts and refine them so the chatbot responds in a helpful way.
AI as simulator
Transferring skills and knowledge you have learned to a new situation involves abstract thinking, problem-solving, and self-awareness. Deliberate practice, such as role-playing, can help you develop these transfer skills. AI chatbots can help with developing scenarios, role-playing a situation, and providing feedback. For example, you might prompt the chatbot to create a realistic ethical dilemma that applies to your discipline or to role-play as a patient or client in a relevant scenario.
AI as tool
Your work as an instructor also includes many tasks beyond the strictly pedagogical ones these examples cover. The versatility of AI chatbots is part of their potential. In a June 1, 2023 presentation, "Workshop: Leveraging Generative AI Tools to Support Teaching and Learning," Reuben Thiessen of Stanford's Learning Accelerator suggested these "Three D's" for thinking about how AI can support your work. We encourage you to explore and share ways that AI can support your work, whatever it may be.
- Dreaming—Helping you think
- Examples: Brainstorming, Concept Expansion, Concept Mapping, Collaborative Writing, Scenario Building, Simulating Discussions
- Drudgery—Lightening your load
- Examples: Summarization, Data Cleansing, Progress Tracking, Content Moderation, Synthetic Data, Review and Feedback
- Designing—Designing content
- Examples: Lesson Plan Generation, Project Planning, Content Personalization, Accessibility Design, Interactive Experiences, Curriculum Mapping
Potential risks when using AI
Generative AI chatbots are not perfect tools. Any use of AI carries some risks and shortcomings in how these tools perform and respond to different prompts. Here we focus on a few areas most relevant to teaching and learning.
Errors and hallucinations
Large language models can produce incorrect yet plausible information, confidently presented as factual. This kind of hallucination or confabulation stems from how these systems work and from the limits of their training data. Chatbots tend to make mistakes when prompted to provide quotes, citations, and specific detailed information. Different LLMs vary; most have become more sophisticated and less prone to errors over time. However, you and your students should always fact-check the output of chatbots against reliable external sources when using them to get information (Mollick & Mollick, 2023).
Privacy and data security
Assume that the organization that developed a chatbot will use any data you enter according to its terms of service. Privacy laws and regulations concerning chatbots are still evolving and remain unclear. We recommend that you and your students exercise caution when entering sensitive or private data into a chatbot, as doing so might put your privacy at risk. Do not enter protected information, high-risk data, or other data that should not be made public. Likewise, do not enter copyrighted material or intellectual property that belongs to others, such as student work, unless you have their permission. University IT provides additional guidance on the responsible use of AI regarding privacy and data security on its Responsible AI at Stanford webpage.
Bias and stereotypes
Chatbots and large language models can produce content that perpetuates harmful biases and stereotypes. Developers train LLMs on vast but still limited sets of digital data, most of which comes from Western, English-language sources on the internet. Human engineers, with their inherent biases, also provide additional training for these tools, and individual users bring their own perspectives into dialogue with a chatbot through prompts and queries. All of these factors can result in subtle biases and stereotypes in a chatbot's output. We encourage you and your students to be critical of language generated by AI chatbots and to consider these important issues when using these tools (OpenAI Platform, n.d.).
Equity and access
As with any technology, access to these tools varies, and lack of access can perpetuate existing inequities. Consider the cost of subscriptions, access to computers and reliable connectivity, geographic restrictions, accessibility for people with disabilities, the user's preparation, and the tools' performance in languages other than English as important aspects of this issue. While chatbots can help to reduce some gaps, they may also exacerbate others. Keep these issues in mind as you and your students work to maximize the potential benefit of using chatbots.
Critiques of LLMs
Critiques of LLMs highlight broader issues of environmental impact, justice, ethics, and economic impact. We consider these criticisms important and valid; however, many lie beyond the scope of these modules. If you'd like to explore these broader critiques further, consider the following articles as starting points.
- "We read the paper that forced Timnit Gebru out of Google. Here’s what it says." by Karen Hao, MIT Technology Review, 12/4/2020.
- "Ten Legal and Business Risks of Chatbots and Generative AI" by Matthew F. Ferraro, Tech Policy Press, 2/28/2023.
- "ChatGPT Is a Blurry JPEG of the Web" by Ted Chiang, The New Yorker, 2/9/2023.
Access commonly used chatbots
While many different chatbots and LLMs exist, we highlight four prominent chatbots currently available for free. Each has some unique characteristics and nuanced differences in how developers built and trained it, though these differences are not significant for our purposes as educators. We encourage you to try accessing these chatbots as you explore their capabilities.
Remember to read a tool's terms of service when deciding whether to access it. Consider how the tool addresses issues around privacy and security. Some chatbots offer options to opt out of data sharing, as described in their terms of service.
ChatGPT, developed by OpenAI, uses the Generative Pre-trained Transformer (GPT) large language model. As of July 2023, it is free to those who sign up for an account using an email address or a Google, Microsoft, or Apple account. You may also need a valid phone number to verify your account. See the ChatGPT help documentation for more details.
Go to openai.com/chatgpt and sign up to access ChatGPT.
Bard, a generative AI chatbot developed by Google, relies on the Pathways Language Model (PaLM) large language model. As of July 2023, it is free to those with Google accounts. It is not enabled for Stanford University Google Workspace accounts. If you'd like to access this tool, please use your personal Google Account. See the Bard FAQ for more details.
Go to bard.google.com and sign in with your personal Google account to access Bard.
Bing Chat, an AI chatbot developed by Microsoft, also uses the GPT large language model. Unlike most other chatbots, it can access and search the internet. It is available from within the Microsoft Edge web browser. Sign in with a Microsoft account to allow longer conversations with Bing Chat. See the Bing Chat help documentation for more details.
Download Microsoft Edge and sign in with a Microsoft account to access Bing Chat.
Claude, the name of both the large language model and the chatbot developed by Anthropic, uses a training method different from those of GPT and PaLM, one that aims to focus on safety and helpfulness. As of July 2023, the chatbot is available for free in the US and UK. You can access it with an email address or Google account. See Anthropic's help documentation on Claude for more details.
Go to claude.ai/login and sign in with an email address or Google account to access the Claude chatbot.
Using prompts with chatbots
You can communicate with a chatbot in many ways. Consider these suggestions when using a chatbot.
Converse with chatbots naturally for open-ended tasks
In conversations with other people, we routinely ask for clarifying details, repeat ideas in different ways, allow a conversation to go in unexpected directions, and guide others back to the topic at hand. This approach can help with open-ended tasks. For example, if you are using a chatbot to reflect on a recent experience and to think of possible next steps, a conversational tone might yield better results. Try beginning the same way you would begin a chat conversation with a colleague or acquaintance.
Give structured prompts for outcome-oriented tasks
In some situations, you might want a chatbot to generate specific kinds of responses or to behave in a particular way. When prompting the chatbot in this way, use the prompt to describe its role, its goal, any constraints (what it should not do), step-by-step instructions, personalization (what you want it to ask you), and the general pedagogic routine (Mollick & Mollick, 2023). For example, you might use a chatbot to practice your knowledge of beginner Spanish phrases and prompt the chatbot to ask a series of questions that become increasingly difficult while providing motivational affirmations for correct answers and hints for incorrect answers. See "Assigning AI: Seven Approaches for Students with Prompts" for examples of structured prompts.
Provide context and background details to generate more useful responses
The input you provide largely determines the chatbot's predictive responses. The more context, details, and nuance you give the chatbot, the more it has to work with when generating responses. For example, instead of asking "How do I write a course syllabus?", you might instead say "I am a university instructor developing a new introductory course on genetics. Can you assist me in developing a useful and clear syllabus for first-year students?" (Gewirtz, n.d.).
Ask chatbots to tell you what more they need
Chatbots will rarely get things right on the first try. Point out mistakes and give the chatbot instructions for how to do better. When prompting, ask it "What more would you need to make this interaction better?" (Chen, 2023). The chatbot will likely ask for more specific details, which in turn prompts you to give more specific details and instructions that can yield better results.
Ask chatbots to assume a perspective or identity
Consider asking the chatbot to take on a particular perspective or identity. For example, when using a chatbot to practice providing supportive language as an instructor, you might ask a chatbot “Please act as an anxious first-year college student from an under-represented minority coming into office hours for the first time” (Chen, 2023).
Practice using a chatbot
Hands-on experience using a chatbot can help you to better understand the capabilities and limitations of these tools. Try completing some of the following tasks, or the example educational use cases above, to practice using a chatbot.
Stump the AI
Prompt the chatbot to give you information about a subject area in which you are an expert. Prompt for increasingly specific or obscure information. At what point does the chatbot produce helpful information? At what point does it provide incorrect or unreliable information?
Creative writing brainstorm
Begin by telling the chatbot that you would like to develop a fictional short story and that you'd like its assistance in developing your ideas. Explore different elements of the story. Try different ways of interacting and responding to the chatbot to get a sense of its capabilities.
Language study partner
Prompt the chatbot to be your study partner and help you with vocabulary practice. You might start your conversation with: "I want to practice my knowledge of Spanish phrases and vocabulary that I will need when traveling in Buenos Aires this summer. I will be the student and you are my supportive language teacher. What more do you need to set up this practice session?"
Get assistance with a coding project
You might first use the chatbot to help you define a project and break down the work into manageable chunks, then clarify the function or routine you want to work on. You might then use the chatbot to generate examples or suggest useful methods (Gewirtz, n.d.).
Try something fun with the chatbot
Consider ways you might play and have fun with AI chatbots.
- Create a recipe for a new fusion dish.
- Brainstorm ideas for a vacation to a fantastical world.
- Write a silly song or poem.
- Plan a themed surprise party for a special guest of honor.
- Play a guessing game about animals, movie stars, or other topics.
- Provide song recommendations based on your creative descriptions.
Assess and reinforce your learning
We offer this activity for you to self-assess and reflect on what you learned in this module.
- Go to the Stanford-only version of this activity.
  - Use your Stanford-provided Google account to respond.
  - You have the option of receiving an email summary of your responses.
  - After submitting your responses, you will have the option to view the anonymized responses of other Stanford community members by clicking Show previous responses.
- Complete the activity embedded below.
  - You have the option of receiving an email summary of your responses.
  - Your responses will only be seen by the creators of these modules.
- Incorporating AI in Teaching: Examples from Yale Instructors, Poorvu Center for Teaching and Learning at Yale University.
- 101 Creative Ideas to Use AI in Education, a crowdsourced collection edited by Chrissi Nerantzi, Sandra Abegglen, Marianna Karatsiori, and Antonio Martínez-Arboleda.
- Responsible AI at Stanford, Stanford University IT, Information Security.
Carey, B. (2015). How we learn: The surprising truth about when, where, and why it happens. Random House Trade Paperbacks.
Chen, B. X. (2023, May 25). Get the Best From ChatGPT With These Golden Prompts. The New York Times. NYTimes.com. https://www.nytimes.com/2023/05/25/technology/ai-chatbot-chatgpt-prompts.html
Chiang, T. (2023, February 9). ChatGPT Is a Blurry JPEG of the Web. The New Yorker. https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
Ferraro, M. F. (2023, February 28). Ten Legal and Business Risks of Chatbots and Generative AI. Tech Policy Press.
Gewirtz, D. (n.d.). How to Write Better ChatGPT Prompts (and This Applies to Most Other Text-Based AIs, Too). ZDNET. Retrieved July 17, 2023, from https://www.zdnet.com/article/how-to-write-better-chatgpt-prompts/
Gewirtz, D. (n.d.). How to use ChatGPT to write code. ZDNET. Retrieved July 28, 2023, from https://www.zdnet.com/article/how-to-use-chatgpt-to-write-code/
Hackman, J. R. (2011). Collaborative intelligence: Using teams to solve hard problems. Berrett-Koehler.
Hao, K. (2020, December 4). We read the paper that forced Timnit Gebru out of Google. Here's what it says. MIT Technology Review.
Kraft, M., Schueler, B., Loeb, S., & Robinson, C. (2021, February). Accelerating Student Learning with High-Dosage Tutoring. Annenberg Institute for School Reform at Brown University.
Metcalfe, J. (2017). Learning from errors. Annual Review of Psychology, 68, 465-489.
Mollick, E. R., & Mollick, L. (2023, June 12). Assigning AI: Seven Approaches for Students, with Prompts. SSRN. Available at SSRN: https://ssrn.com/abstract=4475995 or http://dx.doi.org/10.2139/ssrn.4475995
OpenAI Platform. (n.d.). ChatGPT Education Documentation. Retrieved July 17, 2023, from https://platform.openai.com/docs/chatgpt-education
Responsible AI at Stanford | University IT. (n.d.). Retrieved October 19, 2023, from https://uit.stanford.edu/security/responsibleai
Santascoy, N. (2021, June 7). Promoting Student Metacognition. Stanford Teaching Commons. https://teachingcommons.stanford.edu/news/promoting-student-metacognition
Thiessen, R. (n.d.). AI Cases in Education. IT Teaching Resources. Retrieved July 26, 2023, from https://teachingresources.stanford.edu/resources/ai-cases-in-education/
Analyzing the implications of AI chatbots
Guided analysis of how AI can affect your own courses and teaching practice, covering ethical issues, student success issues, and workload balance.
Go to the next module
Learning together with others can deepen the learning experience. We encourage you to organize your colleagues to complete these modules together. Consider how you might adapt, remix, or enhance these modules for your own needs. If you have any questions, contact us at TeachingCommons@stanford.edu. This guide was created by Stanford Teaching Commons and is licensed under Creative Commons BY-NC-SA 4.0 (attribution, non-commercial, share-alike).