
Analyzing the implications of AI for your course


Here we will guide you through analysis and self-reflection about how AI can affect your own courses and teaching practice. We have organized our guidance into three broad topic areas: academic integrity, student success, and workload balance.

Key points from the previous module

Exploring the pedagogical uses of AI chatbots

  • AI chatbots might be used in a variety of ways for teaching, such as providing feedback, tutoring, coaching, supporting teamwork, running simulations, and more.
  • Strategies for prompting chatbots include using structured prompts for outcome-oriented tasks, conversing naturally for open-ended tasks, providing context details, and so on.
  • Practice using a chatbot to better understand its capabilities and limitations.
  • Potential risks of chatbot use include truthfulness, privacy, bias and stereotypes, and equity and access.
Go to the previous module

Outcomes for this module

In this module, you will analyze how AI chatbots fit into your own course relative to the broader campus context around AI and technology. The nuances of these issues will vary depending on the unique characteristics of your discipline area and course. We encourage you to think carefully about your specific situation when going through this module.

After completing this module, you should be able to:

  • Describe some of the risks of using AI in teaching and learning contexts.
  • Describe campus policy guidance from the Office of Community Standards regarding AI use.
  • Analyze how AI might impact your specific course through the lenses of academic integrity, student success, and workload balance.

Thinking about your own course

As you move through this module, we ask you to think about the characteristics of the course or courses that you teach. We hope that by focusing on your specific course you can better apply the ideas and insights you gain through these modules to develop something actionable and meaningful to you. Consider the prompt "What do you want students to learn in your course?" and respond to the poll below.

[Poll: What do you want students to learn in your course?]

Potential risks when using AI 

Generative AI chatbots are not perfect tools. Any use of AI carries risks stemming from shortcomings in how these tools perform and respond to different prompts. Here we focus on a few areas most relevant to teaching and learning.

Truthfulness

Large Language Models can produce incorrect yet plausible information and present it confidently as factual. This kind of hallucination or confabulation stems from how these systems work and from the limits of their training data. Chatbots tend to make mistakes when prompted to provide quotes, citations, and specific detailed information. Different LLMs vary; most have become more sophisticated and less prone to errors over time. However, you and your students should always fact-check the output of chatbots against reliable external sources when using them to get information (Mollick & Mollick, 2023).

Privacy 

Assume that the organization that developed the chatbot will use any data you enter according to its terms of service. Privacy laws and regulations concerning chatbots are also still evolving and unclear. We recommend that you and your students exercise caution when entering sensitive or private data into a chatbot, as doing so might put your privacy at risk. Do not enter any protected information, high-risk data, or other data that should not be made public into a chatbot. Likewise, do not enter copyrighted material or intellectual property that belongs to others, such as student work, unless you have their permission. University IT provides additional guidance on the responsible use of AI regarding privacy and data security on their Responsible AI at Stanford webpage.

Bias and stereotypes

Chatbots and Large Language Models can produce content that perpetuates harmful biases and stereotypes. Developers train LLMs on vast but still limited sets of digital data, most of which comes from Western, English-language sources on the internet. Human engineers, with their inherent biases, also provide additional training for these tools, and individual users bring their own perspectives into dialogue with a chatbot through prompts and queries. All of these factors can introduce subtle biases and stereotypes into a chatbot's output. We encourage you and your students to be critical of language generated by AI chatbots and to consider these important issues when using these tools (OpenAI Platform, n.d.).

Equity and access

As with any technology, access to these tools varies, and lack of access can perpetuate existing inequities. Consider the cost of subscriptions, access to computers and reliable connectivity, geographic restrictions, accessibility for people with disabilities, the user's preparation, and the tools' performance in other languages as important aspects of this issue. While chatbots can help to reduce some gaps, they may also exacerbate others. Keep these issues in mind as you and your students work to maximize the potential benefit of using chatbots.

Critiques of LLMs

Critiques of LLMs highlight broader issues of environmental impact, justice, ethics, economic impact, and so on. We consider these criticisms important and valid; however, many lie beyond the scope of these modules. If you'd like to explore these broader critiques further, consider the articles listed in the Works cited section below as starting points.

Three areas of focus

The potential impacts of AI are wide-ranging, still emerging, and rapidly evolving. We cannot address them all here; instead, we will focus on three areas of particular concern for instructors: academic integrity, student success, and workload balance.

Academic integrity

Many instructors express concern that students will use AI to shortcut the learning process and present AI-generated text as their own work. When thinking about AI use in your Stanford courses, you and your students should refer to campus policies, including the Honor Code and the Student Judicial Charter, and understand what constitutes plagiarism. We ask you to consider what you can do to promote trust, integrity, and honorable behavior.

Student success

We all have a responsibility to support students in achieving success, which can have many dimensions. Supporting students' success might include preparing them for the future, helping them meet their own goals, and supporting their well-being. Consider these factors when determining whether AI tools align with your course's learning outcomes, how you support students in using AI tools, how you and your students understand the risks and benefits of AI, and so on. Learning about new AI tools, adapting your course design, and employing evidence-based teaching techniques are also ways to support your own and your students' success. Consider how your efforts can benefit all students of diverse backgrounds, identities, and experiences.

Workload balance

To support students sustainably and meaningfully, we must also promote positive mental health and well-being, including your own. Adopting a new tool or growing your teaching practice can require a lot of work, and we recognize you often have competing priorities and limited time. When considering the impact of AI tools, weigh the time needed to learn them and to implement changes to your course or teaching practice against the benefits of adopting such tools and their alignment with broader organizational goals.

Campus policy guidance on AI use

The Office of Community Standards (OCS) has stated the following:

"Absent a clear statement from a course instructor, use of or consultation with generative AI shall be treated analogously to assistance from another person. In particular, using generative AI tools to substantially complete an assignment or exam (e.g. by entering exam or assignment questions) is not permitted. Students should acknowledge the use of generative AI (other than incidental use) and default to disclosing such assistance when in doubt. Individual course instructors are free to set their own policies regulating the use of generative AI tools in their courses, including allowing or disallowing some or all uses of such tools. Course instructors should set such policies in their course syllabi and clearly communicate such policies to students. Students who are unsure of policies regarding generative AI tools are encouraged to ask their instructors for clarification."

Given this guidance, it is important to decide on the policy regarding AI use that best fits your own unique course and context.

Guidance on tools for academic integrity

The Office of Community Standards oversees policies about technology tools for plagiarism detection and proctoring. Its Tips for Faculty & Teaching Assistants webpage has guidance on using software tools to compare submitted work to other sources, and the Honor Code section of its website has up-to-date information on proctoring.

The Teaching Commons offers more background information about these technology tools on the Guidance on technology tools for academic integrity page linked below.

Self-evaluation of your course

The following method can help you analyze how AI chatbots might impact a specific course that you teach. We devised questions to prompt your thinking as you analyze your own course and teaching practice. The material is structured like a rubric, organized into three focus areas: academic integrity, student success, and workload balance. Each area is further divided into sub-categories:

  • Assessments: Measuring how well students learn what you intended and how you assign grades to students
  • Student support: Providing students with what they need to succeed
  • Learning activities: Activities that students do to reinforce learning
  • Inclusion and belonging: Supporting a wide range of diverse students
  • Discipline area: Unique characteristics of your discipline area 

If you answer "Yes," "Very much," or "A lot" to many of the questions below, you might consider adopting a more open policy and integrating the use of AI tools into your course.

Academic integrity

A self-evaluation rubric about academic integrity for integrating AI into your course.
Assessments
  • To what degree do your learning objectives align with higher-order thinking skills, such as creating original work, proposing solutions to complex problems, and internalizing values?*
  • How effectively do your current assessments, rubrics, and so on measure your learning objectives?*
  • How difficult would it be for an AI chatbot to successfully complete your current assessments?
  • How clearly and consistently does your grading method help you grade student work fairly?
  • To what degree does your course provide multiple opportunities and forms of assessment and avoid single high-stakes assessments?

Student support
  • How well do you communicate to students what integrity means in your course?
  • How clearly do you communicate to students any course or campus policies about academic integrity and AI chatbot use?
  • How well might your students know how to use AI chatbots in responsible and honorable ways?

Learning activities
  • To what degree do ungraded learning activities (which are therefore not subject to OCS policy) factor into your course?
  • To what degree have students already mastered foundational skills that AI chatbots might augment?

Inclusion and belonging
  • To what degree do you model integrity and the responsible use of AI tools?
  • To what degree might your students react positively to allowing AI chatbot use in your course? (Consider how much students might feel pressured vs. protected with a stricter policy, and how much they might feel tempted vs. trusted with a looser policy.)
  • To what degree does your course foster belonging, psychological safety, integrity, and intrinsic motivation to succeed?*

Discipline area
  • How important are the ethical issues concerning AI use in your field?

*See the "Learn more" section below for links to resources on these topics.

Student success

A self-evaluation rubric about student success for integrating AI into your course.
Assessments
  • To what degree could integrating AI chatbots make your assessments more compelling or effective?
  • How well do current assessments align with students' goals and needs?

Student support
  • How well can you, or do you, support students in using AI chatbots effectively?
  • To what degree are your students independent, experienced, and skilled in self-directed learning with technology tools?
  • To what degree does your course promote, and do your students leverage, relevant support services, such as academic coaches, writing tutors, language partners, and so on?
  • To what degree do you and your students understand and consent to the inherent privacy and data security risks that come with using AI tools?

Learning activities
  • To what degree could AI chatbots make learning activities more compelling or effective?
  • To what degree do you value students in your course gaining experience with AI chatbots?

Inclusion and belonging
  • To what degree do you understand the different issues, challenges, and preferences of students typically enrolled in your course?
  • To what degree would using AI chatbots benefit students, particularly first-generation or low-income students, students from under-represented minorities, or students with less academic preparation?
  • How flexible do you consider yourself and your course in adapting to the needs of diverse students?
  • To what degree can you support equal access for students to AI tools in terms of affordability and accessibility?
  • To what degree can you give students and your teaching team informed choices and alternatives in how or if they use AI tools?

Discipline area
  • How important is it for students in your discipline area to have experience with AI tools or understand AI-related issues?

Workload balance

A self-evaluation rubric about workload balance for integrating AI into your course.
  • How easy would it be to adapt different aspects of your course to integrate AI chatbot use?
  • How positive and motivated do you feel about integrating AI into your course or teaching?
  • To what degree have you identified possible enhancements to your course that support the use of AI chatbots?
  • To what degree do you have the time and resources to make changes to your course, or gain the skills needed to do so, while maintaining your own well-being?
  • How many resources, collaborators, colleagues, and communities do you have to support you in this work?
  • To what degree does integrating AI tools align with departmental or unit strategic goals?

Assess and reinforce your learning

We offer this activity for you to self-assess and reflect on what you learned in this module.

Stanford affiliates

  • Go to the Stanford-only version of this activity.
  • Use your Stanford-provided Google account to respond.
  • You have the option of receiving an email summary of your responses.
  • After submitting your responses, you will have the option to view the anonymized responses of other Stanford community members by clicking Show previous responses.

Non-Stanford users

  • Complete the activity embedded below.
  • You have the option of receiving an email summary of your responses.
  • Your responses will only be seen by the creators of these modules.
[Activity embedded here]

Learn more

Works cited

Chiang, T. (2023, February 9). ChatGPT Is a Blurry JPEG of the Web. The New Yorker. https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web

Ferraro, M. F. (2023, February 28). Ten Legal and Business Risks of Chatbots and Generative AI. Tech Policy Press.

Fowler, G. A. (2023, April 14). We tested a new ChatGPT-detector for teachers. It flagged an innocent student. Washington Post. https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/

Generative AI Policy Guidance | Office of Community Standards. (n.d.). Retrieved August 28, 2023, from https://communitystandards.stanford.edu/generative-ai-policy-guidance

Hao, K. (2020, December 4). We read the paper that forced Timnit Gebru out of Google. Here's what it says. MIT Technology Review.

McMurtrie, B. (2023, May 26). How ChatGPT Could Help or Hurt Students With Disabilities. The Chronicle of Higher Education. https://www.chronicle.com/article/how-chatgpt-could-help-or-hurt-students-with-disabilities

Stanford CRAFT. (n.d.). Retrieved July 28, 2023, from https://craft.stanford.edu/

Preview of the next module

Creating your course policy on AI

Example syllabus statements, suggestions, and sample sentences for creating your own AI course policy.

Go to the next module

Learning together with others can deepen the learning experience. We encourage you to organize your colleagues to complete these modules together or facilitate a workshop using our Do-it-yourself Workshop Kits on AI in education. Consider how you might adapt, remix, or enhance these resources for your needs. 

If you have any questions, contact us at TeachingCommons@stanford.edu. This guide is licensed under Creative Commons BY-NC-SA 4.0 (attribution, non-commercial, share-alike) and should be attributed to Stanford Teaching Commons.