Artificial Intelligence in Education

This topic evolves rapidly. What was true and accurate yesterday may be out of date today.

What is AI?

Artificial Intelligence (AI) describes a machine that appears to perform the cognitive functions we usually associate with human minds. It does this by finding patterns in data sets, which it can do far better and faster than humans. These tools offer the potential to improve the way that humans interact with computers.

You have already been using AI for years. Autocomplete. Spelling & grammar check. Language translation. Alexa & Siri. Audio transcripts. The way Amazon suggests products or Netflix suggests a new show for you to watch. All of these tools are AI driven.

The type of AI that teachers and students are most interested in is likely “generative AI” (GenAI). These are tools that can create new, original content in response to the conversational requests (prompts) we provide.

What should teachers know?

Lilach Mollick explains it well:

AI is UBIQUITOUS, UNDETECTABLE, and TRANSFORMATIVE.

Ubiquitous – AI is everywhere. Every person has access to it. There is no way to filter or block students from access. AI features are already found within everyday tools, and increasingly so. Even if it were possible to block AI from LPS devices (which it is not), students would still be able to access it from home devices and/or their phones.

Undetectable – Students are already using it in all kinds of ways. This includes cheating, and we can’t tell if they’re doing so. Common AI “detectors” have high false positive rates, are easily fooled, and may contain implicit bias. 

Transformative – AI is our first technology since the creation of the Internet that touches everything we do. It has the potential to transform how we live, how we work, and how we teach.

It may be worth your time to watch the following video from Ethan and Lilach Mollick, which offers an overview of how large language models work and how this latest generation of models has changed how we work (as educators) and how we learn (as students).

LPS AIAG

The LPS Artificial Intelligence Advisory Group (AIAG) is a committee that meets regularly to consider the opportunities and risks of AI use by LPS staff and students, and makes recommendations to leadership about appropriate positions.

Similar to the ITT Committee that has reviewed instructional technology tools over the past decade, this cross-departmental team includes staff from a number of departments including Curriculum, Equity, Diversity and Inclusion, Library Services, and Computing Services.

The committee is chaired by Kristi Peters, Director of Educational Technology.

Common Vocabulary

Artificial Intelligence (AI) – when a machine performs the cognitive functions we usually associate with human minds. The term was coined by scientists in 1955.

Algorithm – precise steps (rules) for a computer program to take, much like a recipe in baking. While AI systems may contain algorithms, much of their output is the result of learning from data.

Generative AI (GenAI) – the class of tools that can be used to create new material (novel synthetic content), including audio, code, images, text, simulations, and videos. (E.g. ChatGPT, Gemini, Claude, etc.)

Hallucination – inaccurate or misleading output from an AI tool, often presented confidently as fact.

Large Language Model (LLM) – a type of AI that has been trained on vast amounts of text to understand existing content, find patterns, and generate original content in response to human prompts. Examples include GPT from OpenAI and Gemini from Google.

Prompt – a request or question submitted to an AI tool, usually text-based and conversational in nature.
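To make the LLM definition above concrete, here is a deliberately tiny sketch of the underlying idea: learn from example text which word tends to follow which, then generate new text by sampling from those patterns. Real LLMs do this at vastly larger scale with far more sophisticated models; the corpus, names, and code here are purely illustrative.

```python
import random

# Toy caricature of a language model: learn which word tends to
# follow which in some "training" text, then generate by sampling.
corpus = "the cat sat on the mat the cat ran".split()

# Build a table of word -> list of words that followed it.
follows: dict[str, list[str]] = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

random.seed(0)  # make the sampling repeatable
word = "the"
generated = [word]
for _ in range(4):
    # Pick a likely next word; fall back to any word if none was seen.
    word = random.choice(follows.get(word, corpus))
    generated.append(word)
print(" ".join(generated))
```

A real model learns from billions of documents rather than nine words, but the core loop is the same: predict a plausible next token, append it, repeat.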

Deeper Dives

We’re already using AI more than we realize

Spam filters in email, sentence suggestions in word processing, voice assistants on our phones, or fitness tracking in our watches… there are countless machine learning tools quietly existing in our everyday lives in ways you may not have considered. (March 2024)

What can educators do?

As technology evolves and becomes more sophisticated, it’s understandable that we’re uneasy about actual and anticipated challenges. Here are some thoughts and strategies to consider as we all move forward. 

How do people access generative AI?

Generative AI has been available to the public for a few years now, and can be accessed through a number of different avenues. Instead of publishing a list of them here, LPS staff may refer to this resource that lists common examples.

Learn more about AI for Educators

Google offers a short (~2 hour) course where you can learn the core concepts behind generative AI, identify ways it might assist your professional practice, and explore the process of writing prompts. It is a great introduction to this new world of GenAI, with classroom-specific examples.

Before using AI tools in an instructional setting...

Many federal regulations are relevant to the use of AI in an educational setting. Teachers and staff should be mindful of this and adhere to these commitments we have to students and families before using an AI tool in our work, or with students.

FERPA – Any AI systems used by LPS employees must protect the privacy of student education records. Most publicly available AI tools do not have the level of security and privacy policies in place to ensure that student data remains protected. If LPS staff choose to use AI in support of their role and practices, they should ensure that no personally identifiable information is ever included in AI prompts.
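As one illustration of the "no personally identifiable information in prompts" guidance, here is a hypothetical sketch of stripping obvious identifiers from text before it is ever pasted into a prompt. The function name, patterns, and example data are invented for illustration; this is not an LPS tool and not a complete FERPA safeguard.

```python
import re

def redact(text: str, known_names: list[str]) -> str:
    """Illustrative sketch: mask obvious identifiers before text
    goes anywhere near a third-party AI prompt."""
    # Mask student-ID-like runs of 6 or more digits.
    text = re.sub(r"\b\d{6,}\b", "[ID]", text)
    # Mask email addresses.
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)
    # Mask names the caller knows about (e.g., a class roster).
    for name in known_names:
        text = re.sub(re.escape(name), "[STUDENT]", text, flags=re.IGNORECASE)
    return text

print(redact("Jordan Smith (ID 4415872, jsmith@example.org) needs reading support.",
             ["Jordan Smith"]))
# → [STUDENT] (ID [ID], [EMAIL]) needs reading support.
```

Automated scrubbing like this can easily miss identifiers, so human review of anything sent to a third-party tool is still essential.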

COPPA – Like all websites that require a login or collect personal information, AI tools require parental consent when used with students under the age of 13. Tools that are approved for use in LPS have gained this consent. Check the Matrix to see if a tool is approved for use with students.

IDEA – AI must not be used in a way that denies students with disabilities equal access to educational opportunities.

CIPA – Schools must ensure that AI use aligns with CIPA protections against harmful content.

Section 504 (of the Rehabilitation Act of 1973) applies to both physical and digital environments. Schools must ensure that their digital content and technologies are accessible to students with disabilities.

VERY Important Things to Know...

  • When using free generative AI tools (ChatGPT, Claude, Magic School, etc.) your prompts and text responses are NOT private or secure. They are seen and used by third parties. This should make us pause and be thoughtful about when, where, and how we interact with generative AI.
  • Federal regulations (FERPA) demand that we protect certain types of data. Never include personally identifiable information or educational records when using a generative AI tool.
  • LPS employees accept full professional responsibility when using a generative AI tool to create content.
  • Generative AI content should never replace the guaranteed and viable curriculum provided by LPS.

How can I proactively monitor student work?

Use Hāpara

Guided Browsing and Freeze Tabs
Hāpara is a critical component of our systems for managing behaviors in the classroom. It offers some powerful features that teachers can use to focus student browsing activity while keeping them from accessing unintended resources during LPS classroom work time. (NOTE: This does not stop a student from accessing tools from their personal cell phones).

  • Focused Browsing: “Lock” your student(s) to specific tabs you send out to Chromebooks.
  • Filtered Browsing: Add websites to the “blocked” list for a specified amount of time (e.g., Google, Wikipedia, known AI tools, curriculum resources, etc.).
  • Freeze Tabs: “Lock” your student(s) to specific tabs already open on their Chromebook for a set amount of time.
Watch this overview of Hāpara Highlights to learn more.

Originality Reports

You can use the built-in Originality Reports in Google Classroom when creating assignments. Our LPS license includes unlimited reports for our staff. These reports do not detect AI generated text, but may help students understand that you are conscious of the potential for academic dishonesty.

Have students work *with* AI

If you are a teacher of science, history, social studies, or any other area that depends upon facts, the most appropriate way to address AI generated text may be to start an assignment by using AI generated text!

Start with an assignment or prompt about your subject that you would otherwise have assigned to your students. Instead, take that to Gemini, ChatGPT, or another generative text tool. Have it generate a version or two of a response.

For your assignment, have students respond to the text you generated. Show them how you got the text. Have the students evaluate the response. Fact check it, cite their sources. Evaluate it for potential bias. Ask them to add one thing they learned about the topic that the AI missed. Have them argue an alternative viewpoint. This is the sort of high level thinking you may have been hoping for when assigning the prompt originally.

Getting an up close look at the hallucinations and errors that can appear in generated text may be the best way to make students wary of using it in a copy/paste fashion. At the very least it will model an appropriate way to approach AI generated text in their future endeavors.

Version History

If the work was done in a Google Docs, Sheets, or Slides document, check the version history. As an editor, you should be able to see the progression of the text in the document from word-to-word, and any editing that happens along the way. If the text suddenly appears all at once in the history of the document, it may be a clue that it was pasted from another source.

Consider assigning work that requires students to iterate on an assignment (edit, revise, remix). These built-in steps might make it more difficult to copy and paste a completed work from another source.

NOTE: Working “offline” is not a good strategy for stopping AI.

  • Unless the work is started and finished in your presence, students can access AI tools as soon as they leave your classroom.
  • If students opened tabs in Chrome before turning off wi-fi, those tabs remain available, and their content can be copied/pasted into documents.
  • Manually selecting “Offline mode” can disrupt settings needed to help the students who most need to access work on their Chromebook outside of school, with no internet access.

How can you tell if a student used AI in an assignment?

About “Detection” Tools

LPS does not recommend any specific AI detection tools. These tools have been shown to be biased, inaccurate, or incomplete, and many oversell their abilities. They occasionally make false accusations, and even their creators admit there is no 100% accurate tool available. As the models improve and clever students learn to get better results by writing more sophisticated prompts, it is hard to imagine any reliable way to “detect” AI text.

As stated earlier, be careful of implicit bias, which is prevalent in most AI tools.

TO BE CLEAR: No tool can tell you with 100% accuracy whether or not a student has used AI to generate text used in an assignment.

Be Proactive

Long before a problem occurs, make it clear to students that you are aware of the potential for AI to be used inappropriately. Include references to District Policy on plagiarism and Academic Integrity statements in your course syllabus and in class discussions.

  • Board Policy 6442: Plagiarism
  • Building Student Code of Conduct RE: Academic Integrity

Investigating Questions of Academic Integrity

If your curricular area has provided guidance on Artificial Intelligence and Academic Integrity processes, please refer to those resources.

If you believe that a student may have violated these expectations by using AI to generate the work they submitted, following are some steps you can take to address it.

  • Consider whether the submitted work is consistent with the student’s previous work. Refer to previous artifacts of their learning for comparison.
  • If the work was done in a Google Docs, Sheets, or Slides document, check the version history. You should be able to see the progression of the text from word-to-word, and any editing that happens along the way. If the text suddenly appears all at once in the history of the document, it may have been pasted from another source.
  • If you choose to use “AI detectors,” use multiple. No single tool can tell you with 100% accuracy whether or not a student has used AI to generate text used in an assignment. Be aware that current AI detectors are known to produce false positives at a high rate and have shown bias. That said, consulting a few of them may help you form an informed opinion about whether text is more likely to be authentic or more likely to have been AI generated.
  • Also note that AI text generators can easily hallucinate a well-formatted citation that does not actually connect to anything in the real world. Having students produce the cited document(s) could help them understand this.

If your investigation leads you to believe that a student may have violated expectations, prepare any relevant documentation and share it with the student and family to inquire about the discrepancy and your concerns. Consider asking the student to resubmit an alternate or revised version of the work, or summarize their main points with pencil & paper in your presence. You will want to contact the student’s parent(s)/guardian(s), a building administrator, and potentially a counselor before taking these steps.

What is the future of AI in Education?

The potential benefits of modern AI for students and teachers are many. This US Department of Education resource outlines a number of areas in which education may see benefits from technology-enhanced approaches to existing priorities, such as supporting students with disabilities, addressing variability in student learning, supporting English language learners in the traditional classroom, and creating adaptive feedback loops that increase the quality and quantity of feedback provided to students and teachers.

AI will never replace good teachers. Educators can use AI-enabled tools to amplify their role in the learning process, enabling them to engage and support their students at even higher levels. It can also help reduce the burden of “off-stage” activities like grading, analyzing data, crafting professional communications, and more.

In these early days of the Large Language Model era, AI generated text is known to be flawed when it comes to factual accuracy. If students are to use it ethically, they will need to develop the research skills that allow them to determine whether the text is factually correct, accurate in representation, or contains explicit or implicit bias. Students will surely try to use AI for cheating on assignments, as they have with every new technology, and teachers will need to rely upon existing policies and codes of conduct around plagiarism and academic integrity. The bigger question for educators when considering cheating (in any form) has always been “what are we really assessing?” There is no question that over time AI will challenge how we do assessment and determine understanding, just as the pocket calculator and the arrival of the internet on student devices and cell phones did before it.

We need to balance the risks and opportunities of AI,
because there are plenty of both.