Artificial Intelligence in Education

This topic evolves rapidly. What was true and accurate yesterday may be out of date today.

What is AI?

Artificial Intelligence (AI) is when a machine appears to perform the cognitive functions we usually associate with human minds.

AI has an air of mystery about it, but there is no magic involved. As the U.S. Department of Education describes in its recent reports, AI works by finding patterns in data sets and then automating how that information is presented to users in new ways. Computers recognize and respond to patterns far better (and faster) than humans can, which results in tools with a great deal of potential for improving how humans interact with computers.

Here’s the thing – you have already been using AI for years. Autocomplete. Spelling and grammar check. Language translation. Voice assistants like Alexa and Siri. Autofill. Audio transcripts. The way Amazon suggests products or Netflix suggests a new show for you to watch. All of these tools are AI-driven.
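If you are curious what “finding patterns in data” looks like at its simplest, here is a toy Python sketch of an autocomplete-style suggestion: it counts which word most often follows each word in a small sample of text, then suggests that word. The sample sentence and the suggest() function are made up for illustration; real products learn from vastly larger data sets with far more sophisticated models.

# A toy sketch of "finding patterns in data": word-level autocomplete.
# The sample text below is hypothetical; real tools learn from enormous corpora.
from collections import Counter, defaultdict

sample_text = "the cat sat on the mat the cat ran to the door".split()

# Count which word tends to follow each word in the sample.
followers = defaultdict(Counter)
for current_word, next_word in zip(sample_text, sample_text[1:]):
    followers[current_word][next_word] += 1

def suggest(word):
    """Suggest the word most often seen after `word`, if any."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))  # prints "cat" -- the most common pattern after "the"

The same basic idea, scaled up to billions of examples and far richer models, underlies the autocomplete, recommendation, and chatbot tools listed above.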

The type of AI that teachers and students are most interested in is likely “generative AI.” These are tools that can create original content from written prompts, including audio, code, images, text, and videos.

What should teachers know?

Lilach Mollick explains it well:

AI is UBIQUITOUS, UNDETECTABLE, and TRANSFORMATIVE.

Ubiquitous – AI is everywhere. Every person has access to the most powerful AI models in the world. There is no way to filter or block students from access. AI features are already built into everyday tools, and increasingly so. If it were possible to block all AI from LPS devices (which it is not), students would still be able to access it from home devices and/or from their phones.

Undetectable – Students are using it in all kinds of ways. This includes cheating, and we can’t reliably tell if they’re doing so. Common AI “detectors” tend to have high false positive rates, are easily fooled, and may contain implicit bias. 

Transformative – AI is our first technology since the creation of the Internet that touches everything we do. It has the potential to transform how we live, how we work, and how we teach.

If you are curious about how large language models work and how this latest generation of models has impacted how we work (as educators) and how we learn (as students), it may be worth your time to watch the following video. It is the 10-minute overview video of a longer series created by Ethan and Lilach Mollick of the University of Pennsylvania’s Wharton School.

Common Vocabulary

AI – Artificial Intelligence is when a machine performs the cognitive functions we usually associate with human minds. The term was coined by researchers in 1955.

Algorithm – precise steps for a computer program to take, much like a recipe in baking. While AI systems may contain algorithms, much of their output is the result of learning from data.

Deep Learning – the use of large computing networks that constantly process new data inputs and improve as they do so, similar to the way neurons work in a human brain.

Generative AI – tools that can be used to create new content, including audio, code, images, text, simulations, and videos (e.g., ChatGPT, Google Gemini, Claude).

Intelligence – the ability of a machine to learn and perform tasks to solve problems and achieve goals.

LLM – Large Language Model – a type of AI that has been trained on vast amounts of text to understand existing content, find patterns, and generate original content in response to human prompts. Examples include GPT-3 and GPT-4 from OpenAI, LLaMA from Meta, and PaLM 2 from Google. They can be accessed in many different ways, through many different websites and mobile apps.

Machine Learning – when computers improve their perception, knowledge, thinking, or actions based on experience or data.
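To make the distinction between an algorithm (a precise, hand-written recipe) and machine learning (a rule inferred from examples) concrete, here is a small illustrative Python sketch. It assumes the scikit-learn library is installed, and the temperature data is made up for the example.

# An explicit algorithm: a precise, hand-written rule, like a recipe.
def fahrenheit_from_celsius(c):
    return c * 9 / 5 + 32

# Machine learning: the same relationship inferred from example data.
from sklearn.linear_model import LinearRegression

celsius = [[0], [10], [20], [30], [40]]   # example inputs (made up for illustration)
fahrenheit = [32, 50, 68, 86, 104]        # the matching outputs
model = LinearRegression().fit(celsius, fahrenheit)   # the model "learns" the pattern

print(fahrenheit_from_celsius(25))  # 77.0, computed by the hand-written rule
print(model.predict([[25]]))        # approximately [77.], learned from the data alone

The learned model was never given the conversion formula; it recovered the pattern from the examples, which is the essence of machine learning as defined above.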


What can educators do?

As technology evolves and becomes more sophisticated, it’s understandable that we’re uneasy about actual and anticipated challenges. Here are some thoughts and strategies to consider as we all move forward.

How do people access generative AI?

Generative AI has been available to the public for a few years now and can be accessed through a number of different avenues. Instead of publishing a list of them here, LPS staff may refer to this resource, which lists common examples.

We’re already using AI more than we realize

Spam filters in email, sentence suggestions in word processing, voice assistants on our phones, fitness tracking in our watches… countless machine learning tools are quietly at work in our everyday lives in ways we may not have considered. (March 2024)

Never put LPS-related data in a generative AI tool.

  • It would be a FERPA violation to upload any student data to an AI tool.
  • It would be a Board of Education Policy violation to upload any LPS data to an AI tool.

How can I proactively monitor student work?

Use Hāpara

Guided Browsing and Freeze Tabs
Hāpara is a critical component of our systems for managing behaviors in the classroom. It offers some powerful features that teachers can use to focus students’ browsing activity while keeping them from accessing unintended resources during LPS classroom work time. (NOTE: This does not stop students from accessing tools on their personal cell phones.)

  • Focused Browsing: “Lock” your student(s) to specific tabs you send out to Chromebooks.
  • Filtered Browsing: Add websites to the “blocked” list for a specified amount of time (e.g., Google, Wikipedia, known AI tools, curriculum resources).
  • Freeze Tabs: “Lock” your student(s) to specific tabs already open on their Chromebook for a set amount of time.

Version History

If the work was done in a Google Docs, Sheets, or Slides document, check the version history. As an editor, you should be able to see the progression of the text in the document word by word, along with any editing that happens along the way. If the text suddenly appears all at once in the history of the document, it may be a clue that it was pasted from another source.

Consider assigning work that requires students to iterate on an assignment (edit, revise, remix). These built-in steps might make it more difficult to copy and paste a completed work from another source.

“Detection” Tools

Computing Services does not currently recommend any specific AI detection tools. While some might work better than others, many have been shown to be biased, inaccurate, or incomplete, or to oversell their abilities at this time. Many are also either freemium products or require hefty subscription fees. As the generative tools improve and clever students figure out how to get better results by giving the models more sophisticated prompts, it is hard to imagine any reliable way to “detect” AI text.

As stated earlier, be wary of implicit bias, which has been prevalent in many of these detection tools.

Originality Reports

You can use the built-in Originality Reports in Google Classroom when creating assignments. Our LPS license includes unlimited reports for our staff. These reports do not detect AI generated text, but may help students understand that you are conscious of the potential for academic dishonesty.

Have students work *with* AI

If you are a teacher of science, history, social studies, or any other area that depends upon facts, the most appropriate way to address AI-generated text may be to start an assignment by using AI-generated text!

Start with an assignment or prompt about your subject that you would otherwise have assigned to your students. Instead, take it to ChatGPT, Google Gemini, or another generative text tool. Have it generate a version or two of a response.

For your assignment, have students respond to the text you generated. Show them how you got the text. Have the students evaluate the response: fact-check it and cite their sources. Have them evaluate it for potential bias. Ask them to add one thing they learned about the topic that the AI missed. Have them argue an alternative viewpoint. This is the sort of high-level thinking you may have been hoping for when assigning the prompt originally.

Getting an up-close look at the hallucinations and errors that can appear in generated text may be the best way to make students wary of using it in a copy/paste fashion. At the very least, it will model an appropriate way to approach AI-generated text in their future endeavors.

NOTE: Working “offline” is not a good strategy for stopping AI.

  • Unless the work is started and finished in your presence, students can access AI tools as soon as they leave your classroom.
  • If students had tabs open in Chrome prior to turning off wi-fi, those tabs are still available, and their contents can be copied and pasted into documents.
  • Manually selecting “Offline mode” can disrupt settings needed by the students who most need to access work on their Chromebook outside of school, where there is no internet access.

How can you tell if a student used AI in an assignment?

TO BE CLEAR: No single tool can tell you with 100% accuracy whether or not a student has used AI to generate text used in an assignment.

Be Proactive

Long before a problem occurs, make it clear to students that you are aware of the potential for AI to be used inappropriately. Include references to District Policy on plagiarism and Academic Integrity statements in your course syllabus and in class discussions.

  • Board Policy 6442: Plagiarism
  • Building Student Code of Conduct RE: Academic Integrity

Investigating Questions of Academic Integrity

If your curricular area has provided guidance on Artificial Intelligence and Academic Integrity processes, please refer to those resources.

If you believe that a student may have violated these expectations by using AI to generate the work they submitted, here are some steps you can take to address it.

  • Consider whether the submitted work is consistent with the student’s previous work. Refer to previous artifacts of their learning for comparison.
  • If the work was done in a Google Docs, Sheets, or Slides document, check the version history. You should be able to see the progression of the text word by word, along with any editing that happens along the way. If the text suddenly appears all at once in the history of the document, it may have been pasted from another source.
  • If you choose to use “AI detectors,” use multiple. No single tool can tell you with 100% accuracy whether or not a student has used AI to generate text used in an assignment. Be aware that current AI detectors are known to produce false positives at a high rate and have shown bias. Even so, consulting a few of them may help you form an informed opinion about whether text is more likely to be authentic or more likely to have been AI-generated.
  • Also note that AI text generators can easily hallucinate a well-formatted citation that does not actually connect to anything in the real world. Having students produce the cited document(s) could help them understand this.

If your investigation leads you to believe that a student may have violated expectations, prepare any relevant documentation and share it with the student and family to inquire about the discrepancy and your concerns. Consider asking the student to resubmit an alternate or revised version of the work, or to summarize their main points with pencil and paper in your presence. You will want to contact the student’s parent(s)/guardian(s), a building administrator, and potentially a counselor before taking these steps.

What is the future of AI in Education?

The potential benefits of modern AI for students and teachers are many. This U.S. Department of Education resource outlines a number of areas in which education may see benefits when using technology-enhanced approaches to existing priorities, such as supporting students with disabilities, addressing variability in student learning, supporting English language learners in the traditional classroom, and creating adaptive feedback loops that increase the quality and quantity of feedback provided to students and teachers. AI will never replace good teachers. Educators can use AI-enabled tools to amplify their role in the learning process, enabling them to engage and support their students at even higher levels. It can also help reduce the burden of “off-stage” activities like grading, analyzing data, crafting professional communications, and more.

In these early days of the Large Language Model era, AI-generated text is known to be flawed when it comes to factual accuracy. If students are to use it ethically, they will need to develop the research skills that allow them to determine whether the text is factually correct, accurate in its representations, or contains explicit or implicit bias. Students will surely try to use AI to cheat on assignments, as they have with every new technology. Teachers will need to rely upon existing policies and codes of conduct around plagiarism and academic integrity. The bigger question for educators when considering cheating (in any form) has always been “what are we really assessing?” There is no question that, over time, AI will challenge how we do assessment and determine understanding, just as the pocket calculator did when it was invented and the internet did when it arrived on student devices and cell phones.

We need to balance the risks and opportunities of AI, because there are plenty of both.