AI in Education: The Future is Now

Artificial intelligence, academics and how Gonzaga is handling the new technology.

An AI-generated image, created with the prompt: create an image inside school with math equations and magic in sky futuristic scientific style with computer. (Adobe Firefly)
April 06, 2024
Dan Nailen | Gonzaga Magazine Spring 2024

[Notice: A human wrote this article.]

Tech company OpenAI released its ChatGPT tool at the end of November 2022, and it took only a couple of weeks for Gonzaga professors to notice students had quickly adopted the program that miraculously seemed capable of “writing” essays.

Justin Marquis remembers one professor mentioning that some papers she’d received for an assignment seemed odd. The papers were a little too perfect. Perfect grammar. Perfect spelling. Perfect Spanish usage. Perfect to a degree that just doesn’t happen at the undergraduate level, or even among graduate students. And each of the essays had the same structure: five paragraphs, the last one beginning with the words “in conclusion.”

Marquis saw the signs of assignments “written” by ChatGPT or similar tools like Google’s Bard or Microsoft’s Copilot. As director of GU’s Instructional Design and Delivery, Marquis leads a team that helps faculty and staff harness the latest technology, including AI, to deliver innovative classroom experiences. And he’s used ChatGPT since its debut to brainstorm blog posts, design class materials and help fellow faculty approach courses in creative new ways.

“What AI knows – and I’m using the word ‘knows’ not very accurately – is the past of all human knowledge,” Marquis explains. “If it’s happened, if it’s been written down, and if it’s on the internet, AI has ingested it.”

“AI is basically math. It’s taken the information on the internet and turned it into equations. It seems like magic because it actually does work, because it’s parsing so much information and doing computations to make things make sense. But it doesn’t understand context in any way. So, when you prompt it to write an essay about ‘Beowulf,’ it might process all early English literature and pull random characters from other stories, completely unrelated.”

ChatGPT’s foibles didn’t stop people from embracing it. Two months after its launch, it had reached more than 100 million monthly users, according to Reuters. By comparison, it took TikTok nine months to reach that many users, and Instagram more than two years.

At Gonzaga, Marquis has worked with offices like Human Resources to improve training programs via ChatGPT brainstorming sessions, and collaborated with the Center for Teaching and Advising to provide workshops for teachers interested in adopting some AI in their work – or at least gaining a better understanding of it.

'Cheating' is Changing

Is a student cheating by using ChatGPT to write a paper?

That depends on each faculty member’s expectations for each class. Some actively encourage students to use ChatGPT to hone arguments, spark new directions for writing, or simply to learn new technology that will be part of life from this point forward. Others strictly prohibit AI for class work.

The key for faculty is making expectations crystal clear at the beginning of any class, something Marquis encourages his peers to do both as Gonzaga’s resident AI expert and interim chair of the Academic Integrity Board, the entity that handles any accusations of cheating on campus.

“The thing I’d like students to understand, and for faculty to impress, is that you are responsible for the thing AI creates. Whether you write something or make art, you put it out under your name,” Marquis says. “This is a representation of you and you are the one who will bear any consequences from it.”

While the University doesn’t have a specific policy aimed at AI, any student who hands in a ChatGPT-written assignment without permission to use the tool and/or without disclosing that they’d used AI would be in violation of GU’s Academic Integrity Policy, which prohibits submitting a paper without proper attribution.

Justin Marquis, Director of Instructional Design & Delivery at Gonzaga, is leading the university's approach to AI.

The challenge for instructors trying to prohibit use of AI is that proving an assignment was generated by AI is exceedingly difficult. Electronic tools designed to determine if an assignment is AI-generated have proved unreliable, Marquis says, and could even falsely accuse a student of academic malfeasance, with possible significant consequences.

How should teachers draw the line on what they accept? No one argues against a student using Spellcheck or Grammarly as they write. How about a student whose first language isn’t English using an AI translation app to help understand a professor’s lecture or some assigned reading?

As experts in their respective fields, faculty are well-positioned to spot AI-generated assignments. Besides those predictable, unrealistically clean essays, they’ll quickly notice when Macbeth inexplicably appears in an essay about “Twelfth Night.” But the very concept of “cheating” may get harder to define as more students and teachers use AI. It will remain the faculty’s obligation, as it is now, to make sure expectations are clearly defined as students born into an AI-dominated world matriculate.

Challenges and Opportunities

Another important lesson for unsuspecting students and AI advocates goes back to the idea that the “magic” is based in math.

AI’s ingestion of human history’s knowledge means it also absorbs all the biases that humans have inserted into that history for hundreds of years. That means the “answers” it generates to innocent queries could be racist or misogynistic.

“Everything we’ve ever written or created and put on the internet has bias built into it. AI will probably not only replicate that bias, but likely amplify it,” Marquis says. “We’re seeing examples where AI is saying really offensive and strange things, and people ask, ‘Why did it do that?’”

“Different perspectives or understandings of the world are not the most popular viewpoints,” Marquis says, noting that this is where Gonzaga’s liberal arts approach to a Catholic, Jesuit, humanistic education is vital when it comes to AI.

Users also need to know that AI is created to deliver the most common answers to a question, the most popular viewpoints, not necessarily the most interesting or most correct.

“We, as humans, have to add value to the system. We can give those diverse perspectives and understand diverse perspectives that AI can’t.”

That human quality, Marquis adds, is how a faculty member can “hedge their bets” against AI: writing assignment prompts that require students to think beyond what AI is able to do – prompts that call for personal reflection on the material studied and for perspectives beyond the obvious.

AI isn’t bad; it’s a tool to be used ethically and responsibly. In his role helping faculty deliver on Gonzaga’s educational mission, Marquis is excited that AI can reduce a faculty video-editing process from three hours to 15 minutes. For many functions, using AI tools to streamline basic tasks is no longer just a possibility – it’s a reality.

And Marquis believes there’s no reason to be afraid of that.

“AI is probably not going to cause the world to end. It will evolve into a tool that we learn how to use, and it will change the way we function. We have to adapt all our practices to that understanding.”

More on AI

  • At the Intersection of History and AI
  • AI Around Gonzaga's Campus
  • Research Librarians and Artificial Intelligence

For more articles and perspectives, visit our "AI in Higher Education" collection.