AI in the Core
As students of a Catholic, Jesuit and humanistic University, how do we educate ourselves to become women and men for a more just and humane global community?
This question is the anchor for Gonzaga’s core curriculum, a roadmap for all undergraduate students to cultivate understanding, learn what it means to be human, develop the principles that characterize a well-lived life, and imagine what’s possible for their roles in the world.
As the core website underscores, “a core curriculum housed in the context of a liberal arts education in the Jesuit tradition offers the most complete environment for developing courageous individuals in any major who are ready to take on any career.”
Ann Ciasullo, director of the University Core, believes that the core is a natural fit for content about AI.
Gonzaga Magazine asked three questions of College of Arts and Sciences faculty members who teach core curriculum courses, which are now required to incorporate discussions of artificial intelligence:
- How do you intend to showcase the HOW of AI in your Core course along with the WHY?
- What are the challenges and opportunities you’ve identified regarding AI in your specific discipline?
- How do you believe students will benefit from this approach to Core curriculum?
Here are their responses.
Kris Morehouse, Communication Studies
How & Why
Our discipline examines the ways communication creates the social world by exploring power, meaning making, and representation. When you ask Google a question, it offers an answer generated by artificial intelligence. These AI answers provide clear insights into how power works, what stories or explanations are being told and by whom, and who is represented in the answers and how they are being represented. Ask ChatGPT or any other large language model to create an image of a beautiful woman, and it will give you a specific Westernized image of what might be considered “traditional” (or white) beauty. Ask a question about American history, and you get an answer that highlights the stories of those with power, with no reference to those whose stories are infrequently told.
It is imperative that all users understand how AI creates its own social world and provides a narrow view of human culture — AI’s world is limited by the perspectives of its creators and the information they provide for its development. As a Jesuit Catholic institution providing a humanistic, liberal arts education, we encourage students to work for justice, and to be in “solidarity with the poor and vulnerable.” Our faculty is focused on helping students understand how power, meaning making, and representation are at work each time they use artificial intelligence, and on working to make AI more representative and egalitarian.
Challenges & Opportunities
The challenge is that students sometimes use AI to do their work for them. Reading, writing, and notetaking are important components of learning; offloading our learning to AI means students are not firing up — and growing — those beautiful neurons in their brains, not expanding their own understanding of our world or developing an engaged orientation to it.
Students need to know when and how to use AI critically and responsibly, how to challenge its destruction of our environment, and how to interrogate and transform its flaws.
Student Benefits
College students are encountering AI in all kinds of arenas — including the increasingly difficult search for a job after college. As AI use expands in the working world, students need to understand what AI can and cannot do; they need to understand that while they might be able to use it to write a cover letter for a job application, that letter will be devoid of the flair and creativity only the applicant can provide. There are so many aspects of our culture that are beautiful because of human engagement, creativity, and individuality. Being human, at least for now, is something only humans can do.
Anthony Fisher, Philosophy
How & Why
In Philosophy, especially in PHIL 101 Reasoning, the reflective study of thinking is paramount. One pedagogically powerful way to bring AI into the classroom is to introduce an AI chatbot as the ultimate reasoner that knows everything. After spending some time with it, students are encouraged to critique its outputs and thereby strengthen their own reasoning and critical thinking skills. Students then realize that AI technologies don’t “think” the way we do and that they have no serious concern for the truth. Students also become less inclined to defer to AI as an oracle or to over-rely on it for producing their own reflective and critical work.
Challenges & Opportunities
AI can very quickly complete nearly every reasoning task in critical thinking. With AI tools at students’ fingertips, it is all too tempting to use them for argument identification and argument evaluation assignments. The general problem is that the use of AI leads to de-skilling, and worse still, no-skilling.
In Philosophy, one opportunity is that AI can expose students to ideas and arguments in texts, such as transformational ideas about identity, meaning, and justice, if used in the right way, piecemeal, at the right stage in the learning process. A related opportunity is that students can learn reasoning and critical thinking skills through AI technologies, so long as their learning experience demands that they use those skills when interacting with AI. So it is possible for students to encounter AI in college without it doing the work for them.
Student Benefits
This approach will prepare students to use AI technologies in a mature, reflective way while teaching them the foundational reasoning and critical thinking skills they need to shape and sharpen their minds on their intellectual journeys toward becoming leaders who serve the common good, grounded in Jesuit values.
Chase Bollig, English
How & Why
I’ve been teaching with generative AI tools since the first semester after ChatGPT’s release. In these “writing with AI” explorations, students and I have sought to better understand what AI is good for and why it’s important to keep hold of the human elements of writing. These explorations have included generating and critiquing AI essays, creating custom bots through prompt design, and engaging with public writing by both boosters and critics of AI.
The first audience for an AI prompt is the LLM itself, and models are often sensitive to how we ask our questions or state our goals. Many of the best practices in writing with AI are recognizable as best practices for writing in any scenario: being able to articulate a clear goal or purpose, understanding what essential context we need to draw on to achieve that goal, thinking through the form or shape an idea should take, and being able to evaluate a text against our internal standards.
For Fall 2025, I’m working with the Institute for Informatics and Applied Technology to design a custom AI app, ZagAI, that faculty can use to create AI-powered learning experiences. My colleagues Josh Anthony and Yuki Kang and I have collaborated on assignments in which students use ZagAI to deepen their engagement with course concepts and writing practices: the AI is prompted to give feedback, to serve as an intellectual sparring partner, or simply to ask questions that encourage students to explain their thinking about their writing.
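To make “prompt design” concrete, here is a minimal illustrative sketch of the moves described above: a prompt that states a goal, supplies context, and names the form the response should take, with a system message that recasts the bot as a coach rather than a ghostwriter. It assumes the OpenAI Python client; the model choice, wording, and scenario are hypothetical illustrations, not course materials or ZagAI code.

```python
# Illustrative sketch only: a prompt built around a clear goal, essential
# context, and an explicit form for the output. Assumes the OpenAI Python
# client (pip install openai); model name and wording are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Goal: help me strengthen the thesis of a first-year essay.\n"
    "Context: my essay argues that campus food pantries reduce student "
    "stress, but my current thesis just restates the topic.\n"
    "Form: ask me three questions about my argument before suggesting "
    "any revised wording."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {
            "role": "system",
            # The system message recasts the bot as a sparring partner
            # rather than a ghostwriter, in the spirit of the assignments
            # described above.
            "content": (
                "You are a writing coach. Do not write text for the "
                "student; respond only with questions and feedback."
            ),
        },
        {"role": "user", "content": prompt},
    ],
)
print(response.choices[0].message.content)
```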
Challenges & Opportunities
Writing studies and first-year writing are high-exposure areas when it comes to generative AI.
In many ways, what it means to be a writer doesn’t change with technology. Writing is a thinking practice, and we often foreground this idea in conversations with students. We don’t only write to prove what we know; we often write to discover what we think. Those practices persist whether we’re writing by hand, posting to a discussion forum, or designing a bot through custom instructions.
In my first-year writing classes before and after the release of ChatGPT, I’ve noticed that many novice writers have difficulty pinning down the difference between their own ideas and those of their sources. Introducing AI early in that process can exacerbate this problem, making it hard to see where the student’s ideas end and the AI’s “ideas” begin.
In the early days of generative AI, spotting a hallucination or noticing “fluff” in a text was easier. However, as the models have become more reliable and more powerful, students often need higher levels of content knowledge to tell whether something is true or has been distorted by AI (either as an outright hallucination or false statement, or in more subtle ways such as framing or implication).
Improvements in reliability and quality similarly make it harder for some students to avoid using AI too early or too often in their writing process. Many students themselves express concerns about skill atrophy, dependency, or otherwise missing out on experiences because they rely too heavily on AI.
A final note: as AI has become increasingly prominent, it has also become more polarizing in schools and in media coverage. When students have only had teachers ban AI, or have only been told that AI is a cheating tool, they often come to Gonzaga with impressions that make it harder for them to meet goals for discernment and critical thinking. Similarly, students who have developed habits of reflexively reaching for AI have difficulty gaining the distance necessary for discernment. Part of the project of the new learning outcome for ENGL 101 is to create opportunities for us to slow down and reflect on the role AI should play in our thinking, writing, or learning.
Student Benefits
Few are the problems we solve by ignoring them.
As Marc Watkins writes, engaging with AI does not necessarily mean embracing AI. Faculty at Gonzaga have a range of perspectives on AI and technology, as with any topic, and it’s an enriching experience for students to encounter those multiple perspectives as they develop their own ideas about generative AI or reflect on their own values.
Generative AI tools are a writing technology, and we have no indication that these tools will disappear anytime soon. If they follow the path of other new tech, like computers and the internet or smartphones, then we’ll learn best about how to use them by using them.
ZagAI
Chase Bollig, Josh Anthony and Yuki Kang, all of the English department, are working with AI engineer Chris Lopes and student researchers from GU’s Institute for Informatics and Applied Technology to develop ZagAI, a custom AI app with which faculty can create AI-powered learning experiences. Prompted to give feedback, to serve as an intellectual sparring partner, or simply to ask questions, ZagAI encourages students to explain their thinking about their writing and deepens their engagement with course concepts and writing practices.
Comprehensive AI Work Across Campus
Gonzaga’s Institute for Informatics and Applied Technology, which opened a new collaborative space in the Herak Center this fall, is poised to assist faculty and staff across the disciplines with the responsible implementation of AI. Beyond exploring AI as a tool for studying and learning, Director Jay Yang and his staff also help departments implement AI processes that can achieve operational efficiencies. The aim is not to replace workers but to free up time for people to focus on the aspects of the work that truly require interpersonal insight.
The Institute’s priorities are to be a catalyst for transformative learning, interdisciplinary research and collaborative innovation through partnerships with industry, government and community.