Beyond the AI Panic: Teaching Students Who Can Already Think
How foundational skills change everything about AI in the classroom
Graduate programs operate from a different starting point than undergraduate or K-12 education. Students arrive with foundational skills: critical reading, source evaluation, academic writing conventions, research methodology, and disciplinary knowledge frameworks. They can synthesize complex information, engage with abstract concepts, and construct reasoned arguments. These capabilities become the bedrock for how we integrate LLMs into learning.
As an Associate Professor in graduate school programs, my approach to AI use in the classroom doesn’t follow a rigid framework. Every decision hinges on the specific learning outcome I want students to achieve in each activity.
Transparency Changes Everything
I tell students upfront that I use LLMs extensively and encourage their use, too. This transparency shifts the classroom dynamic noticeably. When students know their lecturer can distinguish AI-generated content from human thought, regardless of how much they prompt the system to “write less like AI,” the focus moves from hiding¹ to learning.
I care about content over style. Teaching data science, AI governance, and strategy means I can focus on analytical reasoning without evaluating prose craft. If I am assessing argument construction, I am not marking down awkward phrasing or stilted transitions. I am looking at logical flow, evidence selection, and reasoning depth.

For classroom debates and argument preparation, I actively push students to use AI as a sparring partner. They can test their positions against different perspectives, explore counterarguments, and refine their thinking. When they present these arguments orally in class, the real learning emerges: how they adapt, respond to challenges, and demonstrate mastery of the material. It also benefits the entire cohort, as arguments become richer and deeper.
Assessment Design Drives Everything
Exams require different strategies entirely. All my exams happen on campus, but the structure varies depending on the course focus.
For quantitative or methods-heavy courses—think machine learning algorithms, statistical modeling, or data analysis—I split exams into handwritten and computer-based sections. A three-hour exam might be 90 minutes of handwritten theory and concepts (with some computation), followed by 90 minutes of coding or applied work. Sometimes students work on their machines with all browsers closed and additional proctors monitoring. Other times, I design “open everything”² sections where students access any resource they want.
For conceptual or strategy-focused courses—business strategy, policy analysis, or theoretical frameworks—the emphasis shifts to synthesis and application rather than computational skills.
The "open everything" approach works when questions demand multi-layered thinking and have no single correct answer. I'm evaluating thought processes, not memorized solutions.
The Broader Context
This flexibility is what excites me about higher education’s potential. We can leverage LLMs to enhance learning rather than fight against them. Graduate students bring enough foundational knowledge to use these tools productively.
But this doesn’t translate to K-12 education. My conversations with elementary and secondary educators reveal a different challenge entirely. Those students are still building the very skills that graduate students already possess—critical reading, source evaluation, academic writing conventions, research methodology, and disciplinary knowledge frameworks. K-12 and undergraduate education focus on developing these capabilities. Graduate programs can assume their presence.
The pedagogical questions become much more complex when students lack the grounding to evaluate AI output, distinguish credible sources from questionable ones, or construct independent arguments. Without these foundational skills, LLMs can become crutches rather than tools.
The graduate classroom offers us a testing ground for what education might look like when humans and AI work together effectively. But we need different approaches for students who are still learning to think critically and write clearly on their own.
¹ Some students and professors still look down on the use of AI tools in the classroom, particularly LLMs.
² We used to have lots of these “open everything” exams in physics during my undergraduate and graduate studies at the National Institute of Physics in UP Diliman.
