The dos and don'ts of tech’s most exciting and dangerous innovation.
It wasn’t that long ago that I was taking college and graduate classes and receiving course syllabi from professors across disciplines and departments. They all looked more or less the same as far as policies and information about resources were concerned. So, when I started seeing AI policies pop up on students’ syllabi a couple years ago, I took careful notice.
AI Policies in College Course Syllabi
What I noticed is that course policies are as varied as the courses themselves. In a sample of the syllabi I’ve reviewed with students in the Spring 2025 semester, 50% of syllabi made no mention of AI whatsoever, though they did reference academic integrity policies that include cautions against the use of AI in coursework. By contrast, 17% banned any and all use of AI outright, and the remaining 33% allowed for the use of AI in brainstorming and spell- and grammar-check processes but required students to acknowledge their use of these tools in writing.
Only one policy took the time to explain its position, warning students of the threat AI can pose to creativity and individuality.
Impacts on Students
First off, the variability of these policies is itself a cause for concern for students. One class may allow AI in situations where another would ban it outright and consider it a violation of academic integrity policies, meaning the same behavior in two different courses could lead to vastly different outcomes.
Students have to be more vigilant than ever in surveying the specific policies of each course they take and remembering which course holds which expectations.
Additionally, the lines between acceptable and unacceptable use of AI are, in some cases, quite blurred. For example, where is the line of demarcation between the brainstorming process and the early drafting process?
Many of the syllabi I reviewed said that generative AI could be used in the process of gathering ideas, but that definition is so broad as to be almost meaningless. Presumably, these policies intend to convey that generative AI should only be used as a starting point, as a partner in helping students explore a prompt and their existing knowledge to decide on a focus point for a paper or project. As written, though, a student could interpret such a policy as allowing them to use generative AI to suggest which quote from a text best supports their point, or even how to turn a rough thought into polished, well-crafted writing.
The vagueness is, I believe, a result of the relative newness of AI itself. But while we remain in this landscape of tentative policies about a technology that may not be fully understood by the people making and enforcing those policies, students have to be very careful not to accidentally cross the line of acceptable use.
The Trouble with AI: Sources
But, to me, the most important thing students need to consider is where the responses to their AI queries originate and how those sources align with the intentions of academia and the futures students want to build for themselves.
AI in its current state is a consolidator of information readily available on the internet.
This is good in the sense that it can gather and synthesize content from across the globe much more quickly and efficiently than we can, but it also poses a danger.
As we all know, no one on the internet ever lies. Everything freely available is true, and we don’t have to question any of the information we take from various websites and sources.
Of course, this is a fallacy, one that teachers and parents across the world strive to teach their students and children as the baseline of internet literacy. But while fact-checkers and human reviewers of AI output do exist, AI doesn’t have the benefit of guardians with that same level of personal dedication double-checking its work for every query. While some people do endeavor to help train AI models in positive ways, there are also countless interactions of people “messing” with AI for fun, and those interactions still feed into the AI’s base of information.
Why does this matter for students? If students can use generative AI in the idea-building process but the acceptable boundaries for that use are unclear, it’s reasonable to expect that some will use it to supply ideas rather than to build upon and flesh out ideas they’ve already conceived. If students come to rely on AI as a generator of thought, freeing them of the need to generate their own, they may feel less of a push to do the course readings and deepen their familiarity with the content. Without that familiarity, they won’t have the knowledge-based lens they need to assess the validity of AI’s suggestions.
The Aim of Academia
There’s another idea to consider as students weigh the pros and cons of introducing AI into their workflow. In many ways, the point—the very heart—of education is to help students become well-rounded, thoughtful members of society who can solve problems, navigate the world around them, and communicate their beliefs and thoughts reliably.
Assignments in and of themselves may not be life-changing, but they do provide opportunities for students to practice and hone the skills that will be necessary for their future success. Planning out, writing, and editing a paper teaches the future lawyer the discipline and time management she needs to develop a fact-based and convincing argument and the future stay-at-home father the command of language he’ll need to advocate for his children.
What do we let go, what do we lose when we allow generative AI to fill in these gaps for us?
What’s more, we once again need to consider the source of generative AI’s recommendations and responses. AI gathers existing information from across the web to create the best—or, at least, what it thinks is the best—response to queries. But academia strives to produce new, groundbreaking research and thought, moving fields forward and transforming the previously impossible into a breakthrough.
By definition, AI cannot create a unique idea, nor can it replicate the unique voice, cadence, vocabulary, and logical flow of each individual who uses it.
Even in a post-industrialization world, we still value hand-thrown pottery, hand-crafted goods and wares, the perfectly imperfect mark of each individual artist. As generative AI provides an opportunity for us to outsource the creation of our intangible goods, too, what choice will we make?
How Do We Bridge the Gap?
We’ve focused so far on how not to use AI—at least if we as a society continue to value human creativity and consider it separate from the capabilities of AI. Nevertheless, it’s undeniable that AI is here to stay and will continue to reshape what it means to work, think, produce, and connect moving forward. How can we—and, if I may, how should we—position ourselves and our students to make use of AI’s strengths while maintaining a respect for the irreplaceable nature of human ingenuity?
I would encourage all students to consider AI a tool to be used by a well-trained and knowledgeable authority rather than a competing or replacement authority.
Just as the computer eased but did not replace the writer’s artistry and the calculator eased but did not replace the mathematician’s understanding, AI can ease many processes in college without usurping control of those processes.
When it comes to completing assignments, AI is a wonderful sounding board. You can talk through ideas with it the way you would a classmate—but just as you wouldn’t ask a classmate to hand you an idea outright, you shouldn’t expect that of AI, either. Further, just as you would question and want to verify any definitive answer or opinion a classmate provided, you should verify what AI recommends, too.
AI can also be a useful tool in managing busy times. It may be able to introduce you to time and stress management, task prioritization, and studying techniques that can turn an overwhelming week into something manageable. It can also help keep you on track if you ask for helpful aids like reminders for key deadlines or recommended schedules that make busy stretches a bit easier.
No matter how you choose to use AI, the important thing is that you remain in control.
The thoughts, prompts, logic, base ideas, and insights are yours; AI just helps you fully realize them. After all, which do you think will be more appealing to future employers: someone who has to rely on AI for idea generation and creation, or someone who has learned to leverage AI to amplify their own abilities and expertise?