I’m a hypocrite when it comes to AI.
My instinct is to hate it. But my practical side recognizes its power.
I believe in effort. I believe in the process of sitting with confusion, failing to articulate an idea, trying again, and eventually reaching a point where I can explain something clearly because I actually understand it. That belief has become central to how I move through college, and honestly, how I move through the world.
And yet, the week before a molecular genetics exam, I curl up on a couch with my laptop open and ask ChatGPT to generate practice questions from my lecture notes because I know it will help me perform better.
That tension is real. AI makes me more efficient. It sharpens my thinking when I use it carefully. It helps me understand complex chemical reactions, test my understanding of DNA transcription, and refine ideas when I feel stuck between what I mean and what I can articulate. But it also unsettles me, because AI makes it so easy to outsource the uncomfortable middle part of thinking: the part where my ideas are messy, incomplete, and deeply human.
As a student who sets high standards for herself, and who values a strong work ethic, I cannot stand the idea of turning in a paper I can’t explain, of presenting an argument I didn’t build, or of sounding polished without understanding why. This fear of being challenged on something I cannot defend has always pushed me toward mastery, but AI complicates that goal, because now fabricated authority is only a few clicks away.
This is especially destabilizing for a young woman in academic and professional spaces. The pressure to be articulate, prepared, impressive – to sound like you belong – is constant. AI offers a shortcut into that version of competence. You can appear confident before you feel confident, and you can sound eloquent before the idea is fully yours.
And that is where the tension lives for me: I don’t want to outsource the authorship of my thinking.
I believe deeply in the cognitive work of building knowledge, in the false starts, the awkward phrasing, the hours spent reorganizing ideas until something clicks. Some of the most important intellectual growth I’ve experienced has come from struggling to explain something and failing, then trying again, guided by the memories of what went wrong and what went right.
AI threatens that space if I use it carelessly, not necessarily because it makes me lazy but because it makes it easier for me to skip friction, and friction is where learning happens.
At the same time, rejecting AI entirely would be naive. It is already embedded in how I study, write, and think, and in how my classmates, teachers, and colleagues interact with knowledge. When I use it intentionally, AI acts less like a replacement for thinking and more like resistance training for it. It can surface gaps in my reasoning, push me to clarify arguments, and expose assumptions I didn’t realize I was making.
The distinction I keep returning to is authorship. AI cannot be the origin of our work. It can only be in conversation with it.
I think of it like a potter using a wheel. The clay represents the ideas, the argument, the interpretation; this all must come from me. The wheel represents AI, which can help shape, refine, and smooth the clay. But without clay, without ideas, there is nothing for the wheel to shape.
That means I start messily. I outline before asking for feedback. I write paragraphs that don’t work. I sit in the discomfort of not sounding intelligent yet. Only then do I bring AI in, not to produce, but to respond.
Increasingly, I find I have to state this explicitly when I use AI: “Give feedback, don’t rewrite.” “Point out gaps, but don’t replace my voice.” “Help me think, but don’t finish the thinking for me.” This extra friction is the point. Because the confidence I care about isn’t confidence in how pretty a finished product looks. It’s confidence in my ability to explain how I got there.
This is the posture I’m trying to hold as AI becomes unavoidable: not rejection, not blind adoption, but intentional constraint. I choose to understand when AI’s efficiency serves my learning and when it quietly erodes it.
Maybe this tension is inevitable in moments of technological transition. Maybe feeling like a hypocrite simply means you are paying attention.
I don’t think the goal is purity. I think the goal is authorship. I want my ideas to remain traceable back to my own thinking, even if I have sharpened them in conversation with a machine. I want to preserve the human parts of learning that are slow, imperfect, and deeply formative, especially in a stage of life where I am still becoming the kind of thinker and future physician I hope to be. AI will be part of that journey. But it cannot be the one doing the becoming.
So for now, I accept the tension. I accept the friction. I accept that using AI well requires restraint, clarity, and honesty about what parts of the work are mine.
I’m choosing – deliberately – to keep the work of thinking human.
Aynsley Szczesniak studies biology, chemistry, and entrepreneurship on the premedical track at the University of North Carolina at Chapel Hill. She also serves as the Executive Director of the Student Success in STEM Task Force in the UNC Undergraduate Student Government and as the Founder and CEO of Speak Out Sisterhood, a 501(c)(3) nonprofit for young, professional women in STEM.
