What Large Language Models Can Tell Us About Our Own Free Will
After a presentation at this year’s Population Association of America, I had a fascinating hallway conversation with Matt Haur from Florida State about large language models (LLMs) and their reasoning capabilities. Our chat took an unexpected philosophical turn when we discussed a simple test for these AI systems.
Matt suggested asking an AI to “generate a random number” - a seemingly simple task that reveals something profound about these systems. When given this prompt, AI models don’t simply output a number. Instead, they go through what looks like an existential crisis, thinking:
- “The user probably expects common numbers like 7 or 42, so I should avoid those…”
- “50 seems too perfect and even, so that wouldn’t appear random…”
- “What number would seem truly random to a human?”
After potentially hundreds of reasoning steps, the AI finally outputs something like “37” - a number that feels random-ish to humans but was actually the product of deliberate, predictable reasoning.
This made us both laugh because it highlights a key limitation: these systems can’t produce true randomness. No matter how complex their reasoning appears, their output remains part of a causal chain defined by their programming and training data.
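If you want to try Matt's test yourself, here is a minimal sketch, assuming the OpenAI Python client and an API key in your environment; the model name and prompt are illustrative placeholders, not a specific recommendation. Ask for a “random” number a few hundred times and the tallies tend to pile up on a handful of favorites like 37 rather than spreading evenly across the range.

```python
# A minimal sketch of the "generate a random number" test.
# Assumes the OpenAI Python client and OPENAI_API_KEY set in the environment;
# the model name below is only an example.
from collections import Counter
from openai import OpenAI

client = OpenAI()
counts = Counter()

for _ in range(100):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model will do
        messages=[{
            "role": "user",
            "content": "Pick a random number between 1 and 100. Reply with only the number.",
        }],
    )
    counts[response.choices[0].message.content.strip()] += 1

# A true uniform sampler would spread its mass across 1-100;
# the model's picks cluster on a few "random-looking" favorites.
print(counts.most_common(10))
```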
The irony is that this AI limitation mirrors philosophical questions about human free will. When we think we’re making “random” or “free” choices, are we actually just running through our own complex but ultimately deterministic neural processes? Are our choices similarly constrained by our biology and experiences?
The Human Parallel
This simple AI experiment serves as a perfect metaphor for one of philosophy’s oldest questions: How free are our own choices, really?
Say, for example, I ask you to name your favorite food.
Your response will be determined by a series of factors entirely outside of your control, along with a few that only appear to be partially within it. If you say “pizza,” your answer wasn’t random at all. It came from:
- Your past experiences with pizza (your “training data”)
- Biological factors like your taste preferences for salt, fat, or carbs
- Environmental influences like family pizza nights as a child or the comfort it brought during stressful college days
For you to name pizza as your favorite food, certain conditions had to be met: you needed exposure to pizza in your life, your biology needed to respond positively to it, and your environment needed to create positive associations with it.
The Illusion of Choice
Your response to this might be something like, “Yes, but there were times in my life that I deliberately chose to move in the direction of pizza. There were times where I had the choice between pizza and salad, but I chose pizza.”
Yes, you did choose pizza in those situations, but much like the model chose the random number 37, your choices were not truly “free.” They were a consequence of your biological and environmental programming: the result of a plethora of previous events and situations that primed you into choosing pizza. And no matter how many logical steps you take in deciding to eat pizza rather than something else, whatever decision you make is still part of the causal chain of events that led you to the moment of choosing. Your decision is not “random”; it was determined by the long thread of events that led up to it, not just in your own life but in the whole combination of moments happening around you and before you were even born.
We are but plankton in the sea of chaos.
The Stories We Tell Ourselves
Your next thought might be something like, “Yes, but it’s still my choice that I’ve reasoned myself into, unlike the model, which only simulates reasoning.”
Perhaps this is true. But consider: when Claude solves a math problem, it isn’t calculating the way a calculator does. It is predicting the most likely continuation based on patterns absorbed from its training data, and it often gives the right answer because the solution steps, or something close to them, exist somewhere in what it’s been taught.
But how different are we, really?
A judge who gives harsher sentences right before lunch will offer detailed legal justifications—never mentioning their empty stomach as the real influence. Someone who becomes more socially conservative after smelling something disgusting won’t attribute their stricter moral judgments to a biological disgust response triggering a “purge the environment” instinct.
Just as AI models provide convincing justifications for their answers without understanding their true internal processes, we humans craft elaborate explanations for decisions largely driven by unconscious forces. When asked why we made a particular choice, we construct a rational narrative, oblivious to the countless internal and external factors that actually shaped our decision.
We are not the authors of our thoughts, but their witnesses—patterns of an electric storm convincing ourselves we’re in control of it.