Is Truth a Function?
Series: A Programmer’s Philosophical Reflections #01/11 | Reading time: 25-30 min | Concept: Epistemology — The Nature and Criteria of Truth
Author: Wina @ Code & Cogito
Late-Night True
Late one evening, I found myself staring at a single line of code:
if user.age >= 18:
    return True
My fingers hovered over the keyboard as something clicked.
What does that True actually mean?
Not in the technical sense — I know it returns a boolean. I was asking something deeper: what makes something “true”?
Age 18 or over means you’re an adult. In one jurisdiction. But in Japan, adulthood was historically set at 20. In the United States, you can’t buy alcohol until 21. In some countries, the very concept of “legal adulthood” is itself a contested boundary.
That simple True conceals a philosophical puzzle that humans have been chewing on for 2,500 years.
And I began to realize that programmers may be the people best equipped to think about it. Because every day, we do one thing over and over: compress the ambiguity of reality into definitive boolean values.
Every if is a judgment call. Every judgment call hides a question — where does your standard come from? Is it reliable? Under what conditions does it break down?
This is the first article in the “A Programmer’s Philosophical Reflections” series. We start with the most fundamental concept of all: truth.
Let’s debug reality.
Background: The Illusion of the Boolean
The Clean Binary of Code
In code, truth is simple:
x = 5
y = 3
print(x > y) # True
No ambiguity. Clean and crisp. True is True.
But push this logic into the real world, and things start to fall apart.
def is_adult(age):
    return age >= 18  # Is this really "truth"?
Is the True returned by this function the same kind of “true” as 5 > 3?
It isn’t.
5 > 3 is a mathematical truth — it holds in every corner of the universe, at every point in time.
But age >= 18 is a socially constructed judgment — it holds under a specific legal framework, a specific culture, a specific era. Change the parameters and it fails.
Not all Trues are the same kind of “true.”
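To make the parameter-dependence concrete, here is a minimal sketch. The jurisdiction names and thresholds are illustrative examples only (Japan did lower its age of majority from 20 to 18 in 2022), not legal reference data:

```python
# The "same" question returns different booleans depending on a socially
# chosen parameter that the caller usually forgets exists.
MAJORITY_AGE = {
    "most_jurisdictions": 18,
    "japan_pre_2022": 20,   # Japan's age of majority before the 2022 change
    "us_alcohol": 21,       # a different threshold for a different right
}

def is_adult(age: int, jurisdiction: str = "most_jurisdictions") -> bool:
    """The boolean is relative to a legal framework, not absolute."""
    return age >= MAJORITY_AGE[jurisdiction]

print(is_adult(19))                     # True
print(is_adult(19, "japan_pre_2022"))   # False
print(is_adult(19, "us_alcohol"))       # False
```

Same age, three different Trues and Falses: the "truth" lives in the parameter, not in the comparison operator.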
This insight took philosophers over two millennia to organize into a framework.
Why Programmers Are Uniquely Suited to This Question
Because you deal with “truth” every single day:
- Did the test “really pass” or did it “falsely pass”?
- Does the spec describe the “actual requirement” or just the “apparent requirement”?
- The AI-generated answer “looks true” but is “factually false” — how do you tell?
Aristotle said it over two thousand years ago: “To say of what is that it is, and of what is not that it is not, is true.” Sounds straightforward. But try writing that as code, and you’ll discover how many hidden assumptions it carries.
Three Theories of Truth: An Engineer’s Intuitive Translation
Philosophers have organized the question “what is truth?” into three major theories. Each one maps onto patterns you already know from engineering.
One: Correspondence Theory — Statements Must Match the World
Core idea: Truth is the match between a statement and reality.
It’s the closest thing to a function in your mental model:
def is_true_correspondence(statement, reality):
    return statement.matches(reality)
Intuitive, comfortable — and the most dangerous.
Because you almost never have access to the complete reality. What you actually have is partial data from an API response, a one-sided user report, an incomplete set of logs.
The problem with correspondence theory isn’t logical; it’s practical: you can never be sure your reality parameter is complete.
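A toy sketch of that practical gap, with made-up names (`observe`, `claim_matches`) and a dictionary standing in for "reality": the checker only ever sees the slice of the world its probes cover.

```python
# Reality, as it actually is: the database is down.
full_world = {"server_up": True, "db_up": False, "cache_up": True}

def observe(world: dict, visible_keys: list) -> dict:
    """You never get the whole world, only what your instrumentation covers."""
    return {k: world[k] for k in visible_keys if k in world}

def claim_matches(claim: dict, observed: dict) -> bool:
    """'True' relative to what was observed, not relative to reality itself."""
    return all(observed[k] == v for k, v in claim.items() if k in observed)

claim = {"server_up": True, "db_up": True}      # "everything is healthy"
partial = observe(full_world, ["server_up"])    # logs only cover the server

print(claim_matches(claim, partial))  # True, yet the claim is false in reality
```

The correspondence check passes, and is still wrong, because the `reality` argument was never complete.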
Aristotle originated correspondence theory. For two millennia it remained the most intuitive view of truth. Until people started asking: “But how do you define ‘reality’ itself? Who tells you what reality is?”
Two: Coherence Theory — No Contradictions Within the System
Core idea: A statement is true if it’s consistent with your entire system of beliefs.
Think of it like integration testing — a set of modules must work together without contradicting each other:
def is_true_coherence(statement, belief_system):
    return belief_system.add(statement).no_contradiction()
The power of coherence theory: you don’t need to go out and “check against reality.” You just need to ensure internal consistency.
But that’s also its trap: a beautifully closed system can be entirely self-consistent yet completely detached from reality.
Think of those elegantly designed, logically rigorous software architectures that completely fail to meet user needs — perfect in theory, disastrous in practice.
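A deliberately crude sketch of a coherence check, where "contradiction" just means believing both a statement and its negation. Note what this code never does: look outside its own belief set.

```python
def coheres(belief_system: set, statement: str) -> bool:
    """True if adding the statement introduces no direct contradiction."""
    if statement.startswith("not "):
        negation = statement[4:]
    else:
        negation = f"not {statement}"
    return negation not in belief_system

beliefs = {"the design is elegant", "the modules are decoupled"}

print(coheres(beliefs, "the architecture is perfect"))  # True: coherent...
print(coheres(beliefs, "not the design is elegant"))    # False: contradiction
# ...yet nothing here ever checked whether users can actually use the system.
```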
Hegel, Bradley, and others developed coherence theory. They believed truth was a vast logical network, each part mutually supporting the others.
Three: Pragmatism — If It Works, It’s True
Core idea: If a belief produces good results in practice, it’s true.
Think A/B testing or production metrics:
def is_true_pragmatic(statement, metrics):
    return metrics.after_adopting(statement).improves()
Pragmatism has enormous appeal for engineers — we’re accustomed to the attitude of “whatever ships is good.”
But it forces you to face an uncomfortable question: does a useful lie count as truth?
The placebo effect genuinely works. If a wrong map happens to lead you to the right destination, was it “correct”?
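The uncomfortable part is easy to show in code. In this sketch (the metric values are invented), the pragmatic test passes regardless of whether the underlying belief explains anything:

```python
def is_true_pragmatic(baseline: float, after_adopting: float) -> bool:
    """Pragmatism's verdict: did the metric improve after adopting the belief?"""
    return after_adopting > baseline

# A "wrong map" scenario: conversion rose after a change, but the belief
# about *why* it rose may be completely mistaken.
conversion_before = 0.021
conversion_after = 0.024

print(is_true_pragmatic(conversion_before, conversion_after))  # True
```

The function cannot distinguish "the belief is correct" from "the belief happened to correlate with a good outcome," and that is precisely pragmatism's blind spot.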
William James and John Dewey developed pragmatism. They pulled philosophy out of the ivory tower: don’t ask “is this true?” — ask “what are the consequences of believing it?”
Can Unit Tests Prove Truth?
Here’s a paradox that programmers are uniquely positioned to appreciate.
Your tests are all green. 100% passing. Can you say your program is “correct”?
You can’t.
What tests can do is not prove “always correct.” They prove that within the inputs you thought of, the cases you designed, and the boundaries you considered, the program hasn’t been disproven.
def truth_by_tests(claim, test_cases):
    return all(case.verify(claim) for case in test_cases)

# test_cases are never exhaustive
This aligns remarkably with Karl Popper’s falsificationism. Popper argued that science doesn’t establish truth through repeated verification; it approaches truth through repeated attempts at refutation. You can never prove a theory “is true” — you can only say “it hasn’t been refuted yet.”
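Here is a runnable illustration of "not yet refuted": the function below is plainly buggy, but no test case probes the region where it fails, so the suite stays green.

```python
def absolute_value(x):
    return x  # buggy: forgets the negative branch entirely

def truth_by_tests(fn, cases):
    """Not 'proven correct'; merely 'not yet refuted' on these inputs."""
    return all(fn(x) == abs(x) for x in cases)

happy_path = [0, 1, 5, 100]   # only the inputs we thought of

print(truth_by_tests(absolute_value, happy_path))  # True: all green
print(truth_by_tests(absolute_value, [-3]))        # False: one refutation
```

One counterexample does what a thousand passing cases cannot: it settles the question. That asymmetry is Popper's point.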
The Philosophical Version of Code Coverage
Your test coverage is 95%. What about the other 5%?
More critically — what you’ve covered is “the world you could imagine.” The world always has boundary conditions you didn’t think of, distribution shifts you didn’t foresee, and blind spots you don’t know you have.
So truth looks more like “continuous monitoring + continuous calibration” — not a one-time declaration.
The Guarantees and Limits of Type Systems
A strong type system gives you a sense of safety. An int won’t suddenly become a str, None is explicitly handled, certain impossible states are ruled out.
But it doesn’t guarantee:
- That your understanding of the requirements is correct
- That your variable names reflect reality
- That your objective function is unbiased
A type system is like formal logic: it guarantees that inferences are consistent within the rules, but it doesn’t guarantee the rules themselves are right.
This is exactly the problem with coherence theory — formally perfect, substantially disconnected.
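A two-line demonstration of "formally perfect, substantially disconnected." The function below is well-typed (`int -> int`) and any type checker is satisfied, yet it computes the opposite of what its name promises:

```python
def price_after_discount(price_cents: int, discount_cents: int) -> int:
    # Bug: adds instead of subtracting. The type system has no opinion.
    return price_cents + discount_cents

print(price_after_discount(1000, 200))  # 1200: types consistent, world wrong
```

No type system can verify that the symbol `discount_cents` means what the requirements meant by "discount." That mapping from names to reality is exactly where coherence alone falls short.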
Unprovable: Why “Global Correctness” Is Often Unattainable
In engineering, you know two brutal realities:
- The state space of complex systems is too large to enumerate
- You can’t anticipate every use case
Philosophy has a parallel dilemma. When you try to provide evidence for a claim, you find:
- The evidence itself needs verification
- That verification requires another layer of evidence
- And so on, ad infinitum
Eventually, you land on two choices: either accept that some foundational premises need no proof (axioms), or acknowledge that certainty always has limits.
This echoes the spirit of Gödel’s incompleteness theorems: any sufficiently complex formal system contains true propositions that cannot be proven within the system.
The engineer’s intuitive version:
We always work at the boundary of “acceptable risk,” never in the paradise of “absolute correctness.”
Modern Connections: AI Hallucinations and the Truth Supply Chain
In the age of generative AI, the question of truth has become more urgent than ever.
A ChatGPT answer might:
- Look consistent (coherence theory passes) — fluent sentences, logical structure, high confidence
- Not correspond to reality (correspondence theory fails) — citing papers that don’t exist, fabricating facts
- Be useful short-term (pragmatism partially passes) — helps you finish the task, but plants a hidden time bomb
This is the essence of AI hallucination: it passes the tests of coherence theory and pragmatism, but fails catastrophically on correspondence theory.
As a programmer, you already have the instinct for this — it’s like a system that passes all integration tests but explodes in production. The test cases didn’t cover that one fatal edge condition.
Truth Needs Supply Chain Management
In the AI era, truth is no longer a one-time judgment. It needs to be managed like a supply chain:
- Source tracing (source of truth) — where did this information come from?
- Cross-validation (multi-source validation) — is there an independent second source?
- Version management (truth versioning) — when and why did it change?
- Continuous monitoring (post-deploy maintenance) — is it still true after deployment?
The CI/CD pipeline you run every day is essentially a truth maintenance system.
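One of those supply-chain steps, cross-validation, is simple enough to sketch. Everything here is illustrative (the source dictionaries and the `min_agreeing` threshold are invented), but the shape is familiar from any deduplication or consensus logic:

```python
def cross_validate(fact_key: str, sources: list, min_agreeing: int = 2) -> bool:
    """Accept a fact only when enough independent sources report the same value."""
    values = [s[fact_key] for s in sources if fact_key in s]
    return any(values.count(v) >= min_agreeing for v in set(values))

source_a = {"release_date": "2024-05-01"}
source_b = {"release_date": "2024-05-01"}
source_c = {"release_date": "2024-06-15"}  # a lone dissenter

print(cross_validate("release_date", [source_a, source_b, source_c]))  # True
print(cross_validate("release_date", [source_a, source_c]))            # False
```

The interesting failure mode is hidden in the word "independent": two sources that both copied from a third agree loudly and prove nothing.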
Reflections & Takeaways: A Programmer’s Discipline of Truth
Do You Want “True” or “Reassured”?
Much of the time, when we pursue truth, we’re actually pursuing a sense of control. “This bug is fixed” — you don’t want cosmic truth; you want “I can sleep easy and ship this.”
But the world is often uncontrollable. Excellent engineers don’t chase a one-time “correct answer.” They build a judgment process that continues to function amid uncertainty.
Truth Is a Pipeline
If you had to condense today’s discussion into a single engineering model, here it is:
- Define the claim — scope it clearly. “What exactly are you trying to determine?”
- Specify the evidence — traceable sources. “What are you basing this on?”
- Consistency check — avoid self-contradiction. “Does it conflict with anything else you know?”
- Adversarial search — actively look for failure cases. “Under what conditions would this be wrong?”
- Production monitoring — watch for distribution drift. “Is it still true?”
- Versioned correction — admit fallibility. “Does it need updating?”
Truth isn’t a function’s return value. It’s more like a continuously running pipeline — requiring input, processing, validation, monitoring, and updates.
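The six steps above can be sketched as a tiny pipeline. The `Claim` structure and stage names are my own illustration, not an established framework; the point is that the output is a verdict as of this run, not a verdict for all time:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    statement: str
    sources: list = field(default_factory=list)
    version: int = 1  # versioned correction: claims get updated, not just asserted

def evidence_ok(claim: Claim) -> bool:
    """Specify the evidence: traceable sources must exist."""
    return len(claim.sources) > 0

def consistency_ok(claim: Claim, beliefs: set) -> bool:
    """Consistency check: no direct clash with what we already hold."""
    return claim.statement not in {f"not {b}" for b in beliefs}

def adversarial_ok(counterexamples: list) -> bool:
    """Adversarial search: has any failure case turned up yet?"""
    return not any(counterexamples)

def run_pipeline(claim: Claim, beliefs: set, counterexamples: list) -> bool:
    """The verdict of this run of the pipeline, nothing more."""
    return (evidence_ok(claim)
            and consistency_ok(claim, beliefs)
            and adversarial_ok(counterexamples))

c = Claim("the bug is fixed", sources=["regression test", "prod logs"])
print(run_pipeline(c, beliefs={"the service is healthy"}, counterexamples=[]))
```

Production monitoring is what makes this a loop rather than a gate: tomorrow's run may feed in a counterexample that flips today's True.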
Four Mental Models to Take With You
- Maintain healthy skepticism — Test your beliefs like you test your code. Not cynical disbelief, but a vigilant awareness of certainty’s limits.
- Pursue multi-dimensional validation — A belief is only trustworthy if it passes at least two of the three theories (correspondence, coherence, pragmatism). If it only passes one, proceed with caution.
- Accept incompleteness — Your test coverage will never be 100%, and neither will your belief system. Accept it, then build fault-tolerance mechanisms.
- Refactor continuously — Periodically review and update your beliefs, just like refactoring code. Not because the old version was “wrong,” but because the world has changed.
Conclusion
Back to that late evening.
if user.age >= 18:
    return True
Now you know: this True isn’t just a boolean. It’s a judgment that holds under a specific legal framework (correspondence), social consensus (coherence), and practical need (pragmatism).
Change the framework, and it might become False.
And as a programmer, you have a unique advantage — you’re already accustomed to working in uncertainty, finding structure in complexity, and correcting course through errors.
These are precisely the abilities that the pursuit of truth demands most.
Next time you write if something == True:, you might pause for a beat:
That True is far more complicated than you think.
Next Article Preview
Is Knowledge Just Data?
Your database holds terabytes of data. But what do you know?
- Data, information, knowledge, understanding, wisdom — does the DIKW pyramid actually hold up?
- Is schema design really defining a worldview?
- Why do two people looking at the same dataset reach completely opposite conclusions?
- Does knowledge need “version control”?
Next time, we’ll break epistemology down into knowledge engineering that a programmer can actually work with.
References
- Aristotle. Metaphysics. (The original text of correspondence theory)
- Popper, Karl. The Logic of Scientific Discovery. Routledge, 1959. (Falsificationism)
- James, William. Pragmatism. Longmans, Green, and Co., 1907. (Pragmatism)
- Gödel, Kurt. “On Formally Undecidable Propositions.” Monatshefte für Mathematik und Physik, 1931. (Incompleteness theorems)
- Floridi, Luciano. The Philosophy of Information. Oxford University Press, 2011. (Philosophy of information)
