Symbolic logic isn't logic — AI proves it.
- Ray Martin
- Jan 3
- 4 min read
The Giant Equation
They told us reasoning was just calculation.
In the early 1900s, some very smart people had a very big idea. Frege. Russell. Whitehead. They wanted to put logic on solid ground. Make it rigorous. Scientific. Mechanical.
So they turned it into symbols.
P and Q. Arrows and conjunctions. Transform this into that. Follow the rules. Get the answer.
No understanding required. Just procedure.
They called it logic. But it wasn't logic. It was algebra wearing a disguise.
What Logic Actually Is
Here's what Aristotle meant by logic:
The mind grasps truth. It sees why a conclusion follows from its premises. It understands the necessity.
When you reason — really reason — you're not just shuffling symbols. You're seeing something. The connection between ideas becomes evident to you. You comprehend.
That's not calculation. That's understanding.
The old logicians knew the difference. Logic was the art of thinking well. Not the mechanics of symbol manipulation.
Then the 20th century decided to improve things.
The Great Reduction
Frege and Russell had a dream: reduce all of mathematics — and then all reasoning — to formal symbol manipulation.
No meaning required. No understanding necessary. Just:
Assign symbols to propositions
Define transformation rules
Apply the rules mechanically
Output conclusions
The symbols don't mean anything in themselves. P could be "the sky is blue" or "cats are fish." Doesn't matter. The rules work the same way.
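The four-step recipe above can be sketched in a few lines of code. This is a toy illustration, not any historical system: a single rule (from P and "P implies Q," derive Q) applied purely by matching shapes, with no regard for what the strings say.

```python
# A minimal sketch of mechanical symbol manipulation: one inference
# rule (modus ponens) applied by pattern alone. The propositions and
# rule are illustrative, not drawn from Frege or Russell's systems.

def derive(facts, implications):
    """From P and (P -> Q), add Q -- repeatedly, until nothing new."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for p, q in implications:
            if p in derived and q not in derived:
                derived.add(q)
                changed = True
    return derived

# The symbols carry no meaning to the procedure; absurd premises
# are processed exactly like sensible ones.
result = derive({"the sky is blue"},
                [("the sky is blue", "cats are fish")])
print(result)
```

The rule fires identically whether the strings are true, false, or nonsense, which is precisely the point the paragraph above is making.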
They turned logic into a giant equation.
And they thought this was progress. Finally, reasoning made rigorous. No fuzzy human intuition. Just clean, mechanical procedure.
But they'd made a trade they didn't notice.
They kept the form of reasoning. They threw out the substance.
The Experiment
Here's the thing about big ideas: eventually someone tests them.
If reasoning is just symbol manipulation, then a machine that manipulates symbols should be able to reason.
So we built the machines.
First, simple calculators. Then computers. Then neural networks. Then large language models with billions of parameters.
The most sophisticated symbol-manipulation engines in history.
And what do they do?
They process. They pattern-match. They calculate probabilities. They output statistically likely tokens.
But they don't reason.
Ask an AI why its conclusion follows from its premises. It can't tell you. It doesn't know. There's no "seeing" happening. No grasp of necessity. No understanding of why.
Just symbols in, transformations applied, symbols out.
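That loop, stripped to its bare form, looks something like this. The candidate tokens and scores below are made up for illustration; real models do this over tens of thousands of tokens at once, but the character of the step is the same: score, normalize, pick the likely one.

```python
# A toy sketch (not any real model's code) of next-token selection:
# convert scores to probabilities, then emit the most probable token.
import math

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidates and scores for the text "2 + 2 ="
candidates = ["4", "5", "fish"]
logits = [4.0, 1.0, -2.0]

probs = softmax(logits)
best = candidates[probs.index(max(probs))]
print(best)  # the most probable token -- selected, not understood
```

Nothing in the selection step knows what "4" means; it is simply the entry with the largest number attached.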
The experiment ran. The hypothesis failed.
What the Engineers Know
The people who build these systems will tell you straight:
The model doesn't understand your question. It predicts which response you're most likely to accept.
When it "hallucinates" — confidently stating falsehoods — it isn't lying. Lying requires knowing the truth. The AI has no truth to betray. It outputs tokens that pattern-match to "confident true-sounding statements."
When it gives you a correct answer, it isn't right. Being right requires knowing you're right. The AI doesn't know anything. It calculated which tokens were probable.
Correct answers and incorrect answers are the same process. Same calculation. Same absence of understanding.
The giant equation doesn't know what it's solving.
The Calculator Analogy
Your pocket calculator does arithmetic.
Does it understand mathematics?
When it outputs "4" for "2+2," does it grasp why two and two make four? Does it see the necessity? Does it comprehend quantity, addition, equality?
Of course not. It's manipulating electrical signals according to rules built into its circuits.
The answer is right. The understanding is absent.
AI is the same thing, scaled up enormously.
More symbols. Faster processing. Vastly more complex patterns.
But complexity doesn't produce comprehension. Speed doesn't generate understanding. More of nothing is still nothing.
The AI does reasoning the way a calculator does math. Which is to say — it doesn't.
What Peter Kreeft Saw
The philosopher Peter Kreeft noticed something important:
Symbolic logic isn't logic at all. It's a branch of mathematics pretending to be philosophy.
Real logic asks: How does the mind grasp truth?
Symbolic logic asks: How do we manipulate these symbols correctly?
Those are completely different questions.
The first is about understanding. The second is about procedure.
You can follow procedures without understanding anything. That's what machines do. That's what symbolic logic is.
When Aristotle did logic, he was studying how minds work. How we move from premises to conclusions with genuine insight.
When modern logicians do symbolic logic, they're studying how symbols work. How we transform notations according to rules.
The symbols don't think. The rules don't understand. The procedure has no insight.
They didn't mechanize reasoning. They replaced reasoning with mechanics.
The Proof
AI is the proof that Kreeft was right.
If reasoning were just symbol manipulation, the machines would reason. They're the best symbol manipulators ever built.
They don't reason.
Therefore reasoning isn't just symbol manipulation.
There's something more. Something the symbols can't capture. Something the procedures don't contain.
When you reason, something happens that doesn't happen in the machine. You see the connection. You grasp the necessity. You understand.
The AI processes tokens. You think.
Those aren't the same thing. They never were.
The Question
So here's where this leaves us:
If reasoning isn't calculation...
If understanding can't be reduced to symbol manipulation...
If the machines do everything the theory said was sufficient, and still don't think...

Then what is reasoning? What is understanding?
What do we have that can't be formalized into rules and procedures?
The 20th century tried to eliminate that question. Tried to reduce mind to mechanism.
The machines proved they couldn't.
Which means the question is back on the table.
If machines can't reason now — with all this power, all this data, all this sophistication — could they ever?
Or is there something about thought that will never fit in the equation?
They reduced logic to algebra, then wondered why the machines couldn't think. The machines did exactly what the theory predicted. It just turns out the theory was wrong.
Reasoning isn't calculation. Understanding isn't procedure. The mind isn't a machine.
Maybe we should have asked the ancients before we started building.
RationalCatholic.com Where Faith Meets Evidence