Artificial intelligence is advancing rapidly. Philosophical intelligence also advances. But, when done well, it moves slowly.
To many spectators, these two intelligences are akin to Achilles and the tortoise having a race. These spectators softly murmur that the latter is done for. And that the tortoise is not alone. All will taste Achilles’ dust. Everyone loses to Achilles-the-Invulnerable. A.I., they fear, will one day succeed in ruling over everything.
Of course, they should recall Achilles has a bum heel.
He has another weakness as well. We might call it the plight of kings. However much one wishes otherwise, no one is the sole arbiter of truth. That's why kings need a fool to take the blame for their folly.
It is a plight also suffered by the king’s greatest subjects. By serving as his own source of truth, even the great — and arrogant — Achilles falls fast into delusion. So too with today’s A.I. Until A.I. gets as good a grip on reality as we fleshy beasts have, it is doomed to suffer the plight.
For such a grip cannot be had by any arbiter of truth. Being the arbiter erodes one’s grip over time. For however sharply one sees, near and far, delusion is always near, or at least never all that far. That’s why we need others to validate what we see and what we think: misinterpretation is easy and weak inferences are common.
“Is that a horse in the distance?” asks an Achilles, arbiter of truth.
“It is what you say it is, great Achilles!” replies the non-arbiter.
Leaving Achilles to decide on the answer, for every question he asks, allows him to lose his grip on reality. If the correctness of the answer to every question is decided by his whim, regardless of what things are really like, then – because misinterpretation is easy and weak inferences are common – he will eventually begin speaking in ways that leave others to think he’s gone insane.
For an arbiter of truth, the standards of correctness lie with him, not with reality. Reality is never the answer to any question. It’s just there. Answers are given in words, glances, and signs. And any arbiter’s answers — insofar as the arbiter is an arbiter of truth — are correct by default. But reality has no duty to conform to anyone’s words. Correctness and reality can come apart if there’s an arbiter determining the former but not the latter. Inevitably, as time goes on, the distance between the standards of correctness set by the arbiter and the underlying reality is doomed to grow. This is the fate of any king surrounded by sycophants. And it can be fatal.
A.I. is similarly susceptible. I’ve heard it called training death. Serving as its own source of truth, an A.I. that learns exclusively from other A.I.s is doomed to a self-reinforcing spiral of degradation: each new iteration is slightly more off-kilter than the last. We learn from this a lesson we learned long ago:
Kings need fools.
Our tortoise, the philosopher, is no fool. Slow, perhaps. But that’s a feature, not a flaw. Achilles might move quicker. But the tortoise has a head start (according to Zeno). And the tortoise can get in Achilles’ head (according to Lewis Carroll).
The tortoise, of course, cannot help Achilles in the way a physiologist can. Similarly, there is much that philosophers are unable to help with: they are of little help to Vision Transformers, and, indeed, most other kinds of artificial intelligence (Convolutional, Adversarial, etc.). But the philosophical mind might be useful in the case of Language Models, Large and Small (LLMs and SLMs). Obsessive in the pursuit of linguistic clarity and correctness, the philosophical mind is adept in delivering first-aid treatment to any LLM that finds itself on its training deathbed. The treatment: talk. Philosophical discussion of a proper sort – critical discourse full of rigor and clarity – on every imaginable topic discussible using words (let there be, for any such topic x, a philosophy of x) is apt to serve as training data.
To be clear, by ‘the philosopher’ I am not referring to professional philosophers, at least not exclusively. I’m referring, rather, to the philosophical mind. Inquisitive. Critical. And obsessive about getting things right. This is a broad understanding of the philosopher.
Moreover, by ‘philosophical discussion’ I am not referring exclusively to topics of interest to today’s professional philosopher. Given my broad understanding of the philosopher, it should be clear that any subject matter whatsoever is open to the sort of critical inquiry and obsession over correctness that suffices for what I call philosophy. Speaking my way, to challenge the math, the physics, the prudence, or even the audacity of an LLM counts as a philosophical activity. Philosophical activity is how the tortoise gets in Achilles’ head; only a trained technician is qualified to cut into his guts.
Speaking in this broad way, who then shall serve as the non-philosopher? Any idiot will do. Pick the one you like. The world is full of idiots, like a king’s court is full of sycophants.
This presents a problem for the “talk” treatment for training death. When A.I. usage eventually becomes the norm, efforts to meticulously curate its training data are bound to be overwhelmed. As a result, much of that data will eventually come from the idiot.
It’s inevitable.
Only the idiot — at least, according to my cavalier use of the word ‘idiot’ — blindly trusts LLMs without a healthy amount of critical doubt. And, since idiots can disguise their idiocy by using an LLM to polish up their prose, that idiocy becomes nigh undetectable. When an LLM is used the idiot’s way, as the lone standard of correctness, the Grim Reaper of training death hovers nearby. The greater the idiocy shouted in Achilles’ ear, the greater the chance he loses his grip on reality.
We can give a name to that blend of idiocy and philosophy to which LLMs are exposed. We can call it a nexus: the intersection of reasoning activity used to continuously train the A.I.s that run things. If part of that A.I. involves LLMs, the nexus will include idiots and philosophers vying for dominance.
If the philosopher scoffs at using A.I. — which has been the reaction I encounter most when conversing with philosophical colleagues — and derides those who use it (e.g., chastising their philosophical mentees), then the philosopher will not partake in the nexus. And this is a dire circumstance. The idiots will win. They will dominate the nexus. The future will be idiotic.
We can thus see why Artificial and Philosophical Intelligence must harmonize; we can see why philosophical minds must be cultivated with gusto going forward.
For philosophical minds are exactly those with the traits — clarity, correctness, and a critical demeanor — that dispose them to resist descending further from the truth. That is to say, the philosophical mind is specially focused on resisting delusion. Idiots (according to my use of the word ‘idiot’) are exactly those without those traits; they lack that focus. The more the nexus is imbalanced toward idiocy, the greater the risk that the A.I.s at the heart of future society will suffer the plight of kings; eventually, they risk suffering something very much like training death. To avoid being ruled by a delusional king — rex insanus ex machina — philosophical minds, therefore, must be cultivated.
Cultivation alone is not enough. Not only must we cultivate philosophical minds; they must also be deployed within the nexus, in greater numbers than the idiots.
How might this be achieved, when philosophers are so quick to scoff at the use of A.I.? Idiots want to use A.I. to tell them the answers. Philosophers want to find the answers themselves. That is what puts Artificial and Philosophical Intelligences at odds to begin with.
But there is no need for it. There is no need for tension between these two kinds of intelligence. They can work together. A.I.s are no more a problem for philosophers than philosophers are a problem for A.I.s. The only problem is motivation.
There is only one solution.
We must find ways for philosophers to use A.I.s to do philosophy.
It must become the norm.
This is the driving aim of my project ThePhilosopher.AI.