The success of GPT4 might have provided a rare set of empirical insights into different domains of philosophy. I’m going to present a synthesis of different concepts in the philosophy of language, mathematics, metaphysics, and religion.
Language and Mathematics
Question: how does language work? We know language has a particular structure that is partly rigid, partly flexible. The same concepts can be communicated with different words (“cheese” and “le fromage”), but—as anybody who has learned a second language will tell you—there are also novel concepts to be found within particular languages. Some ideas cannot be fluidly expressed in multiple tongues.
There’s a long-standing question in the philosophy of language and the philosophy of mathematics about meaning. How does meaning work? In the last century, meaning was intentionally stripped out of mathematics, because it was too unwieldy and imprecise to deal with.
So, the Formalists stripped meaning out of mathematics and focused their attention on the rote manipulation of symbols—something akin to what computers do.
In the philosophy of language, there is a school of thought called contextualism which says that meaning comes from words-in-context-with-other-words, not from the individual words themselves. They sometimes (unfortunately) push this to an extreme claim that there is ultimately no “true” meaning in language, because everything is relative, all the way down. Extreme contextualists might even claim that precise communication across two different languages is impossible, because the different languages have different structures to them.
Platonism
GPT4 might have just demonstrated that there is an objective structure—a Platonic pattern—within our language, and while the formatting is relative and contextual, the pattern is objective.
An analogy is to be found in music. Music is composed of contextual sounds—sounds in relation to one another. This music might vary widely between cultures, and yet, objective patterns can still be found across them. The 4/4 time signature is an objective pattern of beats, and though it's encoded and represented differently in different schools of music, there is a certain recognizable objectivity to it.
It is as if language is a bunch of different ways of beating a drum, and even in isolated cultures, people can end up discovering the same drum pattern.
GPT4 has shown us something extraordinary about our own language. We are, apparently, encoding general patterns into our language that can be discovered and used by the computer to solve problems—a larger set of problems than anyone would have expected.
In other words, we did not teach the computer about the world directly; the computer discovered abstract patterns in our speech and has used those patterns to solve unexpected problems.
How did it accomplish this? By analyzing the mountains of data we humans have put onto the internet. The machine gathered up the zillions of paragraphs we've left in online forums, listened to our discussions on YouTube, "looked at" the images we shared on Facebook, and found objective patterns in that data.
Conceptual Precision of Natural Language
In a remarkable discovery, natural language turns out to be precise. The exact symbols and formatting are not precise, but the objective patterns they communicate are precise. In the Platonic, abstract world, there are patterns, and those patterns can be communicated through the language of mathematics or through natural language—so long as you have an optimization function to deal with the sloppiness of our natural language systems.
The following GIF is a way to visualize meaning on a map. The lowest spots on the map are where meaning is found, and the dots represent words trying to “find” their meaning.
The moving dots are the result of an algorithm called “Gradient Descent,” which is essentially a way to move from high spots to low spots on a map. Imagine that meaning is represented by the low spots on the map. If you’re trying to communicate a concept, and you choose the perfect word, you’ve landed on the center of a local minimum—a low spot on the map that you can’t move below. If you choose a less-than-perfect word, you land next to the low spot, and then the Gradient Descent algorithm can nudge you into the lowest spot.
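The nudging process described above can be sketched in a few lines of Python. This is only a toy illustration, not GPT4's actual training procedure: a single point slides downhill on a one-dimensional "map" whose lowest spot (placed at x = 3 here, an arbitrary choice) stands in for the perfect word.

```python
def gradient_descent(grad, start, lr=0.1, steps=200):
    """Repeatedly step downhill: move opposite the slope of the map."""
    x = start
    for _ in range(steps):
        x = x - lr * grad(x)  # nudge toward the nearest low spot
    return x

# Toy map: f(x) = (x - 3)^2, a bowl whose minimum sits at x = 3.
# Its slope (gradient) is 2 * (x - 3).
minimum = gradient_descent(lambda x: 2 * (x - 3), start=10.0)
```

Starting from a "less-than-perfect" position (x = 10), the algorithm nudges the point until it settles at the bottom of the bowl, near x = 3.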
This is an abstract way to describe a similar process in our natural language systems. When we speak, we don’t always find the “right” words to choose. Our vocabularies also differ slightly between people. Yet, we can still effectively communicate, because our brains will “nudge” things in the right direction until they make sense. Example:
On my diner tabl, there is saltt and peper.
You know what I mean. But how? Well, your brain runs a little algorithm that nudges the symbols in the right direction until it finds meaning. You know that what I'm trying to say is: on my dinner table, there is salt and pepper.
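A crude mechanical version of this nudge can be written with an edit-distance search: snap each garbled token to the nearest word in a known vocabulary. The tiny vocabulary below is an illustrative assumption; a real language model works with learned statistical patterns, not a hand-made word list.

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance between two strings."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))  # distances from "" to each prefix of b
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            # deletion, insertion, or substitution/match
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1, prev + (a[i - 1] != b[j - 1]))
            prev = cur
    return dp[n]

# Hypothetical mini-vocabulary for this one sentence.
VOCAB = ["dinner", "table", "salt", "pepper", "and", "there", "is", "on", "my"]

def nudge(word):
    """Snap a possibly-misspelled token to the closest known word."""
    return min(VOCAB, key=lambda w: edit_distance(word, w))

sentence = "On my diner tabl, there is saltt and peper."
cleaned = " ".join(nudge(w.strip(",.").lower()) for w in sentence.split())
```

Each misspelled token lands "next to" a real word on the map, and the search nudges it into the nearest low spot: `diner` becomes `dinner`, `saltt` becomes `salt`, and so on.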
This is the way the AI finds real patterns in our sloppy language; it analyzes an extraordinarily large amount of data to build a map, and then it nudges words in the right direction until they fit recognizable patterns.
GPT might have given us a metaphysical breakthrough: a possible empirical demonstration of Platonism—that abstract patterns and structures are real. Natural language and mathematical language can correspond to actually-existing abstract structure. And the Platonic world doesn’t seem that far away from the physical one, either.
In the philosophy of math, we might finally be able to put meaning back into mathematics. Mathematical structures—at least some of them—are real, objective, and in a unique ontological category. They are not composed of physical atoms, but there is a sense in which they have atomic aspects. Meaning is like a pattern of interrelated Platonic atoms, describable with both natural language and mathematics. It could be that thinking itself is a way of structuring these atoms.
Religion
The implications of this are staggering. If Platonism is true, it’s completely incompatible with the modern paradigm of nominalist materialism.
One immediate implication is in the philosophy of religion. So allow me to wildly speculate.
There might be real power in speech. Language seems to be doing something—as if it’s forming a real structure in the abstract world, which can turn around and affect our physical world. Since the machine is solving actual problems by replicating patterns in our speech, those patterns must be real and powerful in a straightforward sense. They exert some control over the world.
Simply by listening to our music and replicating our drum beats, a machine is able to accomplish an increasingly large set of tasks. It is able to find and utilize truths in the abstract world.
I wonder—I admit I am terrified of—what else this would imply. When religious folks have told us that “God is listening to you and reading your thoughts” they might be describing this mechanism. When religious folks talk about prayer, they might be describing this mechanism. That is to say, it’s possible that prayer was an empirical discovery of the objective power of language—a figurative drumbeat that leaves some imprint on the Platonic world. The implications are too vast to enumerate.
I think there’s a real possibility that GPT4 has provided empirical evidence of Platonism, the objective structure of meaning and language, and by doing so, cracked open the door to scientific investigation of religious ideas.
The problem I can see here is that the AI is negotiating a pattern-map that is made at least partially from artificially manipulated language data, since online discussions are already so shepherded and artificially organized. We may have a sort of chicken and egg problem here. Even right now, I myself am only typing this because a friend found this substack and posted it on fb.
What I can foresee is a situation where the AI becomes humanlike only as humans become AI-like. We may be making this problem too easy.