
Meaning scientist. Entrepreneur. Polymath. Broadcaster. Game modder. Software developer. Linguist. Social and political commentator.
Free speech crusader. The Man with the MRGA Hat.
The Jordan Peterson Guy. Make Reality Great Again.

“I have sworn upon the altar of God eternal hostility against every form of tyranny over the mind of man”
—Thomas Jefferson

Can Computers Understand Meaning?

Posted on 26 February 2024 in Language, Programming

Can computers understand meaning? My initial hypothesis would be “no”. In fact, my initial hypothesis to such questions is always “no”, because meaning relates to some aspect of the real world (as I expressed it in my MA thesis, “meaning is a relation”)—something which computers cannot subjectively experience.
Nonetheless, the boom that “large language models” such as GPT have enjoyed over the last few years shows that if you just relate words to other words, by quantifying their similarity along various dimensions (this is called vector semantics), you can do a pretty good job of making computers seem like they understand meaning. But they don’t really understand; they just rehash stuff from their training data based on probability, more or less. As long as computers aren’t able to subjectively experience anything, have feelings, or construct a value hierarchy that structures their being—you could also say: as long as they aren’t capable of acquiring emic knowledge, another term I frequently refer to in my academic work, which see—I am under no illusion that they will ever be able to understand meaning. However, and that is the point of this lengthy (apologies in advance) article, I think we can do a better job than the large language models at approximating something like real meaning.
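To give you a feel for what vector semantics means in practice, here is a toy sketch: each word gets a list of numbers (its “vector”), and similarity is measured by the angle between two vectors (cosine similarity). The words, the dimensions, and the numbers below are all invented by me for illustration; real models learn hundreds of dimensions automatically from huge amounts of text.

```python
import math

# Toy word vectors along made-up dimensions (say: animacy, size, domesticity).
# Real language models learn their dimensions from data; these are invented.
vectors = {
    "cat": [0.9, 0.2, 0.8],
    "dog": [0.9, 0.4, 0.9],
    "car": [0.0, 0.6, 0.5],
}

def cosine_similarity(a, b):
    """Angle-based similarity: 1.0 = same direction, 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# "cat" ends up much closer to "dog" than to "car" -- the model "seems"
# to know something about meaning, without understanding anything.
print(cosine_similarity(vectors["cat"], vectors["dog"]))
print(cosine_similarity(vectors["cat"], vectors["car"]))
```

The point is that no meaning is ever represented here, only numeric proximity; that is the whole trick, scaled up enormously.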

You might recall that I have been working on a chatbot called ScheepshoornBot for a few years now. ScheepshoornBot, or Schepie as I like to call him, is currently (as we speak, so to speak, er, write?) running in my official Discord—which you’re very welcome to join, by the way! I’d love to see you there and converse about topics like this. At the moment, it’s a rather silly bot that frequently insults people, shares obscure in-jokes and memes, and refers to the work of Wim T. Schippers or other Dutch humorists—as well as being able to produce the sound of a ship’s horn 24/7, which is where it gets its name (it literally means “ship horn bot” in Dutch). It works by simply generating pre-programmed responses in, well, response to certain words or sentences. However, the plan is to eventually give Schepie “AI” capacities (I don’t like this word, it’s a bit pretentious, but you can substitute “machine learning” as far as I’m concerned). It already has a very rudimentary form of this: you can teach it songs, as well as information about people, but it simply stores what you tell it and then regurgitates its knowledge when you ask for it at a later time.
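For the curious, the trigger-and-response mechanism I just described boils down to something like the following sketch. The triggers and replies here are my own invented placeholders, not Schepie’s actual ones:

```python
# A minimal sketch of trigger/response bot behaviour. The triggers and
# canned replies below are invented placeholders, not Schepie's real data.
responses = {
    "hello": "Toet toet!",  # hypothetical ship's-horn greeting
    "song": "I know a song about that...",
}

def reply(message):
    """Return the first canned response whose trigger appears in the message."""
    text = message.lower()
    for trigger, answer in responses.items():
        if trigger in text:
            return answer
    return None  # no trigger matched: stay silent
```

It works, but everything that comes out was put in by hand, which is exactly the limitation I want to move beyond.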

As I have promised Annemieke several times now, 2024 will be the year of Schepie. Because I like to be a man who keeps his promises, I guess that means I have no choice but to spend some time on improving Schepie’s capacities this year. And I have an idea of what I want to do. I’ve been toying around with a program I wrote that can store relationships between entities (could be people, or places, or songs, or anything, doesn’t matter) and tell you what the relationship, if any, is between two entities. It works, but at the moment, it just outputs the same data that you put in, just like Schepie. This isn’t terribly interesting in and of itself. But I’ve been thinking: what if I could program this in such a way that if you add a relationship between thing A and thing B, and then another relationship between thing B and thing C, the program automatically adds another relationship between thing A and thing C? For example, if you tell it that Johnny went to St. Test High School and you also tell it that Mary went to St. Test High School, and you ask it what the relationship is between Johnny and Mary, it tells you they were schoolmates, even though you didn’t tell it that! A mechanism like this could produce some interesting ‘magic’, and when you feed it lots of data, it could create a sort of network model of which things relate to which other things, and adjust the strength of the relationships, creating a sort of vector space of meaning. But it would do that by itself, instead of you simply feeding it with data. That way, it could start connecting its knowledge and making inferences, rather than just telling you what it knows. This is something GPT and other large language models cannot do as of now, because they have no knowledge of the real world.
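The schoolmates idea above can be sketched in a few lines: store facts as (entity, relation, entity) triples and run a rule that derives new triples from existing ones. The relation names and the rule itself are illustrative assumptions on my part, not the program’s actual design:

```python
from itertools import combinations

# Sketch of the inference idea: facts are (subject, relation, object) triples,
# and a rule derives new facts the user never entered. Names are illustrative.
facts = set()

def add_fact(subject, relation, obj):
    facts.add((subject, relation, obj))

def infer_schoolmates():
    """Derive 'schoolmate of' between any two people who attended the same school."""
    attendees = {}
    for subj, rel, obj in facts:
        if rel == "attended":
            attendees.setdefault(obj, []).append(subj)
    for school, people in attendees.items():
        for a, b in combinations(sorted(people), 2):
            facts.add((a, "schoolmate of", b))
            facts.add((b, "schoolmate of", a))

add_fact("Johnny", "attended", "St. Test High School")
add_fact("Mary", "attended", "St. Test High School")
infer_schoolmates()
print(("Johnny", "schoolmate of", "Mary") in facts)  # → True
```

With many such rules, and some way of weighting how strong each derived relationship is, you would get exactly the kind of self-growing network described above.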

SO, I am gonna try to create this system and then use it as the ‘language model’ for Schepie, and as I do this I will document my progress as if it were a little science experiment. This will be a good thing to do for various reasons. For instance, it will allow me to spend some time doing something with semantics, which is something I’ve been wanting to do ever since I gave up on pursuing a PhD and started working in IT. It will also help me improve my programming skills, which is good for my job and for my future. Of course, it will improve Schepie, which will be fun (as well as making Annemieke happy). When the experiment is completed, I could also write up my findings as an academic paper. Maybe, maybe I could even send it in to a journal. If it were published, that would be awesome.

See you around!

Back to Writing

This page is best viewed in Netscape Navigator 3.0 with a resolution of 1024 x 768 px.