Grady Booch

@Grady_Booch

12 tweets 8 reads Jun 28, 2023
"Do LLMs understand?" is a question that yields passionate answers.
As for me and my house: no, LLMs do not reason, and in fact they are architecturally incapable of reasoning.
Let's unpack that.
In a recent video with Hinton and Ng, Ng observes that “to the extent that the [LLM] is building a world model it’s conveying learning some understanding of the world, but that’s just my current view”. Both go on to cast some stones at Bender and Gebru.
Hinton goes on to say that β€œwe believe that this process of creating features of embeddings and then interactions between features is actually understanding” and then β€œonce you’ve taken the raw data of symbol strings and then you can predict the next symbol...
...not by things like trigrams but by huge numbers of features interacting in complicated ways to predict the features of the next word, and from that make a prediction about the probability of the next word; the point is, that is understanding”. He goes on to say that
β€œI believe that is what our brains are doing too”.
I have lots of problems with his POV, and really have major tissue rejection with his last statement.
My POV: understanding involves the creation of a theory that can be tested; one consequence of this position is that it requires an understanding (the recursion is intentional) of the limits of that theory.
The question at hand is one that has bounced around AI for decades. Take a look at this paper by Winograd from 1980: hci.stanford.edu
Remembering that Winograd wrote this in a time when symbolic AI dominated the discourse, he and his colleagues approached a solution from a relatively formal position. Nonetheless, the ground he covers is very much the same ground that contemporary AI is walking.
We computer scientists all too often listen only to our own thoughts and neglect the vibrant dialog taking place among psychologists, cognitive scientists, and yes, even artists and poets; it's important we reach outside our world.
I found this useful: psychologytoday.com, wherein the author observes, "The enemy of knowledge is not ignorance, but rather the illusion of knowledge; the feeling of understanding."
This, then, is the challenge: LLMs give us the feeling of understanding.
Sometimes, that is enough.
But, when it is not enough, and we persist in the illusion, we enter dangerous territory.