This paper argues that LLMs do not genuinely understand language, despite popular opinion suggesting that they do. Specifically, it highlights that LLMs are statistical models, and that the limit of their 'understanding' of language is learning how words relate to each other, not why they relate to each other.

This is an early and incomplete paper that appears to have been subsequently completed as a separate paper, "Stochastic LLMs do not Understand Language: Towards Symbolic, Explainable and Ontologically Based LLMs", which is also attached.

Saba, W.S. (2024) 'LLMs' Understanding of Natural Language Revealed'. arXiv. Available at: https://doi.org/10.48550/arXiv.2407.19630.

Saba, W.S. (2023) 'Stochastic LLMs do not Understand Language: Towards Symbolic, Explainable and Ontologically Based LLMs'. arXiv. Available at: https://doi.org/10.48550/arXiv.2309.05918.