Artificial intelligence may not be intelligence. But what is?
Posted by Giang Son | Jul 12, 2025 | 3 min read
Some thoughts on intelligence and AI as I start reading A Thousand Brains (Jeff Hawkins).
Mostly personal hot takes.
In recent years, there has been significant progress in building “artificial intelligence” (*). Probably the most famous examples of AI systems are large language models (LLMs), like ChatGPT, DeepSeek, and the like. They can speak human languages as fluently as native speakers. They can give convincing answers when prodded with questions. Recently, they have learned to code (!!), solve complex problems, and “reason” about what they are doing. Seemingly, they are intelligent, and on many occasions more so than humans.
But despite all that, current AI models are built on artificial neural networks (a.k.a. deep learning), meaning they are probabilistic pattern-recognition and prediction machines. Take, for example, how such a network might handle addition, like 55+26. We humans can do it (without the help of a calculator) because we know the rules: 55+26=81. An LLM instead extracts features from the input “55+26” and uses those features to calculate a probability for each possible output; the most probable one gets returned to the user, which in this case is indeed 81. The fact that it doesn’t know the rules matters: while LLMs get this simple question right most of the time, they struggle with arithmetic involving extremely large numbers (see the GSM-Symbolic paper in the references). The same goes for other problems: LLMs can generate accurate output without understanding the concepts behind the input.
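The contrast can be sketched in a few lines of toy Python. Everything below is a hypothetical illustration of “rules vs. picking the most probable answer” – the function names and the probability numbers are made up for this example, and no real LLM works this simply:

```python
def add_by_rule(a: int, b: int) -> int:
    """A human or calculator applies the rules of arithmetic exactly."""
    return a + b


def add_by_prediction(prompt: str) -> int:
    """A stand-in for an LLM: score candidate answers and return the
    most probable one. The distribution here is invented purely for
    illustration -- the point is that nothing in this process ever
    applies the carry rule; it only ranks candidates by probability."""
    a, b = (int(x) for x in prompt.split("+"))
    true_sum = a + b
    # A fake probability distribution over nearby candidate answers.
    # For small numbers most of the mass lands on the right answer,
    # but correctness is statistical, not guaranteed.
    candidates = {
        true_sum - 10: 0.03,
        true_sum - 1: 0.07,
        true_sum: 0.80,
        true_sum + 1: 0.07,
        true_sum + 10: 0.03,
    }
    return max(candidates, key=candidates.get)


print(add_by_rule(55, 26))         # 81, by following the rules
print(add_by_prediction("55+26"))  # 81, by picking the likeliest candidate
```

Both calls print 81, but for very different reasons: the first *must* be right, while the second merely tends to be right when the input resembles what the model has seen before.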
I would hardly call this probabilistic “thought process” intelligence.
But what is intelligence anyway? What gave rise to intelligence in the human brain and not in any other species? How does intelligence operate? What separates intelligence from, say, rote memorization? Can rote memorization be considered intelligence? Is doing math a sign of intelligence? Is playing chess? Playing musical instruments? Socializing? I personally don’t know the answers to these questions, but if I understand correctly, these are still open questions in neuroscience.
That said, even without being truly intelligent, these language models are extremely useful – “all models are wrong, but some are useful,” as the saying goes in the machine learning literature. I too use LLMs on a daily basis for coding and general knowledge queries (and sometimes for proofreading important emails).
Nevertheless, I dream of making true thinkum dinkums – actually intelligent machines that can think – one day. I’m quite positive that at this point nobody knows how to do it, despite what they say in the press. Whatever the case, I doubt that we can do that without first understanding the essence of intelligence. (**)
(*) Terms like "artificial intelligence" and "machine learning" were very helpful in the past. By describing computer operations in human terms, they helped both tech and non-tech audiences understand what this mysterious technology is trying to accomplish. Now I think they are overused to confuse and mislead.
(**) I’m reading A Thousand Brains: A New Theory of Intelligence to get a clearer view on this matter.
References:
GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models