Saturday, April 01, 2023

ChatGPT and the illusion of intelligence.



One of the interesting things about Large Language Models (LLMs) is that they produce a lot of hallucinations.


The factually incorrect statements generated by LLMs such as ChatGPT are entertaining, but potentially risky when one considers practical applications of generative AI. It is understandable, then, that the frequency of hallucinations is taken as an important measure in testing AI safety.
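
For readers curious how such a frequency could be quantified in practice, here is a minimal sketch in Python, assuming the generated statements have already been broken into individual claims. The checker function and the toy data below are purely hypothetical placeholders for illustration, not a description of any actual safety benchmark.

def is_factually_correct(claim, reference):
    """Hypothetical checker: looks a claim up in a reference table.
    Real evaluations rely on human annotators or curated datasets."""
    return reference.get(claim, False)

def hallucination_rate(claims, reference):
    """Fraction of generated claims not supported by the reference."""
    if not claims:
        return 0.0
    unsupported = sum(1 for c in claims if not is_factually_correct(c, reference))
    return unsupported / len(claims)

# Toy example: one of the two claims is unsupported.
claims = [
    "Paris is the capital of France",
    "The Moon is made of green cheese",
]
reference = {"Paris is the capital of France": True}
print(hallucination_rate(claims, reference))  # prints 0.5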


On the other hand, there are plenty of hallucinations on the human side, too. When we interact with LLMs, we have the illusion that these systems are genuinely intelligent. A Google researcher was famously convinced that such a system was conscious, and became a whistleblower in 2022. The perception that LLMs such as GPT-4 are intelligent or conscious is likely to turn out to be illusory, especially the latter, if you look into the structure and dynamics behind these systems. Otherwise, how are we to justify the fact that we quite happily treat GPT-4 and other LLMs as tools, disregarding the quality of experience in their stream of consciousness?


Let's put aside the problem of consciousness for the time being. When you come to think about it, intelligence is ultimately just an illusion, and cannot be verified by a series of objective tests. This would be true of humans as well as of artificial agents. When we believe an agent to be intelligent, we do not have a definite measure, as in the case of the mass or electrical charge of a particle. We only have an impression that the agent in question is intelligent, rather like the hallucinations LLMs famously exhibit from time to time.


The tentative conclusion, therefore, would be that it is all make-believe when it comes to the assessment of intelligence, whether in humans or in machines. It is no wonder that the Turing test is no longer regarded as a valid verification of intelligence, since it was based on illusions from the beginning. The truly interesting challenge of deciphering intelligence beyond the realm of illusion starts from here.


Related YouTube video:

Ken Mogi's Street Brain Radio episode 29.


Perceived ability of Large Language Models and illusions of free will and intelligence. 


