Saturday, April 01, 2023

ChatGPT and the illusion of intelligence.



One of the interesting things about Large Language Models is how often they hallucinate.


The factually incorrect statements generated by LLMs such as ChatGPT are entertaining, but potentially risky when one considers practical applications of generative AI. It is understandable, then, that the frequency of hallucinations is taken as an important measure in testing AI safety.


On the other hand, there are many hallucinations on the side of humans, too. When we interact with LLMs, we have the illusion that these systems are genuinely intelligent. A Google researcher was famously convinced that they were conscious, and became a whistleblower in 2022. The perception that LLMs such as GPT4 are intelligent or conscious is likely to turn out to be illusory, especially the latter, if you look into the structure and dynamics behind these systems. Otherwise, how are we to justify the fact that we quite happily treat GPT4 and other LLMs as tools, disregarding any quality of experience in their stream of consciousness?


Let's put aside the problem of consciousness for the time being. When you come to think about it, intelligence is ultimately merely an illusion, and cannot be verified by a series of objective tests. This would be true of humans as well as artificial agents. When we believe an agent to be intelligent, we do not have a definite measure, as in the case of the mass or electrical charge of particles. We only have an impression that the agent in question is intelligent, rather like the hallucinations LLMs famously exhibit from time to time.


The tentative conclusion, therefore, would be that it is all make-believe when it comes to the assessment of intelligence, whether in humans or machines. It is no wonder that the Turing test is no longer regarded as a valid verification of intelligence, since it was based on illusions from the beginning. The truly interesting challenge of deciphering intelligence beyond the realm of illusion starts from here.


Related YouTube video.

Ken Mogi's Street Brain Radio episode 29.


Perceived ability of Large Language Models and illusions of free will and intelligence. 



Monday, March 27, 2023

Reasons behind excellent performance of Large Language Models.



The superb functionality of Large Language Models (LLMs) such as ChatGPT, GPT4, Bard, etc. would be a puzzle even for people who have been optimistic about the potential of artificial intelligence.


The fact that artificial intelligence systems have achieved this level of perceived success at this stage tells us a lot about natural language as well as AI.


Given the way natural language is organized, whatever word sequences we generate or receive are accepted as good, as long as certain grammatical rules and contextual constraints are satisfied. Within that particular domain, anything goes.


So this is presumably how it works. Once the LLMs have studied the statistical patterns in the available texts on the web (which have been generated by humans), they are able to produce endless examples of word sequences satisfying the contextual constraints specified by the prompt, while doing OK grammatically.
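The idea that learned statistics alone can yield grammatical-looking word sequences can be illustrated with a toy sketch. The following is a minimal bigram model, vastly simpler than an actual LLM (which uses neural networks and attention, not raw counts), but it shows the same principle: count transitions in a corpus, then sample sequences that follow those statistics. The corpus and function names here are illustrative, not from any real system.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count word-to-next-word transitions in a toy corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = ["<s>"] + sentence.split() + ["</s>"]
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, rng, max_len=20):
    """Sample a word sequence by following the learned statistics."""
    word, out = "<s>", []
    while len(out) < max_len:
        next_words, weights = zip(*counts[word].items())
        word = rng.choices(next_words, weights=weights)[0]
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]
model = train_bigrams(corpus)
print(generate(model, random.Random(0)))
```

Every sequence such a model produces is locally well-formed by construction, since each transition was observed in human-written text; an LLM does something analogous at a vastly richer scale, which is why its outputs satisfy grammatical and contextual constraints so reliably.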


The fact that AI systems at the present level of sophistication can generate texts perceived to be proper and good is thus a glimpse into the nature of natural language itself. While the achievement is certainly remarkable, it remains to be seen whether it should be considered a hallmark of artificial general intelligence, given the incredible flexibility of the natural language system within a contextual constraint, which the LLMs have studied and exploited.


In addition, the emergent complexity exhibited in the word sequences produced by LLMs would qualify as trajectories in life histories. In life, we make choices and take actions, satisfying certain constraints while remaining interestingly unpredictable. If our choices and actions became too predictable, they would be taken advantage of by other players in the great game of life.


From this point of view, the outputs of LLMs could be taken as exhibitions of life histories by artificial intelligence systems, expressed in the word sequences they generate.


(A short summary of the arguments in Ken Mogi's Street Brain Radio episode 28: The reasons behind the excellence of Large Language Models)