Saturday, April 01, 2023

The idea of copying one's own consciousness appears doubtful, if not outright ridiculous.




There are people who almost casually endorse mind-uploading or whole brain emulation as methods for copying self-consciousness, which is a real puzzler for this author. 


For me, the idea of copying one's own consciousness appears doubtful, if not outright ridiculous.


In the latest episode of my Street Brain Radio series, recorded while walking the streets of Tokyo, I explored the reasons why I am a skeptic in this matter.


In a nutshell, self-consciousness would depend on metacognition, and metacognition would not be possible to copy. In addition, when information is in the conscious domain (as opposed to the unconscious domain), metacognitive processes are again essential, so that copying such information is anything but straightforward.


Ken Mogi's Street Brain Radio is a poor man's answer to Lex Fridman's podcast, which is of course brilliant. I like this way of exploring fundamental questions at length, without paying too much attention to the potential audience.


Related video.


The impossibility of copying self-consciousness and metacognitive information.







ChatGPT and the illusion of intelligence.



One of the interesting things about Large Language Models is that they produce a lot of hallucinations.


The factually incorrect statements generated by LLMs such as ChatGPT are entertaining, but potentially risky when one considers practical applications of generative AI. It is understandable, then, that the frequency of hallucinations is taken as an important measure in testing AI safety.
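
How such a frequency might be measured is easy to sketch, at least in toy form. The following Python fragment simply counts the fraction of model answers that contradict a set of reference facts. The questions, reference answers, and the ask_model stub are all made up for illustration; real safety benchmarks are of course far more elaborate.

    def ask_model(question: str) -> str:
        """A stand-in for a call to an LLM; purely hypothetical."""
        canned = {
            "What is the capital of Australia?": "Sydney",  # a typical hallucination
            "Who wrote 'The Way of Nagomi'?": "Ken Mogi",
        }
        return canned.get(question, "I don't know")

    # Reference facts against which the model's answers are checked.
    reference = {
        "What is the capital of Australia?": "Canberra",
        "Who wrote 'The Way of Nagomi'?": "Ken Mogi",
    }

    # Count the answers that disagree with the reference.
    wrong = sum(
        ask_model(q).strip().lower() != a.strip().lower()
        for q, a in reference.items()
    )
    print(f"hallucination rate: {wrong / len(reference):.0%}")  # 50% on this toy set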


On the other hand, there are many hallucinations on the human side, too. When we interact with LLMs, we have the illusion that these systems are genuinely intelligent. A Google researcher was famously convinced that one such system was conscious, and became a whistleblower in 2022. The perception that LLMs such as GPT4 are intelligent or conscious (especially the latter) is likely to turn out to be illusory if you look into the structure and dynamics behind these systems. Otherwise, how are we to justify the fact that we quite happily treat GPT4 and other LLMs as tools, disregarding the quality of experience in their stream of consciousness?


Let's put aside the problem of consciousness for the time being. When you come to think of it, intelligence is ultimately an illusion, and cannot be verified by any series of objective tests. This would be true of humans as well as artificial agents. When we believe an agent to be intelligent, we do not have a definite measure, as in the case of the mass or electric charge of particles. We only have an impression that the agent in question is intelligent, rather like the hallucinations LLMs famously exhibit from time to time.


The tentative conclusion, therefore, would be that the assessment of intelligence is all make-believe, whether it concerns humans or machines. It is no wonder that the Turing test is no longer regarded as a valid verification of intelligence, since it was based on illusions from the beginning. The truly interesting challenge of deciphering intelligence beyond the realm of illusion starts from here.


Related YouTube video.

Ken Mogi's Street Brain Radio episode 29.


Perceived ability of Large Language Models and illusions of free will and intelligence. 



Monday, March 27, 2023

Reasons behind the excellent performance of Large Language Models.



The superb functionality of Large Language Models (LLMs) such as ChatGPT, GPT4, and Bard would be a puzzler even for people who have been optimistic about the potential of artificial intelligence.


The fact that artificial intelligence systems have achieved this level of perceived success at this stage tells us a lot about natural language as well as AI.


Natural language is organized in such a way that, no matter what word sequences we generate or receive, they are accepted as good so long as certain grammatical rules and contextual constraints are satisfied. Within that particular domain, anything goes.


So this is presumably how it works. Once the LLMs have studied the statistical patterns in the texts available on the web (texts generated by humans), they can produce endless examples of word sequences that satisfy the contextual constraints specified by the prompt, while doing OK grammatically.
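
As a caricature of this "study the statistics, then generate within constraints" idea, here is a toy bigram model in Python. It merely counts which words follow which in a tiny made-up corpus and then samples new sequences from those counts. Real LLMs use deep neural networks trained on vast corpora, but the spirit is similar.

    import random
    from collections import defaultdict

    # A tiny made-up corpus standing in for "the available texts on the web".
    corpus = (
        "the cat sat on the mat . "
        "the dog sat on the rug . "
        "the cat chased the dog ."
    ).split()

    # "Study" the statistics: record which words follow which.
    transitions = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev].append(nxt)

    def generate(start="the", max_words=10):
        words = [start]
        for _ in range(max_words):
            followers = transitions.get(words[-1])
            if not followers:
                break
            # Sampling from the list reproduces the observed frequencies.
            words.append(random.choice(followers))
        return " ".join(words)

    print(generate())  # e.g. "the dog sat on the mat ." -- novel, yet pattern-abiding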


The fact that AI systems with the present level of sophistication can generate texts perceived to be proper and good is thus a glimpse into the nature of natural language itself. While the achievement is certainly remarkable, it remains to be seen whether it should be considered a hallmark of artificial general intelligence, given the incredible flexibility of the natural language system within contextual constraints, a flexibility the LLMs have studied and exploited.


In addition, the emergent complexity exhibited in the word sequences produced by LLMs would qualify as trajectories in life histories. In life, we make choices and take actions that satisfy certain constraints while remaining interestingly unpredictable. If our choices and actions became too predictable, they would be taken advantage of by other players in the great game of life.


From this point of view, the outputs of LLMs could be taken as exhibitions of life histories by artificial intelligence systems, expressed in the word sequences they generate.
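
Incidentally, in actual LLMs this balance between constraint and surprise is tuned quite literally, with a parameter called the sampling temperature. The toy Python sketch below (the word scores are made up for illustration) shows the mechanism: a low temperature makes the choices predictable, a high temperature makes them surprising.

    import math
    import random

    # Made-up next-word scores; higher means the model finds the word more likely.
    next_word_logits = {"mat": 2.0, "rug": 1.0, "moon": -1.0}

    def sample(logits, temperature):
        # Dividing by the temperature sharpens (T < 1) or flattens (T > 1)
        # the distribution before softmax normalisation.
        scaled = {w: v / temperature for w, v in logits.items()}
        z = sum(math.exp(v) for v in scaled.values())
        probs = {w: math.exp(v) / z for w, v in scaled.items()}
        # Draw one word according to the resulting probabilities.
        r, acc = random.random(), 0.0
        for word, p in probs.items():
            acc += p
            if r <= acc:
                return word
        return word  # guard against floating-point rounding

    print(sample(next_word_logits, 0.1))  # almost always "mat"
    print(sample(next_word_logits, 2.0))  # now and then "moon"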


(A short summary of the arguments in Ken Mogi's Street Brain Radio episode 28: The reasons behind the excellence of Large Language Models) 






Sunday, March 26, 2023

On the silence of NHK in the wake of the BBC documentary on Johnny Kitagawa: It's yodomi, not nagomi.



The silence of NHK in the wake of the BBC documentary on Johnny Kitagawa, founder of the largest boy-band talent agency in Japan, has been my greatest disappointment in the public broadcaster so far.


I do not want to recount the whys and hows of the scandal here, as they are too cumbersome and miserable. I also do not want to describe how the practice of nepotism is letting down the Japanese entertainment industry as a whole.


I just want to clarify one thing. As the author of The Way of Nagomi, I would like to declare that the silence of NHK, with its unjustifiable deference to the unfair practices of Johnny and Associates, is not nagomi at all.


There is quite a different Japanese word for this lack of professional journalism. Yodomi. NHK's attitude in this matter is yodomi, not nagomi. 

FYI, yodomi refers to stagnation, lifelessness, blandness, dirt, and bad smells, such as you would find in a gutter full of garbage. Nagomi is more life-affirming, based on good will, with an emphasis on humane values. It would have been nagomi for NHK to report on the scandal fairly and rigorously, while continuing to cast the talents from Johnny and Associates in an appropriate manner.


The way of Nagomi is much deeper than the shallow, cowardly, and clumsy yodomi exhibited by NHK on this matter. Shameful.


As regards the Johnny Kitagawa scandal, the people involved in the silence of NHK are all in the gutter. But I do hope that some of them are looking at the stars.