Saturday, April 15, 2023

It would be important to consider the evolution of artificial intelligence systems in terms of group dynamics



For now, Artificial Intelligence systems seem to be developed as stand-alone entities, while historically the evolution of biological species happened within ecosystems. Instances such as ChatGPT are conceived as, and expected to prove, excellent on their own. Their dependence on the training corpus, however, embeds them in an ecosystem.


It would be important to consider the evolution of artificial intelligence systems in terms of group dynamics, as happened in the case of biological systems, as opposed to the arms race metaphor typically employed in discussions about AI today, for example in the perceived confrontation between Elon Musk and Sam Altman.


Related video. Towards a society of artificial intelligence.





Monday, April 10, 2023

Many things in Japan would perhaps go the way of Hachiko.

In Japan, for now, wherever you go, there are a lot of people. On the Shinkansen train, on the streets, at tourist attractions, everyone everywhere all at once.

With the covid-19 pandemic, tourism came to a standstill, as in many countries. Now tourists are back with a vengeance. A noticeable change is the proportion of visitors from abroad compared to domestic tourists.


Perhaps this is a vision of the future for Japan. In comparison to other global destinations such as London, New York, and Paris, Tokyo still feels like, and is, a place where the effects of globalism are seen only in mild signs, unless, of course, you go to the Shibuya Crossing. The Hachiko statue nearby is now always busy with tourists from abroad queueing up for a chance to take a photo beside the famous Akita dog.


When I was a college student I could not imagine a day when the world would come to meet and greet Hachiko. It was a domestic presence then. Now many things in Japan would perhaps go the way of Hachiko. What a time to live in, quite apart from the rapid development of artificial intelligence. 


Friday, April 07, 2023

High intelligence is a double-edged sword



The conventional wisdom would be that if you have high intellect you would be more adaptive to a wide range of environments. Homo sapiens has evolved a highly developed intelligence, and it surely correlates with the fact that humans have come to dominate a wide range of environments, from the tropics to the north and south poles, and to the International Space Station and further beyond, perhaps even to Mars.

However, although intelligence has surely helped humans to be more robustly adaptive in a wide range of environments, it has also made the human existence less robust and stable. The possibility of human extinction through total nuclear war is just one example.

It could be argued therefore that high intelligence is a double-edged sword. On the one hand it can help make the system more robust. On the other, it introduces new vulnerabilities, as the system easily scales out of its comfort zone.

It is an interesting question whether incorporating artificial emotion or consciousness in a system would make it more or less robust. Memorably, Eliezer Yudkowsky remarked in a recent Lex Fridman podcast that endowing an AI with emotion would be terrible. Artificial consciousness might make an AI more stable, through the incorporation of metacognitive processes realizing the veto function, which is indispensable in human ethics.

The jury is still out.

In a recent episode of Ken Mogi's Street Brain Radio I discussed these pressing issues in some detail.


Related video:


High intelligence, artificial or natural, becomes unstable. Can consciousness help that? 





Thursday, April 06, 2023

The arms race happens between people, not AI systems.


For some time now people have been discussing existential risks for humanity from the development of artificial intelligence. Although there would be genuine vulnerabilities due to the general disruption that intelligence-related technologies would cause, especially in military operations, the tendency to depict AI, AGI in particular, in the light of a possible takeover of human existence is not only misleading but also potentially damaging.


Typically, when people discuss doomsday scenarios, they are projecting their own psychology onto the machine. It is not AI that would try to overtake the world. People have desires and ambitions about exerting control over others, and artificial intelligence systems are regarded as tools to realize their obsessions.


The arms race happens between people, not AI systems. The alpha-male projection of aggression onto the coming AGI is not only misplaced but also damaging to the neutrality of the technology.


Related video.


The existential risk of Artificial Intelligence only comes from human nature and imagination





Saving Japan



In the last few years I have written two books on Japan: one on ikigai and another on nagomi. With these attempts, I have hopefully presented the best of the tradition of the land of the rising sun.


As I have written in the small print sections of these books, I had no intention of claiming that Japan is the best, or indeed, unique among nations on the globe. Each culture has its own merits and strengths, juxtaposed with shortcomings and weaknesses. Japan is far from perfect, especially when it comes to gender equality, for example.


In a way, with the ikigai and nagomi books I have presented a vision of what Japan could be, could have been, and would be, in addition to what it actually is. I believe realities can be seen from a new and hope-giving perspective, when you have the perception and good will to achieve that.


I really admired the film Saving Mr. Banks. It told the true story behind Mary Poppins. As a lover of the excellent musical film, I believe in the alchemy of transformation from the actual Mr. Banks to the fictional character, depicting what he could have been, inspiring people. 


In the same vein, I wanted to attempt something like Saving Japan, while remaining true to the essential nature of the nation. Sometimes you see the real self better from a distance.


Saturday, April 01, 2023

The idea of copying the consciousness of oneself appears to be doubtful, if not outright ridiculous.




There are people who almost casually endorse mind-uploading or whole brain emulation as methods for copying self-consciousness, which is a real puzzler for this author. 


For me, the idea of copying the consciousness of oneself appears to be doubtful, if not outright ridiculous.


In the latest episode of my Street Brain Radio series I explored the reasons why I am a skeptic in this matter while walking on the streets of Tokyo.


In a nutshell, self-consciousness would depend on metacognition, which would not be possible to copy. In addition, when information is in the conscious domain (as opposed to the unconscious domain), metacognitive processes would again be essential, so that copying the information is not straightforward.


Ken Mogi's Street Brain Radio is a poor man's answer to Lex Fridman's podcast, which is of course brilliant. I like that way of exploring fundamental questions at length, without paying too much attention to the potential audience.


Related video.


The impossibility of copying self-consciousness and metacognitive information.







ChatGPT and the illusion of intelligence.



One of the interesting things about Large Language Models is that they produce a lot of hallucinations.


The factually incorrect statements generated by LLMs such as ChatGPT are entertaining, but potentially risky when one considers practical applications of generative AI. It is understandable then that the frequency of hallucinations is taken as an important measure in testing AI safety.


On the other hand, there are many hallucinations on the side of humans, too. When we interact with LLMs, we have the illusion that these systems are genuinely intelligent. A Google researcher famously was convinced that they were conscious and became a whistleblower in 2022. The perception that LLMs such as GPT4 are intelligent or conscious is likely to turn out to be illusory, especially the latter, if you look into the structure and dynamics behind these systems. Otherwise, how are we to justify the fact that we are quite happily treating GPT4 and other LLMs as tools, disregarding their quality of experience in their stream of consciousness?


Let's put aside the problem of consciousness for the time being. When you come to think about it, intelligence is ultimately an illusion, and cannot be verified by a series of objective tests. This would be true of humans as well as artificial agents. When we believe an agent to be intelligent, we do not have a definite measure, as in the case of the mass or electric charge of particles. We only have an impression that the agent in question is intelligent, rather like the hallucinations LLMs famously exhibit from time to time.


The tentative conclusion therefore would be that it is all make-believe when it comes to the assessment of intelligence, concerning humans or otherwise. It is no wonder that the Turing test is no longer regarded as a valid verification of intelligence, since it was based on illusions from the beginning. The truly interesting challenge to decipher intelligence beyond the realm of illusion starts from here.


Related youtube video.

Ken Mogi's Street Brain Radio episode 29.


Perceived ability of Large Language Models and illusions of free will and intelligence. 



Monday, March 27, 2023

Reasons behind excellent performance of Large Language Models.



The superb functionalities of Large Language Models (LLMs) such as ChatGPT, GPT4, and Bard would be a puzzler even for people who have been optimistic about the potential of artificial intelligence.


The fact that at this stage artificial intelligence systems have achieved this level of perceived success would tell a lot about natural language as well as AI.


Natural language is organized so that whatever word sequences we generate or receive are accepted as good, as long as certain grammatical rules and contextual constraints are satisfied. Within that particular domain, anything goes.


So this is presumably how it works. Once the LLMs have studied the statistical patterns in the texts available on the web (which have been generated by humans), they would be able to produce endless examples of word sequences satisfying the contextual constraints specified by the prompt, while doing OK grammatically. 
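The generative process sketched above can be illustrated with a toy model. The snippet below, a deliberately minimal stand-in for the vastly larger statistical machinery of an actual LLM, counts which words follow which in a tiny invented corpus, then samples new sequences from those counts. Every adjacent pair it emits was observed in training, which is why the output stays locally well-formed while still varying. The corpus and names here are invented for illustration only.

```python
import random
from collections import defaultdict, Counter

# A tiny invented training corpus (real LLMs learn from web-scale text).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Learn the statistical pattern: for each word, count its successors.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length, seed=0):
    """Sample a word sequence by repeatedly drawing a successor
    in proportion to how often it followed the current word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows[words[-1]]
        if not options:  # dead end: no observed successor
            break
        choices, weights = zip(*options.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 6))
```

Since every step only picks successors seen in the corpus, the output respects the local "grammar" of the training text, without the model understanding anything, which is the point of the argument above, writ small.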


The fact that AI systems with the present level of sophistication can generate texts perceived to be proper and good is thus a glimpse into the nature of natural language itself. While the achievement is certainly remarkable, it remains to be seen whether it would be considered a hallmark of artificial general intelligence, given the incredible flexibility of the natural language system within a contextual constraint, which the LLMs have studied and exploited.


In addition, the emergence of complexity exhibited in the word sequences produced by LLMs would qualify as trajectories in life histories. In life, we make choices and take actions, satisfying certain constraints while remaining interestingly unpredictable. If the choices and actions become too predictable, they would be taken advantage of by other players in the great game of life.


From this point of view, the outputs of LLMs could be taken as exhibitions of life histories by artificial intelligence systems in terms of the word orders generated.


(A short summary of the arguments in Ken Mogi's Street Brain Radio episode 28: The reasons behind the excellence of Large Language Models) 






Sunday, March 26, 2023

On the silence of NHK in the wake of the BBC documentary on Johnny Kitagawa: It's yodomi, not nagomi.



The silence of NHK in the wake of the BBC documentary on Johnny Kitagawa, founder of the largest boy-band talent agency in Japan, was my greatest disappointment with the public broadcaster so far in my life.


I do not want to recount the whys and hows of the scandal here, as they are too cumbersome and miserable. I also do not want to describe how the practice of nepotism is letting down the Japanese entertainment industry as a whole. 


I just want to clarify one thing. As the author of The Way of Nagomi, I would like to declare that the silence of NHK, with its unjustifiable deference to the unfair practices of Johnny and Associates, is not nagomi at all. 


There is quite a different Japanese word for this lack of professional journalism. Yodomi. NHK's attitude in this matter is yodomi, not nagomi. 

FYI, yodomi refers to stagnation, lack of life, blandness, dirt, and bad smell, as you would find in a gutter full of garbage. Nagomi is more pro-life, based on good will, with an emphasis on humane values. It would have been nagomi for NHK to report on the scandal fairly and rigorously, while casting the talents from Johnny and Associates in an appropriate manner. 


The way of Nagomi is much deeper than the shallow, cowardly, and clumsy yodomi exhibited by NHK on this matter. Shameful.


As regards the scandal about Johnny Kitagawa, people involved in the silence of the NHK are all in the gutter. But I do hope that some of them are looking at the stars.