Wednesday, January 08, 2025

Collapse of generative AIs?



Some people are starting to predict a collapse of generative AIs, but I am perhaps more skeptical of the AI skeptics than of AI itself. The fact is that nobody, not even the AI gods, knows for sure. Butterfly effects are everywhere. It is difficult to predict the future, not only for AI (predictive AIs have bad track records) but also for humans.


Having said that, it may be argued that the road from intelligence (natural or artificial) to economic prosperity is not straightforward. Intelligent people do not necessarily make a lot of money. The correlation between IQ and income, if any, is very weak. Even if AI makes great progress from here, it does not necessarily follow that individuals or companies employing AI would be more productive.


The key missing link would be the embodiment of intelligence. Even if there is high intelligence, there need to be cleverly crafted schemes to make it socially and economically relevant. A newly discovered theorem in mathematics, for example, might provide brand new encryption schemes. In order for that to materialize, certain sets of requirements need to be satisfied. The same goes for the road from AI to new cures for cancer, innovative ways to curb aging, and the realization of nuclear fusion, all of which, needless to say, would provide huge utility and result in economic gains.


The benefits of AI are indirect compared to new energy resources. Right now, humans are converting a lot of energy into a huge amount of compute in the hope of achieving AGI and ASI. Even if humans succeed in that feat, ways to employ the superb intelligence to increase utility still need to be worked out. The societal and economic embodiment of AI toward increased utility would be part of AI alignment schemes in general, and one of the most crucial challenges of our time.

Tuesday, January 07, 2025

Why Elon Musk is so powerful.



Mr Musk has been a marvel for obvious reasons, but in the last few weeks his influence has grown out of proportion. The bromance with Mr Donald Trump, the President-elect, is certainly a factor, but that alone cannot explain the Musk phenomenon that is sweeping the globe now.


Wise people have always argued that it is not that AI would take over humanity. Rather, humans empowered by AI would overwhelm people less fortunate. The Musk phenomenon, bolstered by his success with Tesla and SpaceX, was given a crucial boost by his purchase of Twitter (now converted to X). The in-house Grok is always on X, and X is evidently the embodiment of AI-powered dominance of the world, at least somewhere on the roadmap. Mr Musk was one of the founders of OpenAI. His new startup xAI, with much more to come, together with his track record of serial successes, gives Mr. Musk power in reality and in imagination. If he could win in choosing sides in the American Presidential election, an educated guess would suggest that he will win the AI arms race toward AGI and ASI as well, or at least be on the winning side.


So as Mr. Musk goes about the business of interfering with European politics, even suggesting that King Charles dissolve Parliament, there is an image of a man stroking a trademark white cat. AI would not conquer humans. People smart enough to employ AI would conquer humans. Mr. Musk is at the right place at the right time with the right track record. How the rest of the story turns out remains to be seen.

Monday, January 06, 2025

We don't understand what the language game is.



In board games such as chess, go, and shogi, AIs have beaten human champions. Indeed, today, nobody doubts that AIs have the edge over humans. The battle between AIs and humans is over in these fields.


When it comes to Large Language Models, the situation is not so clear. Although people are generally under the impression that the Turing test is now probably moot, especially because you can formulate the arguments in any way you prefer, there is no clear measure to judge whether LLMs are doing the job better than humans.


The fundamental problem is our lack of understanding of the nature of the language game. Although Ludwig Wittgenstein described it in passing in his Philosophical Investigations, the description is far from adequate. To this day, we do not have a clear model of what the language game is.


We humans don't know what the language game is exactly, and yet we engage in it every day. Large Language Models are being developed and employed without a definite idea of what cognitive function they are addressing.

Sunday, January 05, 2025

How to measure the intelligence of AGI and/or ASI?


As we go down the road to AGI and/or ASI, there is a genuine problem of how to measure intelligence. IQ is based on the assumption of a Gaussian distribution, expressing deviation from the mean as a ratio to the standard deviation, so it cannot be applied to AIs far removed from humans.
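
The deviation-based definition of IQ can be sketched in a few lines of Python (a hypothetical illustration, using the conventional mean of 100 and standard deviation of 15; the function name and sample numbers are my own):

```python
def iq_score(raw, population_mean, population_sd):
    """Map a raw test score to an IQ via its z-score (deviation in SD units)."""
    z = (raw - population_mean) / population_sd
    return 100 + 15 * z

# For a population with mean raw score 50 and SD 10:
print(iq_score(50, 50, 10))    # at the mean -> 100.0
print(iq_score(80, 50, 10))    # three SDs above -> 145.0
# An entity far outside the human distribution breaks the scale:
print(iq_score(5000, 50, 10))  # a nonsensical 7525.0
```

The last line illustrates the point: the formula still returns a number for an out-of-distribution test-taker, but that number no longer corresponds to any meaningful percentile of the human population.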


Assessing intelligence purely by the vastness of memory and the speed of calculation would be a part of the equation, but not the essential part. Defining AGI and ASI in terms of the tasks they could perform would be helpful, but then we humans might not be able to conceptualize all the relevant tasks.


There is also the problem of Vingean uncertainty and explainable AI (XAI). If ASI ever materializes, it might not be possible for us humans to understand its functionality. It would be difficult to require explainable performance, because that would mean mediocrity within the range of human intelligence.


The only hope would be instrumental convergence. Here, defining AGI and ASI in terms of embodied cognition would prove robust and essential.