On the road to AGI and/or ASI, there is a genuine problem of how to measure intelligence. IQ assumes a Gaussian distribution and expresses a score as the deviation from the population mean in units of the standard deviation, so it cannot be applied to an AI far removed from the human population.
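As a minimal illustration, assuming the Wechsler-style scaling (mean 100, standard deviation 15), a deviation IQ is a standard score of the form

$$ \mathrm{IQ} = 100 + 15 \cdot \frac{x - \mu}{\sigma}, $$

where $x$ is the individual's test score and $\mu$, $\sigma$ are the mean and standard deviation of the human norming sample. Once the subject lies many standard deviations outside that sample, the score no longer carries meaning, which is precisely the difficulty with an ASI.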
Assessing intelligence purely by the vastness of memory and the speed of calculation would be part of the equation, but not the essential part. Defining AGI and ASI in terms of the tasks they could perform would be helpful, but then we humans might not be able to conceptualize all the relevant tasks.
There is also the problem of Vingean uncertainty and explainable AI (XAI). If ASI ever materializes, it might not be possible for us humans to understand how it functions. It would be difficult to require explainable performance, because that requirement would confine the system to mediocrity within the range of human intelligence.
The only hope would be instrumental convergence. Here, defining AGI and ASI in terms of embodied cognition would prove robust and essential.