As we travel the road toward AGI and/or ASI, there is a genuine problem of how to measure intelligence. IQ assumes a Gaussian distribution: a score expresses deviation from the population mean as a ratio to the standard deviation, so it cannot meaningfully be applied to an AI far removed from the human population.
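The breakdown can be sketched numerically. The snippet below is only an illustration, assuming the conventional scale of mean 100 and standard deviation 15; the function names are hypothetical, not a standard API.

```python
import math

# IQ maps a z-score (deviation from the human mean, in standard-deviation
# units) onto a scale with mean 100 and SD 15 (a common convention).
HUMAN_MEAN = 100.0
HUMAN_SD = 15.0

def iq_from_z(z: float) -> float:
    """Convert a standard-normal z-score into a conventional IQ score."""
    return HUMAN_MEAN + HUMAN_SD * z

def rarity(z: float) -> float:
    """Fraction of the human population expected to exceed this z-score,
    under the Gaussian assumption (normal survival function)."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Within the human range the numbers are meaningful:
print(iq_from_z(2.0), rarity(2.0))    # IQ 130, roughly 1 person in 44
# Extrapolated far beyond it, they become vacuous: a nominal "IQ 700"
# corresponds to a rarity with no humans left to compare against.
print(iq_from_z(40.0), rarity(40.0))
```

The second call shows the problem: the arithmetic still produces a number, but the rarity it implies is smaller than one in the number of humans who have ever lived, so the score no longer measures anything.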
Assessing intelligence purely by vastness of memory and speed of calculation would capture part of the picture, but not the essential part. Defining AGI and ASI in terms of the tasks they could perform would be helpful, but then we humans might not be able to conceptualize all the relevant tasks.
There is also the problem of Vingean uncertainty and explainable AI (XAI). If ASI ever materializes, it might not be possible for us humans to understand its inner workings. It would be difficult to require explainable performance, because that requirement would confine the system to mediocrity within the range of human intelligence.
The only hope would be instrumental convergence. Here, defining AGI and ASI in terms of embodied cognition may prove robust and essential.



