There is a fundamental problem in the concept of Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI).
It is easy to conceive of a system with vast computational capabilities. However, at any particular time and under any specific context, a hypothetical AGI system can execute only one computation; all other possible computations exist only counterfactually.
When it comes to designing the "personality" of an AGI, in line with, for example, Eliezer Yudkowsky's Friendly AI concept, the system would instantiate only one of the possible configurations in the vast personality space at any given time.
Thus, an AGI can never be truly general, given the physical constraints of space and time.
Indeed, Spinoza's argument about the infinity of God in his magnum opus, the Ethica, beautifully addresses this issue. In that historic treatise, Spinoza holds that God, the absolutely infinite being, has nothing to do with intelligence or personality, which by their nature necessitate finite configurations of states.
If an AGI system were truly general, it would likewise have nothing to do with intelligence; the same holds for ASI. As things stand, an AGI or ASI is likely to exist only as a sharply tuned specialist machine, rather than as the ubiquitous and omnipotent system conventionally imagined.
We would perhaps do well to sort these things out before we set off on the supposed race to AGI, or even as we run on the competition track.