Tuesday, August 08, 2023

Born and raised in Japan, I am naturally aware of the destruction that nuclear weapons bring, as exemplified by the tragedies of Hiroshima and Nagasaki. I would very much like to see them abolished, and at the same time I can see how difficult the process would be. Once the powers that be possess such capabilities of mass destruction, it is hard to persuade them to abandon the weapons. British comedian Diane Morgan, in character as Philomena Cunk, cried bitterly when she learned that humanity has not abolished nuclear weapons.


Quite MAD, isn't it? We are so mad that we need comedy to face reality.

We are not alone, and perhaps experiments on the difficulty of abolishing nuclear weapons have been run at the cosmic scale. When considering the Fermi Paradox, I have always thought that the apparent absence of intelligent extraterrestrial life is due to the short life expectancy of advanced civilizations. Once they reach a stage where they can produce nuclear weapons, they implode, annihilating themselves through unavoidable contingencies. Perhaps earthlings will follow suit soon enough if we are not careful.

Abolishing nuclear weapons would require a serious examination of the game theoretic logic behind Mutually Assured Destruction. It is literally MAD, as the acronym suggests. Game theory is great in its own way, but it does not scale well when it comes to ethics.
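The MAD logic can be made concrete with a toy sketch. The payoff numbers below are invented for illustration only (a standard prisoner's-dilemma shape, not anything from the post): each power chooses to Disarm or Arm, mutual disarmament is the best joint outcome, yet the only profile from which neither side gains by deviating alone is mutual armament.

```python
# Hypothetical deterrence game. Payoffs are (row player, column player);
# the numbers are illustrative assumptions, not real-world estimates.
payoffs = {
    ("Disarm", "Disarm"): (3, 3),  # mutual disarmament: best joint outcome
    ("Disarm", "Arm"):    (0, 5),  # unilateral disarmament gets exploited
    ("Arm",    "Disarm"): (5, 0),
    ("Arm",    "Arm"):    (1, 1),  # MAD: stable, but worse for both
}
strategies = ["Disarm", "Arm"]

def pure_nash_equilibria(payoffs, strategies):
    """Return profiles where neither player can gain by deviating alone."""
    equilibria = []
    for r in strategies:
        for c in strategies:
            u_r, u_c = payoffs[(r, c)]
            row_ok = all(payoffs[(r2, c)][0] <= u_r for r2 in strategies)
            col_ok = all(payoffs[(r, c2)][1] <= u_c for c2 in strategies)
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(payoffs, strategies))  # [('Arm', 'Arm')]
```

The equilibrium search lands on (Arm, Arm) alone, even though (Disarm, Disarm) is better for both players: a compact picture of why the logic, taken on its own terms, traps its agents.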

To me, game theory has always appeared rather superficial in its premise that agents behave according to some evaluation functions. It is useful, but it is obviously not the whole story.

We don't have to cite Dostoevsky to call out the incredible shallowness of game theoretic thinking, but it is still difficult to make humans behave any differently in a world increasingly dominated by AI think, both theoretically and emotionally. I am a great fan of present AI developments, and I am avidly interested in AI alignment problems. At the same time, I can see how this whole process has trapped us in a rather nasty rabbit hole, and we probably need to start thinking seriously about ways out, or even ways further in, so that we can get somewhere else through some wormholes of concepts.

1 comment:

Ken Mogi said...

I thought the Philomena Cunk scene was truly great. We should all be ashamed.