Sunday, August 13, 2023

AI doomerism might actually be a form of the end-of-history illusion.



On the web, I have come across arguments about whether discussions of AI risk are culturally conditioned. Specifically, there have been suggestions that the so-called AI doomers, people who are concerned that the end of humanity is near because of AI, tend to come from a certain corner of the religious spectrum. While there may be some high-profile cases that superficially suggest such a correlation, I don't think it is accurate, or indeed appropriate, to link opinions about existential risk to specific religious positions.


To be fair, lines of argument such as AI doomerism, the simulation hypothesis, and mind uploading (I am not suggesting here that these ideas are necessarily mutually resonant) might be influenced by culture in the broad sense. But no matter what their cultural origins might be, once these ideas are formulated rigorously, everything eventually boils down to logic and empirical evidence. I am of the opinion that the pros and cons of AI doomerism can be and should be discussed on pure logic, separate from religious connotations, if there are any at all.


Having written this, I do feel that there are certain cognitive biases that incline people towards AI doomerism. It might actually be a form of the end-of-history illusion. While we must take necessary precautions against the adverse effects of AI, the possibilities for humanity are far from over. Life will certainly keep going, with or without AI, or, for that matter, with or without humans as we know them. We tend to be too narrowly focused.

