AI Snake Oil by Arvind Narayanan and Sayash Kapoor was a very interesting read. The authors make the salient point that, compared with generative AI, predictive AI performs very badly. From gun violence to a student's future academic performance, from the next hit song to the outbreak of civil war, AI technologies categorically cannot predict the future. To be sure, humans are equally bad at prediction, but the point is that AI cannot be expected to do any better. Any illusion to the contrary leads to snake oil.
In addition to the disaster of predictive AI, another related and significant deficiency of AI is content moderation. The authors explain how and why filtering out potentially harmful posts on social media is hard. Some of the difficulty comes from the incredible ingenuity of humans in bypassing any perceived restriction or algorithmic filter. A side effect is that many people must be employed to label harmful content manually, a task that takes a toll on their mental health.
Topics such as top-N accuracy, the Gartner hype cycle, and the reproducibility crisis in AI research are analyzed clearly and effectively. I recommend this wonderfully written book to anyone interested in the current state of AI and where it may go next.
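For readers who haven't run into the term: top-N accuracy counts a prediction as correct if the true answer appears anywhere among a model's N highest-ranked guesses, a laxer standard than ordinary top-1 accuracy. A minimal Python sketch of the metric (my own illustration, not the book's):

    # Top-N accuracy: a prediction is "correct" if the true label
    # appears among the model's N highest-scoring classes.
    def top_n_accuracy(scores, labels, n=5):
        hits = 0
        for example_scores, label in zip(scores, labels):
            # Indices of the n highest scores for this example.
            top_n = sorted(range(len(example_scores)),
                           key=lambda i: example_scores[i],
                           reverse=True)[:n]
            hits += label in top_n
        return hits / len(labels)

    # Toy example: the second prediction's true label (class 2) is only
    # the model's third choice, so with n=2 it counts as a miss.
    scores = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]
    labels = [1, 2]
    print(top_n_accuracy(scores, labels, n=2))  # 0.5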