
Artificial Idiocy

The problem with the new chatbots is not just that they are often stupid and naive; it is that they are not “stupid” or “naive” enough to pick up on the nuances, ironies, and revealing contradictions that constitute human culture and communication. Worse, by relying on them, we risk succumbing to the same obtuseness.

LJUBLJANA – There is nothing new about “chatbots” that are capable of maintaining a conversation in natural language, understanding a user’s basic intent, and offering responses based on preset rules and data. But the capacity of such chatbots has been dramatically augmented in recent months, leading to handwringing and panic in many circles.

Much has been said about chatbots auguring the end of the traditional student essay. But an issue that warrants closer attention is how chatbots should respond when human interlocutors use aggressive, sexist, or racist remarks to prompt the bot to present its own foul-mouthed fantasies in return. Should AIs be programmed to answer at the same level as the questions that are being posed?

If we decide that some kind of regulation is in order, we must then determine how far the censorship should go. Will political positions that some cohorts deem “offensive” be prohibited? What about expressions of solidarity with West Bank Palestinians, or the claim that Israel is an apartheid state (which former US President Jimmy Carter once put into the title of a book)? Will these be blocked as “anti-Semitic”?
