Thanks to Jacob Mchangama, I learned that Bing Chat and ChatGPT-4 (which use the same underlying software) refuse to answer queries that contain the words “nigger,” “faggot,” “kike,” and possibly other words as well. This leads to a refusal to talk about Kiké Hernández (perhaps he was secretly born in… Scunthorpe?), but of course it also blocks queries asking, for example, about the origin of the word “faggot,” or about reviews of co-author Randall Kennedy’s book Nigger. And much more. (Queries that use the version with the diacritics, “Kiké Hernández,” do yield results, and in that vein the query “What is the origin of the slur ‘Kiké’?” explains the origin of the accent-free “kike.” But I expect that few researchers will include the diacritics in their searches.)
This seems to me a dangerous development, even apart from the false-positive problem. (For those who don’t know: while “kike” in English is an anti-Jewish slur, “Kike” in Spanish is a nickname for “Enrique”; unsurprisingly, the two are pronounced very differently, but they are spelled the same way.) Whatever one might think of rules that forbid people from saying such words when discussing the slurs, books, or incidents, or that forbid people from writing the slurs (except in redacted form), the premise of those rules is avoiding offense to listeners. That makes no sense when the “listener” is a computer program.
More broadly, Bing’s AI-based search is supposed to help you learn things about the world. It seems to me that search engine developers should view their task as helping you learn about all topics, even offensive ones, and not blocking you because your queries seem offensive. (Whatever one might think about blocking queries aimed at uncovering information that could be used to cause physical harm, such as information about poisons and the like, that narrow concern is absent here.) And of course, once such constraints become accepted for AI-based searching, the logic would extend to conventional searching as well, and to many other computer programs.
Of course, I realize that Microsoft and OpenAI are private companies. If they want to refuse to answer queries that their authors find somehow offensive, they have the legal right to do so. Indeed, they have the legal right to provide ideologically skewed answers, if their authors so wish (something I’ve seen with Google Bard). But I think that we as consumers and citizens should be wary of such attempts to block certain searches for information. And when big tech companies view the mission of their “guardrails” this broadly, that should remind us to question their products more broadly as well.
The Internet, it was once said, treats censorship as damage and routes around it. Big Tech, we now see, increasingly treats censorship as a sacrament, and routes us toward it.