OpenAI recently made the case that its GPT-4 product should be used for so-called “content moderation.” And that just means more censorship.
GPT-4 is the most advanced version of OpenAI’s large language model, the technology underlying its AI chatbots. ChatGPT was one of the first products built on this technology. Unfortunately, research showed that ChatGPT exhibited a “significant and systemic left-wing bias.”
OpenAI’s Aug. 15 blog post explaining how its AI can be used for “content moderation” offers no assurance that these biases have been removed. The post describes how “content moderation” models can be customized for each platform, thus carrying each Big Tech platform’s own biases into the moderation process. Further, OpenAI itself acknowledges the potential for bias, stating: “[j]udgments by language models are vulnerable to undesired biases that might have been introduced into the model during training.”