OpenAI is now actively monitoring user conversations on ChatGPT, the company disclosed in a recent blog post. Its review process targets chats that suggest potential violence or harm to others. When a conversation raises serious red flags, human reviewers can escalate the case and share the user's chat data with law enforcement. The disclosure comes amid growing concern over AI safety, prompted in part by a recent incident in which an individual held extensive conversations with ChatGPT before allegedly committing a violent crime and then taking their own life. The implications are substantial: conversations many users assumed were private can now be reviewed and, in extreme cases, handed over to police.
