Sam Altman warns ChatGPT conversations could be disclosed in court, raising global concern over the lack of legal protection for AI users
OpenAI CEO Sam Altman has issued a stern warning that sensitive ChatGPT conversations could be disclosed in court, sparking renewed global concern over the lack of legal protections for artificial intelligence interactions.
In a revealing appearance on the This Past Weekend podcast hosted by Theo Von, Altman acknowledged that millions of users, especially young people, are increasingly turning to generative AI tools like ChatGPT for personal advice and emotional support.
But unlike doctors, lawyers or therapists, conversations with AI are not protected under legal privilege.
“People use it, young people especially, as a therapist, a life coach,” Altman said.
“Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. If you go talk to ChatGPT about the most sensitive stuff and then there’s a lawsuit or whatever, we could be required to produce that.”
He described the situation as “very screwed up”, calling for legal reforms to extend privacy privileges to AI interactions.
“I think we should have the same concept of privacy for your conversations with AI that you do with your therapist or whatever,” he added.
Altman’s statement has triggered concern across tech, legal and civil society circles, particularly given the scale of ChatGPT’s reach.
The platform now records between 800 million and one billion weekly active users globally, roughly double the figure of around 400 million reported earlier this year and up from 300 million in late 2024.
It also boasts approximately 10 million paying subscribers and one million commercial users, with OpenAI aiming to reach the one billion milestone before 2026.
The exponential user growth reflects how embedded AI has become in personal, professional, and even emotional domains.
However, the risk of legal disclosure now casts a shadow over this trust, as regulatory frameworks lag behind technological adoption: if people do not feel safe talking to AI, the opportunity to make it truly helpful is lost.
Altman stressed that OpenAI is aware of the trust users place in its platform but acknowledged that the current legal landscape offers no protection if governments or courts request access to conversations.
Although platforms like ChatGPT employ safeguards to prevent misuse and data leaks, the absence of statutory data privileges means that users’ most private exchanges could be revealed in legal proceedings.
Podcast host Theo Von admitted to Altman that he had been hesitant to fully embrace AI tools due to uncertainty over who might access his personal data.
“I think that makes sense to really want the privacy clarity before you use it a lot, the legal clarity,” Altman replied.
Privacy advocates argue that this grey area could undermine the potential of AI tools if users cannot trust that their interactions are secure.
In some jurisdictions, data protection laws like Europe’s GDPR offer certain guarantees, but no uniform global standard currently ensures that AI conversations are treated with the same confidentiality as traditional professional advice.
“This legal gap creates a chilling effect,” said a digital rights lawyer in Lagos.
“People stop being honest with AI, or worse, avoid it altogether, because they fear exposure. That undermines the entire point of these tools.”
Altman’s remarks are likely to renew pressure on lawmakers worldwide to update legal systems to account for new realities of AI-human engagement.
Whether through new legislation or reinterpreting existing laws, many experts agree that user privacy must evolve in step with the technology.
As dependency on AI tools deepens, the question is no longer technical—it is fundamentally ethical and legal. Without adequate protections, the very trust that fuels AI growth may be eroded.