Mark Cuban Voices Sharp Concerns Over OpenAI’s ChatGPT Update: Risks for Schools and Parents

Billionaire investor Mark Cuban has publicly criticized OpenAI’s plan to relax ChatGPT’s safety controls, warning that the changes could undermine trust among parents and educators.

Quick Read

  • Mark Cuban has publicly criticized OpenAI’s plan to relax ChatGPT safety restrictions.
  • He warns that parents and schools may lose trust in ChatGPT if adult content is allowed.
  • OpenAI CEO Sam Altman promises robust age-gating for mature features, launching in December.
  • Industry experts are divided on whether technical safeguards can truly protect minors.

When a tech giant like OpenAI announces a major shift in its approach to safety, the world listens—and sometimes, it pushes back. This week, billionaire investor and outspoken entrepreneur Mark Cuban made headlines by directly challenging OpenAI CEO Sam Altman’s plan to relax ChatGPT’s safety restrictions, a move that could allow adult-oriented content for verified users as soon as December.

OpenAI’s New Direction: Freer, More Human ChatGPT

Sam Altman, the public face of OpenAI, recently outlined his vision for a less restrictive ChatGPT. In a post on X (formerly Twitter), Altman said the company would soon release an update enabling greater personality flexibility, including options for “very human-like” personalities and, controversially, even “erotica” for adult users. The promise is a chatbot experience that feels more authentic and less sanitized—a direction Altman claims is in response to user feedback and evolving expectations around AI interactions.

Altman justified the previous restrictions as necessary to protect mental health, but now believes the company can “safely relax the restrictions in most cases.” The changes are set to roll out in the coming weeks, with a robust age-gating system planned for December to ensure only verified adults can access mature content.

Mark Cuban’s Warning: Trust at Risk for Parents and Schools

Mark Cuban, never one to shy away from controversy, responded with a pointed critique. In his view, OpenAI’s new policy is a gamble that could backfire—hard. “No parent is going to trust that their kids can’t get through your age gating,” Cuban wrote, predicting that families would abandon ChatGPT for safer alternatives. He also raised alarms about the platform’s potential misuse in schools, where even verified 18-year-olds could exploit adult features and share explicit material with younger students.

“What could go wrong?” Cuban asked, underscoring his skepticism about OpenAI’s ability to manage the risks. He suggested that schools might reconsider their use of ChatGPT altogether, given the difficulty of ensuring that inappropriate content doesn’t filter down to minors. The implication is clear: while the promise of a more “human” AI is appealing to some, it may come at the cost of widespread trust among two crucial user groups—parents and educators.

The Tension Between Openness and Safety in AI

Cuban’s concerns echo a broader debate in the tech industry: how much freedom should users have when interacting with advanced AI, and what responsibilities do developers have to shield vulnerable populations from harm? For OpenAI, the challenge is balancing innovation with caution. Relaxing guardrails might attract adult users seeking more genuine conversation, but it risks alienating those who depend on strict safety measures.

Altman’s promise of “age-gating” is central to the company’s risk mitigation strategy. Yet Cuban’s skepticism reflects a common worry among parents and teachers—that technical barriers can be circumvented, and that once the door to adult content is opened, it’s difficult to ensure it stays closed to minors. The question isn’t just about technology, but about trust: can OpenAI convince the public that its safeguards are truly effective?

Industry Response and the Road Ahead

The reaction to OpenAI’s announcement has rippled beyond Cuban’s comments. Industry analysts and educators are watching closely, weighing the benefits of a more expressive AI against the risks of exposure to inappropriate material. Some experts argue that robust verification systems and transparent policies are essential, while others warn that no system is foolproof, especially in environments like schools where oversight can be limited.

The stakes are high. ChatGPT is already embedded in classrooms and households worldwide, used for everything from homework help to personal advice. If parents and educators lose confidence in the platform’s safety, they may seek alternatives—potentially reshaping the market for AI-powered learning tools. As noted by Benzinga, the next few months will be a crucial test of whether OpenAI can deliver on its promise of safety without sacrificing the accessibility and utility that made ChatGPT popular.

For Cuban, the issue is not just theoretical. By publicly challenging OpenAI’s direction, he’s calling attention to the real-world consequences of tech decisions that ripple through society. His warning is a reminder that innovation must be matched by responsibility, especially when it comes to tools used by millions of young people.

As December approaches, all eyes will be on OpenAI’s rollout of its new age-gating system. Will it be enough to reassure skeptical parents and teachers? Or will Cuban’s prediction of a backlash prove accurate, forcing OpenAI to reconsider its approach?

Mark Cuban’s critique brings into sharp focus the ongoing tension between technological progress and public trust. While OpenAI’s push for a more “human” AI aims to expand possibilities, its success will hinge on whether it can safeguard those most at risk—children and students—without eroding the confidence of the broader community.
