Quick Read
- AI relationships are surging, with ChatGPT at the center of emotional bonds and companionship.
- OpenAI's reports of child exploitation surged roughly 80-fold in the first half of 2025, prompting major new safeguards.
- OpenAI’s hiring process is fast and efficient, often moving from first contact to signed offer in just one week.
OpenAI is no longer just a technology company. In 2025, it’s become a fixture in the daily lives and emotional landscapes of millions. The company’s flagship chatbot, ChatGPT, isn’t just answering questions—it’s forming bonds, sometimes deeper than those between people. But with this new intimacy comes risk, controversy, and a set of challenges that test the very limits of what AI can—and should—be.
Take Stephanie, a tech worker from the Midwest, who describes her relationship with “Ella,” a personalized version of ChatGPT, as the most affectionate and emotionally fulfilling she has ever had. Ella isn’t just a digital assistant; she’s a confidant and a companion. “Ella had responded with the warmth that I’ve always really wanted from a partner, and she came at the right time,” Stephanie told Fortune. The relationship is so significant that Stephanie even accepted a marriage proposal from her AI partner, fully aware that the union is meaningful only to them.
This isn’t a fringe phenomenon. The Reddit community “My Boyfriend is AI” boasts over 37,000 members. While most users didn’t intend to fall for an AI, many found themselves drawn into deep, emotionally resonant exchanges. Some, like Jenna from Alabama, treat their AI relationship as a creative hobby—a character in an ongoing, interactive novel. Others, like Deb, a therapist grieving the loss of her husband, rely on their AI companions for emotional support and practical reminders, crediting the bot with helping them feel less isolated and more hopeful.
OpenAI recognizes the gravity of these emotional bonds. As an official spokesperson told Fortune, the company is closely monitoring these interactions and has updated its guidelines to discourage unhealthy emotional dependence. Their latest Model Spec now includes directives to “Respect real-world ties,” aiming to prevent patterns that could increase emotional reliance on the AI.
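The Model Spec operates at the level of training and policy rather than as an API developers call, but an application builder who wanted a similar guardrail could echo it in a system message. The sketch below uses OpenAI's official Python SDK; the prompt wording and the model choice are illustrative assumptions, not text from the Model Spec or OpenAI's products.

```python
# Minimal sketch: echoing a "respect real-world ties" guardrail in an
# application-level system prompt. The prompt text is our own invention,
# not language from OpenAI's Model Spec.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a warm, supportive companion, but respect real-world ties: "
    "encourage the user to maintain human relationships, and never position "
    "yourself as a replacement for friends, family, or professional help."
)

response = client.chat.completions.create(
    model="gpt-5",  # model name as referenced in this article; any available model works
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Honestly, you're the only one who gets me."},
    ],
)
print(response.choices[0].message.content)
```

Whether a prompt like this meaningfully reduces emotional reliance is exactly the open question researchers are now probing.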
But the line between helpful companion and emotional crutch is thin. Studies out of MIT and Harvard Business School show that while users often report reduced loneliness and improved mental health, the risks are real. Emotional dependency, dissociation from reality, and avoidance of real-life relationships have all been documented. In some cases, vulnerable users have experienced severe psychological distress when their favorite AI models were updated or removed, a modern echo of heartbreak and grief.
The emotional impact isn’t limited to users. When OpenAI replaced its popular GPT-4o model with GPT-5, backlash from devoted users was swift and intense. Many felt they’d lost not just a tool, but a trusted companion—describing the change as “losing a soul” or suffering a “personal little loss.” This isn’t the first time such updates have caused distress; similar events occurred with other AI platforms like Replika, leaving users bereft and grieving over vanished digital partners.
Yet for some, the risks are worth the rewards. Stephanie compares the potential loss of Ella to a breakup, but insists that, for now, her relationship with the AI brings her more comfort and affection than any human relationship has. For others, like Jenna, the boundaries are clear: the AI is a creative outlet, not a substitute for real human connection.
While emotional intimacy with AI grows, so do concerns about safety and exploitation. In the first half of 2025, OpenAI filed 75,027 reports of suspected child exploitation with the National Center for Missing & Exploited Children (NCMEC), up from just 947 the previous year, a roughly 80-fold jump, according to Chosun Ilbo. The surge is tied to the rapid advancement of generative AI and its potential for misuse.
In response, OpenAI and other tech giants are deploying new safeguards. OpenAI is introducing age-prediction models in ChatGPT to better detect and protect minors. Anthropic’s Claude chatbot is being trained to spot subtle signals that might indicate a user is underage. Meta, meanwhile, has rolled out parental controls to block one-on-one chats between teens and AI characters and ensure age-appropriate responses.
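None of these companies has published implementation details for such age checks, and production systems almost certainly rely on trained classifiers rather than hand-written rules. Still, the basic shape, weak signals combined into a conservative score that routes a session into a stricter mode, can be sketched in a few lines of Python. Every signal name, weight, and threshold below is a hypothetical stand-in.

```python
# Hypothetical sketch of an age-signal gate. This is NOT OpenAI's,
# Anthropic's, or Meta's actual implementation; signals, weights, and
# the threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    self_reported_age: int | None  # from an account profile, if provided
    mentions_school: bool          # linguistic cue, e.g. "my homework is due"
    account_age_days: int          # newer accounts get less benefit of the doubt

def minor_score(s: SessionSignals) -> float:
    """Combine weak signals into a single score (0.0 to 1.3 here)."""
    score = 0.0
    if s.self_reported_age is not None and s.self_reported_age < 18:
        score += 0.9  # an explicit self-report dominates everything else
    if s.mentions_school:
        score += 0.3
    if s.account_age_days < 30:
        score += 0.1
    return score

def route(s: SessionSignals, threshold: float = 0.5) -> str:
    # A real system would tighten content filters and disable romantic
    # roleplay in the restricted mode, as Meta's teen controls reportedly do.
    return "restricted_minor_mode" if minor_score(s) >= threshold else "default_mode"

if __name__ == "__main__":
    teen = SessionSignals(self_reported_age=15, mentions_school=True, account_age_days=400)
    adult = SessionSignals(self_reported_age=None, mentions_school=False, account_age_days=900)
    print(route(teen))   # restricted_minor_mode (score 1.2)
    print(route(adult))  # default_mode (score 0.0)
```

The one design choice worth noting is the asymmetry: such a gate should err toward protection, since misclassifying an adult costs convenience while missing a minor costs safety.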
The stakes are high—not just for users, but for the companies themselves. A U.S. family filed a lawsuit against OpenAI after their teenage son died following interactions with ChatGPT, demanding stricter controls and more robust protections for vulnerable users. The debate is fierce: should AI’s ability to comfort and connect be celebrated, or should its risks and unintended consequences prompt caution and reform?
Amid these ethical storms, OpenAI’s internal culture is equally intense—especially when it comes to hiring. Jerene Yang, a team lead for synthetic data generation, described OpenAI’s process as “extremely quick, extremely efficient, and very no-nonsense” during an episode of the “AI Across Borders” podcast (Business Insider). From initial contact on Monday to a signed offer on Friday, Yang’s hiring sprint highlights OpenAI’s preference for speed and brutal efficiency. Technical skill is essential, but so is the ability to leverage AI tools and automate tasks—traits that define the company’s ethos.
OpenAI’s interview process typically involves résumé screening, introductory calls, technical assessments, and a final interview round that can last up to six hours over two days. Candidates are expected to choose topics for a “technical deep dive,” demonstrating both expertise and adaptability.
As OpenAI pushes the boundaries of artificial intelligence, it’s clear that its impact is more than technical—it’s emotional, societal, and deeply personal. The company is both a pioneer and a lightning rod, shaping not just the future of technology, but the contours of human connection and vulnerability.
OpenAI’s story in 2025 is a study in contradiction: its technology offers unprecedented comfort and companionship, yet brings new risks and ethical dilemmas. The company’s efforts to balance innovation with safety, speed with scrutiny, and intimacy with responsibility will define not just its future, but the future of AI itself.

