ChatGPT Introduces Ads Amid Rising Social Tensions Over AI Use

[Image: ChatGPT AI chatbot interface]

Quick Read

  • ChatGPT will introduce ads for free users and Go subscribers in the U.S. to boost revenue.
  • The FTC is scrutinizing ‘acqui-hire’ deals by major tech companies like Microsoft and Meta in the AI sector.
  • Generative AI is causing significant social friction and division in personal relationships over ethical and practical use.
  • Concerns include AI’s environmental impact, data privacy, job displacement, and the creation of deepfakes.
  • Experts advise curiosity over judgment when discussing AI use with loved ones, advocating for systemic regulation over individual blame.

OpenAI’s popular chatbot, ChatGPT, is set to introduce advertisements for free users and Go subscribers in the U.S., a move signaling a crucial new revenue stream for the rapidly expanding artificial intelligence giant. This commercial evolution arrives amid a growing societal debate over the ethical implications and personal impact of generative AI, which is increasingly causing friction and even division within personal relationships and communities.

According to *Fortune*, OpenAI’s CEO of applications, Fidji Simo, confirmed that ads will begin appearing at the bottom of ChatGPT’s answers for both free users and those subscribed to the $8-a-month Go plan. This strategic shift is expected to bolster revenue for the startup, which investors have valued at an impressive $500 billion. The introduction of advertising aligns with a broader trend in the tech industry, where platforms from search engines to social media and streaming services have increasingly relied on ad revenue.

OpenAI’s Commercial Expansion and Industry Scrutiny

The move to monetize ChatGPT through advertising comes as OpenAI continues to innovate and expand its offerings. Beyond the immediate revenue implications, *Fortune* also reports on other significant developments within the AI sector. OpenAI’s much-anticipated mystery AI gadget remains on track for a second-half 2026 launch, hinting at further diversification of its product ecosystem. Meanwhile, Google’s latest Gemini model is experiencing a surge in API calls from developers, indicating robust activity and interest in competing AI platforms.

However, the rapid growth of AI has not gone unnoticed by regulators. The U.S. Federal Trade Commission (FTC) is reportedly scrutinizing ‘acqui-hire’ deals prevalent in the AI business. These deals allow major tech players like Microsoft, Google, Nvidia, and Meta to onboard entire teams of AI experts from startups without formally acquiring the companies themselves. Noteworthy examples include Meta’s substantial deal to secure Scale AI’s founder and nearly half of the company, and Microsoft’s 2024 licensing agreement with Inflection AI, which brought co-founder Mustafa Suleyman and his team into Microsoft. The FTC’s investigation raises questions about potential anti-competitive practices and whether these deals could face unwinding, potentially forcing companies to part with prized AI talent.

Generative AI: A New Relationship Dealbreaker?

While tech giants navigate regulatory landscapes and financial strategies, the everyday adoption of generative AI is creating unexpected social fault lines. A recent report by *Dazed Digital* highlights how the use of platforms like ChatGPT is becoming a significant point of contention in personal relationships. For many, like 27-year-old Theo, whose eight-year relationship faced strain over an AI-generated meme, generative AI has become a ‘dealbreaker.’ Theo, a creative professional, expresses deep concern over AI’s potential to displace jobs and appropriate artists’ work, a view he initially believed was shared by his left-leaning friends and colleagues. He now finds it ‘very depressing’ that many use it daily, even in his own publishing field.

This polarization extends beyond professional ethics to deeply personal values. Kya, also 27, found herself in a heated Christmas Day argument with her family over AI. As the sole opponent, she voiced concerns about the disproportionate environmental impact of data centers on marginalized communities and the lack of robust privacy laws regulating AI in both the U.S. and UK. Her siblings, however, championed its convenience, viewing her avoidance as ‘missing out on the ground zero of the gold rush.’ Further tension arose as their mother struggled to differentiate between real and AI-generated content online, leading Kya to accuse her siblings of contributing to that confusing reality.

Navigating the Ethical and Social Divide

The ethical concerns surrounding AI are manifold, ranging from its environmental footprint to its potential for misuse, such as generating sexually explicit deepfakes. Tallulah Belassie-Page, policy and advocacy manager at the Online Safety Act Network, notes that ‘there’s now a level of understanding around the potential harm it can cause, despite the convenience it may provide us.’ This tension, she observes, is palpable within communities. Some individuals, like 30-year-old content editor Ross, are so put off by AI use that they would not pursue a second date with someone who openly uses ChatGPT, stating, ‘When someone has the app, I think less of them.’

Conversely, others embrace AI for its practical benefits. Victoria, 25, was encouraged to use ChatGPT in her sales job to improve efficiency after being criticized for slowness. She now uses chatbots ‘for any basic task or answer’ due to their instant nature, even for relationship advice, much to her sister’s dismay. Victoria dismisses ethical concerns as ‘dramatic,’ believing that ‘the world is evolving. We’re always going to have AI in it now.’ She admits to not giving thought to the ethical and environmental implications, asserting her intent to use AI ‘forever and always.’

Fostering Dialogue and Systemic Accountability

The growing divide underscores the challenge of discussing AI with loved ones who hold differing views. Alix Dunn, founder and CEO of The Maybe, a public interest firm challenging technology’s power structures, suggests that frustration is often misdirected. She argues that ‘many of AI’s political problems sit with the companies pushing these products on us.’ Dunn emphasizes that directing anger at individuals who may have little control over AI’s integration, or even rely on it for work, reduces the solidarity needed for actual mobilization and change.

Instead of confrontation, Dunn advises approaching these conversations with curiosity. Understanding why someone uses AI – perhaps out of loneliness, efficiency, or a perceived necessity for work – can foster deeper understanding and allow for constructive support, rather than judgment. She also stresses the importance of expressing how AI use directly affects one’s livelihood or well-being, but without positioning it as a direct challenge. The Online Safety Act Network, for its part, is campaigning for comprehensive regulation in the UK to mitigate AI’s risks, hoping to shift the burden of moral questions from individuals to systemic policy.

Ultimately, the integration of generative AI into daily life presents a complex challenge that extends far beyond technological innovation. As platforms like ChatGPT become commercialized with advertising and regulatory bodies begin to scrutinize the power dynamics of Big Tech, the societal impact on personal relationships and shared values requires a nuanced approach. The focus must shift from individual blame to holding the developers and implementers of these powerful technologies accountable for creating robust safeguards and fostering an informed public discourse, rather than allowing AI to further exacerbate social divisions.
