Quick Read
- Instagram and Facebook users report accounts wrongly disabled by Meta under child endangerment allegations.
- Affected users describe a frustrating appeals process with minimal human support.
- Some accounts were reinstated after media intervention or multiple appeals, but others remain disabled.
- Meta cites its policy to remove harmful content using technology and human review, but did not comment on specific cases or AI use.
- The issue has led to loss of personal memories, business income, and access to support groups for affected individuals.
CHICAGO (Azat TV) – Instagram and Facebook users across the U.S. are reporting widespread and frustrating instances of their accounts being wrongly disabled by Meta, often under severe accusations of “child endangerment” or “child sexual exploitation.” The incidents, brought to light by local investigations, point to significant problems with Meta’s automated content moderation and its appeals process, which have left individuals cut off from personal memories, support networks, and even their livelihoods.
The issue gained prominence after multiple individuals reached out to the ABC7 I-Team, detailing how their accounts on Meta-owned platforms were abruptly suspended. In all four cases investigated by the I-Team, the account holders asserted that they had been unjustly accused of violating community standards related to child safety, leading to the immediate deactivation of their Facebook and Instagram pages.
User Accounts Flagged for “Child Endangerment”
Natalie Martinez, a mother from Homewood, Illinois, expressed her disbelief when her Instagram account was disabled, along with her linked Facebook page and sub-accounts for her son’s soccer team. The notification from Meta stated that her page “Doesn’t follow community standards on child sexual exploitation, abuse, and nudity.” Martinez, who used her Instagram account to connect with a support group for her daughter’s diabetes, found herself isolated. She managed to get her accounts back through the digital appeals process within three days, only for them to be disabled again shortly thereafter, necessitating another appeal.
Similarly, Michael Calabro of Andersonville, Illinois, reported that his Instagram photography business account and his linked personal Facebook page were disabled in October 2025 under identical child exploitation allegations. Calabro said he had done “absolutely nothing wrong” and pointed to the financial toll on his business, as well as the loss of his connections to a cancer support community. After ABC7 began inquiring about his case in November, Calabro’s accounts were reactivated the following week.
Another individual, Jenna Shelton of Hobart, Indiana, has been actively fighting for the reactivation of her Instagram and Facebook accounts, including her daughter’s linked page, since October 2025. She described the situation as “incredibly frustrating,” fearing the permanent loss of precious memories.
The Frustrating Appeals Process
A recurring theme among affected users is the profound frustration with Meta’s appeals process, particularly the apparent lack of human customer support. Martinez noted her inability to contact anyone at Meta by phone to dispute the serious allegations. While some, like Martinez and Calabro, eventually saw their accounts reinstated, the process was often protracted and fraught with uncertainty.
This sentiment is echoed in an online petition on Change.org, which has garnered over 55,000 verified signatures, accusing Meta of “wrongfully disabling accounts with no human customer support.” The petition underscores a broader public concern regarding the efficacy and fairness of Meta’s enforcement mechanisms.
Meta’s Response and Broader Context
When approached by the I-Team about these specific cases, Meta did not issue an official statement. Instead, the company pointed to its general policy, which states that it removes “harmful content that goes against our policies” and enforces those policies using “technology and human review.” A Meta spokesperson did not directly answer questions about the use of artificial intelligence in detecting community standards violations.
These incidents follow similar reports from the Bay Area in August, where ABC7 previously uncovered cases after Meta had deleted 600,000 accounts as part of a teen safety initiative aimed at combating predatory behavior. At that time, Meta stated it takes action on accounts violating policies and offers an appeal process for perceived mistakes. The current wave of complaints suggests that while Meta is committed to combating harmful content, its automated systems may be casting too wide a net, leading to significant collateral damage for legitimate users.
Impact on Users and Community
Beyond the inconvenience, the disabling of accounts has tangible and often severe consequences. For individuals like Martinez, it meant a sudden severing of vital support networks. For Calabro, it translated directly into lost business and income. The loss of digital memories, photos, and messages, as Shelton described, represents an irreplaceable personal loss. The experience, which Martinez called “scary and isolating,” highlights how deeply social media is woven into daily life and the critical need for robust, fair, and accessible moderation and appeals systems.
The combination of automated content moderation and inaccessible human support for appeals presents a significant challenge for Meta: it risks alienating legitimate users and undermining trust in its platforms even as the company works to enforce critical safety standards.

