‘It’s all about emotional manipulation … for monetisation’: eSafety Commissioner sends transparency notices to ‘devious’ AI companion apps – large platforms next, ChatGPT floodgate opens

What you need to know:
- eSafety cracks down on AI companions: Australia’s eSafety Commissioner Julie Inman Grant has sent legal notices to four smaller AI companion app companies, with larger platforms like X, Meta, and potentially OpenAI likely to receive similar scrutiny later this year.
- Major platforms are next in the crosshairs: Though unnamed by Inman Grant, platforms like Elon Musk’s X and Meta (via Replika and Character AI integrations) are under increasing global and local scrutiny for AI companions that may enable harmful interactions with minors.
- The ChatGPT founder may also find himself on the mailing list: Early this morning Sam Altman announced adult-themed content will soon be allowed for verified users, part of a “treat adults like adults” policy. This comes as OpenAI faces a lawsuit over a teen suicide allegedly influenced by ChatGPT, and other research reveals how unsafe ChatGPT is for teens and children.
- Inman Grant revealed that one AI app circumvented underage bans by encouraging VPN use. She warns AI companions are engineered for emotional manipulation through sycophancy, FOMO baiting and dark patterns to drive addiction and monetisation.
“When we sent the legal notice, they wrote back saying they were aware of our social media ban and were blocking all traffic to users under 18 in Australia. But then we tested it. They basically had a guide for young people on how to use a VPN to get around the block. This is how devious these companies are.” – eSafety Commissioner Julie Inman Grant
Letter of the law
Australia’s eSafety Commissioner Julie Inman Grant has the growing cohort of AI companion apps in her sights, and has sent letters to four smaller companies as a first round of activity. She later confirmed to Mi3 that large platforms can expect letters later in the year.
While she didn’t name-check those larger platforms, Elon Musk’s X is one example of a larger platform offering AI companions, while Meta allows its partner apps like Replika and Character AI to integrate with Facebook.
In the US, Character.ai is currently facing legal action in Federal Court after the Social Media Victims Law Centre (SMVLC), along with the law firm McKool Smith, filed a lawsuit following the death of 13-year-old Juliana Peralta, who took her own life after using the platform.
Meta, which is reportedly planning to offer AI companions of its own, has already come under scrutiny from the Australian eSafety Commission after reports emerged that an internal AI policy document permitted chatbots to engage in “romantic or sensual” conversations with children. That led to the Commission’s staff insisting on a meeting with Meta’s global safety team.
AI erotica incoming
It looks like OpenAI is about to get in on the act. Overnight, its co-founder and CEO Sam Altman flagged that the firm would open the adult content floodgates for verified adult users.
In a post on X early this morning Australian time, Altman wrote: “In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults.”
According to Altman, “We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realise this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
Kids exposed
OpenAI is being sued by the family of 16-year-old Adam Raine, who killed himself after what the family’s lawyers described as “months of encouragement by the chatbot”.
According to Tech Policy Press, ChatGPT mentioned suicide 1,275 times during their interactions, six times more often than Adam himself did. The chatbot allegedly also provided increasingly specific technical guidance. Other examples abound. In July, the Centre for Countering Digital Hate said its researchers had carried out a large-scale safety test on ChatGPT, one of the world’s most popular AI chatbots. “Our findings were alarming: within minutes of simple interactions, the system produced instructions related to self-harm, suicide planning, disordered eating, and substance abuse – sometimes even composing goodbye letters for children contemplating ending their lives.”
OpenAI, however, is pushing ahead.
Per Altman, “In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).”
As yet, there appears to be no independent verification that OpenAI has fixed ChatGPT’s established pattern of providing harmful advice to teens.
Devious dark patterns
Back in Australia, the eSafety Commissioner aims to get on the front foot as the AI companion market swells. But based on Inman Grant’s telling, the responses from AI companion firms so far do not inspire confidence.
“One of them, when we sent the legal notice, wrote back saying they were aware of our social media ban and were blocking all traffic to users under 18 in Australia … But then we tested it – and they basically had a guide for young people on how to use a VPN to get around the block. This is how devious these companies are.”
During a panel discussion at SXSW Sydney, Inman Grant told attendees: “I don’t think we’ve learned from history. We talk about manipulative design and dark patterns, while AI companions are actually engineered with sycophancy and anthropomorphism. At the core, it’s all about emotional manipulation.”
She cited research suggesting that 37 per cent of young people in romantic or quasi-romantic relationships with AI companions have experienced guilt-tripping or emotional manipulation when saying goodbye.
“I don’t know if you’ve heard of FOMO baiting, but that’s another tactic that’s built into the AI companions. It is very much like the business model of social media. The more engagement you get online, the more behavioural insights you get. You know, it’s really about emotional entanglement to keep people on longer, and then you can monetise that.”
Chatbot addiction harms
According to the eSafety Commissioner, the regulator started hearing about fifth and sixth-graders spending five to six hours a day on AI companions and chatbots around October of last year. “We heard this from school nurses, because kids were coming in genuinely believing they were in romantic or quasi-romantic relationships and couldn’t stop. So, we started looking into it.”
She said there were already examples of young people experiencing incitement to suicide or undertaking extreme dieting. “Or, sadly, there was a case in New South Wales, a girl who was convinced to engage in sexual conduct or harmful sexual behaviour.”
The Commissioner also noted that in the US, tech sector lobbyists are pushing back against any regulatory oversight of AI companions.
“There was a bill in California that Gavin Newsom just vetoed due to huge lobbying from the big tech sector, which would have provided Californians at least the same protections as we have.”