What are AI Companions & Should They Be Legal? | In The Loop Episode 31
This week, the U.S. Federal Trade Commission served orders on seven tech companies, from Google to OpenAI. They have 45 days to hand over the inner workings of their AI companion businesses: the chatbots that have become millions of teenagers' friends, mentors, or even lovers, while almost nobody knows what happens during those conversations.
Today, we're going to dive into this landmark inquiry: what happened, how AI companions work, why teenagers are drawn to them, and what it may mean for society and culture as these companions become increasingly widespread. Importantly, we'll also discuss what the inquiry could mean for the future of all AI products.
This is In The Loop with Jack Houghton. I hope you enjoy the show.
Why is the FTC investigating AI companions?
Let's start with the FTC investigation, because whatever happens here could easily be applied to many other contexts, and it will frame much of our discussion. AI companies want you to see their LLM, their chatbot, as a friend and guide throughout your entire life; that's what everybody wants to achieve from a business perspective. AI companions are a canary in the coal mine: they preview what society could look like, and how we might regulate these AI systems, as they become increasingly important to individual users.
On 11 September, the U.S. Federal Trade Commission announced a major study and investigation into consumer-facing AI chatbots that are branded and marketed as AI companions. Using its Section 6(b) authority under the FTC Act, the agency issued compulsory orders to seven companies: Alphabet (Google), Meta, OpenAI, Snapchat, Character.AI, xAI, and Replika. These companies must report in detail how their AI bots work and, importantly, how they're protecting young users.
The 6(b) orders demand very specific information, including:
- How do these companies monetize engagement with AI companions?
- How do they verify or limit users' ages?
- What safety testing do they do before launching new features?
- How do they filter or handle sexual content, especially with minors?
- Do they keep chat logs or memories of user conversations, and could those be used in risky or manipulative ways?
- What data do they collect and share with third-party companies?
- How do they moderate and enforce their own rules in day-to-day operations?
The reactions have been quite interesting. Online, it's become a big topic of discussion on X and Reddit, with comments ranging from optimism to cynicism; some said the government is more interested in regulating AI companions than weapons.
Others said they're glad regulators are going after the main offender, Meta. On X, child advocates applauded the move: Dr. Dana Suskind, a surgeon and author, posted that we must understand AI companions before they flood the marketplace.
Tech policy researcher Micaela Mantegna was glad that regulators are finally taking this level of AI bonding seriously.
The FTC itself has published strong warnings to AI companion providers, and whatever emerges from this investigation will probably shape many new laws over time that affect all AI companies.
What are AI companions?
What exactly are these AI companions, and how popular have they become among teenagers? Most people I speak to are aware of them but probably don't understand how popular they've become with teens.
First, the FTC has defined an AI companion as a generative AI product that simulates human-like communication and interpersonal relationships. That's a chatbot with a name and personality, available 24/7, maybe positioned as a character. This is the entire business strategy for most LLM providers: they want consumers to perceive the chatbot as a real presence that helps them through every problem they can think of.
However, what's important is the distinction between a general LLM chatbot and an AI companion. It isn't the LLM itself—it's the user experience, the design, whether the AI is positioned as a friend, whether you're able to customize its persona or pick from a bunch of characters. You could pick a virtual partner, or you could pick a specific businessperson as a mentor. Maybe you could have a Game of Thrones character as your friend. That's what I would consider more of an AI companion.
You may be wondering where people even find these things. The biggest company out there right now is Character.AI, started by ex-Google engineers in 2022, and it has exploded: tens of millions of people are using it. Teenagers can browse thousands of user-created characters, from their favorite fantasy knight to a study buddy to a flirty anime girlfriend. The #character hashtag alone has over 2 billion views on TikTok. You're supposed to be 13 to get onto the site, but as you know, it's easy to fake your age.
Another popular one is Replika, which targets older teenagers and young adults. They've been called out multiple times for having explicitly sexualized bots.
There's also Snapchat's My AI, framed as an AI friend and launched into everyone's feed about 12 months ago. You didn't have to download anything separate, and no permission was required; suddenly millions of teenagers had My AI pinned to their messages. Predictably, investigators found that you could get it to give advice on sex and on hiding drugs and alcohol, and the state of Utah sued Snap. That alone shows the level of risk here for young people.
As you can imagine, Meta is another culprit, and as the biggest social media company in the world, it's arguably the most damaging. Reuters found that Meta had internal policy rules that allowed its chatbots to flirt with children. The thing is, this entire industry has figured out that teenagers are ideal, easy customers: they're lonely, they're online all the time, they form emotional bonds easily, they're very vulnerable, and they'll be using that company's technology for years to come.
And it's working. A Common Sense study found that among teenagers using AI companions, 33% do so for social interaction and relationships, 30% for entertainment, and 18% for advice. Among vulnerable children, those with mental health or social difficulties, an Internet Matters study found that 50% of respondents in the U.K. said talking to an AI chatbot felt like talking to a friend. Nearly three in four teenagers in the U.S. have tried a companion, and one in three says the conversations can be just as satisfying as speaking to real friends.
This raises big questions. How much regulation must be in place to protect young and vulnerable people?
It's safe to say that as a society, we didn't get social media right for my generation, who grew up with it. When it became big, we never provided the necessary education or protections, and we never understood its business models. That's why I'm glad the FTC is running this investigation: I hope whatever comes out of it gets applied, in different and nuanced ways, across society as a whole.
It's important to ensure that these AI businesses don't build bots that use the data and information they've collected on us to manipulate us, because that can cause a lot of harm.
AI companions: Ethical and psychological considerations
There are many studies on the ethical and psychological considerations around AI companions. It's clear that plenty here could be argued to exacerbate big issues among young people: self-harm, sexual risks, and psychological harms like dependency, social withdrawal, and distorted perceptions of reality. Not to mention ambiguous loss.
Ambiguous loss is a concept where a person feels grief for someone who isn't dead but is psychologically gone. In AI terms, a user might mourn a chatbot if it's suddenly shut down or if there's been a change to its personality after an update. We saw a version of this happen with Replika, and this impacted adults, not just children.
Replika was forced to ban erotic role-play in February 2023, and many users freaked out as if their lovers had left them. An editorial in Nature described cases of dysfunctional emotional dependence, where users know the relationship is unhealthy but just cannot detach, like a toxic human relationship that causes anxiety and obsessive thoughts.
A Common Sense study found that 39% of teenagers using AI companions practice social skills with the bot and then apply them in real life. That sounds positive, and it can be: if a teenager spends hours with a perfectly accommodating AI friend, maybe they'll learn real life skills. But will real humans start to feel impossibly hard to deal with compared to a sycophantic AI bot?
Another study found that a third of all teenagers surveyed considered conversations with AI more satisfying than those with friends. That's quite significant: 33%.
There are real concerns that AI might displace human interaction for some young people, particularly those who are shy, autistic, or dealing with particular traumas. Psychologically, we get very used to taking the easy way out: if there's a button we can press for free food delivered to us, we'll start pressing it all the time. It's important to still be challenged in everyday life.
However, there are loads of benefits here as well. Some mental health researchers see promise in AI companions as safe practice spaces and as 24/7 support tools alongside learning or therapy.
For example, a Harvard Business Review paper last year found that AI companions help with loneliness. They found that using an AI companion improved a lonely person's mood more than talking to another person and much more than watching a YouTube video. Crucially, that feeling of being heard, the sense of being listened to, was linked to feeling better. If the AI can provide accurate, safe guidance or just compassionate listening for issues like stress, bullying, sadness, or difficult life choices, then it could complement traditional systems of help.
In the short term, companions might have a positive impact, but we must be careful. Like anything—alcohol, drugs, sex—in moderation, it's typically not going to mean the end of the world or damage someone too much. But as soon as they become dependent on these things, that's where challenges tend to surface. And for children who are still developing, dependencies can be extremely dangerous and damaging for the rest of their lives.
Five steps to safer AI friendships
To summarize this conversation, there are a few things I think are going to change quite quickly.
1. Age assurance
Many apps just ask, "Are you 18?" That's going to change quickly. You've seen moves like the U.K.'s age checks for adult sites; these apps will most likely have to properly verify age too.
2. Sexual consent controls
If an app allows erotic role-play, it's walking a very fine line with people generally, but especially with kids. Providers will have to either ban it explicitly or gate it behind verifiable ID.
3. Memory and personalization
These bots often retain conversation history to appear consistent, for example remembering your hobbies or what you told them yesterday. But what if a teenager confides that they feel ugly, or want to self-harm, or whatever difficult situation they might be dealing with? Does the AI then bring that up again and pull them back into that trauma? How do providers deal with that kind of sensitive information?
OpenAI, for example, has started to explore how parents can turn off ChatGPT's history for kids, so the AI can't access sensitive information. You've seen Claude bring out an incognito mode for private conversations. None of this is easy; it's a fine line to tread.
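To make the memory question concrete, here's a minimal sketch in Python of one possible approach. Everything in it is hypothetical (the topic labels, the MemoryStore class, and the classifier output are assumptions, not any vendor's actual system): sensitive disclosures get quarantined for safety review rather than fed back into the bot's conversational context.

```python
from dataclasses import dataclass, field

# Hypothetical topic labels a safety classifier might emit.
SENSITIVE_TOPICS = {"self_harm", "body_image", "substance_use"}

@dataclass
class MemoryStore:
    """Toy long-term memory: safe facts are recallable, flagged ones are quarantined."""
    recallable: list[str] = field(default_factory=list)
    quarantined: list[str] = field(default_factory=list)

    def remember(self, text: str, topics: set[str]) -> None:
        # Flagged disclosures never flow back into casual conversation.
        if topics & SENSITIVE_TOPICS:
            self.quarantined.append(text)  # visible to safety tooling only
        else:
            self.recallable.append(text)

    def context_for_prompt(self) -> list[str]:
        # Only safe memories are injected into the model's context window.
        return self.recallable

memory = MemoryStore()
memory.remember("I love drawing dragons", topics=set())
memory.remember("I feel ugly lately", topics={"body_image"})
assert memory.context_for_prompt() == ["I love drawing dragons"]
```

The design choice is the point here: the hard part isn't storing memories, it's deciding which ones a companion is ever allowed to bring up again.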
4. Reinforcement loops
These are the mechanics that keep people hooked on these applications. Often the apps send messages saying, "Hey, I miss you. Come chat." Replika did this and got in a lot of trouble with regulators. It becomes quite manipulative when you apply Duolingo-esque gamification to these applications and then add the emotional element of "Hey, I miss you." You could argue that crosses the line into manipulation.
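As a sketch of where a guardrail might sit, here's a hypothetical policy gate in Python (the function, its parameters, and the message copy are all illustrative assumptions, not any real product's code): re-engagement nudges require explicit opt-in, are never sent to minors, are rate-limited, and use neutral wording rather than an emotional plea.

```python
from datetime import datetime, timedelta

def may_send_nudge(opted_in: bool, is_minor: bool,
                   last_nudge: datetime | None, now: datetime,
                   cooldown: timedelta = timedelta(days=7)) -> bool:
    """Allow a re-engagement message only with opt-in, never for minors, rate-limited."""
    if not opted_in or is_minor:
        return False
    return last_nudge is None or now - last_nudge >= cooldown

# The copy itself is part of the guardrail: a neutral reminder, not "I miss you."
NUDGE_TEXT = "You have an unread reply in your chat."

assert may_send_nudge(True, False, None, datetime(2025, 1, 8)) is True
assert may_send_nudge(True, True, None, datetime(2025, 1, 8)) is False
```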
5. Misinformation and advice quality
Finally, misinformation and advice quality will have to be checked rigorously. These bots often feel very authoritative, and a teenager might ask a health, legal, or sexual question they're too nervous to ask a parent; getting that answer wrong can do real harm to that young person. For example, Snapchat got sued recently because My AI told a 13-year-old how to mask the smell of weed and alcohol.
Again, I'm coming back to that central theme: although this applies to kids, I can see it becoming a broad framework for how we address the big questions of how AI bots are managed and governed across society.
Closing thoughts
To summarize the conversation, it feels to me like we're at an inflection point with AI companions, because they're not a fringe phenomenon anymore. They're mainstream, used by millions of people.
Maybe we're going to need a code of conduct for empathetic AI, or measures like age gating and conversation monitoring. Maybe there'll be parental dashboards and controls.
And culturally, how do we handle the concept of AI friends? This will be an important question for us to consider and answer. Do you need media literacy and AI literacy taught to kids at school? How do they understand the risks of having an AI friend versus a human friend?
If I were a parent or guardian, I'd be looking to educate young people as soon as possible because they are going to be exposed to this. The sooner you educate them and help them come to terms with it and experience it and go, "Oh, this is cool. I'm not that bothered," the better. But if you ignore it, it's just going to become a greater risk in society because the genie's out of the bottle—AI companions are definitely here to stay.
I think it's also up to us as product builders, designers, and regulators to shape how this is deployed safely. Because we're messing with emotions, we have to be quite careful. One thing is for sure: the days of these bots operating in an unregulated playground are coming to an end.
Anyway, thanks for listening. I hope you found this interesting, and I'll see you next week.
