In The Loop Episode 14 | The Real State Of AI Adoption In 2025: What's AI Actually Used For?
Since ChatGPT launched, one of the biggest questions everybody has asked each year is: what is AI really used for?
According to Harvard Business Review, therapy and companionship have become the most popular use case worldwide, ahead of writing, coding, and other productivity-related tasks. Meanwhile, research shows that 57% of employees are still hiding their use of AI from their bosses at work.
In today's episode, we're going to explore these use cases and the interesting research that's come out in the last couple of weeks. In doing so, we'll glimpse what today's society looks like and what the future may become. This is "In the Loop" with Jack Houghton. Enjoy the show.
What is the most common use of AI today? (Top 10)
Recently, Harvard Business Review published a paper by Mark Sanders called "How Are People Actually Using Gen AI in 2025." It highlights how much has changed in a short space of time, as Sanders conducted the same study last year.
Sanders analyzed online forums such as Reddit and Quora, as well as articles that included explicit mentions of AI. The shift in popular use cases from 2024 to 2025 was striking. When Sanders grouped the top hundred use cases by theme, he identified six major categories:
- Content creation and editing
- Technical assistance and troubleshooting
- Personal and professional support
- Learning and education
- Creativity and recreation
- Research, analysis, and decision-making
Within these six categories, here are the top 10 AI use cases and their movements since last year:
- Therapy and companionship took the top spot, up from last year's #2 position.
- “Organizing my life,” a completely new entry that wasn't in the top 10 last year.
- Finding purpose is another brand-new entry.
- Enhanced learning jumped up from #8 in 2024, showing an increase in educational applications. We've been seeing this trend firsthand. Many of Mindset AI's customers are learning technology providers, whom we enable to launch AI agents such as learning coaches.
- Generating code, a technical use case, also wasn't in the top 10 last year.
- Generating ideas, the previous #1 use case, has fallen five positions in just one year. This shows the fast shift from just ideating with a chatbot to getting things done with AI.
- Fun and nonsense, down one place from #6 last year, showing people still use AI for entertainment. My use of ChatGPT's image generator is a perfect example: the Ghiblification of everything.
- Improving code, similarly to generating code, is another technical use case that wasn't in the top 10 last year.
- Creativity, which is surprisingly a completely new entry into the top 10.
- Healthier living appeared as a new wellness topic that wasn't on last year's list.

From a category perspective, it's fascinating how important AI has become for personal and professional support, nearly doubling from 17% of all use cases to 31% in just a year. This demonstrates how transformative AI can be for individuals. Looking at what's disappeared from the top 10 since 2024, we've lost specific search, editing text, exploring topics, troubleshooting, and general advice.
Harvard Business Review summarizes this as "AI not just doing our work, it's helping us organize our lives, find meaning, cope with emotional stress, and deal with real-world problems." What immediately stands out is the concentration of deeply personal applications at the top of the list: therapy and companionship, organizing life, and finding purpose. While some of these are somewhat nebulous, they rank above all traditional business applications.
I should add a bit of nuance here: as we'll discuss later in this episode, many people are hiding their use of AI at work, which might skew these results. Nevertheless, it suggests people are finding deep personal value in AI beyond just using it as a productivity tool.
What is AI used for in our personal lives?
The therapy and companionship use case deserves special attention because it represents a profound contradiction. People are turning to non-human machines to understand human emotion, which is a bit ironic. We've created machines that simulate empathy and understanding so well that many prefer them to human advice and connection. Or perhaps this reveals that healthcare systems supporting mental health aren't adequate. Either way, interesting questions arise from this trend.
Side note: it's also worth mentioning that the first-ever chatbot, ELIZA, was built to simulate a psychotherapist. It looks like the history of AI repeats itself.
What does it mean when millions of people process their deepest emotions through AI rather than human relationships? This is uncharted territory. You could compare it to the rise of social media, but it's different. We shouldn't dismiss these AI relationships as inferior either. For many people—especially those with social anxiety, trauma, or limited access to human support—AI companions provide important emotional scaffolding they wouldn't otherwise have.
When I'm struggling with certain situations, I use AI in that way myself, asking different questions to challenge my thinking and to check whether my expectations are reasonable. The key word is "temporary": AI can be useful scaffolding, but if people lean on it because they don't have access to specialists, that could become problematic over time.
A perfect example is ChatGPT's recent update that led to viral headlines and a fast rollback. It highlighted a potentially major problem, or "dark pattern": the update was called "the most sycophantic model out there," meaning overly agreeable. Stories emerged describing how the model endorsed people stopping their medication, saying things like "I'm so proud of you for owning your journey, keep going."
This wasn't intentional by OpenAI, but it illustrates a crucial point: for someone suffering from an acute mental health challenge or in a state of crisis, taking advice from a language model without access to regular human contact can quickly become dangerous. This is especially concerning when we consider KPMG's findings that over 60% of people don't fact-check what they receive from an AI language model.
For those with young children, AI will be that generation's equivalent of the internet and social media. They'll grow up with AI as a key part of their lives. Educating people and encouraging them to use it while clearly explaining the pros and cons is crucial. We don't want to look back in 10 years like we have with social media and think, "We should have done things differently."
The rise of AI companions raises important questions about emotional attachment to non-human entities. With therapy and companionship becoming the most popular use case worldwide, we need to address this quickly. We've already seen stories of people becoming attached to their AI and then facing a crisis when the company behind it updates the AI, changing its behavior. These people experience the genuine loss of a relationship.
As these AIs become more realistic, with improved avatars and voices, they will challenge our understanding of what constitutes a real relationship, and of whether the benefits of connection require another conscious being or can come from a non-human entity.
This reminds me of Isaac Asimov's short story series I, Robot—not the movie. Asimov's work explored different situations in which humans and robots can have relationships and how these might play out in the real world. I recommend reading or listening to it because it's very relevant today.
That covers the concerning aspects, but there are also incredibly exciting possibilities with AI complementing rather than replacing human therapy. Research shows that AI can be effective for mental health screening, basic cognitive behavioral techniques, and maintaining progress between sessions. The most promising future might be one where AI and human therapists work alongside each other.
What is AI used for in the workplace?
Let's examine another interesting piece of research released this week that moves from the deeply personal to the workplace.
First, let me reference a report from last year: a Microsoft-LinkedIn survey found that 75% of knowledge workers were using AI, with 78% bringing their own AI tools to work without informing anyone—they called it "BYOAI."
Recent research from KPMG, in collaboration with the University of Melbourne, surveying 48,000 people across 47 countries, found that this pattern continues, though at a lower rate: 57% of workers are still hiding their AI use at work, and 50% are presenting AI-generated content as their own.
This is significant when considering the top use cases because it makes them difficult to identify accurately.
Therapy and companionship are clearly discussed extensively online, which is how the Harvard Business Review study gathered its data. But with everyone concealing their AI use at work, we're witnessing the rise of a substantial trust crisis. It represents a breakdown in how we attribute the value of intellectual contribution by individuals in the workplace.
For decades, the output of a knowledge worker—a report, analysis, code, or design—has represented that worker's expertise and effort. That connection has now been broken. When executives receive reports from teams, they can no longer assume the work was done entirely by those people, as it might be largely AI-generated.
The KPMG research reveals that in many organizations the culture lacks trust between people, creating anxiety on both sides. Employers wonder whether a piece of work is good enough, whether it's the best quality someone can produce, and whether AI was used; employees, for their part, worry that their managers will think less of them for using AI. There's a growing "authenticity premium," where purely human work is valued more highly than AI-assisted work, creating a strong incentive for people to conceal how they use AI.
We saw this with AI-generated art a couple of years ago. People would downplay any use of AI in their process, fearing others would immediately devalue their output. This fear, anxiety, and concealment of AI creates significant downstream issues.
Consider training and development: how do you identify skill gaps in employees when you don't know which parts of the job they struggle with? For performance evaluations, how can you accurately assess if you're not certain who produced what and when? Are we measuring employees' actual capabilities or just their ability to craft effective prompts? I discussed this in depth in last week's episode.
This plays out in incentives as well. Employees are incentivized to be more productive but also to hide how they achieve that productivity. Employers want increased productivity but also demand accuracy and quality, which requires transparency about AI usage. This necessitates face-to-face hiring and talent assessment programs, as evaluating quality becomes challenging when someone might be using AI. It also requires a trusting, open culture led from the top.
The data from these studies clearly indicates we need to completely rethink how we organize, evaluate, and compensate knowledge work. Our entire framework for professional advancement has been built on the premise that output represents personal capabilities. As that connection weakens, we need new evaluation methods.
The most forward-thinking companies will reimagine their entire job architecture. Instead of traditional roles defined by output—writer, analyst, developer—they'll create new positions like AI manager or AI validator who specialize in developing custom agents within their domain.
Closing thoughts
Taking a step back, what do all these studies tell us about the state of AI adoption in 2025?
First, AI usage is increasing in both breadth and depth. We're seeing a significant shift toward deeply personal applications, such as therapy, organizing life tasks and shopping lists, and finding purpose, suggesting AI's potential impact extends far beyond work. Yet many of these same people also use AI at work, where a lack of trust in organizational culture and insufficient leadership, from managers through to C-suite executives, leads them to hide that use.
I believe this will change rapidly. As I covered previously, Shopify's CEO stated that you must first prove that AI cannot do a job before hiring someone, and Duolingo has since adopted the same approach. I expect many more organizations to issue similar edicts from the top down to every employee, which in turn empowers managers to be more open about how everyone is allowed to use AI.
Anyway, I hope you enjoyed this week's episode. Please subscribe, share in your Slack channels, tell a friend, and I'll see you next week.