In The Loop Episode 16 | The Top Five AI Features SaaS Companies Are Shipping In 2025 (And Why They Work)
By the end of this episode, you'll understand the 5 categories of AI features that technology companies are launching, from small additions to complete AI-native applications.
I was inspired to create this breakdown after reading the latest Deep newsletter by Richard Holmes from the Department of Product. He published an interesting analysis of over 25 major AI feature launches from companies like Stripe, Google, LinkedIn, Notion, and Spotify over the past few months. I've then shaped this thinking based on what I am seeing with our customers and what we are building right now.
Type 1: Embedded UX
The first category is what I'm calling Embedded UX.
What distinguishes this category is that these features sit invisibly in the background and surface contextually, at just the right moment, as small pop-ups and widgets.
These features are not trying to get users to engage in entirely new behaviors—they’re just solving smaller jobs and tasks for people, almost as a layer on top of their current technology platform.
Let's discuss some examples that have launched recently:
Kindle's Recap feature demonstrates this approach perfectly. It's aimed at solving the problem of "narrative continuity"—that moment when you, for example, pick up a book in a series months after reading the previous one and can't remember key plot points or characters. Rather than forcing readers to search online for summaries or abandon the series altogether, Kindle now generates a concise recap of the previous book when you open the new one.
What makes this implementation particularly good is its contextual awareness—it only appears when it's actually needed and doesn't interrupt the core reading experience.
It doesn't ask readers to fundamentally change how they use the product; it just removes a pain point that might otherwise cause frustration or abandonment. And it does so with almost no new user interface or extra buttons to press.
Another example comes from Google Chrome, which recently launched an on-device AI system using their Gemini model that scans for potential security threats in real time. When it detects something suspicious—like a tech support scam or phishing attempt—it displays a warning without disrupting your browsing.
Rather than creating a separate security feature that users have to actively seek out, Chrome integrates the check directly into the normal browsing experience, appearing only when needed as a pop-up widget.
Attio, another example discussed by Aakash Gupta, is growing rapidly in an extremely crowded market. They make a CRM for managing sales deals, so they are up against the likes of Salesforce.
When a user creates a new workflow, the system automatically generates a descriptive name and summary based on the workflow's function. It’s a small thing in the top left. There's no flashing ‘AI badge’, no special interface to learn, just a small moment of delight when users realize they can skip a tedious step in the process.
Another example also from Attio: when someone responds to a meeting request with "I'm busy all week," Attio's system automatically adjusts the scheduling. Simple but helpful.
Again, the technology disappears into the background and pops up contextually when it's needed.
As Aakash Gupta put it, instead of these companies asking, "How can we add AI to this feature?" the question becomes "How can AI make this experience feel magical?"
The former leads to bolted-on capabilities. The latter drives fundamental reimagination of core product experiences.
The common thread is integration with existing experiences. These features don't ask users to fundamentally change their behavior or learn new interaction patterns.
These implementations often represent one of the lowest technical barriers to entry for AI features because they augment rather than replace existing functionality. However, don't mistake that for being low-impact—as you can imagine, they make a massive difference.
In essence, each of these features is an AI agent: a workflow, a set of tools, and the ability to trigger when something happens. But instead of the final output appearing to the user via a chat interface, the result appears inside the existing user interface, or pops up over the top of the app.
So let's take it down a level and make this more tangible by explaining how that Attio feature summarizes something a user builds in their system without the user needing to lift a finger.
First, an AI agent whose purpose is to summarize things is created. It keeps its output concise, with bullet points. It also has a set of skills, which are workflows. One of these workflows triggers when a user does a specific thing, like creating a new automation in Attio's platform. The agent recognizes that signal, which tells it to complete a task, and it decides to use the special ability it has access to: the workflow called 'Summarize this thing'.
The workflow's steps are:
- trigger when a user does something in the app,
- summarize the automation,
- re-organize and re-write it for the user,
- publish.
Now that summary doesn’t appear in a chat—it appears in the user interface of Attio.
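The flow above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not Attio's actual implementation (which isn't public): the event name, field names, and the rule-based `summarize` function standing in for an LLM call are all assumptions made for the example.

```python
def summarize(automation: dict) -> str:
    """Stand-in for an LLM summarization call: turn the automation's
    trigger and steps into a short bullet-point summary."""
    bullets = "\n".join(f"- {step}" for step in automation["steps"])
    return f"{automation['trigger']}:\n{bullets}"


class SummarizerAgent:
    """Listens for an app event, runs its 'Summarize this thing' skill,
    and publishes the result into the UI model instead of a chat window."""

    def __init__(self, ui: dict):
        self.ui = ui  # stands in for the app's interface state

    def on_event(self, event: str, payload: dict) -> None:
        # Trigger: only react when a user creates a new automation.
        if event == "automation.created":
            # Skill: summarize, then publish straight into the UI.
            self.ui["workflow_summary"] = summarize(payload)


# Simulate a user building an automation in the app.
ui_state = {}
agent = SummarizerAgent(ui_state)
agent.on_event("automation.created", {
    "trigger": "When a deal moves to 'Won'",
    "steps": ["Notify the account owner", "Create an onboarding task"],
})
print(ui_state["workflow_summary"])
```

The user never opens a chat: the summary simply appears in the interface because the agent was wired to the event, not to a conversation.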
By the way, Mindset AI is launching a new workflow builder next month, plus user interface widgets that can contextually appear for users to unlock those exact types of use cases, all without needing to program anything. We’re so excited about this and can’t wait to show it off. If it sounds interesting, subscribe to our newsletter and you’ll be the first to see—after our customers, of course.
Let's move on to the other category of features.
Type 2: Agents that control existing functionality
The second category represents a more fundamental shift in how users interact with software. AI agents that don't just assist users but actively operate existing interfaces on their behalf.
Rather than requiring users to navigate complex UI, these agents allow users to speak in natural language and then the agent just does it for them, by controlling the features of those applications.
A great example here is Stripe's Dashboard Assistant.
A merchant can simply type "refund transaction #ABC123" and the assistant will navigate to the appropriate section of the dashboard, locate the transaction, initiate the refund process, and complete it with minimal human confirmation. It can create products, update pricing, and manage other administrative tasks that would typically require multiple steps through Stripe's interface.
A key takeaway here is that, for decades, companies have built products around user interfaces with buttons, forms, and dropdowns. Users have had to learn where specific things are located or how to complete certain tasks using the software. What often starts as an innovative idea, over time, becomes frustrating to use, overly complex, and outdated.
AI agents that control application features fundamentally change this.
Users no longer need to know where the "create and send a financial report" button is located in Stripe's interface—they just need to tell the agent what they want to accomplish, and the agent handles the navigation and execution.
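The pattern here is an agent layer routing natural-language requests to functions the product already has. Below is a minimal sketch under stated assumptions: the function names are hypothetical, and a regex router stands in for the LLM-based tool selection a real assistant like Stripe's would use, just to keep the example self-contained and runnable.

```python
import re

def refund_transaction(txn_id: str) -> str:
    """Existing product functionality the agent drives on the user's behalf."""
    return f"Refunded transaction {txn_id}"

def update_price(product: str, price: str) -> str:
    """Another pre-existing capability, now reachable by plain language."""
    return f"Updated {product} price to {price}"

# Each tool pairs an intent pattern (standing in for LLM intent detection)
# with the existing function it maps to.
TOOLS = [
    (re.compile(r"refund transaction #?(\w+)", re.I), refund_transaction),
    (re.compile(r"set (\w+) price to (\S+)", re.I), update_price),
]

def agent(request: str) -> str:
    """Route a natural-language request to existing functionality."""
    for pattern, tool in TOOLS:
        match = pattern.search(request)
        if match:
            return tool(*match.groups())
    return "Sorry, I can't do that yet."

print(agent("refund transaction #ABC123"))  # prints: Refunded transaction ABC123
```

The point of the design is that no existing functionality is rewritten; the agent is purely an additional entry point layered over the same functions the buttons and forms already call.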
For big SaaS companies that have been around for a while, this approach offers a compelling strategy for competing with AI-native startups.
Rather than rebuilding products from scratch, they can layer agent interfaces on top of their existing functionality. This leverages their core IP and technological advantage while delivering the conversational experience users are increasingly expecting.
Mindset AI is launching an invite-only private Beta of this ability later this year. This is something I am personally incredibly excited about. We will enable companies to make technology more human in months—instead of years of unsuccessful AI projects. Get on our mailing list if you want to be invited to the Beta.
For product teams considering this approach, the key question is: Which parts of your product require the most user training but could be done via simple natural language? These areas represent the highest-value opportunities for agents that control your features.
Type 3: Domain-specialized workflow agents
The third category is domain-specific, vertical AI agents that can take action across many systems.
What distinguishes this category is its laser focus on specific Jobs To Be Done (JTBD) for well-defined user types, often centered on a job role or task, such as the sales outreach agent.
Salesloft, a sales software company, launched AI agents for specific tasks in that job. These agents can take actions in the CRM to manage deals, push activity updates into communication channels, or pull contact information from social media.
LinkedIn's approach is another example: they've launched multiple specialized conversational AI agents for different user segments. Their latest is a dedicated interview preparation agent on LinkedIn that helps job seekers practice interviewing skills.
What's particularly powerful about these domain-specialized agents is that they often extend beyond the core product functionality into adjacent places.
For example, Microsoft has launched its Deep Research Agent, which researches across a company's internal SharePoint and external web sources. Research like this would previously have been Google's domain.
This is a big opportunity for companies, and it's why many people are saying SaaS as we know it is dead: the traditional strategies, business models, and product approaches of tech companies are becoming outdated.
Product teams at these companies should start by identifying user segments with specific recurring needs, asking themselves: What are the highest-value JTBDs for each segment? Which of those could be improved or automated with a specialized agent? This approach helps prioritize the AI investments that will deliver maximum value to specific user groups.
We see this a lot at Mindset AI. We’ve had 200+ AI agent coaches and tutors built on our platform, each made to be great at that one thing.
Type 4: Gen AI-focused (just a glance)
I’m going to spend very little time on this group because it kind of blends into other categories, but it is still worth a glance-over: the fourth type is Gen AI-first features.
A really good example here is Google Meet's new 'generate with AI' feature, which lets you create a custom background with generative AI. You can ask it for, say, a pirate or a football field background.
Another great example is Spotify, which launched an AI DJ feature, called DJ X, as well as their AI playlist generator that you can ask to produce a playlist for specific moods or certain types of situations.
You could argue this blends into other categories, so let’s move on to the final category we're seeing launched in the market today.
Type 5: AI-native standalone applications
The fifth and final category I’ll discuss today is standalone AI applications that aren't upgrades to existing products but entirely new experiences built from the ground up around AI capabilities.
What distinguishes these products is that AI isn't just a feature—it's the fundamental architecture that drives the entire user experience. These applications often reimagine established categories based on what becomes possible when AI is the starting point rather than an addition.
Google's NotebookLM is a great example of this approach and discussed by Richard Holmes. It isn't simply an enhancement to an existing note-taking app—it's a fundamentally new way of working with information. Users upload documents and the system automatically extracts insights, identifies relationships between concepts, and enables conversations with that knowledge base.
We're also seeing this pattern with document generation tools like Jasper AI and Copy AI, marketing tools built around AI. They aren't simply adding AI to existing word processors like Google Docs or Microsoft Word; they're creating entirely new content creation workflows optimized around AI.
Legacy approaches or established user expectations don't constrain these AI-native applications. They're free to reimagine entire categories based on what AI makes possible, which often results in fundamentally different user experiences.
For established companies, these standalone applications often represent both the greatest opportunity and threat. They could render existing product categories obsolete, but they also open possibilities for entirely new offerings that weren't previously feasible. These are also the highest risk for big tech companies because you are starting a product from ground zero.
It’s a wrap
Anyway, that's it for today. Thanks for listening. I hope this In The Loop episode was interesting. I will see you next week.
Did you know that In The Loop is also available as a podcast? Please follow and rate the show on your favorite podcast platform—it really helps get the word out.