Top Three Announcements From OpenAI DevDay 2025 | In The Loop Episode 33

Published by: Jack Houghton, Anna Kocsis
Published on: October 9, 2025
Read time: 8 min
Category: Podcast

OpenAI might have just made its biggest strategic bet yet—and it’s not about making AI smarter. They’re shifting focus from raw intelligence and new language model updates to moving up the stack into the application layer, trying to make AI more useful.

This week, OpenAI held DevDay 2025, one of its biggest events of the year, where it shares its vision and new product features with developers around the world. If you’ve been listening regularly, you’ll know how critical it is for companies like OpenAI to retain their developer ecosystems. DevDay always gives us an interesting snapshot of OpenAI’s strategy and where the future might be heading.

In today’s In The Loop episode, we’ll go through three of the biggest feature releases announced during DevDay—whether you should care, what you should know, and what it tells us about the future of AI over the next few years. This is In The Loop with Jack Houghton. Hope you enjoy the show.

1. ChatGPT Apps SDK

Let’s start with ChatGPT's Apps (and SDK) announcement. They launched a new “apps” feature in ChatGPT that lets you bring in third-party applications from major companies like Spotify, Zillow, Canva, Booking.com, Expedia, and Figma directly inside the ChatGPT interface.

What’s interesting is that they’ve tried this before—first with GPT plugins (which failed), then with the GPT Store (which also failed). This is their third attempt at a platform strategy—an App Store-style approach. It essentially lets you use other applications within ChatGPT, combining ChatGPT’s intelligence and context about you with the context and capabilities of third-party apps.

The key innovation here is deep context integration. These aren’t just embedded apps like Expedia or Zillow sitting inside ChatGPT—they actually communicate back and forth, sharing data and context.

OpenAI also launched an Apps SDK that lets developers build these integrations into ChatGPT. It’s all built on top of the Model Context Protocol (MCP), the open standard that OpenAI has aligned with alongside much of the industry.
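
To make that more concrete, here is a minimal sketch of what an MCP-style integration can look like, using the open-source MCP TypeScript SDK. The app, tool name, and fields are hypothetical examples, and the real Apps SDK layers interactive UI components on top of something like this.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical property-listings app exposed as an MCP server.
const server = new McpServer({ name: "listings-demo", version: "0.1.0" });

// A tool the model can call with structured arguments that ChatGPT fills in
// from the conversation (e.g. "three-bed homes in Leeds under 400k").
server.tool(
  "search_listings",
  {
    city: z.string(),
    maxPrice: z.number(),
    bedrooms: z.number().optional(),
  },
  async ({ city, maxPrice, bedrooms }) => ({
    content: [
      {
        type: "text",
        text:
          `Found 3 listings in ${city} under ${maxPrice}` +
          (bedrooms ? ` with ${bedrooms} bedrooms` : ""),
      },
    ],
  })
);

// stdio transport is the simplest way to run and test the server locally.
await server.connect(new StdioServerTransport());
```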

They’ve also launched an app directory and plan to introduce monetization features soon, including the Agentic Commerce Protocol for direct in-chat purchases. That protocol is part of a broader trend—standardizing how AI systems can purchase goods directly through conversational interfaces like ChatGPT.

The scale here is huge. ChatGPT now has 800 million weekly active users—which is staggering—and around 4 million developers building on OpenAI’s platform in some form. Just through its API, OpenAI is processing about 6 billion tokens per minute. For perspective, that’s more weekly users and activity than TikTok ever had.

This points to the future of all software interactions. Every application—whether consumer, B2B SaaS, FinTech, or biotech—will eventually become conversational.

That’s exactly what we do with our customers at Mindset AI. We work with B2B SaaS providers, product teams, and designers to help them turn their platforms into conversational interfaces that can control features, access user context, and take action across applications. We’re seeing it happen in real time, and this announcement is another major proof point.

Think about it: in many cases, you don’t want to jump between ten different interfaces just to complete a task. 

The demos were impressive. One user logged into Coursera through ChatGPT to watch educational content and courses. Another demo integrated Zillow, which for U.K. listeners is a U.S. platform for buying and selling homes. There was also one with Spotify.

This follows a familiar pattern from major platform providers like Facebook, Google, and Apple. Right now, we’re at step zero—intense competition between several providers, with no clear winner yet. OpenAI has clearly identified context and memory as their differentiator, and they’re trying to capitalize on that advantage by opening a third-party platform through the App SDK. But execution will be key.

If you think back to Apple building iTunes, there were countless other companies trying to do the same thing, even ones that should have done it well, but they failed because the experience didn’t feel natural. Sony, for example, tried to build its own version, as did other music companies, but they all missed the mark.

That’s something I felt a bit during ChatGPT’s demos—it occasionally came off as gimmicky. In one, they asked ChatGPT to design a company logo through Canva, then build a presentation deck. The deck example made sense, but using ChatGPT to design a logo just felt odd. You’d probably just go straight to Canva for that.

It highlights how people should view this: for certain use cases—like searching for or buying homes—it’s perfect. But for deep work, users still want to engage directly with dedicated platforms. That might change as ChatGPT’s technology evolves, but for now, some tasks feel better suited to standalone tools.

Ultimately, this move is about OpenAI’s bid to capture and centralize user context. Once users experience something like Coursera through ChatGPT as a personal tutor, they may not want to go back to a standalone experience. ChatGPT holds all the context—everything leading up to the session, during it, and after—which positions OpenAI to become the everyday assistant by owning every layer of interaction context. I think this is worth digging into further.

2. Context capture

The second theme is context, and OpenAI’s attempt to capture all of it. Apps running inside ChatGPT will have access to the full conversational context, share information back and forth, and integrate with ChatGPT’s memory feature to deliver personalized experiences across applications.

OpenAI calls this “talking to apps” functionality. The first time you use one of these integrations, ChatGPT will show a privacy policy outlining what data may be shared—similar to Google’s approach. These apps can pull context from your chat history to provide more personalized responses based on previous conversations.
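
Purely as an illustration of the idea (this is not OpenAI’s published schema, and every field name here is made up), the context handed to an app, and the structured data it hands back, might look something like this:

```typescript
// Hypothetical shape of the context an in-chat app might receive.
// Illustrative only; not OpenAI's published Apps SDK schema.
interface SharedConversationContext {
  userMessage: string;       // the message that triggered the app
  relevantHistory: string[]; // prior turns the user has agreed to share
  memoryFacts: string[];     // long-lived facts from ChatGPT's memory feature
  locale: string;            // e.g. "en-GB"
}

// What the app returns so ChatGPT can keep reasoning over the result.
interface AppResponse {
  displayText: string;                     // rendered in the chat UI
  structuredData: Record<string, unknown>; // context for follow-up questions
}

const example: SharedConversationContext = {
  userMessage: "Show me three-bed homes near good schools in Leeds",
  relevantHistory: ["We agreed a budget of around 400k last week"],
  memoryFacts: ["User is relocating for a new job in early 2026"],
  locale: "en-GB",
};

console.log(example);
```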

What we’re seeing is OpenAI’s push to become the everyday assistant. But that raises serious questions. Some of it makes perfect sense—like using Zillow to browse houses and then asking GPT follow-up questions such as, “How close is this to a good school or park?” or “Are there any new restaurants opening nearby?” Because GPT has context from your browsing, it can deliver highly relevant answers. Once users experience that level of contextual help, going back to a less connected experience will be tough.

This is where OpenAI could start to pressure Google. While Google is far ahead today, that gap could narrow quickly. At Mindset, we see this shift happening all the time. We’ve just launched a memory feature in beta, helping customers turn their applications into interconnected agents that feel like a single conversation. Users can navigate apps through natural dialogue and seamlessly take actions across tools.

It’s fascinating to see how ChatGPT is mirroring and expanding the kind of technology and design work we’re doing at Mindset AI—alongside some incredible partners—through contextual awareness. I expect most major providers will follow suit.

When we combine these two worlds—our customers making apps conversational and ChatGPT pushing deeper context integration—technology will increasingly start to feel human.

That said, I’m not entirely comfortable with OpenAI holding that much context about me. I don’t love the idea of my entire chat history being used for web search or, eventually, targeted advertising. The more detail they collect, the more they’ll try to monetize it.

Some people are fine with that; I’m more cautious. We never really got social media right in the early days, and many of us now regret the lack of decisions made around regulation, privacy, and data protection. This feels like déjà vu.

3. AgentKit: OpenAI’s agent builder

Moving on to the third major announcement from DevDay 2025—AgentKit.

AgentKit is essentially a workflow builder for creating and managing AI agents. It includes a visual drag-and-drop builder for agent workflows, similar to orchestration tools like n8n or Zapier, and a chat interface where you can test and interact with those workflows.

It also comes with an evaluation platform using trace grading, which scores multi-step agent decisions to ensure they’re producing relevant, high-quality outputs. There’s a connector registry designed to manage enterprise integrations, and it supports MCP out of the box. OpenAI also teased a reinforcement fine-tuning feature for creating highly specialized agents.

I like a lot of their approach here. It’s very similar to what we’ve seen from n8n, Zapier, and Lindy AI. I haven’t yet tried OpenAI’s version, but I’ve used the others, and the functionality looks nearly identical.

This reinforces something we’ve said for a long time at Mindset AI: building agentic experiences should happen through configuration, not code. Sure, there are cases where code makes sense, but at its core, an agent workflow is just a sequence of prompts, tools, and contextual triggers—each step feeding the model the right information at the right time.

All the real magic happens at the language model layer. So seeing OpenAI validate our view that agents should be built in configuration, not code, is encouraging.
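
As a rough sketch of what “configuration, not code” means in practice, here is an agent workflow expressed as a list of prompt steps and executed by one generic loop. This uses the standard OpenAI Node SDK rather than AgentKit, and the model choice and step names are my own assumptions.

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// The workflow is configuration: an ordered list of prompt steps,
// not bespoke code for each agent.
const workflow = [
  { name: "summarise_request", prompt: "Summarise the user's request in one sentence." },
  { name: "draft_reply", prompt: "Draft a helpful reply based on the summary." },
];

async function runWorkflow(userInput: string): Promise<string> {
  let context = userInput;
  for (const step of workflow) {
    const response = await client.chat.completions.create({
      model: "gpt-4o-mini", // assumption: any chat-capable model would do
      messages: [
        { role: "system", content: step.prompt },
        { role: "user", content: context },
      ],
    });
    // Each step's output becomes the next step's input context.
    context = response.choices[0].message.content ?? context;
  }
  return context;
}

runWorkflow("My invoice from last month looks wrong, can you help?")
  .then((result) => console.log(result))
  .catch(console.error);
```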

However, I do think this launch feels narrow in scope. Everything is confined within OpenAI’s own ecosystem. You can only use their models, their evaluation system, their deployment tools—it’s all locked in.

And while they’re adopting open standards like MCP, their overall strategy is clear: once you’re in the OpenAI ecosystem, you don’t leave. It mirrors Apple’s playbook—build a massive distribution platform and keep users inside it.

That makes sense strategically, especially as OpenAI prepares to release new product experiences with Jony Ive—the legendary designer behind many of Apple’s best products. But it also reinforces their intent to create an all-encompassing ecosystem, which many developers won’t want.

From a developer or enterprise perspective, that kind of lock-in is problematic. It might work well for internal use cases, but not for organizations that want flexibility or best-of-breed solutions.

Platform dependency is always risky. These platforms lure developers with open access, exciting tools, and wide distribution—but over time, they tighten control. We’ve seen it before: social media algorithms cutting organic reach, App Store revenue shares climbing, or companies like Facebook competing directly with their own developers.

At Mindset AI, nearly all our customers want the freedom to use the best-of-breed tools available, not to be locked into one ecosystem. Nobody knows what the next decade of AI will look like, and flexibility is key. You want to build long-term value without having to rip out and rebuild everything just because you chose the wrong platform.

What OpenAI DevDay 2025 tells us about the future of AI

Anyway, let’s wrap up and summarize what we’ve been talking about. There’s been a wave of new features: the App Store-style apps directory, the AgentKit agent and workflow builder, and new context-sharing capabilities for companies building on ChatGPT.

What we’re really seeing is the foundation of an operating system for AI. OpenAI is trying to own all interactions between humans and systems. And it’s only a matter of time before Google, Microsoft, and Apple respond—especially given recent reports that Apple may be looking to replace Tim Cook as CEO.

We’re heading into a major platform battle that will define how we interact with technology over the next few years. I, for one, am excited to see how it plays out.

What I’ve found most encouraging from this DevDay is that OpenAI’s releases will open the door for more people to experiment with agentic technology—to build, test, and innovate for themselves.

That’s it for this week. Thanks for tuning in, and I’ll see you next time.
