Three Reasons Why Apple Is Cooked | In The Loop Episode 20

Published by

Jack Houghton
Anna Kocsis

Published on

June 19, 2025

Read time

9 min read

Category

Podcast

This week brought one of the biggest tech conferences of the year, WWDC 2025, and lately Apple hasn’t been in the news for many positive reasons. Spoiler alert: it only got worse.

The conference was disappointing at best. What made it even worse was that just before it kicked off, Apple’s team released a research paper that took a direct swipe at the major AI labs, claiming that large language models don’t reason—they just memorize.

In today’s episode of In The Loop, we’re diving into the conference, that research paper, and the news that Jony Ive—Apple’s most iconic designer—has just joined OpenAI. I’ll explain why I think this spells serious trouble for Apple.

This is In The Loop with Jack Houghton. I hope you enjoy the show.

All eyes on Apple WWDC 2025 but…

Let’s start with this year’s big keynote at the Apple WWDC25 conference—the moment when they usually announce the most exciting updates, features, and strategies of the year. Naturally, you'd expect some AI news, maybe a major Siri update, or something genuinely interesting.

So, what was the big headline?

Surprisingly, their headline announcement was that they’ve revolutionized the user interface with something they’re calling Liquid Glass.

Now, before you get too excited, let me translate that from marketing speak to plain English: they basically took a trend called glassmorphism—popular back in 2022 and arguably in Windows back in 2016—and slapped a new brand name on it. It’s a lot like what they did with “Apple Intelligence.”

If you’ve never heard of glassmorphism, it’s a design fad where everything looks like frosted bathroom windows. Rebranding it as Liquid Glass—just like the Apple Intelligence rebrand—is a perfect example of what we’re seeing more and more: branding over substance.
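
To see what that look actually is in code, here’s a hypothetical SwiftUI sketch of the effect (my own illustration using the stock material styles, not Apple’s Liquid Glass API): a translucent, blurred layer floating over colorful content.

```swift
import SwiftUI

// Hypothetical sketch of the "frosted glass" look described above, built with
// SwiftUI's stock material styles (iOS 15+). Not Apple's Liquid Glass API.
struct FrostedCard: View {
    var body: some View {
        ZStack {
            // Something colorful behind the glass so the blur is visible.
            LinearGradient(colors: [.purple, .blue, .teal],
                           startPoint: .topLeading,
                           endPoint: .bottomTrailing)
                .ignoresSafeArea()

            Text("Frosted bathroom window")
                .padding(32)
                .background(.ultraThinMaterial,                      // translucent blur layer
                            in: RoundedRectangle(cornerRadius: 24))  // the rounded glass card shape
        }
    }
}
```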

If this trend continues—toward more frosted, semi-opaque design—we’re going to end up with interfaces that are barely usable. For people with visual impairments, this is especially problematic. Good luck navigating a frosted-glass App Store.

What I found particularly amusing is that Windows did this nearly two decades ago with Windows Vista. And anyone who remembers Vista knows how that turned out—probably the worst OS release in history.

There was a tweet from Tom Warren that summed it all up.

Now, this might make me sound like an Apple-hater, but that’s not the case at all. I’ve got a Mac, an iPhone, an iPad—I’m firmly in the ecosystem. But I’m incredibly disappointed. And honestly, I think Apple’s in real trouble.

So let me walk you through the three reasons why.

Reason #1: Siri—and Apple in general—lagging behind

The first reason Apple’s in trouble is Siri—and the fact that they still don’t have a unified vision for what Siri is supposed to be. That’s kind of unbelievable to me. OpenAI is literally showing them what to do: a universal assistant across every device. That’s the vision—and it’s not even radical. It’s just some old guys on a stage spelling it out.

This isn’t a small oversight. This failure has been building for over a decade. It’s a masterclass in how to fumble one of the biggest tech revolutions since the smartphone.

Siri launched in 2011—14 years ago—and somehow, it’s arguably gotten worse.

Meanwhile, Google has launched Gemini, which can actually hold real conversations. It understands the context of your apps and your screen. You can point your camera at things in the real world, and it can tell you what it sees. It integrates across applications seamlessly.

One big reason for Apple’s failure here? Internal dysfunction. For years, Apple’s been at war with itself. John Giannandrea—who they poached from Google in 2018 to run all of Apple’s AI—was supposed to lead this charge. But Craig Federighi, Apple’s head of software, reportedly had zero faith in him. So what did he do? He started a competing AI team.

Yes—Apple had two separate AI teams in the same company, competing for resources and building rival products. The team under Federighi even nicknamed the other team “The Aimless.” That’s a pretty toxic culture when employees start mocking each other like that.

And it gets worse. When ChatGPT launched, Giannandrea dismissed LLMs, saying they had very limited real-world utility.

This kind of culture reminds me of early Apple—when Steve Jobs was tearing through the company doing incredible things, but in a very divisive way. Except now, the vision is missing.

The dysfunction got so bad that Tim Cook reportedly lost confidence in Giannandrea and stripped Siri from his control. He handed it over to Mike Rockwell, the guy behind the Vision Pro—that $3,500 headset that maybe 1,000 people bought.

But it’s not just culture. Apple’s infrastructure is also outdated. Their internal data centers are running on around 50,000 GPUs—many of them five years old. Their AI teams have even been begging Google and Amazon for computing power. The same Google and Amazon they’re supposedly competing with.

And remember when Apple promised a huge set of AI features for the iPhone 16 launch? Well, it turns out only half of their proposed chip budget got approved. Apple’s CFO told the AI team to “do more with less.”

Now, I’m not saying spending more automatically leads to better results. But when you’re a tech company not willing to invest, not making bold bets, not willing to tear up the playbook or your entire business model—you're in trouble.

In this moment, product–market fit is shifting faster than ever. Everyone’s needs and expectations are changing. And what frustrates me most is this: Steve Jobs wouldn’t have let this happen.

He would’ve cut 75% of the fat. He would’ve canceled every product that wasn’t poised to deliver triple-digit growth. He would’ve ruthlessly focused the teams. And I guarantee he’d be in the design and software meetings every single day, making sure it shipped.

That kind of founder energy—the obsessive, hands-on leadership that drives breakthrough results—is what made Apple great. And I guarantee it’s what they’ve lost.

And that brings us to the second reason.

Reason #2: Their take on AI—and The Illusion of Thinking paper

The second reason I think Apple is in serious trouble is how they think about the world—and AI in particular.

They know they’re behind. But instead of making bold moves to catch up, they’ve defaulted to defending their position with academic skepticism. A few episodes ago, I said that if I were Apple, I’d just go out and buy an AI lab. Spend big. Leapfrog the competition. Own a foundational model so you can own every inch of the stack—from silicon to interface—and supercharge the Apple user experience across every device.

That’s always been Apple’s strategy: full-stack control.

But instead, they released a research paper. Right before WWDC. When the world was expecting something bold.

This paper is called The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity.

Here’s what they did:

  • They tested models like Claude on the Tower of Hanoi puzzle—a logic task with a known algorithm.
  • Claude could solve it with six disks but struggled at seven.
  • Apple’s conclusion? LLMs don’t reason. They just pattern match.

Predictably, this became fuel for AI skeptics like Gary Marcus, who used it to reiterate his opinion that LLMs are useless.

But here’s the kicker: It wasn’t about reasoning. It was about token limits—the model equivalent of running out of paper to write your answer.

One researcher replicated the experiment and showed the models were:

  • Explicitly stating they couldn't complete the tasks due to output constraints.
  • Offering the full algorithm as an explanation.
  • Writing the code to solve it when allowed.

Apple's researchers, meanwhile:

  • Prevented the models from coding,
  • Then acted surprised when they couldn’t solve coding problems.

Other researchers ran the same tests, but allowed the models to use their full capabilities—reasoning, writing, coding—and the models smashed the task, even with hundreds of disks.
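
To make the “running out of paper” point concrete, here’s a minimal sketch (my own illustration in Swift, not the paper’s test harness or any model’s actual output): the optimal Tower of Hanoi solution takes 2^n - 1 moves, so a transcript of every move grows exponentially with the disk count, while the algorithm that generates those moves stays a few lines long.

```swift
// Minimal sketch, assuming nothing about any specific model: the full move
// list for n disks has 2^n - 1 entries, but the generator itself is tiny.
func hanoiMoves(_ n: Int, from source: String = "A",
                to target: String = "C", via spare: String = "B") -> [String] {
    guard n > 0 else { return [] }
    return hanoiMoves(n - 1, from: source, to: spare, via: target)  // clear the top n-1 disks
         + ["move disk \(n) from \(source) to \(target)"]           // move the largest disk
         + hanoiMoves(n - 1, from: spare, to: target, via: source)  // restack the n-1 disks
}

print(hanoiMoves(3).count)  // 7 moves for 3 disks
for n in [6, 7, 10, 20] {
    // Closed form for the optimal move count: 2^n - 1.
    print("\(n) disks -> \((1 << n) - 1) moves to write out")
}
// 6 -> 63, 7 -> 127, 10 -> 1023, 20 -> 1048575: the move transcript explodes,
// while the ~10-line function above stays the same size.
```

Forcing a model to print that entire transcript inside a fixed output window is a very different test from asking whether it knows the algorithm.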

So what does Apple do? It plants its flag on “LLMs can’t reason” and essentially says: “This whole thing is fake anyway.”

This tweet summarized it perfectly:

“Be Apple. Richest company in the world. Every advantage imaginable. Go all-in on AI. Make countless promises. Get lapped by everyone. Then, two years in with nothing to show for it, write a paper about how none of it matters.”

And honestly? I don’t think it even matters whether these models meet some academic threshold of "reasoning." What people care about is: Can it do this task better than the last model? Can it do it faster?

That’s it.

Apple is a consumer company. They should be obsessed with helping people do what they want to do, faster and easier—not debating whether GPT-4 meets a philosophy professor’s bar for cognition.

This entire mindset shift—away from risk and product intuition and into academic footnotes—is what’s hurting them. And it brings us to the third reason they’re in trouble…

Reason #3: Their visionary designer is developing the next generation of AI hardware at OpenAI

The third reason I think Apple’s in trouble—and this is the one I find most interesting—is that their core business might not be there forever. Or if it is, it’s going to change drastically. And Apple has shown very little ability to fund, execute, or more importantly, manage that kind of change.

Now, if you don’t know Jony Ive, he’s the guy—aside from Steve Jobs and maybe Tim Cook—who made Apple what it is. He led the design of the iPod, iPhone, iPad, and basically every iconic Apple product. He turned Apple from a computer company into a global lifestyle brand.

And now? He’s working with OpenAI.

He sold io, his AI hardware startup, to OpenAI and partnered with Sam Altman to build what they’re calling the “next-generation AI device.” Altman has said he wants to ship 100 million units of this new product and believes it could add $1 trillion in value to OpenAI.

There are even leaked internal recordings where Altman describes the device as:

  • Fully aware of its surroundings,
  • Subtle enough to sit on your desk or in your pocket,
  • Positioned as a third major device—after your MacBook and iPhone.

Meanwhile, what’s Apple doing? Making app icons a little more transparent in iOS 26 and publishing research papers arguing that AI doesn’t really “reason.”

Let that sink in. The designer who helped define Apple’s identity is now building the next big thing—with Apple’s biggest AI rival.

And it’s not just OpenAI. Meta is already selling Ray-Ban smart glasses:

  • They take great photos and videos,
  • Have solid audio,
  • Look good enough for people to actually wear them.

Google is right behind:

  • Partnering with Warby Parker,
  • Shipping good-looking Gemini-powered glasses,
  • Offering live translation, visual search, AI tools, all deeply integrated.

So here’s the uncomfortable question: What is Apple going to counter with? Siri?

Siri has been the butt of jokes for years. If the future of consumer tech is AI-first and device-first, Apple is looking flat-footed.

Now, I’m not saying the iPhone is going to vanish overnight.

But I am saying that Apple doesn’t look capable of adapting to this shift. And if they don’t, they’ll lose their grip on the device market they’ve dominated for nearly two decades.

Closing thoughts

So I’ll leave you with one final important take before I say goodbye today.

I don’t think this is really about AI, or design language, or even corporate dysfunction. I think it’s about a company that built its entire modern identity on being ahead of the curve—on taking existing technology and making it better, simpler, more beautiful, and redefining whole product categories in the process.

The iPhone wasn’t the first smartphone. It was just one of the few that didn’t suck. That’s what Apple’s always been great at—letting others fumble around, and then showing up and doing it right. But this time, they’ve shown up very late, empty-handed, relying on everyone else’s technology, and—worse—spending time publishing research papers that feel, to me at least, kind of petty.

And what’s frustrating is that this was avoidable. Siri launched in 2011. They had a seven-year head start. But instead of building on it, they got in their own way. Internal politics, egos, and organizational noise killed any real progress. And while that was happening, OpenAI was busy changing the world.

Now, to be fair, Apple still makes great products. I’ve always appreciated how incredibly difficult it is to build and scale beautiful, functional hardware. And Apple’s done that better than almost anyone. But as someone who’s long admired the way they brought world-class industrial design to the mainstream, I can’t help but feel disappointed. This whole thing is a lesson in what not to do if you’re leading a company right now.

Embrace focus. Make bold bets. Don’t sit still—because a few years from now, you’ll probably regret it.

That’s it for today. I hope you enjoyed the episode. And as always, feel free to share it in a Slack channel, drop it in a WhatsApp group, or send it to a friend who’s still all-in on Apple and might want to hear a different perspective. See you next week.
