What Is AI Workslop & How To Fix It | In The Loop Episode 34
A new word that just entered the workplace lexicon—workslop. If you haven’t heard of it yet, you will soon. Stanford researchers recently coined the term, and it’s spreading like wildfire online.
A report from Stanford researchers, published in Harvard Business Review this September, addressed one of the biggest complaints about AI: most of the work it produces is a bit crap. The study found that 41% of workers surveyed said they’d received workslop from their colleagues in just the past month.
In today’s episode, we’re discussing the rise of workslop, the real financial cost of it, whether this is all just noise, and how we can fix it going forward. This is In The Loop with Jack Houghton. I hope you enjoy the show.
What is AI workslop?
Workslop refers to AI-generated work—content that looks polished at first glance but is ultimately rubbish.
This might be a slide deck or report that looks impressive when you skim through it, but as soon as you dig deeper or try to use it for something practical, you find wrong numbers, repeated text, every word capitalized, or weak and repetitive arguments. I’ve got a take on the real cause of this, which we’ll come to.
In a survey of over 1,100 U.S. workers, 40% said they’d received workslop. According to the study, it takes about two hours of extra work to fix it. As a result, the researchers estimated it costs about $186 per employee per month—roughly $9 million a year for a company with 10,000 employees.
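If you want to sanity-check that figure, here’s a rough back-of-the-envelope sketch in Python. It assumes the per-employee cost is scaled by the roughly 41% of workers who actually receive workslop; that scaling is my own assumption about how the numbers line up, not something spelled out in the episode.

```python
# Rough sanity check of the workslop cost estimate (my assumptions, not the study's exact model)
cost_per_employee_per_month = 186   # USD per affected employee per month, from the study
employees = 10_000                  # company size used in the episode's example
prevalence = 0.41                   # share of workers receiving workslop, per the survey

annual_cost = cost_per_employee_per_month * 12 * employees * prevalence
print(f"Estimated annual cost: ${annual_cost:,.0f}")  # ≈ $9.2 million, close to the ~$9M figure
```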
That might sound exaggerated, but it highlights a real issue at work right now. We all know the people who do this—the team members who send spotty AI work. As AI becomes more mature and widespread, those people will pay a price. Nearly half the surveyed workers said they see slop senders as less creative, less capable, and less reliable.
As one Redditor said:
Another quote from the study makes it even clearer: “If you use AI to pretend to do 20 tasks, you’re a Trojan horse forcing other people to do the real work.” In other words, by doing rubbish work, you’re creating more work for others.
Workslop itself isn’t new—it’s just a new blind spot enabled by technology. In the past, it looked like a rushed piece of work copied and pasted from somewhere on Google. Back then, it was harder to spot—and harder to produce. But now it’s simple to do at scale because AI is so good at generating words.
This mirrors a lot of conversations we’ve had internally as a company. Barry, the CEO and co-founder of Mindset AI, compares it to giving someone a Formula 1 car. Just because you have one doesn’t mean you can drive it. It can go fast—but if you don’t know how to handle it, you’ll crash.
Data from well-regarded studies shows that AI can accelerate work when it’s used correctly. A recent MIT and Nielsen study found that ChatGPT users writing business documents finished about 59% faster and produced better outputs than those who didn’t use AI. So it’s not that AI inevitably causes slop—it just needs guidance so it doesn’t produce rubbish.
If it does produce slop, the issue lies with the person, not the tool. It happens when someone looks at an AI draft and says, “That’ll do.” You have to test it, interrogate it, and actually understand the topic well enough to spot what’s wrong. You have to use initiative to move it from 65% or 80% done to something that’s truly finished and ready to send.
Every medium has its nuances—writing, coding, creating art—none of it can be done well without applying real thought and analysis. I’m not saying people don’t care; many do. But this is where the efficiency trap starts to creep in with AI workslop. It’s an interesting phenomenon—how clever, well-intentioned people end up creating rubbish.
The efficiency trap: Why workslop happens
Dr. Cornelia Walther at Wharton has documented what she calls the efficiency trap. It explains how workslop can spread through an organization despite everyone’s best intentions. She outlines four predictable stages.
Stage 1: Initial productivity gains and experimentation
In the first stage, workers experiment with AI cautiously. They use it selectively and maintain a lot of control. Productivity genuinely improves—tasks that once took days might now take hours—and everyone gets excited.
Stage 2: Managerial recalibration and integration
In the second stage, management starts to notice and get involved. Operating under resource optimization goals, they raise expectations for team output. The logic makes sense: if technology helps people deliver more in less time, why not request more deliverables?
As AI becomes normalized in everyday workflows, what began as a tool becomes a habit.
Stage 3: Dependency acceleration and systematic reliance
By stage three, these escalating demands lead people to delegate increasingly complex tasks to AI. What began as selective assistance evolves into reliance. Tasks that require individual analysis or critical thinking start becoming AI-driven by default.
Demand accelerates, and the lack of clear processes around this work causes workslop to increase.
Stage 4: Performance expectation lock-in and AI addiction
At this point, productivity improvements become the new baseline. Deadlines shorten, giving people less time to do more. The number and complexity of projects both rise, and those efficiency gains become permanently built into performance standards.
Workers reach what Walther calls technological addiction—feeling psychologically incapable of meeting demands without using AI—and workslop continues to grow.
This is one of the reasons I don’t believe AI is a bubble. I’ve witnessed this, I’ve felt it, and I’ve been part of enabling it.
Your first instinct might be that people just need better training, because that’s often the default response when talking about AI at work. And you’d be half right—but there’s also a human element here that adds nuance.
A large study from BCG, Harvard, MIT, and Wharton examined 758 consultants—roughly 7% of BCG’s workforce—who were working with GPT-4. For tasks within AI’s capabilities, it really helped. Below-average performers improved by 43%, and even top performers saw gains of around 17%. The quality of their outputs increased by more than 40% compared to control groups—a big jump.
But for tasks slightly outside AI’s capability range, training didn’t help. In fact, it made things worse. Those who had been trained to create better prompts were more likely to accept AI outputs without question, assuming that if they prompted correctly, the response must also be correct.
Training created overconfidence, without fostering the critical evaluation skills, domain expertise, or processes needed to catch mistakes.
Prof. Ethan Mollick, a fantastic thinker in this space, calls this the jagged technological frontier—where AI’s capabilities aren’t uniform. Some tasks are incredibly easy for AI, while others that seem almost identical are surprisingly difficult. That boundary is unpredictable, and training alone doesn’t help people navigate it.
A computer science professor on Reddit’s r/ChatGPT captured this perfectly:
Which is why this isn’t really a technology problem—it’s a human systems problem.
How to fix the bigger-picture issue of AI workslop
So how do we fix this mess? To put it plainly, domain expertise matters more than ever. If you’re not sure what good looks like, AI can generate plausible first drafts whose problems slip right past you. But if you know your field, you’re the one who can spot the mess—the hallucinations and nonsense that sometimes come out of it. You see this a lot online, with writers and coders complaining that AI often makes subtle errors that a novice wouldn’t notice.
This ties into an analogy from MIT and Harvard: the centaur vs. cyborg model. Some workers act like centaurs, dividing tasks between AI and themselves—“ChatGPT, give me an outline; I’ll write the content; you review it.” Others try to become cyborgs, fully blending AI into their workflow, sometimes over-relying on the system. The most effective teams use a hybrid approach, leveraging AI for speed—creating templates, drafting text, or handling routine tasks—but with a human expert steering the output to maintain quality.
Training is another key factor. Many workers jump into new tools with very little guidance. Without understanding the underlying processes, they’ll struggle to get good results. A BCG study found that only about a third of employees feel properly trained on AI, which explains why mistakes and inefficiencies are so common.
Process and quality control also play a huge role. Few workplaces have strong review cycles around AI-generated drafts. In the past, an email, report, or slide deck would be created by a person and then reviewed by a colleague. Now AI can generate content so quickly that review steps are often skipped, especially under tight deadlines.
Think of product teams, for example. You might produce an outline of a feature brief, then a specification document that turns that vision into a plan. Each stage requires review. AI can speed up these steps by 60–70%, but without proper checks, errors can multiply just as fast. The same goes for writing: AI can ideate, research, and draft, but human review is essential. Guardrails, context, and frameworks should be built into the process to ensure quality.
Finally, expectations need to be managed. Many managers assume that more AI output equals better results. That pressure often drives workers to produce content quickly, sacrificing quality. Every project has levers—speed, scope, and quality—and if you always pull the speed lever, quality is the one that gives.
Solving this is about resetting expectations. Training programs should focus on using AI to plan and ideate, while leaving the heavy thinking and final execution to humans. The goal is not endless acceleration—it’s sustainable, high-quality output.
Closing thoughts
To conclude the episode, the story should be clear by now. Workslop isn’t an AI problem—it’s a symptom of underprepared people and processes. When we rush AI-generated work, skimp on training, hide how we’re using it, or fail to put processes around this “Formula 1 car,” we get sloppy results.
Conversely, when an organization invests in smart AI habits, the benefits can be real and substantial. For listeners, the takeaway is actionable: if you want good AI output at work or for personal use, treat it like any other tool. Learn to use it properly, double-check its work, loop humans into the process, and assume it won’t get the answer right the first time.
Thanks for listening. That’s it for this episode—I’ll see you next week.
