In The Loop Episode 13 | Cluely: The AI App That Made Cheating Viral—And Maybe Acceptable?

Published by

Jack Houghton
Anna Kocsis

Published on

May 1, 2025

Read time

7 min read

Category

Podcast
In April 2025, a startup called Cluely caused significant debate and controversy with a provocative statement: "We want to cheat on everything." The founders of this company used their AI tool to secure offers from some of the most prestigious companies in the world. Their marketing approach sparked considerable debate centered on common fears about AI—that it makes us lazy, discourages personal development, and produces subpar output straight from ChatGPT.

However, many argue the opposite. What this startup has really done is bring forward an essential discussion: what constitutes cheating in an AI-driven world? We're going to explore exactly this topic in today's episode. This is In The Loop with Jack Houghton. Hope you enjoy the show.

Cluely’s story

So a topic has kind of gone viral online: in early 2025, a 21-year-old called Roy Lee raised just over $5 million for his company, Cluely. You have probably seen the viral ad on TikTok or LinkedIn.

He had actually been suspended from Columbia University for what was described as cheating on tech interviews. Lee and his co-founder built an AI tool that made it easy to pass technology companies' interview processes for internships, specifically the LeetCode-style assessments, a common type of technical screening used to gauge a candidate's coding competence. Lee claims that he had gone through the student handbook carefully and proved that he didn't violate any policies.

He wanted to demonstrate that his AI tool could secure offers from the world's best firms. He actually received offers from Meta, TikTok, Capital One, and Amazon. He even recorded the Amazon interview and published it online, which sparked further outrage and led to his suspension from Columbia University after Amazon pressured them to take disciplinary action.

Despite this, he doubled down on his viral, controversial approach. He announced the company with a product video and a manifesto titled "We Want to Cheat on Everything." The tagline was "Invisible AI to cheat on anything," and the video now has over 10 million views across platforms. Take a look.

The actual product listens to audio and feeds users information in real-time when they're on a laptop, displaying answers that could be used during job interviews, sales calls, exams, or lectures.

Cluely's stance and marketing narrative triggered heated discussions and an evident clash of values. The backlash extends beyond the product itself to competing visions about technology's role in society and how it should interact with humans.

Is the use of AI cheating? Or common sense?

You can categorize the criticisms into three distinct buckets: the authenticity argument, the importance of human development, and societal impact. Each represents a different concern about this tool and about AI's broader role in society.

Let's dig into those.

Authenticity

Some of the most passionate reactions online have centered on dishonesty. Cluely’s value proposition is that you can cheat and lie, enabling people to misrepresent their capabilities or knowledge. This objection hinges on a distinction between tools that enhance capability versus those that merely simulate it.

For example, calculators extend your mathematical reasoning abilities, but Cluely's proposition fundamentally misrepresents the user's actual understanding of a topic.

Human development

The second criticism concerns human potential and development. Many argue that humans develop skills through struggling in real-life situations. This viewpoint suggests that by removing the friction of not knowing something, AI eliminates opportunities for learning.

The argument emphasizes the importance of durable skills like creative thinking and ethical judgment—capabilities that, the reasoning goes, AI shouldn't replicate, because without them we wouldn't need humans at all. It reflects a vision of technology not as a replacement for human struggle but as an enabler helping people overcome challenges.

Societal impact

The third criticism discusses what adopting "cheating technology" means for society and its impact on collective ability and trust.

If candidates and students are assumed to be using AI to cheat, recruiters and educators face difficulties trusting CVs, proposals, or other submissions. When everyone cheats, people stop reporting it, creating widespread cynicism. This argument is based on the idea that when individuals cheat without caring about personal or societal development, collective intelligence collapses.

The debate about technological progress

Before sharing my own take, let’s take a look at Lee's responses to his critics: he reframed the debate around technological progress itself.

He argued that every meaningful technology has faced moral panic, from the printing press to VR, and that the history of innovation is simply the history of redefining social acceptability. He drew comparisons to food delivery services and childcare outsourcing, taking a libertarian perspective where market adoption validates value. He positioned critics as technological conservatives resisting inevitable change and attempted to shift the conversation from ethics to adoption.

Regarding my view, I think it's easy to forget that these problems exist with or without this company. A previous episode covered the memo from Shopify’s CEO stating no one’s allowed to hire people unless it is proven that AI can't do the job—all that's happened here is they've said the quiet part out loud.

Both arguments for and against this are valid. While I'm not always a fan of controversial messaging, I admire someone successfully executing such a viral stunt. Starting a company, gaining momentum, and attracting attention is challenging.

I believe narratives matter in society. Narratives both big and small drive culture, and today most of them come from advertising and media on television or YouTube. Narratives that promote laziness concern me.

Understanding information is more critical than merely remembering it, and I fundamentally believe in people pushing themselves to be their best. However, this doesn't mean I don't believe in working smart. I want people to use these tools because I think the perception of what they're for is essential—they're not there to make you lazy but to help you progress.

This will pave the way for new tools that everyday people find valuable, but I hope people understand the distinction between knowing something and understanding it.

What becomes clear from this debate is that there will be a spectrum of contexts where AI assistance is more or less acceptable. On one end, most people wouldn't object to a salesperson having better real-time answers to questions. On the other end, using AI to fake knowledge in romantic relationships seems problematic. Cluely likely chose this example to generate viral attention—and generate attention they have.

It's the middle ground where things get complicated, like the coding exam example. Companies use assessments to evaluate candidate abilities, but a candidate using an AI tool is essentially demonstrating how they'd work in practice with these tools, which are now commonplace in the workplace.

This raises questions about the value of learning skills the hard way versus using available tools. You could argue that using these tools has become a key competency.

History, innovation, impact

This type of question has historical precedent. Throughout history, every major innovation has caused an outcry about creating lazy or less educated people.

When radio emerged, people worried it would harm children's ability to read. When calculators appeared, teachers protested, and some schools banned them, fearing the loss of mental arithmetic skills. When the internet made plagiarism easier, many feared Googling would make people stupid.

In each case, society adapted—tools became normalized, new norms emerged, and the panic subsided. But somehow, we collectively forgot we went through that process of change.

What's interesting is what will need to change as a result.

Education is one area with some fascinating approaches already. Edinburgh University now allows students to use AI, but they must showcase and cite how they used it. Since January 2025, their economics students must attach a one-page "prompt appendix" to every assignment, showing the exact AI query, the raw response from the LLM, and notes on what they kept or cut.

Assessments will also need to change rapidly. One approach could be for teachers, after a written or digital exam, to conduct face-to-face conversations where knowledgeable experts can quickly gauge someone's understanding. While not 100% accurate, in this era of "vibe marketing" and "vibe coding," this gives insight into how much information students actually understand versus how well they can use AI tools.

The market is already responding to these challenges.

Investors are calling for new business models like interview centers where candidates have no access to AI assistance during human-to-human conversations. You can imagine these appearing first in major cities but eventually spreading everywhere as businesses need to verify genuine understanding.

Closing thoughts

So, in conclusion, will AI turn us into cheaters and make us incredibly lazy?

I guess the answer isn't black and white—it never is.

Technology has always changed how we work and learn, rendering some skills obsolete while creating demand for others. The calculator didn't make us worse at math; it freed us from mundane equations and allowed us to focus on higher-level mathematical thinking. Similarly, AI might reduce our need to memorize or process ideas in traditional ways.

Ultimately, Cluely took a provocative marketing approach by saying the quiet part out loud, which served its purpose by generating attention and sparking important conversations.

But the real work lies ahead. Educators, employers, policymakers, and individuals must navigate these changes and establish new norms and working methods. We need to update our understanding of skills and credentials, rethink how teachers assess learning, and develop new norms about using AI in different contexts.

One thing seems certain: tools like Cluely aren't going away. Whether we call it cheating or augmentation, AI assistance is here to stay. The question isn't whether we accept or reject it, but how we integrate it into our lives.

Perhaps we should focus less on the tools themselves and more on our goals as individuals and society. We should form narratives that we want future generations to adopt. What capabilities do we truly value? What kinds of thinking and creativity do we want to preserve? How will we ensure technology serves our goals rather than undermining them? These questions will determine whether AI makes us lazy and dishonest or helps us become more capable and fulfilled people.

Thank you for listening to today's episode. If you found it thought-provoking or interesting, please subscribe, share it with a friend or colleague on Slack, and rate the show.
