Have you heard the news?

  • AI is replacing all developers next week - or in 3 or 6 or 12 or 18 months!
  • And somehow at the same time, AI is helping developers get jobs by “cheating” in tech interviews!

Don’t those two things directly contradict each other? But I digress.

The thing is, AI replacing developers is straight-up impossible, at least with the currently deployed LLM architectures. The idea is nonsensical. I need to write a blog post detailing both why it’s impossible and why people continue to shout it from the rooftops, and I think I’ll do that next.

But right now I want to talk about the second thing - the more interesting and real thing. In case you haven’t caught the news, there has been a massive surge of developers cheating in tech interviews by using LLMs to beat LeetCode.

Developers Are Cheating In Interviews

Here are just a few of the many recent articles on the subject:

CNBC: Meet the 21-year-old helping coders use AI to cheat in Google and other tech job interviews

The problem has become so prevalent that Pichai suggested during a Google town hall in February that his hiring managers consider returning to in-person job interviews.

Entrepreneur: Job Seekers Are Getting Increasingly Bold By ‘Cheating’ in Interviews — and AI Is Making It Worse

“A lot of the efforts to cheat come from the fact that hiring is so broken. So you’re just like, ‘Oh, my God, how do I get through? How do I get seen? How to get assessed fairly?’” Lindsey Zuloaga, the chief data scientist at HireVue, told BI.

MSN: Is it cheating? AI use during job interviews sparks debate over whether to restrict emerging tools

“A tech leader recently told me they suspect that 80% of their candidates use LLMs on top-of-funnel code tests — despite being explicitly told not to,” said Jeff Spector, co-founder and president of Karat, a Seattle startup that helps companies conduct technical interviews.

Here’s the thing: I don’t blame ’em one bit. As Ice-T says: don’t hate the player, hate the game - and the game of tech interviewing has been broken for years.

We’re taking smart people, giving them generic, well-known, completely solved academic algorithm questions for interviews, and then acting shocked when they use an LLM that has literally been trained on those questions to answer them expertly. What else could the outcome really have been?

LeetCode Interviews Suck

Ever done a coding interview or screening for a company where the question was “write an algorithm to recursively delete a node in a balanced binary search tree” but then the job description was “in this HTML-forms-over-database-tables clone-stamping job, you’ll be stamping exciting new clones of database tables represented by HTML forms!”? I sure have. I’ve had some crazy interviewing experiences actually:

  • I had a manager tell me he needed to confiscate my phone so I didn’t cheat - in 2011. When I refused to hand him my phone, he handed me a pad of paper and a pen (not even a pencil!) and asked me to write a recursive balanced binary search tree insert - on paper - in pen (there’s a sketch of what he wanted after this list). I’d bet you 100 dollarbucks that guy had never written a recursive method in his entire career.

  • I had an interview where the “super hard” tech challenge question was a one-liner: “implement Huffman Encoding”. I had to freaking look it up (and I told them I was doing so). You know why I had to look it up? Because I’ve never used it. No one has ever used it. Of every developer I’ve ever met in my entire life, not one of them has ever written a Huffman Encoding algorithm in anything ever. Because it’s a solved problem.

  • I got asked if I knew what the => symbol meant. No specific programming language specified. Dude just stood up and drew it on a whiteboard in front of me. I said the meaning depended on the programming language. The guy rolled his eyes at me and walked out of the room. Ironically I got the job.
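
For reference, here’s roughly what pad-of-paper guy wanted - a minimal sketch in Python (not what I scribbled in pen that day, and minus the self-balancing rotations, which are a whole extra ceremony):

```python
class Node:
    """A plain binary search tree node."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Recursively insert key and return the (possibly new) subtree root.
    Note: no rebalancing here - a true balanced (AVL / red-black) insert
    would rotate nodes on the way back up."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root
```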

Let me ask you this: when’s the last time you wrote a binary search tree at work? Or used recursion for a practical reason (like graph traversal)? On that note, when was the last time you wrote a graph? Or used a linked list? (Ignore that last one, crypto crew).

I’ve worked at jobs where we actually had reason to do some of these things, and even then I’d say it accounted for less than 1% of my time. I remember writing a custom Quicksort with a 3-way partition, and still not being able to beat the native Quicksort implementation in .NET, because it cheated and invoked kernel-level functions under the hood that I couldn’t easily or safely access myself. But we didn’t ask about this stuff in interviews, because it was a tiny part of the job.
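
If you’re curious, here’s the general shape of a 3-way partition Quicksort - a minimal Python sketch, not my original .NET code:

```python
def quicksort_3way(items, lo=0, hi=None):
    """In-place Quicksort with a 3-way (Dutch national flag) partition:
    elements equal to the pivot get grouped in the middle, so inputs with
    lots of duplicates don't get re-sorted over and over."""
    if hi is None:
        hi = len(items) - 1
    if lo >= hi:
        return
    pivot = items[lo]  # naive pivot choice; fine for a sketch
    lt, i, gt = lo, lo + 1, hi
    while i <= gt:
        if items[i] < pivot:
            items[lt], items[i] = items[i], items[lt]
            lt, i = lt + 1, i + 1
        elif items[i] > pivot:
            items[i], items[gt] = items[gt], items[i]
            gt -= 1
        else:
            i += 1
    quicksort_3way(items, lo, lt - 1)   # everything < pivot
    quicksort_3way(items, gt + 1, hi)   # everything > pivot

data = [5, 3, 8, 3, 9, 1, 3]
quicksort_3way(data)
assert data == [1, 3, 3, 3, 5, 8, 9]
```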

So why do we, as an industry and community, continue to ask people to do these academic algorithms in interviews, when - if we’re really being honest with ourselves - we basically never do them on the job?

How did we reach a point where every technical screener is another bloody LeetCode test asking me to solve prefix sum or sliding window or DFS or BFS or whatever other obscure and mostly unused algorithmic concept?
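
And these really are canned, memorizable patterns. Here’s the entirety of “prefix sum” as a minimal Python sketch - exactly the kind of snippet an LLM has seen a thousand times over in its training data:

```python
def prefix_sums(nums):
    """The classic 'prefix sum' trick: sums[i] holds the total of nums[:i],
    so any range sum over nums[l:r] is just sums[r] - sums[l]."""
    sums = [0]
    for n in nums:
        sums.append(sums[-1] + n)
    return sums

# Range sum of nums[2:5] (4 + 1 + 5) in O(1) after O(n) precomputation.
sums = prefix_sums([3, 1, 4, 1, 5, 9])
assert sums[5] - sums[2] == 10
```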

I get that there are jobs out there that actually need academic algorithm expertise. For example, data scientists need to understand top-k selection and such. But really, 90% of tech companies and roles are using these LeetCode questions while maybe 10% (if we’re being generous in our estimates) are actually doing this academic stuff every day. The rest are just cargo culting. Like that time in the 2000s when everyone was asking you why manhole covers are round and how many quarters you’d need to stack to reach the moon (thanks for that Google, so great).

These academic algorithms are pure compsci. Sure, they have value. Of course they solve a problem or even a set of problems. But I’m saying these problems are mostly academic - and the practical, non-academic tech teams don’t deal in this stuff at all. We all just clone-stamp HTML forms over database tables (see above). And sometimes we send that form data other places like queues and other kinds of databases. We do the practical stuff that makes a business go. No executive is ever kicking open the boardroom doors in a panic, yelling “EVERYONE! WE NEED TO IMPLEMENT A RECURSIVE BINARY SEARCH RIGHT FREAKING NOW OR THIS COMPANY IS BANKRUPT!”

LLMs Train on LeetCode Algorithms

None of this academic stuff happens at a typical company. But you know where it does happen? Academic papers, books, and coursework. And getting to the juicier part of this topic, you wanna guess what LLMs are trained on? Academic papers, books, and coursework! Except when that stuff is copyrighted, because LLM companies totally don’t plagiarize things to train their models. In the immortal words of Homer Simpson: “oh by the way, I was being sarcastic.”

LLMs also train on social media site data. Like that LeetCode subreddit where all the LeetCode takers talk to each other about LeetCode problems and solutions.

So in many (literal, actual) ways, LLMs have been directly trained to beat LeetCode interviews. And eventually people realized that LLMs were trained on these well-known algorithms, and started employing them to do exactly that. And the thing is - it works, at least well enough that companies are now saying they need to move their technical interview processes back on-site in order to thwart this “cheating”.

How Cheating Happens

There are a few key conditions that allow candidates to cheat in tech interviews:

  • When the interview question is academic or well-known. LLMs are trained on public data sets and written works. Fun dystopian fact: at this point they’ve pretty much consumed the collective knowledge of the entire Internet. And so if a question is well-known in the public domain, then it has been discussed a lot on the internet and/or in written works, and that means the LLM has been well-trained on it and can answer it super effectively.
  • When the technical screening is automated and involves no human interaction. Often candidates are given async “take home” tests to do, with companies reassured by the testing software or platform’s claims of cutting-edge anti-cheat capabilities. But the thing is, the cheaters are always one step ahead of the current anti-cheat, writing new methods to thwart it, and so that “bigger-mouse, better-mousetrap” dance goes on eternally and you can never be sure the candidate didn’t cheat.
  • When an interviewer doesn’t ask follow-up questions. LLMs are good at producing high-level, generic answers to many questions. But if you ask a candidate to walk you through their code, or even explain it line by line, you’ll pretty quickly be able to tell who wrote the code and who had an LLM write it for them.

LLMs produce answers without understanding. Programmers produce well-understood answers. Just ask a programmer to explain their answer, and all cheating is completely routed. No one can fake understanding. But you also can’t ask a candidate for their understanding in an automated coding test with no human interaction.

Do Collaborative Interviews

Tech companies don’t need to bring technical interviews back on-site. Flying candidates out for in-person sessions is expensive and time-consuming. But relying on automated, academic-style assessments isn’t a viable alternative either. LLMs have effectively solved these problems, making them unreliable for evaluating real developer skills.

The key to effective technical interviews is balancing structured assessment with human interaction. Interviews should be collaborative, interactive, and designed to simulate real-world problem-solving.

A solid technical interview process should involve:

  • Tailored questions that reflect the actual work a candidate will be doing, rather than relying on well-known algorithm puzzles.
  • Live, interactive coding sessions where candidates and interviewers work together, much like they would on the job.
  • Follow-up discussions to ensure candidates truly understand their solutions, rather than relying on generated answers.

One of the simplest ways to detect AI-generated code is to ask the candidate to explain their solution. Someone who wrote the code themselves will easily walk through their logic. In contrast, candidates who copy from an AI tool often hesitate, struggle with explanations, or fail to make necessary modifications.

Rather than reverting to outdated methods, companies should embrace structured, human-driven coding interviews. When interviews focus on genuine collaboration and problem-solving, both candidates and hiring teams benefit.

David Haney is the creator of CodeSession, a platform that helps companies do collaborative coding interviews.