AI-Assisted Coding: A Catalyst for Hiring Great Devs
When you ask your dev candidates to do a take-home exercise, is using AI tools cheating? Can it actually help you find great engineers? Here are some of my thoughts.
Vibe Coding
Smart engineers use smart tools. That has always been the case, and it is no different with the rise of AI tools that help developers code faster. First and foremost, they let developers offload some of the boring parts of coding - scaffolding, mocking, and other repetitive tasks.
With tools like Cursor, this becomes even more exciting: developers can push the boundaries of what’s possible by conversationally teaming up with an AI agent. A sophisticated prompt can take you very far, and much of what the AI produces makes sense - or at least the AI can make it look like it does. This also means that a developer really has to understand how code works to tell when the AI is hitting its limits. And this is where things get interesting.
Should developers use AI to code?
The answer is simple: yes. It would be stupid not to do so because, if applied the right way, it can make a good developer so much more productive. Actually, we wouldn’t hire an engineer who prides themselves on not using AI.
But can AI turn a bad developer into a good one?
Again, a simple answer: no. If anything, AI puts the spotlight on a developer’s ability to truly understand how their code works. I have met plenty of developers who write code well but still lack an in-depth understanding of what their code actually does. For example, they could write excellent, error-free code but fail at integrating it with other services. They were hired anyway because raw coding skill was the primary criterion in the hiring decision. But what happens when you run a coding exercise with a candidate who can crack basically any nut by leveraging AI?
How do we vet developers on skill?
Assessing developer skills is a controversial topic, so I can only describe our approach without claiming it’s “the best” way to vet an engineer. What I can say, though, is that it works well for us as a key quality gate alongside assessing cultural fit (that alone calls for a separate article), and AI has actually made our task easier, not harder. Here’s why.
As part of our hiring process, engineers usually get a take-home assignment based on a real-world problem - for example, a small improvement to our actual code base, done in a separate branch created just for this purpose. After an initial explanation on a call, candidates receive a link to a PR that outlines the requirements, the expected outcome, and any additional information to support the task, such as details on what is out of scope.
Candidates have about a week to finish the assignment; afterwards, they have a call with our CTO or one of our senior engineers to walk through the PR and review the implementation. This is intended to be an eye-level discussion from dev to dev, and there is usually more than one way to solve the problem. We’re mainly interested in understanding the candidate’s approach, so it doesn’t have to be perfect - but the reasoning behind it must make sense. This is where it gets interesting: as part of the discussion, the candidate needs to be able to explain what they were thinking, why they took this approach and not another, what challenges they faced, and so on. If you just let Claude code for you, this is where you usually fail - and it is easy to spot.
How AI Spotlights Fundamental Developer Expertise
Now that virtually everyone can code, a few key developer traits become paramount. These traits are not about mastering a specific language’s syntax and constructs but about understanding how programming languages, compilers, and patterns work.
To explain what AI does when it codes for (or with) you, and to spot the pitfalls in the resulting code (for example, insufficient tests), developers need to understand the underlying concepts, which are language-agnostic. AI may get stuck in complex nested constructs, make inefficient use of memory, or address nonexistent use cases while omitting obvious sources of errors, to name just a few examples - and developers need to be at least aware of this to decide when it’s tolerable (say, a quick-and-dirty experiment) and when it’s not (production-grade code).
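To make that concrete, here is a hypothetical sketch of the “insufficient tests” pitfall (the function and tests are invented for illustration, not taken from any real candidate submission): AI-generated tests that pass and look plausible, but only cover the happy path.

```python
def parse_discount(value: str) -> float:
    """Parse a percentage string like '15%' into a fraction (0.15)."""
    return float(value.rstrip("%")) / 100

# Tests like these often come back from an AI assistant: they pass and
# look reasonable, but they only exercise well-formed input.
assert parse_discount("15%") == 0.15
assert parse_discount("100%") == 1.0

# What's missing: '' or 'abc' raise an unhandled ValueError, and a
# negative discount like '-5%' slips through silently. A developer who
# understands the code will catch this in review; one who only pasted
# the prompt usually won't.
```

Spotting which inputs the generated tests silently ignore is exactly the kind of conceptual understanding the review call probes for.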
This means that in our review call, we’ll not focus too much on the code itself unless it’s seriously off target, but on how and why, meaning the rationale or the concepts behind the candidate’s approach. Developers who just used Cursor to work on their assignment (and we’ve seen people just pasting the PR and its description into an AI tool) will usually fail at this stage because it’s very hard to prepare for every potential question as part of an open discussion.
On the other hand, we also sometimes see pretty innovative applications of AI-assisted coding. There may be errors, but it’s genuinely impressive when a candidate surprises us by using AI to do some crazy stuff - as long as it all follows a concept the candidate can explain without hesitation.
Interview cheating and why it doesn’t work in the end
Unfortunately, candidates can run AI tools during an interview that analyze the interviewer’s questions and generate seemingly suitable answers based on the code being discussed. We’ve heard of candidates who are exceptionally good at concealing that they’re actually reading answers off their screen.
To be honest, I’m not sure we’d always be able to catch this kind of cheating, and I’m afraid the number of people trying it will grow. Cheaters gonna cheat; that’s a sad truth gamers know all too well. But it usually doesn’t work out - unless the candidate has made a business model out of tricking their way into an employment contract and then forcing the company to pay severance to avoid a lawsuit when it tries to push them out again.
One thing is for sure: it will become apparent quickly if a candidate doesn’t live up to the expectations raised as part of the hiring process, and employers don’t react well to being tricked (who would?). The only way out is terminating the employment relationship, which is bad for both sides.
This leads to a very fragmented career history (which is visible on LinkedIn) and a lack of references who can provide testimonials. As part of our preparation during the hiring process, we take a very close look at this and discuss frequent job changes on the very first call to spot potentially unsuitable candidates.
So, don’t cheat, even if you could. It’s not worth it. We trust that most candidates are really looking for a job that’s both challenging and rewarding, and the best way to go about this is by being honest. From our perspective at WunderGraph, it’s also very likely that we’ll offer help to a new hire to improve their skills because our goal is to help people grow. Cheaters don’t qualify for this.
TL;DR
AI should be part of every developer’s toolbox. If a candidate uses AI tools in a hiring assignment, that’s not just acceptable - it’s welcome. It helps employers distinguish between those who truly know how to code and those who are just pretending.