Circus Manager: How long have you been juggling?
Candidate: Oh, about six years.
Manager: Can you handle three balls, four balls, and five balls?
Candidate: Yes, yes, and yes.
Manager: Do you work with flaming objects?
Manager: … knives, axes, open cigar boxes, floppy hats?
Candidate: I can juggle anything.
Manager: Do you have a line of funny patter that goes with your juggling?
Candidate: It’s hilarious.
Manager: Well, that sounds fine. I guess you’re hired.
Candidate: Umm … Don’t you want to see me juggle?
Manager: Gee, I never thought of that.
– Peopleware, DeMarco and Lister
OK, I guess I should have known that I wouldn’t be able to escape the inevitable arguments over whether or not my interview question was valid, relevant, well thought out, an abomination against all that is right and holy, etc. So let’s face this head on, back up a bit, and try to answer the question of what even makes a good coding question.
But wait – maybe we don’t even need coding questions! Maybe, as a previous commenter suggested, we can just have a free-ranging exchange of ideas, at the end of which we’ll know whether a candidate is a good match for the organization?
Well, no. This is, in fact, one of the most common fallacies, and one of the most pernicious, since it helps to justify hiring decisions that have no basis in anything other than “liking.” All beginning interviewers, and many (most?) experienced ones, have the precious idea that they, more so than everyone around them, are extraordinary judges of character, ability, fit, intelligence, etc. Experience teaches us that this is complete BS. So does all the research. In fact, we’re all astonishingly bad at judging people in an interview setting, and open-ended behavioral interviews are among the least accurate indicators of future job success – they basically boil down to whether the interviewer likes the interviewee, and this has mostly to do with irrelevant qualities (race, gender, looks, etc.) and whether the interviewee can spin a line.
Really, the only good indicators for future job success are 1) past job success, and 2) the ability to demonstrate skills that will be required in the job (in that order). (1) is hard to judge, since resumes, references, and the interviewee’s presentation during the interview are all going to be crafted to present the interviewee in the best possible light. Which leaves us with (2). Given the outsized damage that bad hires (i.e., false positives) can do to your organization, the following matrix is about the best that you can do.
Of course you should try to minimize the false negatives. Of course there are criteria other than coding that need to be considered. But if technical competence is a necessary condition for a software engineering role at your company, then you need an interview process that identifies and eliminates candidates who can’t code to your standards.
With that, let’s go through the necessary features of a good coding question.
A good technical question should be easy for a great candidate, hard for a good candidate, and impossible for a bad candidate. FizzBuzz has its uses, but fails on this axis since its only purpose is to knock out candidates who can’t code at all – it doesn’t help you differentiate between the good and the great. On the other hand, problems which require a flash of insight reward the lucky, and truly difficult problems will screen out everyone except the lucky great. Calibrating a problem to be “just hard enough” is tough.
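To make the point concrete about how low the FizzBuzz bar is: the standard problem asks the candidate to print the numbers 1 through 100, substituting “Fizz” for multiples of 3, “Buzz” for multiples of 5, and “FizzBuzz” for multiples of both. A minimal sketch of a solution (one of many acceptable shapes) looks like this:

```python
def fizzbuzz(n: int) -> str:
    """Return the FizzBuzz output for a single number n."""
    if n % 15 == 0:      # multiple of both 3 and 5
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# The classic phrasing asks for the sequence from 1 to 100:
for i in range(1, 101):
    print(fizzbuzz(i))
```

Any working programmer should produce something like this in a few minutes, which is exactly why passing it tells you almost nothing beyond “can code at all.”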
It’s critical to have objective criteria for judging success on a problem. We’re all prey to cognitive biases, and the more subjectivity allowed into the evaluation process, the more likely you’ll end up making decisions based on preconceptions rather than actual performance. Research consistently demonstrates that interviews are decided in the first couple of seconds – but the more objective your testing criteria are, the more hope you’ll have of overcoming your biases.
Your question needs to test a candidate on the specific technical competencies the job requires. Asking puzzle-style questions (“how many piano tuners are there in Seattle?”) won’t tell you whether someone will be a good coder, and making a problem vague to catch candidates “who don’t ask for clarification” will tell you whether the candidate is a good interviewer, not how she would react when working directly with PMs, designers, etc. Likewise, taking off points for minor issues that an IDE would catch isn’t going to increase your accuracy. Asking language- or technology-specific questions is only important for critical competencies that can’t be learned on the job (e.g., a C++ programmer can very easily transition to Java – how much do you really care whether she knows the “transient” keyword? On the other hand, prior experience writing e-commerce infrastructure can and should be tested if that’s the position you’re trying to fill).
Related to the previous point, a good question is focused, and doesn’t try to evaluate too many things at a time. It can take time, and a lot of interviews, to clear away all the irrelevant details from a question.
Asking a top candidate a series of boring, easy, or annoying questions won’t convince her that your company is a great place to work. After a candidate has passed the FizzBuzz bar, questions should be fun and challenging, but doable.
There isn’t just one way to ask a question. Some companies use whiteboards. Some use paper. Some give candidates a computer with an IDE, and/or let them use the internet. Some give out take-home problems. Some do pair programming. These are all choices, and change the way the answers should be evaluated – but not the above criteria.
An alternative to technical interviews that frequently gets trotted out is the idea of a trial period – either an internship or contract period of a week to a couple of months in which you can see how someone really performs. Unfortunately, this is almost never possible with top candidates. Recruiting is a blood sport – top graduating college students will have multiple solid offers from top companies, and simply won’t be interested. Likewise, experienced candidates already have jobs – and limited vacation time – and will generally be scheduling their interviews to all take place within a single time period. Although it’s nice to imagine that a great candidate would quit their job or spend a week of their precious vacation time, all for the honor of doing an extended interview at your company, this is unrealistic in the extreme. Not only are you competing with companies who aren’t relying on this clumsy, high-maintenance interview method, but savvy candidates will also know that this process would damage their BATNA and demonstrate a level of desperation that would be a huge red flag.
Think of this as a usability challenge. You’re putting your application process behind a massive paywall. Other companies are making everything free and working to improve click-through rate. No matter how amazing your opportunity is, there are lots of other exciting options for great candidates.
So, does this prove that my question is a good one? Well… no. I can assert that it is, but ultimately that’s just my opinion. You are, of course, welcome to disagree, and many of you have. What I hope that it does demonstrate is that coding exercises are necessary; that good questions need to fulfill certain criteria; and that in the general case, the idea of requiring a trial period is a fantasy. False negatives suck, both for the candidate and the company, and can create a sense that the process wasn’t fair. I’ve spent a lot of time thinking about how to reduce false negatives, and I honestly don’t have a great answer – but if it’s the price for maintaining a strong engineering organization, then it’s a price worth paying.