We're Hiring Engineers All Wrong. Here's How HuffPost Evolved

But how do we know if they can code?

This is the key anxiety a software organization faces when evaluating a potential hire. At HuffPost Engineering, we've tried to turn this question on its head -- and for the most part, eliminate it.

As it turns out, there's a great deal of research out there on best practices for interviewing. But for whatever reason, it seems the software industry has gravitated toward a set of practices that don't really align with the goal of hiring the right people. And we at HuffPost have fallen into some of these pitfalls in the past. What follows is an account of how we have rethought our engineering hiring practices.


Engineering competencies

To begin with, the question above doesn't really reflect what we're trying to discern from our interview process. The TechCrunch article, "On Secretly Terrible Engineers", does a great job of skewering this anxiety. To summarize, it's safe to assume that any applicant who has spent at least a year at a job in the engineering world "knows how to code" well enough not to get fired. A simple pre-screening process filters for this. What we're really looking for is more nuanced. Instead of asking if a candidate can code, why not ask whether the applicant has the concrete skills necessary to succeed in the role?

That's a little bit better, and it leads directly to the more fundamental question of what those concrete skills are.

Ah, now, we're getting somewhere.

Many job descriptions make the mistake of focusing on what a good applicant looks like (e.g., five years of Java experience, experience coding single-page MVC web apps) rather than what a successful engineer actually does -- in other words, identifying the competencies of the role. This filters out people who have the aptitude to do the job at hand and artificially selects for people with relevant knowledge, when what we really need is relevant ability.

For the typical developer role at HuffPost, competencies might include things like:

  • Able to solve problems under rapidly changing requirements
  • Able to quickly learn and apply new skills
  • Demonstrates resourcefulness when confronted with unfamiliar problems

...and so forth. As will be discussed later in this piece, we have actually done surprisingly well using our parent company's competencies, which are not specific to engineering. Searching for "job competencies" leads to a lot of great publicly available resources, such as Workforce.com's 31 Core Competencies Explained.

Even when presented with good lists of competencies, it can be tough to untrain oneself from retreating to "knows how to code". We found it productive to think about what attributes are exemplified by our current team members. Since all of our existing engineers do, in fact, "know how to code", it tends to be the so-called soft skills and work habits that stand out.

This exercise is essential to a predictive screening process. Yet I know that in the past, I have performed interviews and been interviewed in unstructured processes that don't reflect any preselected competencies. Implicitly, such processes tend to select for interviewees who are attractive, articulate, assertive, or affable. While these may be very pleasant attributes, it is unlikely that they correlate with on-the-job success.


Competency filters

Once the competencies for success in the role are selected, the next task is to figure out how to evaluate the applicant with respect to them. The number one goal of the interview process is to reliably select applicants who will be successful in the role. This is accomplished by what you might call filters: the tasks the company performs to evaluate an applicant. In much the same way as we aim to evaluate our applicants objectively in terms of competencies, we also choose the filters used to measure applicants against specific criteria (a toy sketch of this scoring idea follows the list), including:

  • Predictiveness -- How well does the filter indicate whether the applicant, if hired, will actually perform the competency on the job?
  • Prep cost -- How much does it cost, in time and effort, to design the filter?
  • Opportunity cost -- In an interview process that has a limited amount of time, how much of it do we have to allocate to the filter?
  • Specificity -- How closely tailored is the filter to measuring one or more of the competencies for the role?
  • Objectivity -- Can the filter be applied fairly to applicants with different backgrounds? Some filters reward an applicant's prior knowledge rather than actual aptitude, whereas others more directly measure aptitude, regardless of experience.
  • Applicant load -- How much does the filter demand of the applicant, who may have limited time or patience?
  • Marginal benefit -- In the context of the rest of the process, how much additional information does the filter provide?
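
To make the trade-offs concrete, here's a toy sketch in Python of how a team might score its filters against these criteria. The names and numbers are purely illustrative assumptions, not anything we actually run:

```python
from dataclasses import dataclass

# The evaluation criteria described above. Each is scored 1 (poor) to 5
# (strong), oriented so that higher is always better -- e.g., a 5 on
# prep_cost means the filter is *cheap* to prepare.
CRITERIA = (
    "predictiveness", "prep_cost", "opportunity_cost",
    "specificity", "objectivity", "applicant_load", "marginal_benefit",
)

@dataclass
class Filter:
    name: str
    scores: dict  # criterion -> score in 1..5

    def total(self, weights=None):
        """Weighted sum across criteria; unlisted criteria get weight 1."""
        weights = weights or {}
        return sum(self.scores[c] * weights.get(c, 1) for c in CRITERIA)

# Example scores only -- every team would fill these in differently.
whiteboard = Filter("whiteboard coding", dict(zip(CRITERIA, (3, 1, 2, 3, 2, 2, 3))))
behavioral = Filter("behavioral question", dict(zip(CRITERIA, (4, 4, 4, 4, 4, 4, 4))))

# For a role where predictiveness matters most, weight it more heavily.
weights = {"predictiveness": 3}
for f in sorted([whiteboard, behavioral], key=lambda f: -f.total(weights)):
    print(f.name, f.total(weights))
```

The point isn't the arithmetic; it's that writing the scores down forces the kind of intentional comparison between filters that the rest of this piece argues for.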

I would argue that many engineering interviews across the industry are poorly structured in terms of how their filters stack up to these criteria. Let's look at some examples.

"So, tell me about your past experience"

This is how a typical interview session starts off, and it's problematic for a number of reasons. First of all, it has an indeterminate opportunity cost. As the question provides no parameters, the applicant is free to go on at length and chew through your precious interview time. It does not directly measure any well-defined competencies -- unless, of course, the ability to talk extemporaneously about oneself is actually an important job skill for the role. Worse still, it is often asked by every single interviewer, so any benefit the question does have drops to nearly nil after the first application of the filter. This filter falls under a broader category of open-ended discussion, which generally suffers from poor specificity.

Whiteboard coding

The hallmark of the engineering interview. Completely divorced from the normal coding environment and resources, and under artificial time pressure, we watch the applicant try to solve some problem. Being generous, let's suppose that the problem selected is actually representative of the work done on the job and therefore has good specificity (rarely the case, in my experience, but whatever). This filter still typically suffers dearly from high prep cost, as it is difficult to come up with the perfect whiteboard exercise. It suffers from high opportunity cost as well, as it often takes up a pretty solid chunk of time per problem. Objectivity can be poor, too, given that a particular problem may happen to fall into one applicant's wheelhouse but not another's.

"What would you do if..."

Questions of this sort fall under the category of hypotheticals. These often fail the core goal of predictiveness. A crafty applicant will tell you what you want to hear, but you'll be left with little indication of whether they'll actually do what they say.

Take home work

Homework has a high applicant load and poor objectivity. There is also a high prep cost, because it tends to take a lot of effort to design a good homework assignment. Like whiteboard coding, assignments also tend to suffer in practice from poor specificity. Some applicants love doing a sample project and really dive in, but others may find themselves strapped for time because they have a current job or are interviewing at many places.

Pair programming tryout

The nice thing about this is that you get to see how the applicant performs on real programming tasks faced on a real day. It doesn't get much more predictive and specific than this. The problems are high opportunity cost and poor objectivity: the filter fails the objectivity test because the task the applicant gets depends on what happens to be available that day.

Weirding the applicant out

I have never experienced this myself, but I've heard of it before, and it is an example of the stress interview technique. It probably goes without saying that this intentionally scores low on applicant load.


The gold standard of filters

Fortunately, we have a technique that performs quite well under the above criteria. It's known as the behavioral question, and it has a long history in the business world. The idea is based on the premise that past behavior predicts future performance. The approach centers on asking specific, competency-aligned, open-ended questions about the applicant's past experience. A critical difference between a behavioral and a hypothetical question is that the latter asks what the applicant would do rather than what they have done.

A variety of techniques should be used to get the most out of the behavioral approach.

  • It is crucial to ask follow-up questions, such as who, what, where, when, why, and how, in order to draw out the specifics of the recollection. This is how the interviewer avoids being snowed: it may be easy to fabricate an initial answer, but it is much harder to weave a full tale complete with details. Getting the applicant to reflect critically on past experience with questions like "what was the result", "what did you learn", and "what would you do differently" is also revealing.
  • In the preamble of the interview, the interviewer should set the expectation that they may interrupt the applicant to ask for specific details and to make sure the interview stays on schedule. For example, if the applicant begins to answer a question with "we did...", the interviewer should redirect them to focus on their own role and accomplishments. A courteous "I'm sorry to interrupt, but I need you to focus on your specific contribution" will do the trick here. To stay on time, it's also important for the interviewer to be prepared to cut answers short when enough information has been revealed to evaluate the underlying competency. Although it feels abrupt, if properly done, it can actually put the candidate at ease. Otherwise, they may ramble on, unsure of whether they have answered the question fully.
  • The interviewer must be careful to steer the applicant to the specific competency but not shape the answer. The article "The Hidden Flaw in Behavioral Interview Questions" does a great job of explaining how to guard against the latter.
  • As the candidate is answering, the interviewer should be taking enough notes to be able to relate the answer to the rest of the team in post-interview evaluation. In this way, the end result is a pretty clear picture of the applicant, which also serves as a useful artifact for justifying a hiring decision to upper management.

To give an example we have used, one of our competencies is "Proactive & Resourceful", which we define as "able to deal well with new and difficult situations". A prompt we might use to evaluate this is: "Tell me about a time you worked alone or without much direction from others on a major project or assignment". If the applicant supplies a useful scenario, we ask follow-up questions to draw out details. If the candidate struggles to come up with an example that we can evaluate (positively or negatively), we can switch to an alternative question, such as: "Tell me about a problem that you've solved in a unique or unusual way. What was the outcome?" The end result of an interview composed of many such filters is a much deeper picture of the candidate's experience, aligned with the competencies of the role, than a resume or autobiographical overview would offer.
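
For interviewers who like structure, here's a minimal sketch, again in Python and with an entirely hypothetical layout, of how such a plan could be written down: one primary prompt, a backup, and follow-ups per competency. (Our actual guides are ordinary documents; this is just an illustration.)

```python
# Hypothetical structure for a behavioral interview plan. The prompts below
# are the ones quoted in this article; the layout itself is an assumption.
INTERVIEW_PLAN = {
    "Proactive & Resourceful": {
        "primary": "Tell me about a time you worked alone or without much "
                   "direction from others on a major project or assignment.",
        "backup": "Tell me about a problem that you've solved in a unique "
                  "or unusual way. What was the outcome?",
        "follow_ups": [
            "What was the result?",
            "What did you learn?",
            "What would you do differently?",
        ],
    },
    # ...one entry per competency assigned to this interviewer...
}

def print_script(plan):
    """Emit a simple script the interviewer can take notes against."""
    for competency, prompts in plan.items():
        print(f"== {competency} ==")
        print("Ask:", prompts["primary"])
        print("If the applicant is stuck, ask:", prompts["backup"])
        for follow_up in prompts["follow_ups"]:
            print("  Follow-up:", follow_up)

print_script(INTERVIEW_PLAN)
```

Even on paper, the value is the same: prompts and follow-ups are decided before the interview, not improvised during it.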

One key advantage of the behavioral approach over many others is its objectivity. Because it lets the candidate choose the venue, it allows people of diverse backgrounds to shine. Suppose you have a junior role open. One applicant has a bit of relevant experience but a lack of individual concrete achievement. Another applicant has decidedly less directly relevant experience but has clearly demonstrated exceptional application of the soft skills of the job, as well as adaptability, grit, and the ability to quickly scale a learning curve. A well-calibrated behavioral approach allows these two applicants, with different backgrounds, to be compared. To be clear, sometimes directly applicable experience is more important than adaptability, but this should be explicitly considered in the pre-planning process.


Our findings

At HuffPost, we've experimented with many of the above interview techniques, and we likely will continue to do so. There's nothing inherently wrong with any interview technique, as long as it strikes an acceptable balance between predictiveness and the other attributes of a filter. This balance varies by role. The important part is to make the consideration of filters intentional with regard to the role's competencies and the filter's characteristics. We have simply observed that the behavioral approach to filtering tends to land at a very attractive point stacked up against other filtering techniques.

For pre-planning our interviews, we used a competency guide developed by our parent company, Aol. The competencies in the Aol handbook are things like communication, learning agility, and coachability -- generic enough to apply to roles in pretty much any department. For other organizations, similar guides of competencies and aligned prompts/questions are available online. A quick search turned up a link to a Complete List of Behavioral Interview Questions on Henderson State University's website, for instance. We also eliminated redundancy between interviewers (unless we retained it intentionally).

As we prepared to apply the behavioral approach to real interviews, we had some initial concern that our generic competencies wouldn't filter for engineering ability specifically. This concern turned out to be unfounded. We found that engineering candidates are eager to answer behavioral questions with recollections that reveal their technical experience. We also augment the soft-skill and work-habit competencies with some tech-specific competencies.

There were other benefits to our restructured interview process. Having a specific screening framework put our interviewers at ease. Although we don't track data on this, I'm certain that the discipline it imposed on the process left a better impression on our applicants, too. I have seen applicants loosen up after being told in the interview preamble that there wouldn't be any whiteboarding. A more comfortable interview experience leads to better conversion rates for candidates we make offers to, but we see upside in leaving a good impression with candidates we don't make offers to, as well.

Don't get me wrong -- we haven't solved every problem. Optimal application of any screening technique requires training and practice on the part of the interviewers. But the upside of a behavioral-heavy approach is that any such honing of skills pays dividends for the entire interview process. The end result of the overhaul has been a process that yields comprehensive pictures of applicants' soft skills and work habits within technical roles. I would argue that these, not raw coding ability, are the heart of what separates top performers from people who struggle to contribute.

Want to give our interview process a spin? We're hiring.
