Our hiring process is different, very different. It's designed to be as fair and effective as possible. Tim Yocum explains how and why we do it differently at Compose.
Hiring is a difficult task for most small companies. Finding the right fit is often up to the team at large, and recruiters are often a last resort.
Hiring the wrong person can disrupt a well-running team and sap energy from the momentum of the entire company. Onboarding takes a lot of effort from an already stretched-thin team, so making the right choice is a huge deal.
Our earliest hires at Compose have come organically, by internal means: past colleagues, trusted acquaintances, and referrals. These are excellent ways to hire confidently, though the process is slow and the pool is often shallow.
Show us your skills!
As part of an effort to streamline our hiring, we introduced a small set of work samples that candidates are asked to complete. The intent was to quickly identify candidates who we should make time to have a more in-depth talk with - and it worked well.
We were careful not to make the work samples overbearing. Many companies that employ this technique ask candidates to produce onerous amounts of code, complete entire applications, and write voluminous documentation. These demands are off-putting for candidates and quickly lose value for the hiring panel.
Those of us evaluating submitted work samples quickly realized our methods were haphazard. Most concerning, the responses were evaluated inconsistently and the applicants were often subject to bias despite efforts to remain neutral.
Greasing the wheels
We set out to make our hiring process more transparent and less subjective, to reduce as many biases as possible, and to decrease the time spent finding the ideal new hire.
We developed an extension to our internal project tracking tool, Fizz, to centralize applicant tracking. Fizz has since evolved into a standalone application that moves interested applicants through the hiring pipeline as efficiently as possible. As our central hub for hiring, Fizz lets staff draft job descriptions, set hiring criteria, create work samples, and have other Compose staff grade completed samples, giving us a full view of our hiring within one portal.
How it works
Applicants who reach out to us are sent a unique URL that presents them with a series of work samples, supporting data, and space to enter responses. We encourage applicants to take their time and answer each question as completely, and with as much effort, as they would a real-world issue.
When a submission is finalized, we read and grade answers according to predefined criteria. Each work sample response has a series of checkboxes that describe specific attributes we're seeking. As submissions come in, the top performers float to the top of the stack and become prime candidates for in-depth group interviews, a subject I'll write about in a follow-up post.
Response grades are tallied up and averaged both globally and per-candidate. Doing this helps us identify particularly good questions - and really bad ones that we should scrap. To help us further, each question can be assigned a subjective 1-10 score: bad questions will generally stand out and we can focus on improving them.
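The tallying described above boils down to two sets of averages over the same checkbox grades. Here's a minimal sketch of that model - the data shapes and names are hypothetical, not Fizz's actual schema:

```python
from statistics import mean

# grades[question_id][candidate_id] = one boolean per rubric checkbox
# (hypothetical sample data for illustration)
grades = {
    "q1": {"cand_a": [True, True, False], "cand_b": [True, False, False]},
    "q2": {"cand_a": [False, False, False], "cand_b": [True, True, False]},
}

def checkbox_score(checks):
    """Fraction of rubric attributes a response demonstrated."""
    return sum(checks) / len(checks)

# Per-candidate averages: top performers float to the top of the stack.
candidates = {c for per_q in grades.values() for c in per_q}
per_candidate = {
    c: mean(checkbox_score(per_q[c]) for per_q in grades.values() if c in per_q)
    for c in candidates
}

# Global per-question averages: questions that nearly everyone fails
# (or aces) stand out as candidates for rework or scrapping.
per_question = {
    q: mean(checkbox_score(checks) for checks in per_q.values())
    for q, per_q in grades.items()
}

ranked = sorted(per_candidate, key=per_candidate.get, reverse=True)
```

Averaging the same grades along both axes is the key design point: one axis ranks candidates, the other audits the questions themselves.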
There are a number of biases that we've been working to eliminate directly within our application using several strategies:
First impressions and stereotypes: Making a good first impression is pretty important. It's also a breeding ground for biases to take root. By sanitizing responses, those who grade them are not exposed to names, email addresses, or any other identifying applicant details that might hint at things like country of origin or gender.
Performance bias: We don't ask for, or require, resumes. A resume can be a good indicator of what you've done, but it can also embed subconscious sentiment toward the applicant.
Confirmation bias: If you've ever been really excited about a candidate, you've probably been influenced by that excitement as you judge the candidate's abilities. By presenting reviewers with anonymized answers non-sequentially, it's unlikely that a single candidate's score will be influenced by factors outside the answers provided.
Self-selection: We don't ask questions like "what superpower would you like to have?" or "when the zombies come, what would you do first?". We don't want to judge candidates by their affinity with social cliques shaped by regional and cultural influence.
Anchoring and intuition: If an applicant hits a home run on the first question, you're going to be more disappointed by the base hit on the second. To avoid any subconsciously elevated expectations, we shuffle responses: the scorer shouldn't know which responses belong to which applicant.
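Several of the strategies above - sanitizing out identifying details, then shuffling answers so graders see them non-sequentially - can be sketched in a few lines. This is an illustration under assumed field names, not Fizz's actual implementation:

```python
import random
import uuid

# Hypothetical applicant records (field names assumed for illustration).
applications = [
    {"name": "Ada", "email": "ada@example.com",
     "answers": {"q1": "answer text", "q2": "answer text"}},
    {"name": "Grace", "email": "grace@example.com",
     "answers": {"q1": "answer text", "q2": "answer text"}},
]

def sanitize(app):
    """Replace identifying details with an opaque token before grading."""
    return {"token": uuid.uuid4().hex, "answers": app["answers"]}

# Flatten to (token, question, answer) triples and shuffle them, so a
# grader reads answers out of order and can't anchor on one applicant.
sanitized = [sanitize(app) for app in applications]
grading_queue = [
    (s["token"], question, answer)
    for s in sanitized
    for question, answer in s["answers"].items()
]
random.shuffle(grading_queue)
```

The opaque token still lets scores be tallied per applicant afterwards, while the grader never sees a name, email, or the order in which answers arrived.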
Grading against rigid criteria helps ensure everyone is given an equal opportunity to shine. With anonymity, candidate responses can be judged quickly and without preconceptions.
By ensuring that the initial selection is as fair and effective as possible, we are able to spend more time with high-quality candidates at the interview stage - which I'll come back to in the future - and give them the opportunity to shine that every candidate deserves. Because that's how we do it at Compose in our search for the brightest and best for the team.