What is the HireVue Assessment measuring?
The assessment analyzes around 25,000 data points, many more than a human can synthesize. It looks at what the candidate says and how they say it, compared against top performers in the same position. What the candidate says is key, as only language-based data is scored.
How were the Assessments developed?
To create the model for each Interview competency, HireVue reviewed several thousand interview responses to a relevant question and had a team of trained behavioral analysts rate each response. Their ratings were the input for training the model on that competency. The interviews used to train these models originated in several world regions, including the US, EMEA, and APAC. Although a higher portion of interviews comes from the US region, we have conducted considerable research that demonstrates comparable prediction accuracy across geographic regions.
Our game-based assessments were developed using a large panel of test participants and validated against different external measures.
How do the Assessments account for individuals with disabilities?
Because the accommodations needed vary widely and impairments span a spectrum from mild to severe, we do not recommend a fully automated accommodation solution for assessments. Instead, human intervention is needed to determine the level of accommodation required and the appropriate route for that specific candidate. In our system, candidates can request accommodations during the assessment process.
We are also working with disability organization partners to ensure that the user interface is as easy to use as possible and can readily accommodate common disabilities and impairments. For example, we have developed partnerships with neurodiversity organizations to review the language in our Interview assessment questions and ensure it supports neurodiverse candidates.
What do the numbers mean?
The HireVue Score is a percentile ranking of each candidate's degree of fit, based on the success-profile attributes used to build the algorithm, relative to all other candidates who have applied to the same position in HireVue.
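For illustration only, the sketch below shows how a percentile ranking works in general. The fit values and function are hypothetical and are not HireVue's actual scoring code; they simply show what it means for a score to be a percentile relative to the other applicants to the same position.

```python
def percentile_rank(candidate_fit, all_fits):
    """Illustrative percentile ranking (hypothetical, not HireVue's implementation).

    candidate_fit: this candidate's estimated degree of fit
    all_fits: fit estimates for everyone who applied to the same position
    """
    below = sum(1 for f in all_fits if f < candidate_fit)
    return round(100 * below / len(all_fits))

# Example: three of five applicants have a lower fit estimate,
# so this candidate lands at the 60th percentile.
fits = [0.42, 0.55, 0.61, 0.73, 0.88]
print(percentile_rank(0.73, fits))  # 60
```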
Why do the numbers have a different color?
The color bands (Green, Yellow, Red) show the breakdown of candidates into three clusters. This is done for the convenience of the customer and can assist in the assignment of candidates to evaluators or automated actions.
The color bands are spread evenly across the percentile range, so by default Red = 0% to 33%, Yellow = 34% to 66%, and Green = 67% to 99%. The size, number, and name of these bands are customizable.
Customers often use automated actions to advance or reject candidates based on their HireVue Score and its corresponding color band.
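As a minimal sketch of the default banding described above, the snippet below maps a percentile score to a color band. The thresholds match the defaults listed, but the function itself and any downstream automated action are hypothetical.

```python
# Default bands (customizable in size, number, and name).
DEFAULT_BANDS = [
    ("Red", 0, 33),
    ("Yellow", 34, 66),
    ("Green", 67, 99),
]

def color_band(percentile, bands=DEFAULT_BANDS):
    """Return the color band for a percentile score (hypothetical helper)."""
    for name, low, high in bands:
        if low <= percentile <= high:
            return name
    raise ValueError(f"percentile out of range: {percentile}")

print(color_band(72))  # Green
print(color_band(34))  # Yellow
```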
What if I don't agree with the HireVue Score? How can you explain these differences?
Sometimes there will be a divergence between a recruiter's or hiring manager's evaluation and what the model tells us. The reasons can be complicated.
- The model is better at predicting than human evaluators because it is unbiased, looks at many more data points, and compares candidates to each other at a scale that is not humanly possible. It is optimized to find the features that affect the likelihood of a candidate performing well in the position, not necessarily how well they will be liked by evaluators. This accounts for most perceived differences.
- Alternatively, the model could be making a wrong prediction, either over- or under-scoring candidates. Each model has an estimated error rate that can help explain perceived differences in scores. The best way to improve a model's accuracy is to regularly update it with more performance data from real employees and/or with our latest model versions.
- The candidate may have provided an outlier response. Some candidates' answers are so unique that the algorithm won't know how to score them. We believe this will be rare, but it can happen. An example of a possible outlier is a candidate whose response is overly technical or contains excessive industry jargon. A recruiter will pick up on more technical or industry specifics than the model, which may lead to disagreement with the score (either rightly or wrongly).
It is important to note that even when the model makes an occasional inaccurate prediction on a candidate, the overall result of using assessments is a more accurate sorting of all candidates, at greater volumes, than can be done using traditional methods.
Why does an interview get an Insufficient Data Error?
This occurs when any video question in an interview has a technical issue with its audio. Any question that fails to meet our quality standards will cause the entire interview to fail with an Insufficient Data Error.
What causes audio errors?
Audio errors can result in an unusable transcription, which prevents us from generating a HireVue Score (see the sketch after this list). Common causes are:
- Low volume audio
- Background noise, especially a consistent hum
- Not enough words spoken in the answer (avoid single-response video questions in assessment interviews, as they will trigger this error)
- Very strong accent
- Unintelligible voice
- Speaking in a different language than expected
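As a rough sketch of the behavior described in the two questions above, the snippet below marks an interview as unscorable if any single question fails an audio quality check. The per-question fields and word-count threshold are assumptions for illustration, not HireVue's actual quality criteria.

```python
MIN_WORDS = 20  # hypothetical minimum word count per answer

def interview_is_scorable(questions):
    """Return False (Insufficient Data Error) if ANY question fails the check."""
    for q in questions:
        transcript = q.get("transcript", "")
        usable = q.get("transcription_usable", False)
        if not usable or len(transcript.split()) < MIN_WORDS:
            return False
    return True
```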
What happens if a technical issue stops the candidate from completing the interview?
If a candidate is forced to exit the interview due to technology issues, they can rejoin from the start of the question or game challenge they were attempting when the issue occurred.
What should a customer do when a candidate interview fails due to an Insufficient Data Error?
There are three possible workflows that customers could take.
- For customers that review videos (most), interviews with an Insufficient Data Error should be manually reviewed. Many of these will have sufficient information to be manually scored by an evaluator.
- For customers that do not review videos, or for interviews so technically flawed that no manual scoring is possible, candidates will need to be advised to retake the interview.
- For customers that do not allow retakes, candidates will have to be added to a new position with different, but equivalent questions and re-invited to this new interview.
In all cases, candidates retaking the interview should be advised to do whatever they can to improve the conditions of their interview. This may mean switching to a different computer or phone, or moving to a new location.
What is the hourglass icon?
The hourglass icon indicates that an interview has been completed and the video is being processed. The score should be produced within an hour.
The hourglass may also appear if fewer than 11 candidates have completed the assessment, as this is the minimum required to start scoring and comparing candidates to each other. Note that this means 11 completions that can be scored; candidates who have received an Insufficient Data Error do not count toward this number.
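A minimal sketch of this gating rule, assuming a hypothetical list of completed interviews where each entry flags whether it hit an Insufficient Data Error (the field names are illustrative):

```python
MIN_SCORABLE_COMPLETIONS = 11  # minimum scorable completions before ranking begins

def scoring_can_start(completed_interviews):
    """Count only completions that can be scored; Insufficient Data Errors are excluded."""
    scorable = [i for i in completed_interviews
                if not i.get("insufficient_data_error", False)]
    return len(scorable) >= MIN_SCORABLE_COMPLETIONS
```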