HireVue Assessments are created by our industry-leading IO Psychologists and delivered via video interview. Our validated approach predicts the top performers you need to meet your business objectives.
FAQ
What is the assessment analyzing?
The assessment analyzes around 25,000 data points, many more than a human can synthesize. It looks at what the candidate says and how they say it, as compared to top performers in the same position. What the candidate says is key, as only language-based data is scored.
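As a purely illustrative sketch (not HireVue's actual model, features, or data), the comparison idea can be thought of as measuring how closely a candidate's language-based features track a benchmark built from top performers in the same position. The feature names, 0-1 scale, and scoring formula below are assumptions made for the example.

```python
# Purely illustrative; not HireVue's actual pipeline, feature set, or scoring.
from statistics import mean

def illustrative_score(candidate_features: dict[str, float],
                       top_performer_benchmark: dict[str, float]) -> float:
    """Toy similarity score: how closely language-based features track the
    top-performer benchmark (1.0 = identical, lower = further away)."""
    diffs = [abs(candidate_features[name] - top_performer_benchmark[name])
             for name in top_performer_benchmark]
    return max(0.0, 1.0 - mean(diffs))

# Example with two made-up language features, both on a 0-1 scale.
print(illustrative_score({"problem_solving_terms": 0.7, "teamwork_terms": 0.5},
                         {"problem_solving_terms": 0.8, "teamwork_terms": 0.6}))
```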
What if I don't agree with the HireVue Score? How can you explain these differences?
Sometimes a recruiter's or hiring manager's evaluation will diverge from what the model tells us. The reasons can be complicated, and usually fall into one of the following:
1. The model is better at predicting than human evaluators because it is unbiased, looks at many more data points, and compares candidates to one another at a scale that is not humanly possible. It is optimized to find the features that affect the likelihood of a candidate performing well in the position, not necessarily how well they will be liked by evaluators. This accounts for most perceived differences.
2. Alternatively, the model could be making a wrong prediction, either over- or under-scoring candidates. Each model has an estimated error rate that can help explain perceived differences in scores. The best way to improve a model's accuracy is to regularly update it with more performance data from real employees, and/or with our latest versions.
3. The candidate may have provided an outlier response. Some candidates' answers will be so unique that the algorithm won't know how to score them. We believe this will be a very small occurrence, but it can happen. An outlier applicant might be someone whose response is overly technical or contains excessive industry jargon. A recruiter will pick up on more technical or industry specifics than the model, which may lead to disagreement with the score (either rightly or wrongly); a conceptual sketch of this kind of outlier check follows this answer.
It is important to note that even when the model occasionally makes an inaccurate prediction for an individual candidate, the overall result of using assessments is a more accurate sorting of all candidates, at greater volume, than can be done using traditional methods.
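As a rough illustration of the outlier case in point 3, the sketch below flags a response whose language features fall well outside the range seen in training data, meaning the model cannot score it reliably. The feature name, 0-1 scale, and tolerance are assumptions for the example; this is not HireVue's actual outlier logic.

```python
# Illustrative only; feature values are assumed to be on a 0-1 scale.
def is_outlier(candidate_features: dict[str, float],
               training_min: dict[str, float],
               training_max: dict[str, float],
               tolerance: float = 0.1) -> bool:
    """Flag a response whose features fall well outside the training range."""
    return any(value < training_min[name] - tolerance
               or value > training_max[name] + tolerance
               for name, value in candidate_features.items())

# A heavily jargon-laden answer might produce values never seen in training.
print(is_outlier({"industry_jargon_rate": 0.95},
                 {"industry_jargon_rate": 0.05},
                 {"industry_jargon_rate": 0.60}))  # True -> score is unreliable
```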
How do a candidate's speech patterns affect the HireVue score?
While we do our best to match the language transcription capabilities to the language of the candidate who is applying, it is always possible that a candidate applies to a position but speaks a different language than expected, or has a speech impediment. If the candidate is hard to understand, transcription accuracy can be affected, although our current transcription provider demonstrates the highest transcription accuracy on the market. As with a traditional in-person interview, our processes may misunderstand what is said. As a fail-safe, the candidate may receive an Insufficient Data Error rather than risk the model scoring them inaccurately. Candidates should be encouraged to speak as clearly as possible to maximize the accuracy of the speech transcription.
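As a minimal sketch of that fail-safe idea, the check below only treats a response as scorable when the transcription looks reliable; otherwise it would surface as insufficient data rather than a risky score. The confidence value, threshold, and word-count floor are illustrative assumptions, not details of HireVue's transcription pipeline.

```python
# Hypothetical response-level check; thresholds are assumptions for the example.
def response_is_scorable(transcript: str,
                         transcription_confidence: float,
                         min_confidence: float = 0.75,
                         min_words: int = 5) -> bool:
    """Score a response only when the transcription is likely to be reliable."""
    return (transcription_confidence >= min_confidence
            and len(transcript.split()) >= min_words)

print(response_is_scorable("I led the project and coordinated the team", 0.92))  # True
print(response_is_scorable("unintelligible", 0.40))  # False -> insufficient data
```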
Why does an interview get an Insufficient Data Error?
This occurs when any assessed question in an interview has a technical issue with the audio or video recording. Any question that fails to meet our quality standards for scoring will cause the entire interview to be marked with an Insufficient Data Error.
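The rule above can be summarized in a short, hedged sketch: if any assessed question fails its quality check, the whole interview is marked with an Insufficient Data Error. The question labels and the boolean quality flag are illustrative assumptions, not product fields.

```python
# Illustrative interview-level rule, assuming a per-question quality flag.
def interview_status(question_quality_ok: dict[str, bool]) -> str:
    """Return a score-eligibility status for the entire interview."""
    if not all(question_quality_ok.values()):
        return "Insufficient Data Error"
    return "Scorable"

print(interview_status({"Q1": True, "Q2": True, "Q3": False}))  # Insufficient Data Error
print(interview_status({"Q1": True, "Q2": True, "Q3": True}))   # Scorable
```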