Once you’ve switched on assessments mode, you’ll see small red or yellow indicators below some of your subjects’ faces. In short, these tell you how well a subject’s face has been captured: red means very likely a bad capture, and yellow means probably a bad capture. Think of them as warning lights on your car’s dashboard.

We use a combination of factors to decide which assessment indicator to apply. These factors are determined by our machine learning models: essentially, we feed the models a large set of manually labelled data to teach them what to look for, then review the results and continue refining wherever we see weaknesses.

We’re constantly updating our models and thresholds to make them as accurate as we can, so please send us your feedback if they’re not working as you expected.
