Once you’ve switched assessments mode on, you’ll see small red or yellow indicators below some of your subjects’ faces. In short, these tell you how well a subject’s face has been captured: red for really bad, yellow for probably bad. Think of them as warning lights on your car’s dashboard.

We use a combination of factors to decide which assessment indicator to apply; a rough sketch of how factors like these might combine follows the list. They include, but are not limited to:

  • The degree to which the subject’s eyes are open or closed
  • How much the subject is in focus, relative to their size within the image
  • How prominent a subject is within an image relative to other subjects in the background (key people)
  • Relative differences between images within a scene
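
To make the combination idea concrete, here is a minimal, purely illustrative sketch in Python. Every name, weight, and threshold in it is invented for illustration; the product’s real indicators come from machine learning models, not fixed rules like these.

```python
from dataclasses import dataclass

@dataclass
class FaceFactors:
    # Hypothetical per-factor scores in [0, 1]; higher is better.
    eyes_open: float       # degree to which the eyes are open
    focus: float           # sharpness relative to the face's size in the image
    prominence: float      # how prominent the subject is versus background subjects
    scene_relative: float  # quality relative to other images in the same scene

def assess(f: FaceFactors) -> str | None:
    """Combine hypothetical factor scores into an indicator colour.

    The weights and cut-offs below are made up for illustration only.
    """
    score = 0.7 * min(f.eyes_open, f.focus) + 0.3 * (f.prominence + f.scene_relative) / 2
    if score < 0.3:
        return "red"     # really bad capture
    if score < 0.6:
        return "yellow"  # probably bad capture
    return None          # good enough: no indicator shown

# Example: closed eyes drag an otherwise sharp capture down to "red".
print(assess(FaceFactors(eyes_open=0.1, focus=0.9, prominence=0.8, scene_relative=0.7)))
```

The point the sketch captures is that a single weak factor (here, closed eyes) can sink the overall assessment even when the other factors look fine.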

These factors are determined by our machine learning models. Essentially, we feed the models a large amount of manually labelled data to teach them what to do, then review the results and keep refining wherever we see weaknesses.
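
As a rough picture of what “manually labelled data teaching a model” means, here is a toy supervised-learning sketch. It uses scikit-learn and invented feature names purely for illustration; nothing about our actual models or training pipeline is implied.

```python
from sklearn.linear_model import LogisticRegression

# Each row is a hypothetical capture: [eyes_open, focus, prominence, scene_relative].
X = [
    [0.9, 0.9, 0.8, 0.7],  # good capture
    [0.1, 0.8, 0.7, 0.6],  # eyes closed
    [0.8, 0.2, 0.9, 0.5],  # out of focus
    [0.9, 0.8, 0.1, 0.4],  # background subject
]
y = [0, 1, 1, 1]  # manual labels: 0 = fine, 1 = should be flagged

model = LogisticRegression().fit(X, y)

# A new capture with closed eyes is likely to be flagged.
print(model.predict([[0.2, 0.9, 0.8, 0.7]]))
```

Reviewing where predictions like these go wrong, and retraining on corrected labels, is the refinement loop described above.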

We’re constantly updating our models and thresholds to make them as good as we can, so please send us your feedback if they’re not working as you expect.