
Face Image Quality Metrics

In document Face Image Quality Assessment (pages 42-45)

With the inclusion of authentication in applications, Face Recognition (FR) systems are being used more than ever. These FR systems rely on a pre-captured, high-quality reference image, which is then compared with the test image (also called the probe image).

Figure 3.3: Face Recognition system. Middle photo by Thomas Haugersveen, Statsministerens Kontor. Top left photo by Human-Etisk Forbund. Second and third from the top photos by Torbjørn Kjosvold, Forsvaret. Bottom left photo by Eirin Larsen, Statsministerens Kontor.

Figure 3.3 visualizes how FR systems work. The quality of the probe images on the left side of the figure is assessed, and each probe is compared with the reference image. A similarity score between the two images is calculated (in the FR system depicted in Figure 3.3, the similarity scores lie between zero and one). FR systems have a set threshold: probe images are rejected if the similarity score is too low.
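The comparison step can be sketched as follows. This is a minimal illustration, assuming faces have already been converted to embedding vectors and that cosine similarity is mapped to the [0, 1] range described above; the embedding model, the mapping and the threshold value of 0.8 are assumptions for the example, not details of the depicted system.

```python
import numpy as np

def similarity(probe_emb: np.ndarray, ref_emb: np.ndarray) -> float:
    """Cosine similarity between two face embeddings, mapped to [0, 1]."""
    cos = float(np.dot(probe_emb, ref_emb) /
                (np.linalg.norm(probe_emb) * np.linalg.norm(ref_emb)))
    return (cos + 1.0) / 2.0  # map cosine range [-1, 1] onto [0, 1]

def verify(probe_emb: np.ndarray, ref_emb: np.ndarray, threshold: float = 0.8):
    """Accept the probe only if its similarity to the reference reaches the threshold."""
    score = similarity(probe_emb, ref_emb)
    return score >= threshold, score

# identical embeddings give the maximum similarity score of 1.0
accepted, score = verify(np.array([1.0, 0.0]), np.array([1.0, 0.0]))
```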

When it comes to the performance of the system, the quality of the probe images plays a crucial role. If the probe images are of poor quality, the overall performance of the system will decrease. To maintain the performance of FR systems, careful attention is paid to the quality of facial images so that only high-quality images are used in the system. For this purpose, FIQMs are used to evaluate the quality of facial images. FIQMs are automated algorithms that evaluate the quality of facial images and provide a score representing the perceived quality of the given image. FIQMs can be based on different quality factors, such as subject-camera distance, inter-eye distance, pose, lighting and facial coverings.

Mobai chose FaceQnet and ISO Metrics to be used in our application. The reason for choosing these specific metrics was their differences in terms of evaluating facial images. The two FIQMs are both no-reference approaches, which will be used to assist and provide feedback when an image is acquired for FR system enrollment.

Figure 3.4: Typical FIQM process [12].

3.2.1 ISO Metrics

ISO Metrics is a no-reference FIQM. The metric is implemented based on ISO/IEC TR 29794-5:2010 Information technology — Biometric sample quality — Part 5: Face image data [10]. All factors described in the standard that affect face image quality are implemented in the FIQM.

ISO Metrics calculates the inter-eye distance on the facial images. If this value is below a certain threshold, the metric will filter out these images. The inter-eye distance is related to the subject-camera distance, because it indicates that the subject could be too close to or too far from the camera lens.
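The inter-eye-distance check can be sketched directly from eye landmarks. This is an illustrative sketch, not the ISO Metrics implementation: the landmarks are assumed to be (x, y) pixel coordinates of the eye centres, and the 90-pixel minimum is an example value rather than the threshold used by the metric.

```python
import math

def inter_eye_distance(left_eye: tuple, right_eye: tuple) -> float:
    """Euclidean distance in pixels between the two eye-centre landmarks."""
    return math.dist(left_eye, right_eye)

def passes_eye_distance_check(left_eye: tuple, right_eye: tuple,
                              min_distance: float = 90.0) -> bool:
    """Reject images whose inter-eye distance falls below the minimum,
    which suggests an unsuitable subject-camera distance."""
    return inter_eye_distance(left_eye, right_eye) >= min_distance

# eyes 120 px apart: the image passes the example threshold of 90 px
ok = passes_eye_distance_check((210, 300), (330, 300))
```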

All image properties and characteristics described in [10] are taken into account when evaluating the quality of facial images in ISO Metrics. This includes the sharpness, contrast, blur, brightness, exposure, pose symmetry, light symmetry and illumination symmetry of the image. These factors are stored in an image properties array for each facial image.
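Building such an image properties array might look like the following. The three measures here (gradient-variance sharpness, mean brightness, standard-deviation contrast) are simple illustrative stand-ins, not the exact measures defined in the standard, and the grayscale input format is an assumption.

```python
import numpy as np

def image_properties(gray: np.ndarray) -> np.ndarray:
    """Collect a few quality factors into a feature vector for one facial
    image; sharpness is approximated by the variance of the image gradient."""
    gy, gx = np.gradient(gray.astype(float))
    sharpness = float(np.var(gx) + np.var(gy))
    brightness = float(gray.mean())
    contrast = float(gray.std())
    return np.array([sharpness, brightness, contrast])

# a flat grey image has no gradient, hence zero sharpness and zero contrast
props = image_properties(np.full((8, 8), 128, dtype=np.uint8))
```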

To be able to calculate quality scores for the facial images, training data is needed. The metric uses random forest regression [13] with 214 estimators and a maximum tree depth of 22. The quality score for each facial image is computed by predicting the output score from the image properties array.
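With scikit-learn, a regressor with those hyper-parameters could be set up as below. The training data here is synthetic and the 8-property feature width is an assumption for the sketch; only the estimator count and tree depth come from the text.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hyper-parameters reported for the ISO Metrics regressor: 214 trees, depth 22.
model = RandomForestRegressor(n_estimators=214, max_depth=22, random_state=0)

# Synthetic training data: each row is an image-properties array, each target
# a quality score in [0, 1] (real training data comes from labelled images).
rng = np.random.default_rng(0)
X_train = rng.random((200, 8))   # assumed: 8 image properties per face image
y_train = X_train.mean(axis=1)   # stand-in quality labels
model.fit(X_train, y_train)

# Predict a quality score for a new image-properties array.
score = float(model.predict(rng.random((1, 8)))[0])
```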

3.2.2 FaceQnet

FaceQnet [14] is an open-source, no-reference FIQM using Convolutional Neural Networks (CNNs). FaceQnet has two implemented versions, FaceQnet v0 [15] and FaceQnet v1 [16]. In this project, we used the latest version (FaceQnet v1). Its quality measures are closely related to the ICAO standard [17], which provides strict guidelines for capturing images. Factors such as illumination, pose, resolution and focus are essential to the final quality score.

A key part of the implementation of FaceQnet is data preprocessing (shown in Figure 3.4). Generally, data preprocessing removes unnecessary data, which directly improves the quality of machine learning algorithms. The background of the images will affect the quality score, which would give us biased results. One way to avoid feature extraction from the background is to crop the input images to only include the face before using FIQMs. FaceQnet uses Multi-task Cascaded Convolutional Networks (MTCNN) to detect the face and extract its coordinates. In the next step, the facial image is cropped to a 224×224 image and used as the input image in FaceQnet v1.

FaceQnet uses a subset of the VGGFace2 [18] dataset to create a pre-trained model for its quality predictions. The subset consists of 300 subjects. The FIQM first generates ground truth quality measures by labeling the 300 subjects in the training dataset. The ground truth quality measures are then used to train the deep regression model so that it can generate quality scores.

Chapter 4

Objective Assessment

This chapter contains the development process of the Face Image Quality Assessment application. To start, we will look at functional and non-functional requirements in Section 4.1, followed by use cases in Section 4.2. Section 4.3 discusses our choice of front- and backend implementation, followed by Section 4.4, which shows off our design and implementation. We include sequence diagrams in Section 4.5 to show how some of the core functionality works. Finally, in Section 4.6 we conclude the chapter with an overview of the user testing process, where we ensure the quality of the application.
