4. EXPERIMENTAL EVALUATION

4.2 Face recognition algorithm evaluation

4.2.2 Score-level and Feature-level fusion evaluation

To improve the accuracy and reduce the EER, we used score-level and feature-level fusion. The graphs in Tables 8 and 9 show the false non-match rate (FNMR) vs. false match rate (FMR) plots and the ROC curves (the ROC curves also show the EER and accuracy) for score-level fusion and feature-level fusion of the three patch feature vectors:


Table 8: False non-match rate (FNMR) vs. false match rate (FMR) and ROC curves of score-level fusion on the FEI and TUFTS datasets

[FNMR vs. FMR plots and ROC curves for score-level fusion: FEI dataset and TUFTS dataset]


Table 9: False non-match rate (FNMR) vs. false match rate (FMR) and ROC curves of feature-level fusion on the FEI and TUFTS datasets

[FNMR vs. FMR plots and ROC curves for feature-level fusion: FEI dataset and TUFTS dataset]


Table 10: EER and accuracy of score-level and feature-level fusion on the FEI and TUFTS datasets

Fusion          Dataset    EER (%)    Accuracy (%)
Score-level     FEI        13.2489    86.7511
Score-level     TUFTS       8.6052    91.3948
Feature-level   FEI        20.5321    79.4679
Feature-level   TUFTS      14.1753    85.8247

(Accuracy is reported as 100% minus the EER.)

For score-level fusion, the weights for the three fused scores were determined by trial and error. Increasing the weight of the top patch while decreasing those of the top-left and top-right patches improved the accuracy, from weights (w1 = 0.5, w2 = 0.25, w3 = 0.25) up to (w1 = 0.9, w2 = 0.05, w3 = 0.05); beyond this point, the accuracy on the FEI dataset started to decrease. The weights of the top-left and top-right patches were kept equal because, as the earlier results show, these two patches achieved nearly the same accuracy on both the FEI and TUFTS datasets.
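
As an illustration, a minimal sketch of this weighted score-level fusion and the trial-and-error weight search; the function and variable names are hypothetical, and the score arrays stand in for per-pair patch distances:

    import numpy as np

    def fuse_scores(s_top, s_left, s_right, w=(0.9, 0.05, 0.05)):
        """Weighted sum of the three per-patch match scores (here: distances)."""
        return w[0] * s_top + w[1] * s_left + w[2] * s_right

    # Example per-pair patch distances (placeholder values).
    s_top = np.array([0.4, 1.2, 0.9])
    s_left = np.array([0.7, 1.1, 1.3])
    s_right = np.array([0.6, 1.0, 1.4])

    # Trial-and-error search: raise the top-patch weight and split the
    # remainder equally between the top-left and top-right patches.
    for w1 in (0.5, 0.6, 0.7, 0.8, 0.9):
        w2 = w3 = (1.0 - w1) / 2.0
        fused = fuse_scores(s_top, s_left, s_right, (w1, w2, w3))
        print((w1, w2, w3), fused)  # evaluate EER/accuracy here instead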

Table 10 shows that score-level fusion gave better accuracy than feature-level fusion in our case. The score-level fusion accuracy is also better than the accuracies we obtained before fusion. Feature-level fusion did not work well in our case because the accuracies of the top-left and top-right patches were poor compared to the top patch: for feature-level fusion we fuse all three feature vectors without any weights, which gives them all equal weight, so the accuracy drops due to the low accuracies of the top-left and top-right patches. For score-level fusion, on the other hand, we assign lower weights to the less accurate top-left and top-right patches, which gives better results.
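
For comparison, a minimal sketch of the unweighted feature-level fusion described above; the function names are hypothetical, and the embeddings are assumed to be L2-normalised 128-d FaceNet patch features:

    import numpy as np

    def fuse_features(f_top, f_left, f_right):
        """Concatenate the three patch embeddings; each contributes equally."""
        fused = np.concatenate([f_top, f_left, f_right])
        return fused / np.linalg.norm(fused)  # re-normalise the fused vector

    def match_distance(a, b):
        """Euclidean distance between two fused feature vectors."""
        return np.linalg.norm(a - b)

    # Example with random stand-ins for 128-d patch embeddings.
    rng = np.random.default_rng(0)
    emb = lambda: rng.normal(size=128)
    probe = fuse_features(emb(), emb(), emb())
    gallery = fuse_features(emb(), emb(), emb())
    print(match_distance(probe, gallery))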


5. Conclusion

The experimental evaluation of this thesis, we believe, confirms that patch-based transfer learning and fine-tuning of a CNN-based pre-trained model can be used for partial face recognition with high accuracy. As the experiments performed in this thesis show, masked face recognition can be performed successfully using deep learning. Face recognition from only parts of the face, such as the top-left or top-right patch, can also be done with an accuracy of up to 80% on some databases. This answers our research questions: to what extent can a partial face, or a face covered with a mask, be recognized? And can patch-based deep learning be used for partial-face recognition?

This thesis incorporated the following methods: using a 3D face-masking tool to create masked datasets; face detection and alignment on the masked datasets; patch-based transfer learning and fine-tuning of a pre-trained deep learning model (FaceNet); feature extraction with the trained models on different test datasets; and score-level and feature-level fusion of the different patches.

The KomNET dataset does not have many pose variations (for example, it contains no 180-degree side profiles), whereas more than half of the images in FEI are left and right profiles. We separated training and testing by using two completely different datasets, KomNET for training and FEI for testing, and we think this is the main reason for the low performance [18]. The left and right profiles gave different features than the frontal profiles, so there were low similarity scores (large distances) between some pairs from the same person. The performance on datasets with large pose variations can be improved by including such pose variations in the training data.

The performance on the TUFTS dataset was better than on the FEI dataset, with a final accuracy of 91.3948%, which still corresponds to a large EER of 8.6052%. Apart from the reason mentioned above about having different training and test datasets, another reason might be that the image quality of TUFTS is lower than that of the other datasets used: when scaled to 160×160 pixels, some of the images are blurry. To make FaceNet perform better on low-quality images, we would have to add low-quality images during training [60], which should improve the accuracy further.


The lighting used in both the FEI and KomNET datasets is similar, whereas the lighting used for the TUFTS dataset is different. Another way to improve the performance on datasets like TUFTS is therefore to use lighting changes as data augmentation, or to use images with varied lighting while training the models, since a difference in illumination causes face recognition accuracy to drop [18].

Even though our models were trained with only 50 classes consisting of 1200 images in total (training and validation together), our accuracy reached 86.7511% for FEI and 91.3948% for TUFTS. Increasing the training data for transfer learning and fine-tuning should therefore increase the accuracy even further [37].

From the experiments in this thesis, we can conclude that partial face recognition can be achieved with high accuracy using patch-based deep learning.


6. Future work

The performance of the methodology used in this thesis can be improved by increasing the number and variety of training images by:

• Using data augmentation while training the models, especially for different lighting conditions (a minimal augmentation sketch follows this list)

• Increasing the number of classes and images per person while training the models

• Using a large variety of poses for the training dataset

• Using both good-quality and low-quality images for the training
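
As an illustration of the first point, a minimal sketch of lighting-oriented augmentation using TensorFlow's image ops; the jitter ranges are assumptions, not tuned values:

    import tensorflow as tf

    def lighting_augment(image):
        """Randomly jitter brightness and contrast to simulate lighting changes."""
        image = tf.image.random_brightness(image, max_delta=0.3)
        image = tf.image.random_contrast(image, lower=0.7, upper=1.3)
        return tf.clip_by_value(image, 0.0, 1.0)

    # Applied per sample in an input pipeline, e.g.:
    # dataset = dataset.map(lambda img, label: (lighting_augment(img), label))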

One way of possibly improving the method used in this thesis is to use a different loss function than the triplet loss in the model. As the triplet loss function is highly sensitive to noise, using a hierarchical triplet loss function can possibly improve the accuracy [64], [84]. Using a quadruplet loss function may improve the accuracy as well, since the triplet loss can cause a relatively large intra-class variation, and reducing this intra-class variation while enlarging the inter-class variation can improve the model accuracy [85]–[87].
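
For reference, a minimal sketch of the standard FaceNet-style triplet loss [60] that these alternatives would replace; the margin value here is an assumption:

    import tensorflow as tf

    def triplet_loss(anchor, positive, negative, margin=0.2):
        """max(0, d(a, p) - d(a, n) + margin), averaged over the batch.

        anchor/positive/negative are batches of L2-normalised embeddings."""
        pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
        neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
        return tf.reduce_mean(tf.maximum(pos_dist - neg_dist + margin, 0.0))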

Increasing the number of patches and making them overlap, as in the paper “Patch strategy for deep face recognition,” can also improve the accuracy, since the models will be able to learn more underlying features [6].
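
A minimal sketch of extracting overlapping patches with a sliding window; the patch size and stride are hypothetical values, not ones used in this thesis:

    import numpy as np

    def extract_patches(img, patch=80, stride=40):
        """Slide a window over the face crop; stride < patch makes patches overlap."""
        h, w = img.shape[:2]
        return [img[y:y + patch, x:x + patch]
                for y in range(0, h - patch + 1, stride)
                for x in range(0, w - patch + 1, stride)]

    # A 160x160 aligned face with patch=80, stride=40 yields a 3x3 grid.
    patches = extract_patches(np.zeros((160, 160, 3)))
    print(len(patches))  # 9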

An extension of this work would be to apply the proposed methods to other kinds of partial face images, for example to recognize people in low-resolution or oddly angled camera images such as CCTV footage.


7. References

[1] N. Damer, J. H. Grebe, C. Chen, F. Boutros, F. Kirchbuchner, and A. Kuijper, “The Effect of Wearing a Mask on Face Recognition Performance: an Exploratory Study,” BIOSIG 2020 - Proc. 19th Int. Conf. Biometrics Spec. Interes. Gr., Jul. 2020, [Online]. Available: http://arxiv.org/abs/2007.13521.

[2] “Criminals use coronavirus masks to conceal themselves,” Al Arabiya, 2020.

[3] D. Babwin and S. Dazio, “Coronavirus Masks a Boon for Crooks Who Hide Their Faces,” NBC Miami, 2020.

[4] L. He, H. Li, Q. Zhang, and Z. Sun, “Dynamic Feature Matching for Partial Face Recognition,” IEEE Trans. Image Process., vol. 28, no. 2, pp. 791–802, Feb. 2019, doi: 10.1109/TIP.2018.2870946.

[5] R. Weng, J. Lu, J. Hu, G. Yang, and Y.-P. Tan, “Robust Feature Set Matching for Partial Face Recognition,” in 2013 IEEE International Conference on Computer Vision, Dec. 2013, pp. 601–608, doi: 10.1109/ICCV.2013.80.

[6] Y. Zhang, K. Shang, J. Wang, N. Li, and M. M. Y. Zhang, “Patch strategy for deep face recognition,” IET Image Process., vol. 12, no. 5, pp. 819–825, May 2018, doi: 10.1049/iet-ipr.2017.1085.

[7] I. Cheheb, N. Al-Maadeed, S. Al-Madeed, A. Bouridane, and R. Jiang, “Random sampling for patch-based face recognition,” in 2017 5th International Workshop on Biometrics and Forensics (IWBF), Apr. 2017, pp. 1–5, doi: 10.1109/IWBF.2017.7935104.

[8] W. Hariri, “Efficient Masked Face Recognition Method during the COVID-19 Pandemic,” May 2021, doi: 10.21203/rs.3.rs-39289/v1.

[9] J. Deng, J. Guo, Y. Zhou, J. Yu, I. Kotsia, and S. Zafeiriou, “RetinaFace: Single-stage Dense Face Localisation in the Wild,” arXiv, May 2019, [Online]. Available: http://arxiv.org/abs/1905.00641.

[10] A. Elmahmudi and H. Ugail, “Experiments on Deep Face Recognition Using Partial Faces,” in 2018 International Conference on Cyberworlds (CW), Oct. 2018, pp. 357–362, doi: 10.1109/CW.2018.00071.

[11] D. Misra, C. Crispim-Junior, and L. Tougne, “Patch-Based CNN Evaluation for Bark Classification,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 12540 LNCS, Edinburgh, United Kingdom, 2020, pp. 197–212.

[12] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM, vol. 60, no. 6, pp. 84–90, May 2017, doi: 10.1145/3065386.

[13] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 6, pp. 1137–1149, Jun. 2017, doi: 10.1109/TPAMI.2016.2577031.

[14] C. Sun, Y. Yang, C. Wen, K. Xie, and F. Wen, “Voiceprint Identification for Limited Dataset Using the Deep Migration Hybrid Model Based on Transfer Learning,” Sensors, vol. 18, no. 7, p. 2399, Jul. 2018, doi: 10.3390/s18072399.

[15] J. Li, T. Qiu, C. Wen, K. Xie, and F.-Q. Wen, “Robust Face Recognition Using the Deep C2D-CNN Model Based on Decision-Level Fusion,” Sensors, vol. 18, no. 7, p. 2080, Jun. 2018, doi: 10.3390/s18072080.

[16] Y.-X. Yang, C. Wen, K. Xie, F.-Q. Wen, G.-Q. Sheng, and X.-G. Tang, “Face Recognition Using the SR-CNN Model,” Sensors, vol. 18, no. 12, p. 4237, Dec. 2018, doi: 10.3390/s18124237.

[17] I. Masi, Y. Wu, T. Hassner, and P. Natarajan, “Deep Face Recognition: A Survey,” in 2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Oct. 2018, pp. 471–478, doi: 10.1109/SIBGRAPI.2018.00067.

[18] C. Hu, J. Harguess, and J. K. Aggarwal, “Patch-based face recognition from video,” in 2009 16th IEEE International Conference on Image Processing (ICIP), Nov. 2009, pp. 3321–3324, doi: 10.1109/ICIP.2009.5413935.

[19] M. Oquab, L. Bottou, I. Laptev, and J. Sivic, “Learning and Transferring Mid-level Image Representations Using Convolutional Neural Networks,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 2014, pp. 1717–1724, doi: 10.1109/CVPR.2014.222.

[20] A. Elmahmudi and H. Ugail, “Deep face recognition using imperfect facial data,” Futur. Gener. Comput. Syst., vol. 99, pp. 213–225, Oct. 2019, doi: 10.1016/j.future.2019.04.025.

[21] M. J. Sudhamani, M. K. Venkatesha, and K. R. Radhika, “Revisiting feature level and score level fusion techniques in multimodal biometrics system,” in 2012 International Conference on Multimedia Computing and Systems, May 2012, pp. 881–885, doi: 10.1109/ICMCS.2012.6320155.

[22] F. Wang and J. Han, “Multimodal biometric authentication based on score level fusion using support vector machine,” Opto-Electronics Rev., vol. 17, no. 1, Jan. 2009, doi: 10.2478/s11772-008-0054-8.

[23] Y. Gao and M. Maggs, “Feature-Level Fusion in Personal Identification,” in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), 2005, vol. 1, pp. 468–473, doi: 10.1109/CVPR.2005.159.

[24] J. Guo and J. Deng, “InsightFace: a Deep Learning Toolkit for Face Analysis.” http://insightface.ai/index.html (accessed Jan. 15, 2021).

[25] Y. Feng, “Face3d,” GitHub. https://github.com/YadiraF/face3d (accessed Jan. 16, 2021).

[26] InsightFace, “Face Mask Renderer tool,” GitHub. https://github.com/deepinsight/insightface/tree/master/recognition/tools (accessed Jan. 15, 2021).

[27] Z. Wang et al., “Masked Face Recognition Dataset and Application,” Mar. 2020, [Online]. Available: http://arxiv.org/abs/2003.09093.

[28] J. J. Lee, U. J. Gim, J. H. Kim, K. H. Yoo, Y. H. Park, and A. Nasridinov, “Identifying customer interest from surveillance camera based on deep learning,” in Proceedings - 2020 IEEE International Conference on Big Data and Smart Computing, BigComp 2020, Feb. 2020, pp. 19–20, doi: 10.1109/BigComp48618.2020.0-105.

[29] K. Zhang, Z. Zhang, Z. Li, and Y. Qiao, “Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks,” IEEE Signal Process. Lett., vol. 23, no. 10, pp. 1499–1503, Oct. 2016, doi: 10.1109/LSP.2016.2603342.

[30] K. Zhang, Z. Zhang, Z. Li, and Y. Qiao, “Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks,” IEEE Signal Process. Lett., vol. 23, no. 10, pp. 1499–1503, Oct. 2016, doi: 10.1109/LSP.2016.2603342.

[31] Q. Cao, L. Shen, W. Xie, O. M. Parkhi, and A. Zisserman, “VGGFace2: A Dataset for Recognising Faces across Pose and Age,” in 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), May 2018, pp. 67–74, doi: 10.1109/FG.2018.00020.

[32] X. Guo and J. Nie, “Face Recognition System for Complex Surveillance Scenarios,” J. Phys. Conf. Ser., vol. 1544, no. 1, May 2020, doi: 10.1088/1742-6596/1544/1/012146.

[33] T.-X. Jiang, T.-Z. Huang, X.-L. Zhao, and T.-H. Ma, “Patch-Based Principal Component Analysis for Face Recognition,” Comput. Intell. Neurosci., vol. 2017, pp. 1–9, 2017, doi: 10.1155/2017/5317850.

[34] Z. Xu, Y. Liu, M. Ye, L. Huang, H. Yu, and X. Chen, “Patch Based Collaborative Representation with Gabor Feature and Measurement Matrix for Face Recognition,” Math. Probl. Eng., vol. 2018, pp. 1–13, 2018, doi: 10.1155/2018/3025264.

[35] A. Vijayan, S. Kareem, and J. J. Kizhakkethottam, “Face Recognition Across Gender Transformation Using SVM Classifier,” Procedia Technol., vol. 24, pp. 1366–1373, 2016, doi: 10.1016/j.protcy.2016.05.150.

[36] H. Li, Z. Lin, X. Shen, J. Brandt, and G. Hua, “A convolutional neural network cascade for face detection,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2015, pp. 5325–5334, doi: 10.1109/CVPR.2015.7299170.

[37] S. T. Krishna and H. K. Kalluri, “Deep learning and transfer learning approaches for image classification,” Int. J. Recent Technol. Eng., vol. 7, no. 5S4, pp. 427–432, 2019.

[38] S. J. Pan and Q. Yang, “A Survey on Transfer Learning,” IEEE Trans. Knowl. Data Eng., vol. 22, no. 10, pp. 1345–1359, Oct. 2010, doi: 10.1109/TKDE.2009.191.

[39] F. Chollet, “Transfer learning & fine-tuning,” Keras, 2020. https://keras.io/guides/transfer_learning/.

[40] R. Ahdid, S. Safi, and B. Manaut, Euclidean and geodesic distance between a facial feature points in two-dimensional face recognition system, vol. 14, no. 4A Special Issue. 2017.

[41] A. C. Lorena, A. C. P. L. F. de Carvalho, and J. M. P. Gama, “A review on the combination of binary classifiers in multiclass problems,” Artif. Intell. Rev., vol. 30, no. 1–4, pp. 19–37, Dec. 2008, doi: 10.1007/s10462-009-9114-9.

[42] A. D. Back, “Multiclass Classification Using Support Vector Machines,” Baeldung, 2020. https://www.baeldung.com/cs/svm-multiclass-classification (accessed Dec. 09, 2020).

[43] N. H. Spencer, Essentials of Multivariate Data Analysis. Chapman and Hall/CRC, 2013.

[44] M. K. Hossain and S. Abufardeh, “A new method of calculating squared euclidean distance (SED) using PTreE technology and its performance analysis,” in Proceedings of 34th International Conference on Computers and Their Applications, CATA 2019, 2019, pp. 45–54, doi: 10.29007/trrg.

[45] I. N. G. A. Astawa, I. K. G. D. Putra, M. Sudarma, and R. S. Hartati, “KomNET: Face Image Dataset from Various Media for Face Recognition,” Data Br., vol. 31, p. 105677, Aug. 2020, doi: 10.1016/j.dib.2020.105677.

[46] C. E. Thomaz and G. A. Giraldi, “A new ranking method for principal components analysis and its application to face image analysis,” Image Vis. Comput., vol. 28, no. 6, pp. 902–913, Jun. 2010, doi: 10.1016/j.imavis.2009.11.005.

[47] E. Z. Tenorio and C. E. Thomaz, “Analise Multilinear Discriminante De Formas Frontais De Imagens 2D De Face,” in X SBAI - Simpósio Brasileiro de Automação Inteligente, 2011, vol. X, pp. 1043–1048.

[48] V. do Amaral, C. Fígaro-Garcia, G. J. F. Gattas, and C. E. Thomaz, “Normalização espacial de imagens frontais de face em ambientes controlados e não-controlados,” FaSCi-Tech, vol. 1, no. 1, 2009, [Online]. Available: http://fatecsaocaetano.edu.br/fascitech/index.php/fascitech/article/view/9/8.

[49] V. Amaral and C. E. Thomaz, “Normalizacao Espacial de Imagens Frontais de Face - Technical Report,” São Paulo, Brazil, 2008.

[50] L. L. de O. Junior and C. E. Thomaz, “Captura e Alinhamento de Imagens: Um Banco de Faces Brasileiro - Undergraduate Technical Report,” São Paulo, Brazil, 2006.

[51] K. Panetta et al., “A Comprehensive Database for Benchmarking Imaging Systems,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 3, pp. 509–520, Mar. 2020, doi: 10.1109/TPAMI.2018.2884458.

[52] J. Cho, K. Lee, E. Shin, G. Choy, and S. Do, “How much data is needed to train a medical image deep learning system to achieve necessary high accuracy?,” Nov. 2015, [Online]. Available: http://arxiv.org/abs/1511.06348.

[53] P. Paysan, R. Knothe, B. Amberg, S. Romdhani, and T. Vetter, “A 3D Face Model for Pose and Illumination Invariant Face Recognition,” in 2009 Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance, Sep. 2009, pp. 296–301, doi: 10.1109/AVSS.2009.58.

[54] X. Zhu, Z. Lei, J. Yan, D. Yi, and S. Z. Li, “High-fidelity Pose and Expression Normalization for face recognition in the wild,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2015, pp. 787–796, doi: 10.1109/CVPR.2015.7298679.

[55] A. Bas, P. Huber, W. A. P. Smith, M. Awais, and J. Kittler, “3D Morphable Models as Spatial Transformer Networks,” in 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), Oct. 2017, pp. 895–903, doi: 10.1109/ICCVW.2017.110.

[56] E. Kaziakhmedov, K. Kireev, G. Melnikov, M. Pautov, and A. Petiushko, “Real-world Attack on MTCNN Face Detection System,” in 2019 International Multi-Conference on Engineering, Computer and Information Sciences (SIBIRCON), Oct. 2019, pp. 0422–0427, doi: 10.1109/SIBIRCON48586.2019.8958122.

[57] J. Deng, J. Guo, Y. Zhou, J. Yu, I. Kotsia, and S. Zafeiriou, “RetinaFace Face Detector,” GitHub. https://github.com/deepinsight/insightface/tree/master/detection/RetinaFace (accessed Feb. 01, 2021).

[58] S. I. Serengil, “Face Alignment for Face Recognition in Python within OpenCV,” 2020. https://sefiks.com/2020/02/23/face-alignment-for-face-recognition-in-python-within-opencv/.

[59] D. Hacker, “How to align faces with OpenCV in Python,” 2020. http://datahacker.rs/010-how-to-align-faces-with-opencv-in-python/.

[60] F. Schroff, D. Kalenichenko, and J. Philbin, “FaceNet: A unified embedding for face recognition and clustering,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2015, pp. 815–823, doi: 10.1109/CVPR.2015.7298682.

[61] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the Inception Architecture for Computer Vision,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2016, pp. 2818–2826, doi: 10.1109/CVPR.2016.308.

[62] R. M. Kamble et al., “Automated Diabetic Macular Edema (DME) Analysis using Fine Tuning with Inception-Resnet-v2 on OCT Images,” in 2018 IEEE-EMBS Conference on Biomedical Engineering and Sciences (IECBES), Dec. 2018, pp. 442–446, doi: 10.1109/IECBES.2018.8626616.

[63] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi, “Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning,” 31st AAAI Conf. Artif. Intell. AAAI 2017, pp. 4278–4284, Feb. 2016, [Online]. Available: http://arxiv.org/abs/1602.07261.

[64] I. William, D. R. Ignatius Moses Setiadi, E. H. Rachmawanto, H. A. Santoso, and C. A. Sari, “Face Recognition using FaceNet (Survey, Performance Test, and Comparison),” in 2019 Fourth International Conference on Informatics and Computing (ICIC), Oct. 2019, pp. 1–6, doi: 10.1109/ICIC47613.2019.8985786.

[65] D. Sandberg, “Face recognition using Tensorflow,” GitHub, 2018. https://github.com/davidsandberg/facenet (accessed Feb. 27, 2021).

[66] J. Brownlee, Deep Learning for Computer Vision: Image Classification, Object Detection, and Face Recognition in Python. 2019.

[67] H. Taniai, “Keras-facenet,” GitHub. https://github.com/nyoki-mtl/keras-facenet (accessed Mar. 01, 2021).

[68] S. Gadicherla, “Tensorflow Vs Keras? — Comparison by building a model for image classification,” Hackernoon, 2020. https://hackernoon.com/tensorflow-vs-keras-comparison-by-building-a-model-for-image-classification-f007f336c519.

[69] M. T. Rosenstein, Z. Marx, L. P. Kaelbling, and T. G. Dietterich, “To Transfer or Not To Transfer,” NIPS 2005 Workshop on Transfer Learning, 2005.

[70] B. Xu, N. Wang, T. Chen, and M. Li, “Empirical Evaluation of Rectified Activations in Convolutional Network,” May 2015, [Online]. Available: http://arxiv.org/abs/1505.00853.

[71] J. Brownlee, Better Deep Learning: Train Faster, Reduce Overfitting, and Make Better Predictions, vol. 1, no. 2. 2018.

[72] D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” Dec. 2014, [Online]. Available: http://arxiv.org/abs/1412.6980.

[73] S. P. Mohanty, D. P. Hughes, and M. Salathé, “Using Deep Learning for Image-Based Plant Disease Detection,” Front. Plant Sci., vol. 7, Sep. 2016, doi: 10.3389/fpls.2016.01419.

[74] A. Ross and K. Nandakumar, “Fusion, Score-Level,” in Encyclopedia of Biometrics, Boston, MA: Springer US, 2009, pp. 611–616.

[75] A. Rattani, D. R. Kisku, M. Bicego, and M. Tistarelli, “Feature Level Fusion of Face and Fingerprint Biometrics,” in 2007 First IEEE International Conference on Biometrics: Theory, Applications, and Systems, Sep. 2007, pp. 1–6, doi: 10.1109/BTAS.2007.4401919.

[76] P. Drozdowski, N. Wiegand, C. Rathgeb, and C. Busch, “Score Fusion Strategies in Single-Iris Dual-Probe Recognition Systems,” in Proceedings of the 2018 2nd International Conference on Biometric Engineering and Applications - ICBEA ’18, 2018, pp. 13–17, doi: 10.1145/3230820.3230823.

[77] A. A. Ross and R. Govindarajan, “Feature level fusion of hand and face biometrics,” in Biometric Technology for Human Identification II, Mar. 2005, vol. 5779, p. 196, doi: 10.1117/12.606093.

[78] S. K. Bhardwaj, “An Algorithm for Feature Level Fusion in Multimodal Biometric System,” Int. J. Adv. Res. Comput. Eng. Technol., vol. 3, no. 10, pp. 3499–3503, 2014.

[79] S. Almabdy and L. Elrefaei, “Deep Convolutional Neural Network-Based Approaches for Face Recognition,” Appl. Sci., vol. 9, no. 20, p. 4397, Oct. 2019, doi: 10.3390/app9204397.

[80] M. E. Schuckers, Computational Methods in Biometric Authentication. London: Springer London, 2010.

[81] M. E. Schuckers, “False Non-Match Rate,” in Computational Methods in Biometric Authentication, 2010, pp. 47–96.

[82] M. E. Schuckers, “False Match Rate,” in Computational Methods in Biometric Authentication, 2010, pp. 97–153.

[83] M. E. Schuckers, “Receiver Operating Characteristic Curve and Equal Error Rate,” in Computational Methods in Biometric Authentication, 2010, pp. 155–204.

[84] W. Ge, W. Huang, D. Dong, and M. R. Scott, “Deep Metric Learning with Hierarchical Triplet Loss,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2018, pp. 272–288.

[85] W. Chen, X. Chen, J. Zhang, and K. Huang, “Beyond Triplet Loss: A Deep Quadruplet Network for Person Re-identification,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul. 2017, pp. 1320–1329, doi: 10.1109/CVPR.2017.145.

[86] D. Cheng, Y. Gong, S. Zhou, J. Wang, and N. Zheng, “Person Re-identification by Multi-Channel Parts-Based CNN with Improved Triplet Loss Function,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2016, pp. 1335–1344, doi: 10.1109/CVPR.2016.149.

[87] Y. Wen, K. Zhang, Z. Li, and Y. Qiao, “A Discriminative Feature Learning Approach for Deep Face Recognition,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2016, pp. 499–515.


Appendix

Appendix A – Codes

Before running the code, use pip to install all the imported packages.

A1 - Code example of creating a masked face dataset

• Download and install the libraries and models required following the instructions from https://github.com/deepinsight/insightface/tree/master/recognition/tools

• Edit the code mask_renderer.py depending on your dataset; an example of the code used is shown below

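Since the original listing is not reproduced here, the following is a hypothetical sketch of how such a script might drive the renderer over a dataset folder. MaskRenderer and its render method stand in for whatever the repo's mask_renderer.py actually exposes; check that file for the real class name and call signature.

    import os
    import cv2  # pip install opencv-python

    from mask_renderer import MaskRenderer  # provided by the insightface tools

    renderer = MaskRenderer()

    # Mirror the per-person folder structure, saving a masked copy of each image.
    src_dir, dst_dir = "dataset/unmasked", "dataset/masked"
    for person in os.listdir(src_dir):
        os.makedirs(os.path.join(dst_dir, person), exist_ok=True)
        for name in os.listdir(os.path.join(src_dir, person)):
            img = cv2.imread(os.path.join(src_dir, person, name))
            masked = renderer.render(img)  # assumed method; adapt to the tool's API
            cv2.imwrite(os.path.join(dst_dir, person, name), masked)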