Interpretable Biometrics: Should We Rethink How Presentation Attack Detection is Evaluated?

Date
2020
Authors
Ana Filipa Sequeira
Jaime Cardoso
Tiago Filipe Gonçalves
João Tiago Pinto
Wilson Santos Silva
Publisher
IEEE
Abstract
Presentation attack detection (PAD) methods are commonly evaluated using metrics based on the predicted labels. This is a limitation, especially for the less transparent methods based on deep learning, which can freely learn the most suitable features. Though often more accurate, these models operate as complex black boxes, and the inner processes that support their predictions remain poorly understood. Interpretability tools are now being used to delve deeper into the operation of machine learning methods, especially artificial neural networks, to better understand how they reach their decisions. In this paper, we make a case for the integration of interpretability tools in the evaluation of PAD. A simple model for face PAD, based on convolutional neural networks, was implemented and evaluated using both traditional metrics (APCER, BPCER and EER) and interpretability tools (Grad-CAM), using data from the ROSE-Youtu video collection. The results show that interpretability tools can capture the intricate behavior of the implemented model more completely, and enable the identification of certain properties that should be verified by a PAD method that is robust, coherent, meaningful, and able to generalize adequately to unseen data and attacks. One can conclude that, with further effort devoted to greater objectivity in interpretability, this can be the key to deeper and more thorough PAD performance evaluation setups. © 2020 IEEE.
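For readers unfamiliar with the traditional metrics named in the abstract, the sketch below shows one common way to compute them: APCER is the fraction of attack presentations accepted as bona fide, BPCER is the fraction of bona fide presentations rejected as attacks, and the EER is the operating point where the two error rates are equal. The score convention (higher means more likely bona fide), the label encoding, and the function names pad_metrics and eer are illustrative assumptions, not code from the paper.

import numpy as np

def pad_metrics(scores, labels, threshold):
    # scores: higher = more likely bona fide (assumed convention)
    # labels: 1 = bona fide, 0 = attack (assumed encoding)
    bona = scores[labels == 1]
    attack = scores[labels == 0]
    apcer = np.mean(attack >= threshold)  # attacks accepted as bona fide
    bpcer = np.mean(bona < threshold)     # bona fide rejected as attacks
    return apcer, bpcer

def eer(scores, labels):
    # Sweep candidate thresholds; the EER lies where APCER and BPCER cross.
    thresholds = np.unique(scores)
    gaps = [abs(np.subtract(*pad_metrics(scores, labels, t))) for t in thresholds]
    t = thresholds[int(np.argmin(gaps))]
    apcer, bpcer = pad_metrics(scores, labels, t)
    return (apcer + bpcer) / 2.0

# Toy usage with fabricated scores, purely for illustration:
scores = np.array([0.9, 0.8, 0.7, 0.3, 0.2, 0.1])
labels = np.array([1, 1, 1, 0, 0, 0])
print(eer(scores, labels))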
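Grad-CAM, the interpretability tool used in the paper, weights the feature maps of a chosen convolutional layer by the spatially averaged gradients of the class score and passes the weighted sum through a ReLU, yielding a coarse heatmap of the regions that drove the prediction. Below is a minimal PyTorch sketch of that procedure; the ResNet-18 backbone, the target layer, and the random input are placeholder assumptions and do not reflect the model implemented in the paper.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)  # placeholder backbone, not the paper's PAD model
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["value"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

target_layer = model.layer4[-1]  # assumed choice of last conv block
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)  # placeholder input image
scores = model(x)
scores[0, scores[0].argmax()].backward()  # gradient of the top class score

# Grad-CAM: weight each feature map by its mean gradient, sum over channels, ReLU
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]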