TY - GEN
T1 - Implementing a robust explanatory bias in a person re-identification network
AU - Bekele, Esube
AU - Lawson, Wallace E.
AU - Horne, Zachary
AU - Khemlani, Sangeet
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/12/13
Y1 - 2018/12/13
N2 - Deep learning has significantly improved attribute recognition in recent years. However, many of these networks remain 'black boxes', and providing a meaningful explanation of their decisions is a major challenge. When these networks misidentify a person, they should be able to explain this mistake. The ability to generate explanations compelling enough to serve as useful accounts of the system's operations at a high, human-comprehensible level is still in its infancy. In this paper, we utilize person re-identification (re-ID) networks as a platform to generate explanations. We propose and implement a framework that can be used to explain person re-ID using soft-biometric attributes. In particular, the resulting framework embodies a cognitively validated explanatory bias: people prefer and produce explanations that concern inherent properties instead of extrinsic influences. This bias is pervasive in that it affects the fitness of explanations across a broad swath of contexts, particularly those that concern conflicting or anomalous observations. To explain person re-ID, we developed a multi-attribute residual network that treats a subset of its features as either inherent or extrinsic. Using these attributes, the system generates explanations based on inherent properties when the similarity of two input images is low, and it generates explanations based on extrinsic properties when the similarity is high. We argue that such a framework provides a blueprint for how to make the decisions of deep networks comprehensible to human operators. As an intermediate step, we demonstrate state-of-the-art attribute recognition performance on two pedestrian datasets (PETA and PA100K) and a face-based attribute dataset (CelebA). The VIPeR dataset is then used to generate explanations for re-ID with a network trained on PETA attributes.
UR - http://www.scopus.com/inward/record.url?scp=85060854790&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85060854790&partnerID=8YFLogxK
U2 - 10.1109/CVPRW.2018.00291
DO - 10.1109/CVPRW.2018.00291
M3 - Conference contribution
AN - SCOPUS:85060854790
T3 - IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
SP - 2246
EP - 2253
BT - Proceedings - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2018
PB - IEEE Computer Society
T2 - 31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2018
Y2 - 18 June 2018 through 22 June 2018
ER -