Implementing a robust explanatory bias in a person re-identification network

Esube Bekele, Wallace E. Lawson, Zachary Horne, Sangeet Khemlani

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Scopus citation

Abstract

Deep learning has significantly improved attribute recognition in recent years. However, many of these networks remain 'black boxes', and providing a meaningful explanation of their decisions is a major challenge. When such a network misidentifies a person, it should be able to explain the mistake. The ability to generate explanations compelling enough to serve as useful, human-level accounts of a system's operations is still in its infancy. In this paper, we use person re-identification (re-ID) networks as a platform for generating explanations. We propose and implement a framework that explains person re-ID using soft-biometric attributes. In particular, the resulting framework embodies a cognitively validated explanatory bias: people prefer and produce explanations that concern inherent properties rather than extrinsic influences. This bias is pervasive in that it affects the fitness of explanations across a broad swath of contexts, particularly those concerning conflicting or anomalous observations. To explain person re-ID, we developed a multi-attribute residual network that treats a subset of its features as either inherent or extrinsic. Using these attributes, the system generates explanations based on inherent properties when the similarity of two input images is low, and explanations based on extrinsic properties when the similarity is high. We argue that such a framework provides a blueprint for making the decisions of deep networks comprehensible to human operators. As an intermediate step, we demonstrate state-of-the-art attribute recognition performance on two pedestrian datasets (PETA and PA100K) and a face-based attribute dataset (CelebA). The VIPeR dataset is then used to generate explanations for re-ID with a network trained on PETA attributes.
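The abstract's explanation-selection rule can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the attribute examples, and the similarity threshold are all hypothetical, assuming only the rule stated above: low similarity between two images yields an explanation from inherent attributes, high similarity yields one from extrinsic attributes.

```python
# Hypothetical sketch of the explanatory-bias rule described in the abstract.
# Attribute names and the 0.5 threshold are illustrative assumptions, not
# values from the paper.

def select_explanation_attributes(similarity, inherent, extrinsic, threshold=0.5):
    """Choose which soft-biometric attributes an explanation should cite.

    similarity -- re-ID similarity score for a pair of images, in [0, 1]
    inherent   -- attributes tied to the person (e.g. gender, age range)
    extrinsic  -- attributes tied to external factors (e.g. clothing, bags)
    """
    if similarity < threshold:
        # Low similarity: a mismatch is best explained by inherent properties.
        return inherent
    # High similarity: a match is best explained by extrinsic properties.
    return extrinsic

# Illustrative usage with made-up attribute labels:
inherent = ["female", "age 30-45"]
extrinsic = ["backpack", "long coat"]
print(select_explanation_attributes(0.2, inherent, extrinsic))  # inherent set
print(select_explanation_attributes(0.9, inherent, extrinsic))  # extrinsic set
```

In the paper's framework these attribute sets come from a multi-attribute residual network rather than hand-written lists; the sketch only captures the decision rule that maps similarity to an explanation type.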

Original language: English (US)
Title of host publication: Proceedings - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2018
Publisher: IEEE Computer Society
Pages: 2246-2253
Number of pages: 8
Volume: 2018-June
ISBN (Electronic): 9781538661000
DOI: 10.1109/CVPRW.2018.00291
State: Published - Dec 13 2018
Event: 31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2018 - Salt Lake City, United States
Duration: Jun 18 2018 - Jun 22 2018

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering

Cite this

Bekele, E., Lawson, W. E., Horne, Z., & Khemlani, S. (2018). Implementing a robust explanatory bias in a person re-identification network. In Proceedings - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2018 (Vol. 2018-June, pp. 2246-2253). [8575462] IEEE Computer Society. https://doi.org/10.1109/CVPRW.2018.00291