Implementing a robust explanatory bias in a person re-identification network

Esube Bekele, Wallace E. Lawson, Zachary Horne, Sangeet Khemlani

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Deep learning has significantly improved attribute recognition in recent years. However, many of these networks remain 'black boxes', and providing a meaningful explanation of their decisions is a major challenge. When such a network misidentifies a person, it should be able to explain the mistake. The ability to generate explanations compelling enough to serve as useful, human-level accounts of a system's operations is still in its infancy. In this paper, we use person re-identification (re-ID) networks as a platform for generating explanations. We propose and implement a framework that explains person re-ID using soft-biometric attributes. In particular, the resulting framework embodies a cognitively validated explanatory bias: people prefer and produce explanations that concern inherent properties rather than extrinsic influences. This bias is pervasive in that it affects the fitness of explanations across a broad swath of contexts, particularly those involving conflicting or anomalous observations. To explain person re-ID, we developed a multi-attribute residual network that treats a subset of its features as either inherent or extrinsic. Using these attributes, the system generates explanations based on inherent properties when the similarity of two input images is low, and explanations based on extrinsic properties when the similarity is high. We argue that such a framework provides a blueprint for making the decisions of deep networks comprehensible to human operators. As an intermediate step, we demonstrate state-of-the-art attribute recognition performance on two pedestrian datasets (PETA and PA100K) and a face-based attribute dataset (CelebA). The VIPeR dataset is then used to generate explanations for re-ID with a network trained on PETA attributes.
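The similarity-gated rule described in the abstract — explain a mismatch via inherent (soft-biometric) properties, explain a match via extrinsic ones — can be sketched as follows. This is an illustrative sketch only: the attribute lists, threshold, and function names are assumptions, not the authors' implementation, and the paper's actual features come from a multi-attribute residual network rather than raw vectors.

```python
# Illustrative sketch of the explanation-selection rule: low re-ID similarity
# triggers explanations from inherent attributes; high similarity triggers
# explanations from extrinsic attributes. All names here are hypothetical.
import math

INHERENT = ["gender", "age group", "hair length"]                # properties of the person
EXTRINSIC = ["backpack", "jacket color", "lower-body clothing"]  # situational cues

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def explain_reid(feat_a, feat_b, threshold=0.5):
    """Return the re-ID decision and the attribute family used to explain it."""
    sim = cosine_similarity(feat_a, feat_b)
    if sim >= threshold:
        # High similarity: same identity, so explain the match via extrinsic cues.
        return "match", EXTRINSIC
    # Low similarity: different identities, so explain via inherent properties.
    return "mismatch", INHERENT
```

For example, two identical feature vectors yield a "match" explained with extrinsic attributes, while orthogonal vectors yield a "mismatch" explained with inherent ones. The threshold value here is arbitrary; in practice it would be tuned on a validation set.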

Original language: English (US)
Title of host publication: Proceedings - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2018
Publisher: IEEE Computer Society
Pages: 2246-2253
Number of pages: 8
Volume: 2018-June
ISBN (Electronic): 9781538661000
DOI: 10.1109/CVPRW.2018.00291
State: Published - Dec 13 2018
Event: 31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2018 - Salt Lake City, United States
Duration: Jun 18 2018 - Jun 22 2018



ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering

Cite this

Bekele, E., Lawson, W. E., Horne, Z., & Khemlani, S. (2018). Implementing a robust explanatory bias in a person re-identification network. In Proceedings - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2018 (Vol. 2018-June, pp. 2246-2253). [8575462] IEEE Computer Society. https://doi.org/10.1109/CVPRW.2018.00291

