TY - GEN
T1 - Handling Stuck-at-faults in Memristor Crossbar Arrays using Matrix Transformations
AU - Zhang, Baogang
AU - Uysal, Necati
AU - Fan, Deliang
AU - Ewetz, Rickard
PY - 2019/1/21
Y1 - 2019/1/21
N2 - Matrix-vector multiplication is the dominating computational workload in the inference phase of neural networks. Memristor crossbar arrays (MCAs) can inherently execute matrix-vector multiplication with low latency and small power consumption. A key challenge is that the classification accuracy may be severely degraded by stuck-at-fault defects. Earlier studies have shown that the accuracy loss can be recovered by retraining each neural network or by utilizing additional hardware. In this paper, we propose to handle stuck-at faults using matrix transformations. A transformation T changes a weight matrix W into a weight matrix W' = T(W) that is more robust to stuck-at faults. In particular, we propose a row flipping transformation, a permutation transformation, and a value range transformation. The row flipping transformation translates stuck-off (stuck-on) faults into stuck-on (stuck-off) faults. The permutation transformation maps small (large) weights to memristors stuck-off (stuck-on). The value range transformation reduces the magnitude of the smallest and largest elements in the matrix, so that each stuck-at fault introduces an error of smaller magnitude. The experimental results demonstrate that the proposed framework recovers 99% of the accuracy loss introduced by stuck-at faults without requiring the neural network to be retrained.
AB - Matrix-vector multiplication is the dominating computational workload in the inference phase of neural networks. Memristor crossbar arrays (MCAs) can inherently execute matrix-vector multiplication with low latency and small power consumption. A key challenge is that the classification accuracy may be severely degraded by stuck-at-fault defects. Earlier studies have shown that the accuracy loss can be recovered by retraining each neural network or by utilizing additional hardware. In this paper, we propose to handle stuck-at faults using matrix transformations. A transformation T changes a weight matrix W into a weight matrix W' = T(W) that is more robust to stuck-at faults. In particular, we propose a row flipping transformation, a permutation transformation, and a value range transformation. The row flipping transformation translates stuck-off (stuck-on) faults into stuck-on (stuck-off) faults. The permutation transformation maps small (large) weights to memristors stuck-off (stuck-on). The value range transformation reduces the magnitude of the smallest and largest elements in the matrix, so that each stuck-at fault introduces an error of smaller magnitude. The experimental results demonstrate that the proposed framework recovers 99% of the accuracy loss introduced by stuck-at faults without requiring the neural network to be retrained.
UR - http://www.scopus.com/inward/record.url?scp=85061139119&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85061139119&partnerID=8YFLogxK
U2 - 10.1145/3287624.3287707
DO - 10.1145/3287624.3287707
M3 - Conference contribution
AN - SCOPUS:85061139119
T3 - Proceedings of the Asia and South Pacific Design Automation Conference, ASP-DAC
SP - 474
EP - 479
BT - ASP-DAC 2019 - 24th Asia and South Pacific Design Automation Conference
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 24th Asia and South Pacific Design Automation Conference, ASP-DAC 2019
Y2 - 21 January 2019 through 24 January 2019
ER -