TY - GEN
T1 - Attention-based Representation Learning for Time Series with Principal and Residual Space Monitoring
AU - Wang, Botao
AU - Tsung, Fugee
AU - Yan, Hao
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - The encoder-decoder network is one of the most common deep learning models for time series representation learning and anomaly detection. However, it is hard to reconstruct time series that are complex, correlated, and lack common patterns. In this paper, we apply the attention mechanism to rescale convolution layers and learn representations in the principal and the residual space. To avoid the reconstruction process, we define the residual space by the segments omitted according to the attention scores in the encoder. We introduce temporal information at the token level and use a sparsity penalty to improve representation learning. We apply the proposed model to anomaly classification and fault detection experiments on two datasets, i.e., a multivariate bearing fault dataset and the UCRArchive profile dataset. The results show that the representation learned by the proposed model is more likely to cluster by category, especially in the residual space. Compared to the baselines and state-of-the-art models, the proposed model achieves higher accuracy and recall in the limited-label setting, which illustrates the stability of the learned representation and its superiority in downstream tasks.
AB - The encoder-decoder network is one of the most common deep learning models for time series representation learning and anomaly detection. However, it is hard to reconstruct time series that are complex, correlated, and lack common patterns. In this paper, we apply the attention mechanism to rescale convolution layers and learn representations in the principal and the residual space. To avoid the reconstruction process, we define the residual space by the segments omitted according to the attention scores in the encoder. We introduce temporal information at the token level and use a sparsity penalty to improve representation learning. We apply the proposed model to anomaly classification and fault detection experiments on two datasets, i.e., a multivariate bearing fault dataset and the UCRArchive profile dataset. The results show that the representation learned by the proposed model is more likely to cluster by category, especially in the residual space. Compared to the baselines and state-of-the-art models, the proposed model achieves higher accuracy and recall in the limited-label setting, which illustrates the stability of the learned representation and its superiority in downstream tasks.
KW - Anomaly detection
KW - attention mechanism
KW - representation learning
KW - residual space
KW - time series
UR - http://www.scopus.com/inward/record.url?scp=85141667732&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85141667732&partnerID=8YFLogxK
U2 - 10.1109/CASE49997.2022.9926721
DO - 10.1109/CASE49997.2022.9926721
M3 - Conference contribution
AN - SCOPUS:85141667732
T3 - IEEE International Conference on Automation Science and Engineering
SP - 1833
EP - 1839
BT - 2022 IEEE 18th International Conference on Automation Science and Engineering, CASE 2022
PB - IEEE Computer Society
T2 - 18th IEEE International Conference on Automation Science and Engineering, CASE 2022
Y2 - 20 August 2022 through 24 August 2022
ER -