TY - GEN
T1 - A highly scalable parallel algorithm for isotropic total variation models
AU - Wang, Jie
AU - Li, Qingyang
AU - Yang, Sen
AU - Fan, Wei
AU - Wonka, Peter
AU - Ye, Jieping
N1 - Publisher Copyright:
Copyright © 2014 by the International Machine Learning Society (IMLS). All rights reserved.
PY - 2014
Y1 - 2014
N2 - Total variation (TV) models are among the most popular and successful tools in signal processing. However, due to the complex nature of the TV term, it is challenging to efficiently compute a solution for large-scale problems. State-of-the-art algorithms that are based on the alternating direction method of multipliers (ADMM) often involve solving large-size linear systems. In this paper, we propose a highly scalable parallel algorithm for TV models that is based on a novel decomposition strategy of the problem domain. As a result, the TV models can be decoupled into a set of small and independent subproblems, which admit closed-form solutions. This makes our approach particularly suitable for parallel implementation. Our algorithm is guaranteed to converge to its global minimum. With N variables and np processes, the time complexity is O(N/(εnp)) to reach an ε-optimal solution. Extensive experiments demonstrate that our approach outperforms existing state-of-the-art algorithms, especially in dealing with high-resolution, mega-size images.
AB - Total variation (TV) models are among the most popular and successful tools in signal processing. However, due to the complex nature of the TV term, it is challenging to efficiently compute a solution for large-scale problems. State-of-the-art algorithms that are based on the alternating direction method of multipliers (ADMM) often involve solving large-size linear systems. In this paper, we propose a highly scalable parallel algorithm for TV models that is based on a novel decomposition strategy of the problem domain. As a result, the TV models can be decoupled into a set of small and independent subproblems, which admit closed-form solutions. This makes our approach particularly suitable for parallel implementation. Our algorithm is guaranteed to converge to its global minimum. With N variables and np processes, the time complexity is O(N/(εnp)) to reach an ε-optimal solution. Extensive experiments demonstrate that our approach outperforms existing state-of-the-art algorithms, especially in dealing with high-resolution, mega-size images.
UR - http://www.scopus.com/inward/record.url?scp=84919903869&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84919903869&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84919903869
T3 - 31st International Conference on Machine Learning, ICML 2014
SP - 1486
EP - 1514
BT - 31st International Conference on Machine Learning, ICML 2014
PB - International Machine Learning Society (IMLS)
T2 - 31st International Conference on Machine Learning, ICML 2014
Y2 - 21 June 2014 through 26 June 2014
ER -