A highly scalable parallel algorithm for isotropic total variation models

Jie Wang, Qingyang Li, Sen Yang, Wei Fan, Peter Wonka, Jieping Ye

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

Total variation (TV) models are among the most popular and successful tools in signal processing. However, due to the complex nature of the TV term, it is challenging to efficiently compute a solution for large-scale problems. State-of-the-art algorithms based on the alternating direction method of multipliers (ADMM) often involve solving large linear systems. In this paper, we propose a highly scalable parallel algorithm for TV models based on a novel decomposition strategy of the problem domain. As a result, the TV models can be decoupled into a set of small, independent subproblems that admit closed-form solutions. This makes our approach particularly suitable for parallel implementation. Our algorithm is guaranteed to converge to its global minimum. With N variables and np processes, the time complexity is O(N/(εnp)) to reach an ε-optimal solution. Extensive experiments demonstrate that our approach outperforms existing state-of-the-art algorithms, especially in dealing with high-resolution, mega-size images.
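To illustrate the kind of decomposition the abstract describes (this is an illustrative sketch, not the authors' actual algorithm): for isotropic TV, a common per-pixel subproblem is group soft-thresholding of the gradient pair (gx, gy), which has a closed-form solution and is independent across pixels, so blocks of pixels can be processed in parallel with no communication. The function names and the block-splitting scheme below are hypothetical.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def group_shrink(gx, gy, tau):
    # Closed-form minimizer of 0.5*||z - g||^2 + tau*||z||_2 at each pixel,
    # i.e. isotropic (group) soft-thresholding of the gradient pair g = (gx, gy).
    mag = np.sqrt(gx * gx + gy * gy)
    scale = np.maximum(1.0 - tau / np.maximum(mag, 1e-12), 0.0)
    return gx * scale, gy * scale

def parallel_shrink(gx, gy, tau, n_workers=4):
    # Each pixel's subproblem is independent, so disjoint row blocks can be
    # solved concurrently; this is what makes the step embarrassingly parallel.
    blocks = np.array_split(np.arange(gx.shape[0]), n_workers)
    out_x, out_y = np.empty_like(gx), np.empty_like(gy)

    def work(rows):
        out_x[rows], out_y[rows] = group_shrink(gx[rows], gy[rows], tau)

    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        list(ex.map(work, blocks))
    return out_x, out_y
```

For example, a gradient pair (3, 4) with magnitude 5 and tau = 1 is scaled by 1 - 1/5 = 0.8, giving (2.4, 3.2); pairs with magnitude below tau are set to zero. A full solver would alternate such closed-form updates with a data-fidelity step.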

Original language: English (US)
Title of host publication: 31st International Conference on Machine Learning, ICML 2014
Publisher: International Machine Learning Society (IMLS)
Pages: 1486-1514
Number of pages: 29
Volume: 2
ISBN (Print): 9781634393973
State: Published - 2014
Event: 31st International Conference on Machine Learning, ICML 2014 - Beijing, China
Duration: Jun 21 2014 - Jun 26 2014


Fingerprint

Parallel algorithms
Linear systems
Signal processing
Decomposition
Experiments

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Networks and Communications
  • Software

Cite this

Wang, J., Li, Q., Yang, S., Fan, W., Wonka, P., & Ye, J. (2014). A highly scalable parallel algorithm for isotropic total variation models. In 31st International Conference on Machine Learning, ICML 2014 (Vol. 2, pp. 1486-1514). International Machine Learning Society (IMLS).

@inproceedings{4adf24d4335f441c91609ee7e00b3ca0,
title = "A highly scalable parallel algorithm for isotropic total variation models",
abstract = "Total variation (TV) models are among the most popular and successful tools in signal processing. However, due to the complex nature of the TV term, it is challenging to efficiently compute a solution for large-scale problems. State-of-the-art algorithms based on the alternating direction method of multipliers (ADMM) often involve solving large linear systems. In this paper, we propose a highly scalable parallel algorithm for TV models based on a novel decomposition strategy of the problem domain. As a result, the TV models can be decoupled into a set of small, independent subproblems that admit closed-form solutions. This makes our approach particularly suitable for parallel implementation. Our algorithm is guaranteed to converge to its global minimum. With N variables and np processes, the time complexity is O(N/(εnp)) to reach an ε-optimal solution. Extensive experiments demonstrate that our approach outperforms existing state-of-the-art algorithms, especially in dealing with high-resolution, mega-size images.",
author = "Jie Wang and Qingyang Li and Sen Yang and Wei Fan and Peter Wonka and Jieping Ye",
year = "2014",
language = "English (US)",
isbn = "9781634393973",
volume = "2",
pages = "1486--1514",
booktitle = "31st International Conference on Machine Learning, ICML 2014",
publisher = "International Machine Learning Society (IMLS)",

}
