Distance-Penalized Active Learning via Markov Decision Processes

Dingyu Wang, John Lipor, Gautam Dasarathy

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

Abstract

We consider the problem of active learning in the context of spatial sampling, where the measurements are obtained by a mobile sampling unit. The goal is to localize the change point of a one-dimensional threshold classifier while minimizing the total sampling time, a function of both the cost of sampling and the distance traveled. In this paper, we present a general framework for active learning by modeling the search problem as a Markov decision process. Using this framework, we present time-optimal algorithms for the spatial sampling problem when there is a uniform prior on the change point, a known non-uniform prior on the change point, and a need to return to the origin for intermittent battery recharging. We demonstrate through simulations that our proposed algorithms significantly outperform existing methods while maintaining a low computational cost.
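The abstract's core idea, choosing each sample location to trade off travel distance against sampling cost via a Markov decision process, can be illustrated with a small dynamic program. The sketch below is not the paper's algorithm: it assumes a noiseless threshold measurement on an n-point grid, a uniform prior on the change point, and a searcher starting at the origin; the names `expected_time`, `sample_cost`, and `travel_cost` are illustrative, not the paper's notation.

```python
from functools import lru_cache

def expected_time(n, sample_cost=1.0, travel_cost=1.0):
    """Distance-penalized change-point search as a finite MDP.

    State: (pos, a, b) -- the searcher sits at grid point `pos` and the
    change point is uniform on {a, ..., b-1}. A noiseless measurement at x
    reveals whether the change point is <= x, shrinking the interval.
    Cost of an action: |x - pos| * travel_cost + sample_cost.
    Returns the optimal expected total time starting from position 0.
    """

    @lru_cache(maxsize=None)
    def V(pos, a, b):
        if b - a <= 1:                     # change point localized: done
            return 0.0
        best = float("inf")
        for x in range(a, b - 1):          # candidate sample locations
            move = abs(x - pos) * travel_cost + sample_cost
            p_left = (x - a + 1) / (b - a)  # P(change point <= x)
            cost = (move
                    + p_left * V(x, a, x + 1)          # interval shrinks left
                    + (1 - p_left) * V(x, x + 1, b))   # interval shrinks right
            best = min(best, cost)
        return best

    return V(0, 0, n)
```

With `travel_cost=0` this recovers probabilistic bisection (pure binary search); a positive travel cost shifts the optimal policy toward nearer, more unbalanced splits, which is the trade-off the paper formalizes.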

Original language: English (US)
Title of host publication: 2019 IEEE Data Science Workshop, DSW 2019 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 155-159
Number of pages: 5
ISBN (Electronic): 9781728107080
DOI: 10.1109/DSW.2019.8755602
State: Published - Jun 2019
Event: 2019 IEEE Data Science Workshop, DSW 2019 - Minneapolis, United States
Duration: Jun 2 2019 - Jun 5 2019

Publication series

Name: 2019 IEEE Data Science Workshop, DSW 2019 - Proceedings

Conference

Conference: 2019 IEEE Data Science Workshop, DSW 2019
Country: United States
City: Minneapolis
Period: 6/2/19 - 6/5/19

Keywords

  • Active learning
  • adaptive sampling
  • autonomous systems
  • mobile sensor
  • path planning

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Safety, Risk, Reliability and Quality
  • Computational Theory and Mathematics
  • Artificial Intelligence

Cite this

Wang, D., Lipor, J., & Dasarathy, G. (2019). Distance-Penalized Active Learning via Markov Decision Processes. In 2019 IEEE Data Science Workshop, DSW 2019 - Proceedings (pp. 155-159). [8755602] (2019 IEEE Data Science Workshop, DSW 2019 - Proceedings). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/DSW.2019.8755602

@inproceedings{300b1aecc5064a7693f084e10e96fb6f,
title = "Distance-Penalized Active Learning via Markov Decision Processes",
abstract = "We consider the problem of active learning in the context of spatial sampling, where the measurements are obtained by a mobile sampling unit. The goal is to localize the change point of a one-dimensional threshold classifier while minimizing the total sampling time, a function of both the cost of sampling and the distance traveled. In this paper, we present a general framework for active learning by modeling the search problem as a Markov decision process. Using this framework, we present time-optimal algorithms for the spatial sampling problem when there is a uniform prior on the change point, a known non-uniform prior on the change point, and a need to return to the origin for intermittent battery recharging. We demonstrate through simulations that our proposed algorithms significantly outperform existing methods while maintaining a low computational cost.",
keywords = "Active learning, adaptive sampling, autonomous systems, mobile sensor, path planning",
author = "Dingyu Wang and John Lipor and Gautam Dasarathy",
year = "2019",
month = jun,
doi = "10.1109/DSW.2019.8755602",
language = "English (US)",
series = "2019 IEEE Data Science Workshop, DSW 2019 - Proceedings",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
pages = "155--159",
booktitle = "2019 IEEE Data Science Workshop, DSW 2019 - Proceedings",
}

TY  - GEN
T1  - Distance-Penalized Active Learning via Markov Decision Processes
AU  - Wang, Dingyu
AU  - Lipor, John
AU  - Dasarathy, Gautam
PY  - 2019/6
Y1  - 2019/6
N2  - We consider the problem of active learning in the context of spatial sampling, where the measurements are obtained by a mobile sampling unit. The goal is to localize the change point of a one-dimensional threshold classifier while minimizing the total sampling time, a function of both the cost of sampling and the distance traveled. In this paper, we present a general framework for active learning by modeling the search problem as a Markov decision process. Using this framework, we present time-optimal algorithms for the spatial sampling problem when there is a uniform prior on the change point, a known non-uniform prior on the change point, and a need to return to the origin for intermittent battery recharging. We demonstrate through simulations that our proposed algorithms significantly outperform existing methods while maintaining a low computational cost.
AB  - We consider the problem of active learning in the context of spatial sampling, where the measurements are obtained by a mobile sampling unit. The goal is to localize the change point of a one-dimensional threshold classifier while minimizing the total sampling time, a function of both the cost of sampling and the distance traveled. In this paper, we present a general framework for active learning by modeling the search problem as a Markov decision process. Using this framework, we present time-optimal algorithms for the spatial sampling problem when there is a uniform prior on the change point, a known non-uniform prior on the change point, and a need to return to the origin for intermittent battery recharging. We demonstrate through simulations that our proposed algorithms significantly outperform existing methods while maintaining a low computational cost.
KW  - Active learning
KW  - adaptive sampling
KW  - autonomous systems
KW  - mobile sensor
KW  - path planning
UR  - http://www.scopus.com/inward/record.url?scp=85069436922&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=85069436922&partnerID=8YFLogxK
U2  - 10.1109/DSW.2019.8755602
DO  - 10.1109/DSW.2019.8755602
M3  - Conference contribution
AN  - SCOPUS:85069436922
T3  - 2019 IEEE Data Science Workshop, DSW 2019 - Proceedings
SP  - 155
EP  - 159
BT  - 2019 IEEE Data Science Workshop, DSW 2019 - Proceedings
PB  - Institute of Electrical and Electronics Engineers Inc.
ER  -