An ontology-driven, SVM approach for hyperspectral image classification

Xiran Zhou, WenWen Li, Sheng Wu, Sizhe Wang

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

Although the support vector machine (SVM) has shown powerful capability for supervised classification of hyperspectral data, the model cannot semantically define the classification and reasoning rules that could contribute to a more accurate supervised classification. In this article, we present an ontology-driven framework to support SVM-based hyperspectral data classification. First, we propose a dimension reduction algorithm that automatically selects the prominent spectral characteristics of each land cover class in a hyperspectral image. These prominent spectral characteristics take the form of a ranking and weights that signify the importance of a subset of the wavebands in distinguishing a particular land cover class from the others. Then, we develop an ontology named HIC-Ontology to formally represent the extracted spectral characteristics and support the final training and classification process. The experimental results show that the proposed technique classifies hyperspectral data more accurately than the classic classification algorithm alone. We expect this work to contribute significantly to hyperspectral image processing by introducing this knowledge-based approach.
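The abstract does not spell out the dimension reduction algorithm or the ontology reasoning step. As an illustrative sketch only, the snippet below uses a one-vs-rest Fisher-ratio score as a stand-in for the per-class band ranking and weighting, then trains a standard scikit-learn SVM on the pooled top-ranked bands; the function name `rank_bands`, the scoring formula, and all parameters are assumptions, not the authors' method.

```python
import numpy as np
from sklearn.svm import SVC

def rank_bands(X, y, target_class):
    """Rank wavebands for one land cover class (one-vs-rest).

    Stand-in for the paper's dimension reduction step: bands whose
    in-class mean differs most from the rest, relative to variance,
    rank highest and receive proportionally larger weights.
    """
    in_cls = X[y == target_class]
    out_cls = X[y != target_class]
    score = (in_cls.mean(axis=0) - out_cls.mean(axis=0)) ** 2 / (
        in_cls.var(axis=0) + out_cls.var(axis=0) + 1e-12
    )
    ranking = np.argsort(score)[::-1]  # most discriminative band first
    weights = score / score.sum()      # normalized per-band weights
    return ranking, weights

# Synthetic example: X is (n_pixels, n_bands), y holds land cover labels.
rng = np.random.default_rng(0)
X = rng.random((200, 50))
y = rng.integers(0, 3, 200)

# Pool the top-k bands selected per class, then train a plain RBF SVM.
k = 10
selected = np.unique(np.concatenate(
    [rank_bands(X, y, c)[0][:k] for c in np.unique(y)]
))
clf = SVC(kernel="rbf").fit(X[:, selected], y)
```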

Original language: English (US)
Pages (from-to): 112-129
Number of pages: 18
Journal: International Journal of Image and Data Fusion
Volume: 8
Issue number: 2
DOIs
State: Published - Apr 3 2017

Keywords

  • ontology
  • dimension reduction
  • hyperspectral image classification
  • support vector machine

ASJC Scopus subject areas

  • Computer Science Applications
  • General Earth and Planetary Sciences
