GeoImageNet: a multi-source natural feature benchmark dataset for GeoAI and supervised machine learning

Wenwen Li, Sizhe Wang, Samantha T. Arundel, Chia Yu Hsu

Research output: Contribution to journal › Article › peer-review

Abstract

The field of GeoAI, or Geospatial Artificial Intelligence, has undergone rapid development since 2017. It has been widely applied to address environmental and social science problems, from understanding climate change to tracking the spread of infectious diseases. A foundational task in advancing GeoAI research is the creation of open benchmark datasets to train and evaluate the performance of GeoAI models. While a number of datasets have been published, very few have centered on natural terrain and its landforms. To bridge this gap, this paper introduces a first-of-its-kind benchmark dataset, GeoImageNet, which supports natural feature detection in a supervised machine-learning paradigm. A distinctive feature of this dataset is its fusion of multi-source data, combining remote sensing imagery with digital elevation model (DEM) data to depict spatial objects of interest. This multi-source dataset allows a GeoAI model to extract rich spatio-contextual information and thereby gain stronger confidence in high-precision object detection and recognition. The image dataset is tested with a multi-source GeoAI extension against two well-known object detection models, Faster-RCNN and RetinaNet. The results demonstrate the robustness of the dataset in helping GeoAI models achieve convergence, and the superiority of multi-source data in yielding much higher prediction accuracy than the commonly used single data source.
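As an illustration of the fusion idea described in the abstract, the sketch below stacks a co-registered DEM onto three imagery bands to form a single multi-channel input, one common way to present multi-source data to detectors such as Faster-RCNN or RetinaNet. This is a minimal sketch under assumed shapes and preprocessing, not the paper's actual pipeline.

```python
import numpy as np

# Hypothetical example: fuse 3-band remote sensing imagery with a
# co-registered DEM grid of the same spatial extent.
rgb = np.random.rand(512, 512, 3).astype(np.float32)  # imagery bands, values in [0, 1]
dem = (np.random.rand(512, 512) * 3000).astype(np.float32)  # elevation in meters (assumed)

# Min-max normalize the DEM so its scale matches the imagery bands.
dem_norm = (dem - dem.min()) / (dem.max() - dem.min() + 1e-8)

# Stack the DEM as a fourth channel: (H, W, 4) multi-source input.
fused = np.concatenate([rgb, dem_norm[..., None]], axis=-1)
print(fused.shape)  # (512, 512, 4)
```

A detector consuming this input would need its first convolutional layer widened to accept four channels instead of the usual three; the paper's multi-source GeoAI extension presumably handles this fusion internally.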

Original language: English (US)
Journal: GeoInformatica
DOIs
State: Accepted/In press - 2022

Keywords

  • Deep learning
  • GeoAI
  • Object detection
  • Remote sensing
  • RetinaNet

ASJC Scopus subject areas

  • Geography, Planning and Development
  • Information Systems
