Abstract

Various research applications require detailed metrics to describe the form and composition of cities at fine scales, but computing these parameters remains a challenge due to limited data availability, quality, and processing capabilities. We developed an innovative big data approach to derive street-level morphology and urban feature composition as experienced by a pedestrian from Google Street View (GSV) imagery. We employed a scalable deep learning framework to segment 90-degree field-of-view GSV image cubes into six classes: sky, trees, buildings, impervious surfaces, pervious surfaces, and non-permanent objects. We increased the classification accuracy by differentiating between three view directions (lateral, down, and up) and by introducing a void class as a training label. To model the urban environment as perceived by a pedestrian in a street canyon, we projected the segmented image cubes onto spheres and evaluated the fraction of each surface class on the sphere. To demonstrate the application of our approach, we analyzed the urban form and composition of Philadelphia County and three Philadelphia neighborhoods (a suburb, Center City, and a lower-income neighborhood) using stacked area graphs. Our method is fully scalable to other geographic locations and constitutes an important step towards building a global morphological database to describe the form and composition of cities from a human-centric perspective.

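The spherical-fraction step lends itself to a short numerical illustration. The sketch below is not the authors' implementation; it assumes six square label maps (one per cube face, classes numbered 0-5, e.g. from a segmentation model) and weights each pixel by the solid angle it subtends on the unit sphere before summing per class. The function names (cube_face_solid_angles, spherical_class_fractions) and the 256 x 256 toy label maps are illustrative only.

```python
import numpy as np

def cube_face_solid_angles(n):
    """Per-pixel solid-angle weights for one n x n cube-map face.

    Pixel centres lie on a [-1, 1] x [-1, 1] grid at unit distance from
    the sphere centre; a pixel at (x, y) subtends a solid angle
    proportional to 1 / (1 + x^2 + y^2)^(3/2).
    """
    step = 2.0 / n
    coords = -1.0 + step * (np.arange(n) + 0.5)          # pixel centres
    x, y = np.meshgrid(coords, coords)
    return step**2 / (1.0 + x**2 + y**2) ** 1.5

def spherical_class_fractions(faces, n_classes):
    """Fraction of the full sphere covered by each class.

    `faces` is an iterable of six n x n integer label maps (one per cube
    face) with class labels 0 .. n_classes-1.
    """
    totals = np.zeros(n_classes)
    for labels in faces:
        weights = cube_face_solid_angles(labels.shape[0])
        for c in range(n_classes):
            totals[c] += weights[labels == c].sum()
    return totals / totals.sum()   # the six faces together tile the sphere

# Toy usage: six random 256 x 256 label maps with six classes
rng = np.random.default_rng(0)
faces = [rng.integers(0, 6, size=(256, 256)) for _ in range(6)]
print(spherical_class_fractions(faces, n_classes=6))
```

Weighting by solid angle matters because pixels near the corners of a cube face cover a smaller patch of the sphere than pixels at the face centre; a naive pixel count would over-weight surfaces seen near the face edges.
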
Original language: English (US)
Pages (from-to): 122-132
Number of pages: 11
Journal: Landscape and Urban Planning
Volume: 183
DOIs
State: Published - Mar 2019

Keywords

  • Deep learning
  • Google Street View
  • Human-centric
  • Spherical fractions
  • Street canyon
  • Urban form and composition

ASJC Scopus subject areas

  • Ecology
  • Nature and Landscape Conservation
  • Management, Monitoring, Policy and Law
