On the lumpability of regional income convergence

Levi John Wolf, Sergio Rey

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

This paper examines how “lumping”, the aggregation of different states of a Markov chain into one state, affects the underlying properties of the Markov process. Specifically, a Markov chain model of income convergence for US states is estimated, and different quantile lumpings are tested to determine whether they preserve the Markov property. This work ties into the broader literature on modelling regional income convergence using Markov processes, specifically attempts to quantify how reasonable particular choices of state-space compression are and what consequences those choices carry. First, we estimate a rank Markov model. From this, we find that Markov models for regional income convergence lose the Markov property when quantile lumps are large and contain many states, but perform well when the lumps are smaller and contain fewer states. This new positive finding and technical work pave the way for broader studies of the lumpability of discrete Markov models for geographic or policy regions.
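
As an informal illustration of the procedure described above (a minimal sketch under stated assumptions, not the authors' estimator): by the Kemeny–Snell condition, a partition of a chain's state space is strongly lumpable exactly when, within every block of the partition, all states assign the same total transition probability to each block. The Python sketch below discretizes a hypothetical panel of regional incomes into deciles, estimates a transition matrix, lumps the deciles pairwise into quintiles, and checks that condition; the data, the block structure, and the helper functions (transition_matrix, is_strongly_lumpable) are illustrative assumptions, not code from the paper.

    # A minimal sketch (not the authors' code) of checking Kemeny-Snell strong
    # lumpability for a quantile-based partition of a Markov chain's state space.
    import numpy as np

    def transition_matrix(sequences, k):
        """Estimate a k-state transition matrix from integer-coded state sequences."""
        counts = np.zeros((k, k))
        for seq in sequences:
            for a, b in zip(seq[:-1], seq[1:]):
                counts[a, b] += 1
        rows = counts.sum(axis=1, keepdims=True)
        return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

    def is_strongly_lumpable(P, blocks, tol=1e-8):
        """A partition is strongly lumpable iff, within every block, all rows of P
        assign the same total probability to each block (Kemeny & Snell, 1960)."""
        for target in blocks:
            block_mass = P[:, target].sum(axis=1)     # P(i -> target block) for each state i
            for source in blocks:
                if np.ptp(block_mass[source]) > tol:  # rows disagree within a block
                    return False
        return True

    # Hypothetical data: incomes of 48 regions over 60 years, discretized each year
    # into deciles (10 states), then lumped pairwise into quintiles (5 blocks).
    rng = np.random.default_rng(0)
    incomes = np.cumsum(rng.normal(size=(48, 60)), axis=1)
    deciles = np.array([np.digitize(col, np.quantile(col, np.linspace(0.1, 0.9, 9)))
                        for col in incomes.T]).T      # state in 0..9 per region and year
    P = transition_matrix(deciles, k=10)
    quintile_blocks = [[2 * b, 2 * b + 1] for b in range(5)]
    print(is_strongly_lumpable(P, quintile_blocks))

Swapping quintile_blocks for a coarser partition, such as two halves of the income distribution, tests larger lumps in exactly the same way, which is the regime where the abstract reports the Markov property tends to be lost.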

Original language: English (US)
Journal: Letters in Spatial and Resource Sciences
DOI: 10.1007/s12076-015-0156-0
State: Accepted/In press - Sep 21 2015

Keywords

  • Convergence
  • Lumpability
  • Markov
  • Regional income distributions
  • Spatial dynamics

ASJC Scopus subject areas

  • Geography, Planning and Development
  • Demography
  • Urban Studies
  • Economics and Econometrics

Cite this

On the lumpability of regional income convergence. / Wolf, Levi John; Rey, Sergio.

In: Letters in Spatial and Resource Sciences, 21.09.2015.

Research output: Contribution to journal › Article

@article{1a6f0111681b4846801ec008f2655177,
title = "On the lumpability of regional income convergence",
abstract = "This paper examines how “lumping”, the aggregation of different states of a Markov chain into one state, affects the underlying properties of the Markov process. Specifically, a Markov chain model of income convergence for US states is estimated, and different quantile lumpings are tested to determine whether they preserve the Markov property. This work ties into the broader literature on modelling regional income convergence using Markov processes, specifically attempts to quantify how reasonable particular choices of state-space compression are and what consequences those choices carry. First, we estimate a rank Markov model. From this, we find that Markov models for regional income convergence lose the Markov property when quantile lumps are large and contain many states, but perform well when the lumps are smaller and contain fewer states. This new positive finding and technical work pave the way for broader studies of the lumpability of discrete Markov models for geographic or policy regions.",
keywords = "Convergence, Lumpability, Markov, Regional income distributions, Spatial dynamics",
author = "Wolf, {Levi John} and Sergio Rey",
year = "2015",
month = "9",
day = "21",
doi = "10.1007/s12076-015-0156-0",
language = "English (US)",
journal = "Letters in Spatial and Resource Sciences",
issn = "1864-4031",
publisher = "Springer Verlag",

}

TY - JOUR

T1 - On the lumpability of regional income convergence

AU - Wolf, Levi John

AU - Rey, Sergio

PY - 2015/9/21

Y1 - 2015/9/21

N2 - This paper examines how “lumping”, the aggregation of different states of a Markov chain into one state, affects the underlying properties of the Markov process. Specifically, a Markov chain model of income convergence for US states is estimated, and different quantile lumpings are tested to determine whether they preserve the Markov property. This work ties into the broader literature on modelling regional income convergence using Markov processes, specifically attempts to quantify how reasonable particular choices of state-space compression are and what consequences those choices carry. First, we estimate a rank Markov model. From this, we find that Markov models for regional income convergence lose the Markov property when quantile lumps are large and contain many states, but perform well when the lumps are smaller and contain fewer states. This new positive finding and technical work pave the way for broader studies of the lumpability of discrete Markov models for geographic or policy regions.

AB - This paper examines how “lumping”, the aggregation of different states of a Markov chain into one state, affects the underlying properties of the Markov process. Specifically, a Markov chain model of income convergence for US states is estimated, and different quantile lumpings are tested to determine whether they preserve the Markov property. This work ties into the broader literature on modelling regional income convergence using Markov processes, specifically attempts to quantify how reasonable particular choices of state-space compression are and what consequences those choices carry. First, we estimate a rank Markov model. From this, we find that Markov models for regional income convergence lose the Markov property when quantile lumps are large and contain many states, but perform well when the lumps are smaller and contain fewer states. This new positive finding and technical work pave the way for broader studies of the lumpability of discrete Markov models for geographic or policy regions.

KW - Convergence

KW - Lumpability

KW - Markov

KW - Regional income distributions

KW - Spatial dynamics

UR - http://www.scopus.com/inward/record.url?scp=84942026808&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84942026808&partnerID=8YFLogxK

U2 - 10.1007/s12076-015-0156-0

DO - 10.1007/s12076-015-0156-0

M3 - Article

JO - Letters in Spatial and Resource Sciences

JF - Letters in Spatial and Resource Sciences

SN - 1864-4031

ER -