Information-Based Optimal Subdata Selection for Big Data Linear Regression

Hai Ying Wang, Min Yang, John Stufken

Research output: Contribution to journal › Article

5 Citations (Scopus)

Abstract

Extraordinary amounts of data are being produced in many branches of science. Proven statistical methods are no longer applicable with extraordinarily large datasets due to computational limitations. A critical step in big data analysis is data reduction. Existing investigations in the context of linear regression focus on subsampling-based methods. However, not only is this approach prone to sampling error, but it also leads to a covariance matrix of the estimators that is typically bounded from below by a term that is of the order of the inverse of the subdata size. We propose a novel approach, termed information-based optimal subdata selection (IBOSS). Compared to leading existing subdata methods, the IBOSS approach has the following advantages: (i) it is significantly faster; (ii) it is suitable for distributed parallel computing; (iii) the variances of the slope parameter estimators converge to 0 as the full data size increases even if the subdata size is fixed, that is, the convergence rate depends on the full data size; (iv) data analysis for IBOSS subdata is straightforward and the sampling distribution of an IBOSS estimator is easy to assess. Theoretical results and extensive simulations demonstrate that the IBOSS approach is superior to subsampling-based methods, sometimes by orders of magnitude. The advantages of the new approach are also illustrated through analysis of real data. Supplementary materials for this article are available online.
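The selection idea behind the D-optimality keyword can be sketched as follows: rather than sampling rows at random, take the observations with the most extreme values of each covariate in turn, since extreme design points carry the most information about the slope parameters. This is a minimal illustrative sketch, not the authors' reference implementation; the function name and the example sizes are assumptions for illustration.

```python
import numpy as np

def iboss_select(X, k):
    """Illustrative D-optimality-style subdata selection (IBOSS sketch).

    For each of the p covariates in turn, add the not-yet-selected rows
    with the k/(2p) smallest and k/(2p) largest values of that covariate,
    so the subdata concentrates on extreme, information-rich points.
    """
    n, p = X.shape
    r = k // (2 * p)  # rows taken from each tail of each covariate
    selected = np.zeros(n, dtype=bool)
    for j in range(p):
        remaining = np.flatnonzero(~selected)      # rows still available
        order = np.argsort(X[remaining, j])        # sort by covariate j
        selected[remaining[order[:r]]] = True      # r smallest values
        selected[remaining[order[-r:]]] = True     # r largest values
    return np.flatnonzero(selected)

# Example: keep 1,000 of 100,000 rows; one would then fit ordinary
# least squares on the selected rows only.
rng = np.random.default_rng(1)
X = rng.standard_normal((100_000, 5))
idx = iboss_select(X, 1_000)
print(len(idx))  # 1000
```

Because the loop only sorts each covariate column once over the remaining rows, the selection cost grows roughly linearly in the full data size n, which is what makes a fixed-size subdata analysis feasible when n is enormous.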

Original language: English (US)
Pages (from-to): 1-13
Number of pages: 13
Journal: Journal of the American Statistical Association
DOI: 10.1080/01621459.2017.1408468
State: Accepted/In press - Jun 25, 2018

Keywords

  • D-optimality
  • Information matrix
  • Linear regression
  • Massive data
  • Subdata

ASJC Scopus subject areas

  • Statistics and Probability
  • Statistics, Probability and Uncertainty

Cite this

Information-Based Optimal Subdata Selection for Big Data Linear Regression. / Wang, Hai Ying; Yang, Min; Stufken, John.

In: Journal of the American Statistical Association, 25.06.2018, p. 1-13.

@article{bc5a273428a242a0acb3902693c2c1ff,
title = "Information-Based Optimal Subdata Selection for Big Data Linear Regression",
abstract = "Extraordinary amounts of data are being produced in many branches of science. Proven statistical methods are no longer applicable with extraordinarily large datasets due to computational limitations. A critical step in big data analysis is data reduction. Existing investigations in the context of linear regression focus on subsampling-based methods. However, not only is this approach prone to sampling error, but it also leads to a covariance matrix of the estimators that is typically bounded from below by a term that is of the order of the inverse of the subdata size. We propose a novel approach, termed information-based optimal subdata selection (IBOSS). Compared to leading existing subdata methods, the IBOSS approach has the following advantages: (i) it is significantly faster; (ii) it is suitable for distributed parallel computing; (iii) the variances of the slope parameter estimators converge to 0 as the full data size increases even if the subdata size is fixed, that is, the convergence rate depends on the full data size; (iv) data analysis for IBOSS subdata is straightforward and the sampling distribution of an IBOSS estimator is easy to assess. Theoretical results and extensive simulations demonstrate that the IBOSS approach is superior to subsampling-based methods, sometimes by orders of magnitude. The advantages of the new approach are also illustrated through analysis of real data. Supplementary materials for this article are available online.",
keywords = "D-optimality, Information matrix, Linear regression, Massive data, Subdata",
author = "Wang, {Hai Ying} and Min Yang and John Stufken",
year = "2018",
month = "6",
day = "25",
doi = "10.1080/01621459.2017.1408468",
language = "English (US)",
pages = "1--13",
journal = "Journal of the American Statistical Association",
issn = "0162-1459",
publisher = "Taylor and Francis Ltd.",
}

TY - JOUR

T1 - Information-Based Optimal Subdata Selection for Big Data Linear Regression

AU - Wang, Hai Ying

AU - Yang, Min

AU - Stufken, John

PY - 2018/6/25

Y1 - 2018/6/25

AB - Extraordinary amounts of data are being produced in many branches of science. Proven statistical methods are no longer applicable with extraordinarily large datasets due to computational limitations. A critical step in big data analysis is data reduction. Existing investigations in the context of linear regression focus on subsampling-based methods. However, not only is this approach prone to sampling error, but it also leads to a covariance matrix of the estimators that is typically bounded from below by a term that is of the order of the inverse of the subdata size. We propose a novel approach, termed information-based optimal subdata selection (IBOSS). Compared to leading existing subdata methods, the IBOSS approach has the following advantages: (i) it is significantly faster; (ii) it is suitable for distributed parallel computing; (iii) the variances of the slope parameter estimators converge to 0 as the full data size increases even if the subdata size is fixed, that is, the convergence rate depends on the full data size; (iv) data analysis for IBOSS subdata is straightforward and the sampling distribution of an IBOSS estimator is easy to assess. Theoretical results and extensive simulations demonstrate that the IBOSS approach is superior to subsampling-based methods, sometimes by orders of magnitude. The advantages of the new approach are also illustrated through analysis of real data. Supplementary materials for this article are available online.

KW - D-optimality

KW - Information matrix

KW - Linear regression

KW - Massive data

KW - Subdata

UR - http://www.scopus.com/inward/record.url?scp=85049141952&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85049141952&partnerID=8YFLogxK

U2 - 10.1080/01621459.2017.1408468

DO - 10.1080/01621459.2017.1408468

M3 - Article

AN - SCOPUS:85049141952

SP - 1

EP - 13

JO - Journal of the American Statistical Association

JF - Journal of the American Statistical Association

SN - 0162-1459

ER -