The ManyBugs and IntroClass Benchmarks for Automated Repair of C Programs

Claire Le Goues, Neal Holtschulte, Edward K. Smith, Yuriy Brun, Premkumar Devanbu, Stephanie Forrest, Westley Weimer

Research output: Contribution to journal › Article

77 Citations (Scopus)

Abstract

The field of automated software repair lacks a set of common benchmark problems. Although benchmark sets are used widely throughout computer science, existing benchmarks are not easily adapted to the problem of automatic defect repair, which has several special requirements. Most important of these is the need for benchmark programs with reproducible, important defects and a deterministic method for assessing whether those defects have been repaired. This article details the need for a new set of benchmarks, outlines requirements, and then presents two datasets, ManyBugs and IntroClass, which together comprise 1,183 defects in 15 C programs. Each dataset is designed to support the comparative evaluation of automatic repair algorithms across a variety of experimental questions. The datasets have empirically defined guarantees of reproducibility and benchmark quality, and each study object is categorized to facilitate qualitative evaluation and comparisons by category of bug or program. The article presents baseline experimental results on both datasets for three existing repair methods, GenProg, AE, and TrpAutoRepair, to reduce the burden on researchers who adopt these datasets for their own comparative evaluations.

Original language: English (US)
Article number: 7153570
Pages (from-to): 1236-1256
Number of pages: 21
Journal: IEEE Transactions on Software Engineering
Volume: 41
Issue number: 12
DOI: https://doi.org/10.1109/TSE.2015.2454513
State: Published - Dec 1 2015
Externally published: Yes


Keywords

  • Automated program repair
  • Benchmark
  • IntroClass
  • ManyBugs
  • Reproducibility
  • Subject defect

ASJC Scopus subject areas

  • Software

Cite this

Le Goues, C., Holtschulte, N., Smith, E. K., Brun, Y., Devanbu, P., Forrest, S., & Weimer, W. (2015). The ManyBugs and IntroClass Benchmarks for Automated Repair of C Programs. IEEE Transactions on Software Engineering, 41(12), 1236-1256. [7153570]. https://doi.org/10.1109/TSE.2015.2454513
