An evaluation framework for software crowdsourcing

Wenjun Wu, Wei Tek Tsai, Wei Li

Research output: Contribution to journal › Article › peer-review


Abstract

Software crowdsourcing has recently emerged as an area of software engineering, yet few papers have presented a systematic analysis of its practices. This paper first presents a framework to evaluate software crowdsourcing projects with respect to software quality, cost, diversity of solutions, and the competitive nature of crowdsourcing. Specifically, competitions are evaluated through the min-max relationship from game theory, in which one party tries to minimize an objective function while the other party tries to maximize the same objective function. The paper then defines a game-theoretic model to analyze the primary factors in these min-max competition rules that affect both the nature of participation and the resulting software quality. Finally, using the proposed evaluation framework, the paper examines two crowdsourcing processes, Harvard-TopCoder and AppStori. The framework reveals sharp contrasts between the two processes, as participants behave quite differently when engaging in these projects.
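
A minimal sketch (illustrative notation, not the paper's own formulation): the min-max competition described above has the generic zero-sum form

\min_{x \in X} \; \max_{y \in Y} \; f(x, y)

where x denotes the strategy of the minimizing party, y the strategy of the maximizing party, and f the shared objective function that the two parties contest (for example, a cost or quality score).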

Original language: English (US)
Pages (from-to): 694-709
Number of pages: 16
Journal: Frontiers of Computer Science
Volume: 7
Issue number: 5
State: Published - Oct 2013

Keywords

  • competition rules
  • crowdsourcing
  • game theory
  • software engineering

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
