Model elements identification using neural networks: a comprehensive study

Kaushik Madala, Shraddha Piparia, Eduardo Blanco, Hyunsook Do, Renee Bryce

Research output: Contribution to journal › Article › peer-review


Abstract

Modeling natural language requirements, especially for a large system, can take a significant amount of effort and time. Many automated model-driven approaches partially address this problem. However, the application of state-of-the-art neural network architectures to automated model element identification tasks has not been studied. In this paper, we perform an empirical study on the automatic identification of model elements for component state transition models from use case documents. We analyze four neural network architectures: a feedforward neural network, a convolutional neural network, a recurrent neural network (RNN) with long short-term memory (LSTM), and an RNN with gated recurrent units (GRU), and examine the trade-offs among them using six use case documents. We also analyze how factors such as the type of splitting, type of prediction, type of design, and type of annotation affect the performance of the neural networks. The results on the test and unseen data show that the RNN with GRU is the most effective architecture. However, the factors that lead to effective predictions depend on the type of model element.
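The abstract frames model element identification as a sequence labeling task, with an RNN using gated recurrent units (GRU) performing best. As a rough illustration of that setup, the sketch below implements a single GRU cell forward pass over a token sequence and emits a tag distribution per token. This is a minimal toy with random, untrained weights; the class name, dimensions, and tag set are assumptions for illustration, not the paper's actual model or data.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUTagger:
    """Toy GRU sequence tagger: one GRU cell plus a softmax over a tag set.

    Weights are random and untrained; this only illustrates the shape of
    GRU-based sequence labeling, not the paper's trained model.
    """

    def __init__(self, vocab_size, embed_dim, hidden_dim, num_tags, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1
        self.E = rng.normal(0, s, (vocab_size, embed_dim))   # embeddings
        # update gate z, reset gate r, candidate state h~
        self.Wz = rng.normal(0, s, (embed_dim, hidden_dim))
        self.Uz = rng.normal(0, s, (hidden_dim, hidden_dim))
        self.Wr = rng.normal(0, s, (embed_dim, hidden_dim))
        self.Ur = rng.normal(0, s, (hidden_dim, hidden_dim))
        self.Wh = rng.normal(0, s, (embed_dim, hidden_dim))
        self.Uh = rng.normal(0, s, (hidden_dim, hidden_dim))
        self.Wo = rng.normal(0, s, (hidden_dim, num_tags))   # output layer

    def forward(self, token_ids):
        """Return a (seq_len, num_tags) array of per-token tag probabilities."""
        h = np.zeros(self.Uz.shape[0])
        probs = []
        for t in token_ids:
            x = self.E[t]
            z = sigmoid(x @ self.Wz + h @ self.Uz)            # update gate
            r = sigmoid(x @ self.Wr + h @ self.Ur)            # reset gate
            h_tilde = np.tanh(x @ self.Wh + (r * h) @ self.Uh)
            h = (1 - z) * h + z * h_tilde                     # new hidden state
            logits = h @ self.Wo
            e = np.exp(logits - logits.max())                 # stable softmax
            probs.append(e / e.sum())
        return np.array(probs)

# Hypothetical tag set for component state transition elements (illustrative).
TAGS = ["O", "B-COMPONENT", "B-STATE", "B-TRANSITION"]
tagger = GRUTagger(vocab_size=50, embed_dim=8, hidden_dim=16, num_tags=len(TAGS))
sentence_ids = [3, 17, 5, 42]  # toy token ids for a use-case sentence
tag_probs = tagger.forward(sentence_ids)
predicted = [TAGS[i] for i in tag_probs.argmax(axis=1)]
```

In a real pipeline, the weights would be trained on annotated use case sentences so that each token's most probable tag marks it as part of a component, state, or transition phrase.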

Original language: English (US)
Pages (from-to): 67-96
Number of pages: 30
Journal: Requirements Engineering
Volume: 26
Issue number: 1
DOIs
State: Published - Mar 2021
Externally published: Yes

Keywords

  • Automated requirements modeling
  • Component state transition diagrams
  • Empirical study
  • Neural networks
  • Requirements analysis
  • Sequence labeling

ASJC Scopus subject areas

  • Software
  • Information Systems
