Published in

SAGE Publications, Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 28(7), p. 546

DOI: 10.1177/154193128402800704

PsycEXTRA Dataset

DOI: 10.1037/e574242012-001

On the structure of information in software

Conference paper published in 1984 by Deborah A. Boehm-Davis
This paper is available in a repository.


Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Recent research suggests that errors made early in a software development project and carried forward into testing and integration are the most costly errors to find and correct. Yet there is almost a total absence of research examining the impact of tools and methodologies early in the process, such as during program design. One approach to improving the design process has been the use of program design methodologies, which provide programmers with strategies for structuring solutions to computer problems. The basic difference among methodologies is the criterion used to decompose the problem into smaller units. The approaches vary along a single dimension: the extent to which the decomposition relies upon data structures as an organizing principle for modularization. At one end of the dimension are data structure techniques that rely primarily on the data structures present in the specifications as the basis for modularization. At the other end are techniques that rely primarily on operations as the basis for structuring the problem. In the former case, modules are organized around data structures; in the latter, modules are organized around operations. Falling between the two extremes are techniques that rely partly on data structures and partly on operations as the basis for structuring the programs. Using this dimension to classify methodologies, it was possible to generate programs decomposed in each of these ways and to evaluate the effects of these decompositions in terms of the initial coding process, the quality of the resulting code, and the subsequent maintainability of the program.

The focus of the research was a comprehensive evaluation of programs produced by the different classes of methodologies. Professional programmers were provided with the specifications for each of three problems and asked to produce pseudo-code for each specification. Each time the programmers worked on a program, they were asked to complete a summary sheet for the session. The intermediate versions of the programs and these summary sheets were collected for analysis. In addition, participants were asked to complete a final questionnaire at the end of the project, which provided information about each programmer's programming background, familiarity with the methodology, and reactions to the problems used in this research.

The measures collected were time to design, number and types of design errors, time to code, number and types of coding errors, number and types of commands used to create the programs, amount of computer time needed to execute the program, complexity (as measured by several metrics, including the McCabe metric), and consistency among different programmers' solutions. The preliminary results suggest that there were differences in time to code, number of design errors detected, and consistency and complexity ratings among methodologies. These differences will be discussed in light of their impact on the comprehensibility, reliability, and maintainability of the programs produced.
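
Note: the abstract mentions the McCabe metric as one of the complexity measures collected. As an illustrative aside only (it is not part of the original study, and the function and names below are hypothetical), the following Python sketch approximates McCabe cyclomatic complexity for a code fragment by counting decision points in its abstract syntax tree, following the decisions-plus-one formulation for a single-entry, single-exit routine.

    import ast

    # Node types treated as decision points (one extra independent path each).
    # Approximation of McCabe's metric: complexity = decisions + 1.
    _DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)

    def cyclomatic_complexity(source: str) -> int:
        """Approximate the McCabe cyclomatic complexity of a Python fragment."""
        tree = ast.parse(source)
        decisions = sum(isinstance(node, _DECISION_NODES) for node in ast.walk(tree))
        # Each boolean operator chain (and/or) adds one path per extra operand.
        decisions += sum(len(node.values) - 1
                         for node in ast.walk(tree)
                         if isinstance(node, ast.BoolOp))
        return decisions + 1

    sample = """
    def classify(x):
        if x < 0:
            return "negative"
        elif x == 0:
            return "zero"
        for _ in range(x):
            pass
        return "positive"
    """

    print(cyclomatic_complexity(sample))  # 3 decision points -> complexity 4

A full implementation would also count comprehensions, match statements, and other branching constructs; this sketch only illustrates the kind of structural measure the study used to compare program decompositions.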