Published in

14th International Conference eLearning and Software for Education, 2018

DOI: 10.12753/2066-026x-18-224

Recent Methodological and Conceptual Issues in Software Reliability Engineering

Proceedings article published in 2018 by Florin Popentiu Vladicescu, Grigore Albeanu, Henrik Madsen
This paper is made freely available by the publisher.


Abstract

The large number of sensors deployed in the field and the use of IoT-based equipment generate large collections of data, some of which are useful for addressing maintenance policies, replacement requirements, calibration, and research on new data-analysis models. Both industrial and social systems are increasing in complexity due to new technologies applied to information processing. Recent developments in embedding and integration have created new opportunities to collect, filter, analyse, and interpret the huge collections of data, known as "Big Data", generated by large populations of sensors, special devices, or people. This work emphasizes conceptual issues and methodological aspects related to data registration, filtering, smoothing, and analysis, with the aim of predicting important quality-of-life indicators. The systems reliability engineering field is revisited, taking into account both the data sources and the new methodologies used for reliability data. The software reliability of applications for smart cities is also addressed. The following frameworks are considered: Systems of Systems (SoS), Big Data, System Operating/Environmental (SOE) data, and smart-city reliability. SoS reliability engineering depends on the nature of the SoS: virtual (based on resource sharing), collaborative (based on agreements), acknowledged (based on collaborative management through a well-defined interface), and directed (based on centralized management). SoS reliability is therefore estimated differently depending on the specific architecture and the particular reliability requirements. When reliability is considered in the context of Big Data, both processing technologies are covered: batch processing (analytics on "data at rest") and stream processing (analytics on "data in motion"). The adequacy of existing reliability models for Big Data reliability concerns, taking into account the "curse of dimensionality", is considered in the last section.
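As an illustration of the kind of "existing reliability models" whose adequacy the abstract questions, the sketch below (not taken from the paper) fits the classical Goel-Okumoto NHPP software reliability growth model, m(t) = a(1 - e^{-bt}), to a small set of hypothetical cumulative-failure data; all numbers and variable names are assumptions introduced here purely for demonstration.

```python
# Minimal sketch (hypothetical data, not from the paper): fitting the
# Goel-Okumoto software reliability growth model m(t) = a * (1 - exp(-b*t)).
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative number of failures observed by test time t."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical test times (hours) and cumulative failure counts.
t = np.array([10, 20, 40, 80, 160, 320], dtype=float)
failures = np.array([4, 7, 12, 18, 23, 26], dtype=float)

# Least-squares fit of the two model parameters.
(a_hat, b_hat), _ = curve_fit(goel_okumoto, t, failures, p0=[30.0, 0.01])

# Residual (not yet observed) failures and current failure intensity
# lambda(t) = a * b * exp(-b * t) at the end of testing.
residual = a_hat - goel_okumoto(t[-1], a_hat, b_hat)
intensity = a_hat * b_hat * np.exp(-b_hat * t[-1])
print(f"a = {a_hat:.1f}, b = {b_hat:.4f}")
print(f"expected residual failures = {residual:.1f}")
print(f"failure intensity at t = {t[-1]:.0f} h: {intensity:.4f} failures/hour")
```

Such parametric growth models assume a single, modest-dimensional failure process; the paper's point is that Big Data settings (many sensors, streaming SOE data, high-dimensional covariates) strain these assumptions, which is where the "curse of dimensionality" discussion comes in.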