Published in

Springer, Artificial Intelligence Review, 57(4), 2024

DOI: 10.1007/s10462-024-10707-4

New formulation for predicting total dissolved gas supersaturation in dam reservoir: application of hybrid artificial intelligence models based on multiple signal decomposition


Abstract

Total dissolved gas (TDG) concentration plays an important role in the control of aquatic life. Elevated TDG can cause gas-bubble trauma (GBT) in fish. Therefore, controlling TDG fluctuation has become of great importance in several disciplines of surface water environmental engineering. Direct measurement of TDG is currently expensive and time-consuming. Hence, this work proposes a new modelling framework for predicting TDG based on the integration of machine learning (ML) models and multiresolution signal decomposition. The proposed ML models were trained and validated using hourly data obtained from four United States Geological Survey (USGS) stations. The dataset is composed of: (i) water temperature (Tw), (ii) barometric pressure (BP), and (iii) discharge (Q), which were used as the input variables for TDG prediction. The modelling strategy was conducted in two steps. First, six single ML models, namely (i) multilayer perceptron neural network, (ii) Gaussian process regression, (iii) random forest regression, (iv) random vector functional link, (v) adaptive boosting, and (vi) bootstrap aggregating (Bagging), were developed for predicting TDG from Tw, BP, and Q, and their performances were compared. Second, a new framework was introduced based on the combination of the empirical mode decomposition (EMD), variational mode decomposition (VMD), and empirical wavelet transform (EWT) signal preprocessing algorithms with the ML models to build new hybrid ML models. The Tw, BP, and Q signals were decomposed to extract intrinsic mode functions (IMFs) using the EMD and VMD methods, and multiresolution analysis (MRA) components using the EWT method. Thereafter, the IMFs and MRA components were selected and regarded as new input variables for the ML models, forming an integral part thereof.
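The hybrid-input idea described above — decomposing each raw signal into components and feeding those components to an ML model as extra features — can be sketched with a toy two-scale decomposition (a moving-average trend plus a residual detail). This stands in for the actual EMD/VMD/EWT algorithms, which extract many components adaptively and are considerably more involved; all numeric values below are hypothetical, not the paper's USGS data.

```python
# Toy illustration of the hybrid-input strategy: split each predictor
# signal into a slow "trend" component and a fast "detail" component,
# then stack the components as model inputs. This is a simple stand-in
# for EMD/VMD/EWT, which produce IMFs / MRA components instead.

def moving_average(signal, window=5):
    """Centered moving average; edges use a shrunken window."""
    n = len(signal)
    half = window // 2
    trend = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        trend.append(sum(signal[lo:hi]) / (hi - lo))
    return trend

def decompose(signal, window=5):
    """Two-scale decomposition satisfying signal = trend + detail."""
    trend = moving_average(signal, window)
    detail = [x - t for x, t in zip(signal, trend)]
    return trend, detail

def build_features(tw, bp, q, window=5):
    """Decompose each predictor (water temperature Tw, barometric
    pressure BP, discharge Q) and use the components as inputs, so
    each time step becomes a 6-dimensional feature vector."""
    components = []
    for signal in (tw, bp, q):
        components.extend(decompose(signal, window))
    return [[c[i] for c in components] for i in range(len(tw))]

# Hypothetical hourly samples (illustration only):
tw = [12.0, 12.4, 12.9, 13.1, 12.8, 12.5, 12.2]
bp = [1012.0, 1011.5, 1011.0, 1010.8, 1011.2, 1011.9, 1012.3]
q  = [340.0, 355.0, 362.0, 370.0, 366.0, 358.0, 349.0]

X = build_features(tw, bp, q)
print(len(X), len(X[0]))  # 7 samples, 6 features each
```

With the real algorithms, each signal would yield several IMFs (EMD/VMD) or MRA components (EWT) rather than two, but the feature-stacking step is the same.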
The single and hybrid prediction models were compared using several statistical metrics, namely the root mean square error, mean absolute error, coefficient of determination (R2), and Nash–Sutcliffe efficiency (NSE). The single and hybrid models were trained several times with a high number of repetitions, depending on the kind of modelling process. The results obtained using the single models showed good agreement between the predicted TDG and the in situ measured dataset. Overall, the Bagging model performed better than the other five models, with R2 and NSE values of 0.906 and 0.902, respectively. However, the IMFs and MRA components extracted using EMD, VMD, and EWT contributed to an improvement of the hybrid models' performances, for which R2 and NSE increased significantly, reaching 0.996 and 0.995. The experimental results showed the superiority of the hybrid models and, more importantly, the importance of signal decomposition in improving the predictive accuracy of TDG.
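The four evaluation metrics named above are standard and easy to restate. A minimal sketch follows, taking R2 as the squared Pearson correlation (one common convention; the paper does not state which definition it uses), with hypothetical TDG values in percent saturation:

```python
import math

# Standard error metrics for comparing observed vs. predicted TDG.

def rmse(obs, pred):
    """Root mean square error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def nse(obs, pred):
    """Nash-Sutcliffe efficiency: 1 minus the residual sum of squares
    over the variance of the observations (1.0 = perfect prediction)."""
    mean_o = sum(obs) / len(obs)
    num = sum((o - p) ** 2 for o, p in zip(obs, pred))
    den = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - num / den

def r2(obs, pred):
    """Coefficient of determination as squared Pearson correlation."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    vo = sum((o - mo) ** 2 for o in obs)
    vp = sum((p - mp) ** 2 for p in pred)
    return cov * cov / (vo * vp)

# Hypothetical TDG values (% saturation), not the paper's data:
obs  = [105.2, 107.8, 110.1, 112.4, 111.0, 108.6]
pred = [105.9, 107.1, 110.6, 111.8, 111.5, 108.0]

print(round(rmse(obs, pred), 3), round(mae(obs, pred), 3))
print(round(r2(obs, pred), 3), round(nse(obs, pred), 3))
```

Note that NSE is bounded above by 1 but unbounded below, whereas this R2 lies in [0, 1]; reporting both, as the paper does, guards against the case where a model tracks the shape of the series but is biased in level.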