Pure and Applied Geophysics, vol. 182, no. 8, pp. 3107-3138, 2025 (SCI-Expanded, Scopus)
Modeling the rainfall (Pi)–runoff (Qi) relationship remains a central challenge in hydrology. Available approaches range from conceptual and physically based models to fully data-driven methods. This paper presents a method for estimating runoff at nine observation stations in the Tigris River Basin using four machine learning algorithms: the adaptive neuro-fuzzy inference system (ANFIS), long short-term memory (LSTM), support vector machine (SVM), and Gaussian process regression (GPR). The method is built on rainfall data from seven meteorological observation stations within the basin, with Thiessen polygons used to associate rainfall and runoff stations. For the study region, 11 models were constructed from the input parameters Pi, Pi−1, Pi−2, Pi−3, and Qi−1 to characterize the rainfall–runoff relationship. Estimation performance was evaluated using the mean absolute error (MAE), root mean square error (RMSE), coefficient of determination (R2), Nash–Sutcliffe efficiency (NSE), Kling–Gupta efficiency (KGE), and percent bias (PBIAS). The LSTM method outperformed the other methods in all cases: averaged over Models 1 through 11, its MAE, RMSE, R2, NSE, and PBIAS values were 7.14, 9.99, 0.97, 0.96, and 7.38 for training and 6.46, 9.06, 0.96, 0.91, and −2.59 for testing, respectively. Analysis of variance (ANOVA) confirmed the efficacy of the methods, with the exception of ANFIS Models 9, 10, and 11. The LSTM model's superior predictive performance is also evident in the violin plots and Taylor diagrams.
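The 11 models differ in which lagged rainfall and runoff terms enter as predictors. The abstract does not give the exact combination used in each model, so the sketch below is only illustrative: it shows, under the assumption that the data are a pandas DataFrame with columns P (rainfall) and Q (runoff), how lagged predictors such as Pi, Pi−1, Pi−2, Pi−3, and Qi−1 can be derived from the raw series. The column names and default lag sets are assumptions, not the authors' configuration.

```python
# Illustrative sketch (not the authors' code): building lagged
# rainfall/runoff predictors Pi, Pi-1, Pi-2, Pi-3 and Qi-1 from a
# series. Column names 'P' and 'Q' and the lag sets are assumptions.
import pandas as pd

def make_lagged_features(df: pd.DataFrame, p_lags=(0, 1, 2, 3), q_lags=(1,)):
    """Return a feature table X and target y = Qi from columns 'P' and 'Q'."""
    X = pd.DataFrame(index=df.index)
    for k in p_lags:
        X[f"P_t-{k}"] = df["P"].shift(k)   # rainfall at lag k (Pi-k)
    for k in q_lags:
        X[f"Q_t-{k}"] = df["Q"].shift(k)   # runoff at lag k (Qi-k)
    data = pd.concat([X, df["Q"].rename("Q_t")], axis=1).dropna()
    return data.drop(columns="Q_t"), data["Q_t"]  # X, y (target: current Qi)
```

Varying p_lags and q_lags over subsets of these five terms yields input combinations of the kind the 11 models presumably represent.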
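For the best-performing method, a minimal LSTM regressor can be framed as follows. This is a generic Keras sketch, not the architecture reported in the paper: the layer size, optimizer, epoch count, and the choice to feed the lagged predictors as a length-1 sequence are all assumptions for demonstration.

```python
# Minimal LSTM sketch under assumed shapes: each sample carries its
# lagged predictors as a length-1 sequence of n_features values.
# Hyperparameters are illustrative, not the paper's configuration.
import tensorflow as tf

def build_lstm(n_features: int) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(1, n_features)),  # (timesteps, features)
        tf.keras.layers.LSTM(32),                      # single recurrent layer
        tf.keras.layers.Dense(1),                      # scalar runoff estimate
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# With X, y from the feature-construction sketch above:
# model = build_lstm(X.shape[1])
# model.fit(X.values.reshape(-1, 1, X.shape[1]), y.values, epochs=100, verbose=0)
```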
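The six evaluation criteria have standard definitions, which the following NumPy sketch implements for paired observed/simulated runoff arrays. Two conventions are assumed here because the abstract does not state them: R2 is computed as the squared Pearson correlation, and PBIAS uses the common 100 · Σ(obs − sim) / Σ(obs) sign convention.

```python
# Standard formulas for the six criteria named in the abstract;
# obs = observed runoff, sim = simulated runoff (equal-length arrays).
import numpy as np

def evaluate(obs: np.ndarray, sim: np.ndarray) -> dict:
    err = obs - sim
    r = np.corrcoef(obs, sim)[0, 1]     # Pearson correlation
    alpha = sim.std() / obs.std()       # variability ratio (KGE term)
    beta = sim.mean() / obs.mean()      # bias ratio (KGE term)
    return {
        "MAE":   np.abs(err).mean(),
        "RMSE":  np.sqrt((err ** 2).mean()),
        "R2":    r ** 2,
        "NSE":   1 - (err ** 2).sum() / ((obs - obs.mean()) ** 2).sum(),
        "KGE":   1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2),
        "PBIAS": 100 * err.sum() / obs.sum(),
    }
```

MAE, RMSE, and PBIAS share the units of the runoff series, while R2, NSE, and KGE are dimensionless with an ideal value of 1 (0 for PBIAS), which is how the LSTM scores quoted above should be read.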