LONG SHORT-TERM MEMORY RECURRENT NEURAL NETWORK FOR STATE PREDICTION AND RESOURCE ALLOCATION OPTIMIZATION IN DISTRIBUTED SYSTEMS
Abstract
Introduction. This paper presents a method based on a Long Short-Term Memory (LSTM) neural network for optimal resource allocation in distributed systems. The developed algorithm achieves high accuracy in predicting resource states and produces a balanced spatial distribution with minimal processing time. The research is motivated by the growing need for intelligent automation of resource management in service infrastructure facilities. A locker management system in sports facilities serves as a practical demonstration of the method's effectiveness.
Materials and Methods. To address the prediction and optimization tasks, an LSTM-based architecture with 32 hidden neurons and a sequence length of 10 time steps is proposed. The LSTM model processes sequential occupancy data to capture temporal dependencies and generate probability estimates for future resource states. A multi-factor scoring function is developed to transform predictions into optimal allocation decisions, considering spatial constraints and user preferences. The method is systematically compared with classical approaches: heuristic algorithms (Sequential, Round-Robin), statistical time series models (ARIMA, exponential smoothing), and machine learning methods (logistic regression, random forest, gradient boosting). All methods are evaluated on identical datasets using consistent metrics, including prediction accuracy, F1-score, spatial balance index, and zone variance.
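The two stages described above (LSTM state prediction followed by score-based allocation) can be sketched as follows. This is a minimal NumPy illustration under assumptions drawn from the abstract (32 hidden units, 10-step sequences); the untrained weights, the scoring weights `w`, and the candidate locker data are invented for the demo and are not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: input, forget, and output gates plus a
    candidate cell state, computed from input x and previous state (h, c)."""
    H = h.shape[0]
    z = W @ x + U @ h + b          # stacked gate pre-activations, shape (4*H,)
    i = sigmoid(z[:H])             # input gate
    f = sigmoid(z[H:2 * H])        # forget gate
    o = sigmoid(z[2 * H:3 * H])    # output gate
    g = np.tanh(z[3 * H:])         # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def allocation_score(p_free, distance, preference, w=(0.60, 0.25, 0.15)):
    """Multi-factor score (higher is better): predicted availability minus a
    spatial-distance penalty plus a user-preference bonus.
    The weights are illustrative assumptions, not the paper's values."""
    return w[0] * p_free - w[1] * distance + w[2] * preference

# --- Prediction: run a 10-step occupancy history through a 32-unit LSTM ---
rng = np.random.default_rng(0)
H, D, T = 32, 1, 10                           # hidden units, features, time steps
W = rng.normal(0, 0.1, (4 * H, D))            # untrained demo weights
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
w_out = rng.normal(0, 0.1, H)

h, c = np.zeros(H), np.zeros(H)
for x in rng.random((T, D)):                  # synthetic occupancy sequence
    h, c = lstm_step(x, h, c, W, U, b)
p_free = sigmoid(w_out @ h)                   # probability the resource stays free

# --- Allocation: pick the candidate locker with the best combined score ---
candidates = [
    {"id": 1, "p_free": 0.9, "dist": 0.8, "pref": 0.2},
    {"id": 2, "p_free": 0.7, "dist": 0.1, "pref": 0.9},
    {"id": 3, "p_free": 0.3, "dist": 0.0, "pref": 1.0},
]
best = max(candidates,
           key=lambda L: allocation_score(L["p_free"], L["dist"], L["pref"]))
```

In practice the LSTM weights would be trained on historical occupancy data, and the model's output probability for each candidate resource would feed the scoring function, so the allocation step reduces to an argmax over the scored candidates.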
Results. The LSTM model achieves 85% prediction accuracy, a statistically significant improvement over Random Forest (79%, p = 0.0023) and ARIMA (68%, p = 0.0001). The spatial balance index improves by 8.5% over the best classical method (0.89 versus 0.82), and inference time remains acceptable for real-time applications (18.9 ms per prediction).
Conclusions. The proposed LSTM-based method predicts resource states and optimizes their allocation with high accuracy and within minimal timeframes. Its ability to model long-term temporal dependencies provides a significant advantage over classical fixed-window methods, so the method can be effectively applied to enhance the functionality of distributed resource management systems.
Keywords: LSTM neural network, resource allocation, time series prediction, recurrent neural networks, optimization.
DOI: http://dx.doi.org/10.30970/eli.33.7

Electronics and information technologies / Електроніка та інформаційні технології