Comparison of LSTM and ARIMA Methods in Predicting the Inflation Rate in Manado City
DOI: https://doi.org/10.59934/jaiea.v5i2.1929

Keywords: Time series forecasting, inflation, ARIMA/SARIMAX, LSTM, accuracy evaluation (RMSE, MAE)

Abstract
Forecasting city-level inflation is challenging due to seasonal patterns, nonlinear dynamics, and limited exogenous variables, while short-term accuracy is required for timely policy responses. This study focuses on monthly inflation in Manado City over the period 2010–2024, explicitly accounting for the role of the Consumer Price Index (CPI). We compare a seasonal SARIMA baseline with a multivariate LSTM model that jointly ingests inflation and CPI series. The contributions of this work are an end-to-end, reproducible forecasting pipeline and an evidence-based comparison that identifies the conditions under which a feature-rich nonlinear model is preferable. The methodology includes aligning and preprocessing monthly series, conducting stationarity tests, selecting SARIMA specifications via information criteria and residual diagnostics, and training a 12-month window LSTM (Adam optimizer, MSE loss) with internal validation. The results show that the LSTM yields lower errors on the test horizon (RMSE 0.497; MAE 0.398) than the SARIMA (1,1,1)×(1,1,1,12) model (RMSE 0.661; MAE 0.486), with a smoother 12-month-ahead forecast path under a constant-CPI scenario; visual findings are consistent with the metrics, and a Diebold–Mariano test can be used to assess the significance of the difference. In conclusion, although SARIMA remains a strong and interpretable baseline, the multivariate LSTM delivers a practically meaningful gain in short-term accuracy when the inflation–CPI interaction is nonlinear, making it relevant for regional policy planning.
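Two evaluation ingredients the abstract describes can be sketched briefly: building 12-month supervised windows for the LSTM, and the Diebold–Mariano test for comparing the two models' forecast errors. This is a minimal illustrative sketch in NumPy, not the authors' code; the function names (`make_windows`, `diebold_mariano`) and the use of a squared-error loss differential are assumptions.

```python
import numpy as np
from math import erf, sqrt

def make_windows(series, window=12):
    """Turn a 1-D series into (samples, window) inputs and next-step
    targets, as used to supervise a 12-month-lookback LSTM."""
    series = np.asarray(series, float)
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

def diebold_mariano(e1, e2, h=1):
    """Diebold-Mariano statistic for equal predictive accuracy under
    squared-error loss; returns (DM, two-sided asymptotic p-value).
    A negative DM favours the first model."""
    e1, e2 = np.asarray(e1, float), np.asarray(e2, float)
    d = e1 ** 2 - e2 ** 2              # loss differential per test month
    T = d.size
    d_bar = d.mean()
    # long-run variance: lag-0 autocovariance plus lags up to h-1
    lrv = np.mean((d - d_bar) ** 2)
    for k in range(1, h):
        lrv += 2.0 * np.mean((d[k:] - d_bar) * (d[:-k] - d_bar))
    dm = d_bar / sqrt(lrv / T)
    # two-sided p-value from the standard normal CDF
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(dm) / sqrt(2.0))))
    return dm, p
```

With per-month test errors from the two models, `diebold_mariano(e_lstm, e_sarima)` yields a negative statistic when the LSTM's squared errors are systematically smaller; the Harvey–Leybourne–Newbold correction cited in the references refines the statistic's small-sample behaviour.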
References
R. J. Hyndman and G. Athanasopoulos, Forecasting: Principles and Practice, 3rd ed. Melbourne, Australia: OTexts, 2021.
B. Lim and Z. Zohren, “Time-series forecasting with deep learning: A survey,” Philosophical Transactions of the Royal Society A, vol. 379, no. 2194, 2021, Art. no. 20200209.
S. Makridakis, E. Spiliotis, and V. Assimakopoulos, “M5 accuracy competition: Results, findings and conclusions,” International Journal of Forecasting, vol. 38, no. 4, pp. 1346–1364, 2022.
S. Smyl, “A hybrid method of exponential smoothing and recurrent neural networks for time series forecasting,” International Journal of Forecasting, vol. 36, no. 1, pp. 75–85, 2020.
P. J. Brockwell and R. A. Davis, Introduction to Time Series and Forecasting, 3rd ed. New York, NY, USA: Springer, 2016.
D. Kwiatkowski, P. C. B. Phillips, P. Schmidt, and Y. Shin, “Testing the null hypothesis of stationarity against the alternative of a unit root,” Journal of Econometrics, vol. 54, nos. 1–3, pp. 159–178, 1992.
D. A. Dickey and W. A. Fuller, “Distribution of the estimators for autoregressive time series with a unit root,” Journal of the American Statistical Association, vol. 74, no. 366, pp. 427–431, 1979.
G. E. P. Box, G. M. Jenkins, G. C. Reinsel, and G. M. Ljung, Time Series Analysis: Forecasting and Control, 5th ed. Hoboken, NJ, USA: Wiley, 2015.
X. Chen, S. Wei, and L. Chen, “A comprehensive survey on LSTM networks for time series forecasting,” ACM Computing Surveys, vol. 55, no. 1, pp. 1–35, 2022.
F. X. Diebold and R. S. Mariano, “Comparing predictive accuracy,” Journal of Business & Economic Statistics, vol. 13, no. 3, pp. 253–263, 1995.
R. J. Hyndman and Y. Khandakar, “Automatic time series forecasting: The forecast package for R,” Journal of Statistical Software, vol. 27, no. 3, pp. 1–22, 2008.
S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
S. Makridakis, E. Spiliotis, and V. Assimakopoulos, “The M4 competition: Results, findings, conclusion and way forward,” International Journal of Forecasting, vol. 34, no. 4, pp. 802–808, 2018.
G. P. Zhang, “Time series forecasting using a hybrid ARIMA and neural network model,” Neurocomputing, vol. 50, pp. 159–175, 2003.
G. Bontempi, S. Ben Taieb, and Y.-A. Le Borgne, “Machine learning strategies for time series forecasting,” in Business Intelligence (eBISS 2012), Lecture Notes in Business Information Processing, vol. 138. Berlin, Germany: Springer, 2013, pp. 62–77.
F. Petropoulos et al., “Forecasting: theory and practice,” International Journal of Forecasting, vol. 38, no. 3, pp. 705–871, 2022.
B. Lim, S. Ö. Arık, N. Loeff, and T. Pfister, “Temporal fusion transformers for interpretable multi-horizon time series forecasting,” International Journal of Forecasting, vol. 37, no. 4, pp. 1748–1764, 2021.
B. N. Oreshkin, D. Carpov, N. Chapados, and Y. Bengio, “N-BEATS: Neural basis expansion analysis for interpretable time series forecasting,” in Proc. ICLR, 2020, pp. 1–21.
S. J. Taylor and B. Letham, “Forecasting at scale,” The American Statistician, vol. 72, no. 1, pp. 37–45, 2018.
D. I. Harvey, S. J. Leybourne, and P. Newbold, “Testing the equality of prediction mean squared errors,” International Journal of Forecasting, vol. 13, no. 2, pp. 281–291, 1997.
License
Copyright (c) 2026 Journal of Artificial Intelligence and Engineering Applications (JAIEA)

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.







