Investigating explainability methods in recurrent neural network architectures for financial time-series data
Researcher: Warren Freeborough, University of the Witwatersrand, Johannesburg
Supervisor: Prof. Terence van Zyl, University of the Witwatersrand, Johannesburg
Statistical methods were traditionally used for time-series forecasting, but new hybrid methods demonstrate competitive accuracy, leading to increased adoption of machine-learning-based methodologies in the financial sector. Despite a growing mandate for explainable systems, however, very little development has been seen in explainable AI (XAI) for financial time-series prediction. This study aims to determine whether existing XAI methods are transferable to the context of financial time-series prediction. Four popular methods, namely ablation, permutation, added noise, and integrated gradients, were applied to an RNN, an LSTM, and a GRU network trained on S&P 500 stock data to determine the importance of features, individual data points, and specific cells in each architecture. The explainability analysis reveals that the GRU displayed the greatest ability to retain long-term information, while the LSTM disregarded most of the given input yet showed the finest-grained sensitivity to the inputs it did consider. Lastly, the RNN displayed behaviour indicative of no long-term memory retention. The applied XAI methods produced complementary results, reinforcing established views that architectures differ significantly in how they predict. The results show that these methods are transferable to the financial forecasting sector, although further confirmation is required for more sophisticated hybrid prediction systems.
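As one illustration of the permutation method named above, the sketch below estimates the importance of each input time step by shuffling that position across samples and measuring the resulting increase in prediction error. The linear toy model, its weights, and the synthetic data are hypothetical stand-ins for a trained recurrent network; they are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained recurrent model: it predicts the
# next value as a weighted sum over a lookback window of 4 time steps,
# with more recent steps weighted more heavily.
weights = np.array([0.1, 0.2, 0.3, 0.4])

def predict(windows):
    # windows: array of shape (n_samples, window_len)
    return windows @ weights

# Synthetic input windows and targets that follow the same rule plus noise.
X = rng.normal(size=(200, 4))
y = predict(X) + rng.normal(scale=0.01, size=200)

baseline_mse = np.mean((predict(X) - y) ** 2)

# Permutation importance: shuffle one time step across samples and
# record how much the mean squared error grows over the baseline.
importance = []
for t in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, t] = rng.permutation(Xp[:, t])
    mse = np.mean((predict(Xp) - y) ** 2)
    importance.append(mse - baseline_mse)

print([round(v, 3) for v in importance])
```

Under these assumptions, the most recent time step (which carries the largest weight) shows the largest error increase when permuted, matching the intuition that permutation importance reflects how strongly the model relies on each input position.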