I wonder if you have considered the lime package, which generated quite a buzz in the machine learning interpretability community. It seems you are using feature importance after training the Random Forest, which gives some insight, as you mention. However, I would imagine that LIME could help interpret the reasons behind individual predictions when you use your RF for forecasting, and it may offer further insight for the mapping process; a rough sketch of what I have in mind follows below.
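For illustration only, here is a minimal sketch of how LIME could be applied to a trained RF regressor via the Python lime package. The model, covariate names, and data here are hypothetical placeholders, not objects from the manuscript:

# Illustrative sketch only -- the data, model and covariate names below are
# synthetic placeholders standing in for the manuscript's own objects.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical training set: 500 samples, 6 environmental covariates
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 6))
y_train = 2 * X_train[:, 0] + rng.normal(size=500)
covariate_names = [f"cov_{i}" for i in range(6)]

rf_model = RandomForestRegressor(n_estimators=200, random_state=0)
rf_model.fit(X_train, y_train)

# LIME explainer for tabular regression
explainer = LimeTabularExplainer(
    X_train,
    feature_names=covariate_names,
    mode="regression",
)

# Explain a single predicted case (e.g. one mapped location)
x_new = rng.normal(size=6)
explanation = explainer.explain_instance(x_new, rf_model.predict, num_features=4)
print(explanation.as_list())  # local feature contributions for this one case

The point being that, unlike the global importance ranking, the output is a per-prediction set of feature contributions, which could be interesting to inspect for contrasting locations on the map.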
Relatedly, what was the reasoning behind using Partial Dependence (as opposed to other interpretability methods)?