Repository hosting code used to reproduce results in "Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations" (https://arxiv.org/abs/2402.17152).
"The remaining features are generally time series that slowly change over time, such as demographics or followed creators. We compress these time series by keeping the earliest entry per consecutive segment and then merge the results into the main time series."
It appears that this method is not currently demonstrated in the public dataset. Could the authors provide examples that utilize both the main time series and auxiliary time series? Alternatively, a guide with pseudocode on how to adapt this approach would be greatly appreciated.
Additionally, is there a more detailed evaluation conducted with or without the auxiliary time series? Specifically, is it GR (interactions only) vs. GR in Tables 6 and 7?
Thank you for your great work!
In the paper, it states:
It appears that this method is not currently demonstrated in the public dataset. Could the authors provide examples that utilize both the main time series and auxiliary time series? Alternatively, a guide with pseudocode on how to adapt this approach would be greatly appreciated.
Additionally, is there a more detailed evaluation conducted with or without the auxiliary time series? Specifically, is it GR (interactions only) vs. GR in Tables 6 and 7?
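For context, here is my current reading of the compression step described in the quote, as a minimal sketch. All names (`Event`, `compress_aux_series`, `merge_into_main`) are hypothetical and not from the paper or this repository; I am assuming events are (timestamp, value) pairs and that a "consecutive segment" means a run of equal values:

```python
# Hypothetical sketch of the auxiliary-time-series compression; not code
# from the paper or this repo. Assumes a segment = a run of equal values.
from dataclasses import dataclass
from heapq import merge
from typing import Any, List

@dataclass(frozen=True)
class Event:
    ts: int       # timestamp
    feature: str  # feature name, e.g. "followed_creators"
    value: Any    # feature value observed at this timestamp

def compress_aux_series(events: List[Event]) -> List[Event]:
    """Keep only the earliest event of each consecutive run of equal values."""
    events = sorted(events, key=lambda e: e.ts)
    out: List[Event] = []
    for e in events:
        if not out or out[-1].value != e.value:
            out.append(e)  # value changed: this event starts a new segment
    return out

def merge_into_main(main: List[Event], aux: List[Event]) -> List[Event]:
    """Merge a compressed auxiliary series into the main series by timestamp,
    assuming both inputs are already sorted by ts."""
    return list(merge(main, aux, key=lambda e: e.ts))
```

Is this roughly what the paper does, or is the segmentation/merge defined differently (e.g. per feature, or with deduplication against the main series)?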