Linxyhaha / TransRec

Bridging Items and Language: A Transition Paradigm for Large Language Model-Based Recommendation (KDD'24)
https://arxiv.org/pdf/2310.06491

Inquiry regarding training and reconstruction time #1

Open eunseongc opened 3 months ago

eunseongc commented 3 months ago

Thank you for sharing this impressive research. I have a few questions about the training process.

I saw the mention of using four NVIDIA A5000 GPUs; could you please also provide an estimate of the total training time required?

Additionally, I am working with the Beauty dataset and noticed that the reconstruct.py script with 30 threads can take about 3 hours. Is this how long the process usually takes? Below is the command I ran.

source reconstruct.sh beauty 5

Thanks for your time and help.

Linxyhaha commented 3 months ago


Hi, thanks for your interest in our work! Regarding the training time, it takes approximately 2~3 days on four A5000 GPUs for the model to converge.

The reconstruction can indeed take a couple of hours to run. Most of the time comes from enumerating every possible subsequence within a user's full interaction sequence.
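For intuition, here is a minimal sketch of why that enumeration is expensive, assuming, for illustration, that "subsequence" means every contiguous slice of a user's history; the function and variable names below are hypothetical and not the actual reconstruct.py API:

```python
# Illustrative sketch only: a history of n items produces n*(n+1)/2
# contiguous subsequences, so the work grows quadratically with
# history length.

def contiguous_subsequences(seq):
    """Yield every contiguous subsequence of an interaction sequence."""
    n = len(seq)
    for start in range(n):
        for end in range(start + 1, n + 1):
            yield seq[start:end]

if __name__ == "__main__":
    history = ["item_a", "item_b", "item_c", "item_d"]
    subs = list(contiguous_subsequences(history))
    print(len(subs))  # 4 items -> 4*5/2 = 10 subsequences
    # A 100-item history would already yield 5050 subsequences,
    # which is why long user sequences dominate the runtime.
```

Since each user's history can be processed independently, splitting users across the 30 worker threads helps, but the longest histories still dominate the wall-clock time.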

eunseongc commented 3 months ago

Thank you for your response. I will keep that in mind and try to reproduce the results!

Congratulations on the KDD publication :)