zyang1580 / CoLLM

The implementation for the work "CoLLM: Integrating Collaborative Embeddings into Large Language Models for Recommendation".
BSD 3-Clause "New" or "Revised" License

Minimum hardware to reproduce the work. #4

Open · elloza opened this issue 6 months ago

elloza commented 6 months ago

Hi! Congratulations on your work!

I would like to start trying to reproduce your work and was wondering whether I could do it on more limited hardware, such as an RTX 3090 with 24 GB of VRAM.

In the Lora Tuning instructions you say: "To launch the first stage training, run the following command. In our experiments, we use 2 A100."

If you could give me some guidelines (or some advice) I would really appreciate it.

Thank you very much in advance!

zyang1580 commented 5 months ago

Thank you for your interest in our work. Our experiments were conducted exclusively on A100 machines. It is feasible to run on A40 or 3090 machines by reducing the batch size, but certain other parameters may then require corresponding adjustments.

TianhaoShi2001 commented 2 months ago

On an A40, with the dataset used in the paper, I reduced train_batch_size to 16, which kept peak memory usage under 40 GB (about 33 GB), so the run fits on an A40. At that batch size I set min_lr to 1e-4 and obtained results close to those reported in the paper. On a 3090, you would likely need multi-GPU parallelism or a further reduction of the batch size to fit in memory, along with a corresponding adjustment to the learning rate.
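For anyone trying to fit the run on a 24 GB card without multi-GPU parallelism, one generic option (not something provided by the CoLLM codebase itself) is gradient accumulation: keep the effective batch size at 16 while processing smaller micro-batches per optimizer step. The sketch below is a minimal PyTorch illustration under assumed values; the micro-batch size of 4, the stand-in `nn.Linear` model, and the synthetic data are hypothetical placeholders, and only the batch size of 16 and the 1e-4 learning rate come from the comment above.

```python
# Generic gradient-accumulation sketch (not CoLLM's actual training loop):
# recover an effective batch of 16 on a 24 GB GPU by accumulating micro-batches.
import torch
from torch import nn

target_batch_size = 16   # batch size reported to work on an A40
micro_batch_size = 4     # hypothetical size that fits in 24 GB VRAM
accum_steps = target_batch_size // micro_batch_size  # 4 accumulation steps

model = nn.Linear(64, 1)  # stand-in for the LoRA-tuned model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # lr per the A40 report
loss_fn = nn.BCEWithLogitsLoss()

def micro_batches():
    # Stand-in data loader yielding (features, labels) micro-batches.
    for _ in range(accum_steps):
        yield torch.randn(micro_batch_size, 64), torch.rand(micro_batch_size, 1)

optimizer.zero_grad()
for step, (x, y) in enumerate(micro_batches(), start=1):
    loss = loss_fn(model(x), y) / accum_steps  # scale so gradients match a full batch
    loss.backward()                            # gradients accumulate across micro-batches
    if step % accum_steps == 0:
        optimizer.step()                       # one optimizer update per effective batch
        optimizer.zero_grad()
```

Whether this actually reproduces the A40 numbers would still need to be checked, since the learning-rate schedule and any per-step sampling depend on how the training loop counts steps, and CoLLM's own config may expose a different knob for this.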