hanliu95 / MetaMMF

GNU General Public License v3.0
1 stars 0 forks source link

Regarding the Use of GPU Memory #1

Closed sun2ot closed 1 month ago

sun2ot commented 1 month ago

Thank you for your contributions to the field of multimodal recommendation. I would like to know how much GPU memory your model was trained with.

I used a dataset in the same format as data_stample, smaller than the TikTok dataset in your paper, but still got a CUDA out-of-memory error. My GPU is an A30 (24 GB).

sun2ot commented 1 month ago

user num: 38403
item num: 51937
v_feat dim: 16
t_feat dim: 16

And yet the model tried to allocate 101.44 GiB?

hanliu95 commented 1 month ago

Thank you for your question. Increasing the value of the parameter “indices_or_sections” on line 22 of GCN_model.py should solve this problem.
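For anyone hitting the same error: `indices_or_sections` is the argument name used by `np.split`/`torch.split`-style functions, and raising it splits a large matrix operation into more, smaller chunks, which lowers the peak memory of the intermediate result. The sketch below (numpy, with a hypothetical `propagate_chunked` helper and toy sizes, not the actual code from GCN_model.py) illustrates the idea:

```python
import numpy as np

def propagate_chunked(adj, feats, indices_or_sections=4):
    """Multiply a large interaction matrix by feature vectors in chunks.

    A larger `indices_or_sections` means more, smaller row chunks,
    so the peak size of any intermediate tensor shrinks, at the cost
    of more sequential steps. (Illustrative stand-in for the split
    referenced on line 22 of GCN_model.py.)
    """
    chunks = np.split(adj, indices_or_sections, axis=0)
    outs = [chunk @ feats for chunk in chunks]
    return np.concatenate(outs, axis=0)

# Toy sizes; the real ones are ~38k users x ~52k items with dim-16 features.
rng = np.random.default_rng(0)
adj = rng.random((8, 6)).astype(np.float32)
feats = rng.random((6, 16)).astype(np.float32)

full = adj @ feats
chunked = propagate_chunked(adj, feats, indices_or_sections=4)
assert np.allclose(full, chunked, atol=1e-5)  # same result, lower peak memory
```

Note that the row count must be divisible by `indices_or_sections` when an integer is passed, so pick a value that divides the user (or item) count of your dataset.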


sun2ot commented 1 month ago

Thank you very much for your reply!💗