Closed: thgngu closed this issue 5 years ago
Hi,
You're right, some of the algorithms have this functionality, others haven't. I should look into this. Best, Diviyan
On Fri, Aug 3, 2018, 00:28 thongnnguyen notifications@github.com wrote:
Hi,
I would like to suggest implementing mini-batch training. Specifically, I tried to run FSGNN on my data and got the following error:
not enough memory: you tried to allocate 160465GB. Buy new RAM! at /opt/conda/conda-bld/pytorch_1524584710464/work/aten/src/TH/THGeneral.c:218
I looked into the code, but unfortunately I'm not familiar enough with GANs to modify it myself.
— You are receiving this because you are subscribed to this thread. Reply to this email directly or view it on GitHub: https://github.com/Diviyan-Kalainathan/CausalDiscoveryToolbox/issues/9
I'm sorry that it took so long, but it should be done now. It turned out to be trickier than expected. Instead of feeding raw data as a numpy array or Tensor, you can now feed torch.utils.data.Dataset objects, with a custom element-loading function that you write according to your data.
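For anyone landing here later, a minimal sketch of what such a dataset could look like. The torch.utils.data.Dataset protocol only requires `__len__` and `__getitem__`; the class name `LazyRowDataset` and the in-memory row lookup are illustrative, the point being that `__getitem__` is where a per-sample loading function (e.g. reading one row from disk) would go, so the full matrix never has to fit in RAM at once:

```python
import numpy as np

class LazyRowDataset:
    """Sketch of a dataset following the torch.utils.data.Dataset protocol
    (__len__ + __getitem__). Name and storage scheme are illustrative."""

    def __init__(self, data):
        # Here the data is held in memory for simplicity; a real
        # out-of-core dataset would keep only file paths or offsets.
        self.data = np.asarray(data, dtype=np.float32)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        # Custom loading function: a plain row lookup here, but this is
        # the hook where per-sample reading from disk would happen.
        return self.data[idx]

dataset = LazyRowDataset(np.random.rand(100, 5))
print(len(dataset), dataset[0].shape)
```

An object like this can then be wrapped in a `torch.utils.data.DataLoader` with a chosen `batch_size` to get mini-batches, rather than materializing the whole tensor at once.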
I will be closing this issue, as it should be solved. Don't hesitate to reopen it if the bug still persists in the latest version. Best, Diviyan