FenTechSolutions / CausalDiscoveryToolbox

Package for causal inference in graphs and in the pairwise settings. Tools for graph structure recovery and dependencies are included.
https://fentechsolutions.github.io/CausalDiscoveryToolbox/html/index.html
MIT License

NCC example: pytorch error #29

Closed: HughTom closed this issue 5 years ago

HughTom commented 5 years ago

Hi,

I've been trying the NCC example from the docs and I get an error from torch:

```python
from cdt.causality.pairwise import NCC
import networkx as nx
import matplotlib.pyplot as plt
from cdt.data import load_dataset
from sklearn.model_selection import train_test_split

data, labels = load_dataset('tuebingen')
X_tr, X_te, y_tr, y_te = train_test_split(data, labels, train_size=.5)
obj = NCC()
obj.fit(X_tr, y_tr)
```

```
Epochs:   0%|          | 0/50 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/conda/lib/python3.6/site-packages/cdt-0.5.5-py3.6.egg/cdt/causality/pairwise/NCC.py", line 183, in fit
    for (batch, label), i in zip(da, t):
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 529, in __next__
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 68, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 68, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 43, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 349 and 392 in dimension 3 at /tmp/pip-req-build-l1dtn3mo/aten/src/THC/generic/THCTensorMath.cu:71
```
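For context, the failure mode can be reproduced outside cdt: `default_collate` ultimately calls `torch.stack`, which requires every sample in a batch to have the same shape. A minimal sketch (not from the original report), using two tensors whose last dimension differs like the 349/392 pair sizes above:

```python
import torch

# default_collate ends up calling torch.stack, which requires every
# sample in the batch to have exactly the same shape.
a = torch.zeros(1, 1, 2, 349)  # one pair of length 349
b = torch.zeros(1, 1, 2, 392)  # one pair of length 392
torch.stack([a, b], 0)  # RuntimeError: sizes must match except in dim 0
```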

The error is nearly identical whether I run it locally (Python 3.6.8, PyTorch 1.1.0, cdt 0.5.5) or inside the nvidia-docker image 0.5.5.
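For anyone comparing environments, the installed versions can be checked directly; a generic snippet, not part of the original report:

```python
import torch
import pkg_resources  # ships with setuptools

print(torch.__version__)                              # 1.1.0 in this report
print(pkg_resources.get_distribution('cdt').version)  # 0.5.5 in this report
```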

I'm not sure if it's down to my hardware. I get it on my notebook:

```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.56       Driver Version: 418.56       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
|   0  GeForce 940MX       Off  | 00000000:02:00.0 Off |                  N/A |
| N/A   40C    P0    N/A /  N/A |    269MiB /  2004MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
```

and on a workstation:

```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.56       Driver Version: 418.56       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
|   0  Quadro P600         Off  | 00000000:18:00.0 Off |                  N/A |
| 34%   40C    P8    N/A /  N/A |     17MiB /  1999MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla M40           Off  | 00000000:3B:00.0 Off |                  Off |
| N/A   56C    P8    17W / 250W |      0MiB / 12215MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla M40           Off  | 00000000:D8:00.0 Off |                  Off |
| N/A   65C    P8    17W / 250W |      0MiB / 12215MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
```

Thank you in advance for any hint.

Best, Tom

ritik99 commented 5 years ago

Hi,

We were able to reproduce the error. The problem seems to be with the way the dataset is being loaded during training. We should get back to you soon.

Thanks, Ritik

HughTom commented 5 years ago

Thanks for the quick response. A similar dataset-handling error seems to occur in my environment with the SAM example from the docs, but I will check it again and get back to you.

Best, Tom

siddsuresh97 commented 5 years ago

Has there been any update on this error? I am facing the same issue. Is there any other way I can use this in the meantime?

diviyank commented 5 years ago

Hello,

Sorry for the delay. The fix has been pushed and will be released in 0.5.8 :) The error came from the fact that we didn't take into account that the pairs might have different sizes, so the default batch collation failed when stacking them. I'll be closing this issue; don't hesitate to reopen it if the problem arises again.
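For readers stuck on an older release, one common way to work around this kind of size mismatch is a custom `collate_fn` that pads every sample in a batch to the longest one. A minimal sketch, not the actual cdt fix; the helper name `pad_collate` is hypothetical and it assumes `(tensor, scalar label)` samples:

```python
import torch
from torch.nn import functional as F

def pad_collate(batch):
    # Zero-pad each sample along its last dimension to the longest
    # sample in the batch, so torch.stack sees matching shapes.
    samples, labels = zip(*batch)
    longest = max(s.shape[-1] for s in samples)
    padded = [F.pad(s, (0, longest - s.shape[-1])) for s in samples]
    return torch.stack(padded, 0), torch.tensor(labels)
```

Passing this as `collate_fn=pad_collate` to the `DataLoader` avoids the stacking error, at the cost of training on zero-padded series.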

Best, Diviyan

HughTom commented 4 years ago

Hi, and thank you for fixing that!

Best, Tom