magicpr opened this issue 3 days ago
When running the code on the CPU, the following error is reported:

```
Traceback (most recent call last):
  File "/home/zbin/SEDR-master/SEDR/integration_cpu.py", line 73, in
IndexError: index 10885 is out of bounds for dimension 0 with size 10812
```

I worked around this by truncating to the smaller of the two sizes in tutorial 3, SEDR_module.py and SEDR_model.py, but I wonder whether this will affect the results?
tutorial 3

```python
sedr_feat, _, _, _ = sedr_net.process()
if sedr_feat.shape[0] != adata.shape[0]:
    min_size = min(sedr_feat.shape[0], adata.shape[0])
    sedr_feat = sedr_feat[:min_size]
    adata = adata[:min_size]
adata.obsm['SEDR'] = sedr_feat
```
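A stricter alternative I considered (a minimal sketch, assuming the same `sedr_net` and `adata` objects as in the tutorial snippet above): instead of silently truncating, fail loudly when the embedding and the AnnData disagree, since a size mismatch usually means the adjacency graph and the feature matrix were built from different (differently filtered) objects.

```python
# Sketch of a stricter check (my own suggestion, not SEDR code): the embedding
# returned by sedr_net.process() should have exactly one row per spot in adata,
# so a mismatch likely means graph and features came from different AnnData objects.
sedr_feat, _, _, _ = sedr_net.process()
if sedr_feat.shape[0] != adata.shape[0]:
    raise ValueError(
        f"SEDR returned {sedr_feat.shape[0]} embeddings for {adata.shape[0]} spots; "
        "rebuild the graph and model input from the same filtered adata instead of truncating."
    )
adata.obsm['SEDR'] = sedr_feat
```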
SEDR_module.py

```python
num_mask_nodes = int(mask_rate * num_nodes)
mask_nodes = perm[:num_mask_nodes]
keep_nodes = perm[num_mask_nodes:]

if x.shape[0] < num_nodes:
    padding_size = num_nodes - x.shape[0]
    padding = torch.zeros((padding_size, x.shape[1]), device=x.device)
    x = torch.cat((x, padding), dim=0)
elif x.shape[0] > num_nodes:
    x = x[:num_nodes]

out_x = x.clone()
token_nodes = mask_nodes
```
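For context, a self-contained toy illustration (made-up sizes, not SEDR code) of the permutation-based node masking the snippet above relies on: a random permutation of node indices is split into a masked set and a kept set.

```python
# Toy illustration of the masking scheme (made-up sizes, not SEDR code).
import torch

num_nodes, mask_rate = 10, 0.3
perm = torch.randperm(num_nodes)
num_mask_nodes = int(mask_rate * num_nodes)
mask_nodes = perm[:num_mask_nodes]   # nodes whose features get replaced by the mask token
keep_nodes = perm[num_mask_nodes:]   # nodes left untouched
print(mask_nodes.tolist(), keep_nodes.tolist())
```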
SEDR_model.py

```python
def reconstruction_loss(decoded, x):
    if decoded.shape != x.shape:
        min_size = min(decoded.shape[0], x.shape[0])
        decoded = decoded[:min_size]
        x = x[:min_size]
    loss_func = torch.nn.MSELoss()
    loss_rcn = loss_func(decoded, x)
    return loss_rcn
```
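As a side note on why the guard is needed: `MSELoss` requires the two tensors to have broadcastable shapes, so mismatched row counts (10885 vs 10812 in my case) make the call itself fail. A minimal reproduction with made-up sizes, not SEDR code:

```python
# Minimal reproduction (made-up sizes): mse_loss raises on non-broadcastable
# shapes, which is what the truncation above works around.
import torch

decoded = torch.randn(5, 3)
x = torch.randn(7, 3)
try:
    torch.nn.functional.mse_loss(decoded, x)
except RuntimeError as e:
    print("shape mismatch:", e)

n = min(decoded.shape[0], x.shape[0])
print(torch.nn.functional.mse_loss(decoded[:n], x[:n]).item())
```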
and in `mask_generator` of the same file:

```python
def mask_generator(self, N=1):
    x = torch.repeat_interleave(self.adj_label.indices()[0], N)
    y = torch.concat(list_non_neighbor)  # list_non_neighbor is built earlier in this method (omitted here)
    # Ensure x and y have the same size
    if x.size(0) != y.size(0):
        min_size = min(x.size(0), y.size(0))
        x = x[:min_size]
        y = y[:min_size]
    indices = torch.stack([x, y])
    indices = torch.concat([self.adj_label.indices(), indices], dim=1)
```
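For intuition, a self-contained toy example (made-up node indices, not SEDR code) of how the `(2, E)` index tensor is assembled: each repeated edge endpoint in `x` is paired with one sampled non-neighbour in `y`, which is why the two 1-D tensors must have the same length before `torch.stack`.

```python
# Toy example (made-up indices): pairing repeated source nodes with sampled
# non-neighbours to form a COO-style (2, E) edge-index tensor.
import torch

src = torch.tensor([0, 1, 2])
x = torch.repeat_interleave(src, 2)   # tensor([0, 0, 1, 1, 2, 2])
y = torch.tensor([3, 4, 5, 6, 7, 8])  # one sampled non-neighbour per entry of x
indices = torch.stack([x, y])         # shape (2, 6)
print(indices)
```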
The run now completes; the output is as follows:
```
enc_mask_token.shape: torch.Size([1, 200])
2024-11-05 12:59:32,202 - harmonypy - INFO - Computing initial centroids with sklearn.KMeans...
2024-11-05 12:59:34,982 - harmonypy - INFO - sklearn.KMeans initialization complete.
2024-11-05 12:59:35,161 - harmonypy - INFO - Iteration 1 of 10
2024-11-05 12:59:39,809 - harmonypy - INFO - Iteration 2 of 10
2024-11-05 12:59:41,276 - harmonypy - INFO - Iteration 3 of 10
2024-11-05 12:59:42,782 - harmonypy - INFO - Converged after 3 iterations
```
@magicpr I have made a revision and it now works. Please try again and see whether it also works for you.
Hello, thank you for your changes; they did indeed take effect. Besides that, I have two additional questions I'd like to ask:

1. I currently have three datasets: two in CEL format and one in DCM format. However, I noticed that the model expects several input files (TSV, H5, PNG, CSV and JSON). I can probably only convert my data into H5 and CSV, or H5 and PNG. Are all of these files necessary for the model, or are one or two of them sufficient? If they are all necessary, could you give me some advice on how to handle these datasets? (A rough sketch of what I mean follows after the second question.)
2. Does SEDR currently support data annotation? I'm considering whether it's possible to integrate data annotation functionality into this model. If you have thoughts on this, could you provide some suggestions?
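To make the first question concrete, here is a rough sketch of the kind of input I could produce. It assumes (and this is only my assumption, to be confirmed) that SEDR ultimately needs an AnnData with an expression matrix and spot coordinates in `adata.obsm['spatial']`; the file names and CSV columns are hypothetical.

```python
# Rough sketch (my assumption, not the documented SEDR input): build an AnnData
# from an H5 count matrix plus a CSV of spot coordinates, without TSV/PNG/JSON.
import scanpy as sc
import pandas as pd

adata = sc.read_10x_h5("filtered_feature_bc_matrix.h5")   # expression matrix (H5)
adata.var_names_make_unique()

coords = pd.read_csv("spot_positions.csv", index_col=0)   # hypothetical CSV: barcode, x, y
coords = coords.loc[adata.obs_names]                       # align spots to the matrix
adata.obsm["spatial"] = coords[["x", "y"]].to_numpy()      # convention used by scanpy/squidpy
```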
Hi, I recently ran into this problem while reproducing tutorial 3:
```
100%|██████████| 3/3 [00:08<00:00, 2.83s/it]
  0%|          | 0/200 [00:00<?, ?it/s]
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [1382,0,0], thread: [0,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
(the same assertion is repeated for threads [1,0,0] through [15,0,0] of block [1382,0,0])
  0%|          | 0/200 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/zbin/SEDR-master/SEDR/integration.py", line 70, in <module>
    sedr_net.train_without_dec()
  File "/home/zbin/anaconda3/envs/sedr_env/lib/python3.11/site-packages/SEDR/SEDR_model.py", line 142, in train_without_dec
    latent_z, mu, logvar, de_feat, _, feat_x, _, loss_self = self.model(self.X, self.adj_norm)
  File "/home/zbin/anaconda3/envs/sedr_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/zbin/anaconda3/envs/sedr_env/lib/python3.11/site-packages/SEDR/SEDR_module.py", line 176, in forward
    mu, logvar, feat_x = self.encode(x, adj)
  File "/home/zbin/anaconda3/envs/sedr_env/lib/python3.11/site-packages/SEDR/SEDR_module.py", line 160, in encode
    feat_x = self.encoder(x)
  File "/home/zbin/anaconda3/envs/sedr_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/zbin/anaconda3/envs/sedr_env/lib/python3.11/site-packages/torch/nn/modules/container.py", line 217, in forward
    input = module(input)
  File "/home/zbin/anaconda3/envs/sedr_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/zbin/anaconda3/envs/sedr_env/lib/python3.11/site-packages/torch/nn/modules/container.py", line 217, in forward
    input = module(input)
  File "/home/zbin/anaconda3/envs/sedr_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/zbin/anaconda3/envs/sedr_env/lib/python3.11/site-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
```

My GPU usage is close to 100%, so I think the main cause is an out-of-bounds array index, but I don't know why others have not encountered this. The DLPFC data were downloaded as described in the analyses and the tutorial. Could you please take a look at how this should be solved?
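One way to narrow this down (a general CUDA debugging step, not anything SEDR-specific): device-side asserts are reported asynchronously, so the CUBLAS error usually hides the real failure. Forcing synchronous kernel launches, or running the same script once on the CPU, makes the traceback stop at the actual out-of-bounds indexing call, much like the IndexError at integration_cpu.py line 73 reported above.

```python
# Minimal debugging sketch (general PyTorch practice): force synchronous CUDA
# launches so the device-side assert surfaces at the offending indexing call,
# or fall back to CPU once to get a plain IndexError with the bad index.
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # must be set before the first CUDA call

import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# ... then build and train the SEDR model with this device as in the tutorial;
#     the traceback should now point at the line doing the out-of-bounds indexing.
```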