Closed Alice314 closed 3 years ago
I also have the same issue; I have a feeling it might be due to the warning in the output.
I also have the same issue and suspect the warning in the output as well. Have you solved the problem yet?
I fixed the "warning" (assuming it was an unintended error), but the model error is still the same: too high.
Any update on this? I tried it too and the error is way too high. Also, the number of unique labels doesn't match the README: 100/100 here vs. 11000 in the README.
OK, so after reading a few other issues, this is due to synthetically generated graphs, right? Since the question comes up from time to time, it might be useful for the author @benedekrozemberczki to provide the whole dataset so that similar results can be reproduced.
Yes. However, uploading a larger set of JSONs here would not be feasible, as GitHub limits the number of files. Adding a compressed file might help, but based on 2 years of experience in open source, people do not tend to look at the README files.
Maybe you're right, but when I set the training dataset and the test dataset to be the same, the result was still wrong. So I don't think it's a problem with the dataset. @benedekrozemberczki
python src/main.py

+---------------------+------------------+
| Batch size          | 128              |
+=====================+==================+
| Bins                | 16               |
+---------------------+------------------+
| Bottle neck neurons | 16               |
+---------------------+------------------+
| Dropout             | 0.500            |
+---------------------+------------------+
| Epochs              | 5                |
+---------------------+------------------+
| Filters 1           | 128              |
+---------------------+------------------+
| Filters 2           | 64               |
+---------------------+------------------+
| Filters 3           | 32               |
+---------------------+------------------+
| Histogram           | 0                |
+---------------------+------------------+
| Learning rate       | 0.001            |
+---------------------+------------------+
| Tensor neurons      | 16               |
+---------------------+------------------+
| Testing graphs      | ./dataset/train/ |
+---------------------+------------------+
| Training graphs     | ./dataset/train/ |
+---------------------+------------------+
| Weight decay        | 0.001            |
+---------------------+------------------+
Enumerating unique labels.
100%|█████████████████████████████████████████████████████████████████████████████████| 100/100 [00:00<00:00, 12001.56it/s]
Model training.
Epoch:   0%| | 0/5 [00:00<?, ?it/s]
/home/jovyan/SimGNN/src/simgnn.py:221: UserWarning: Using a target size (torch.Size([1, 1])) that is different to the input size (torch.Size([1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
  losses = losses + torch.nn.functional.mse_loss(data["target"], prediction)
Epoch (Loss=2.87421): 100%|██████████████████████████████████████████████████████████████████| 5/5 [00:01<00:00, 2.78it/s]
Batches: 100%|███████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 3.07it/s]
Model evaluation.
100%|█████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:00<00:00, 268.85it/s]
Baseline error: 0.48942.
Model test error: 0.84929.
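As an aside on the UserWarning in the log above: it flags a shape mismatch between the target and the prediction passed to the MSE loss. Here is a minimal, stdlib-only Python sketch of what that kind of broadcasting does (the (3, 1)-vs-(3,) shapes are hypothetical, chosen to make the effect visible; in the log each comparison is (1, 1) vs (1,), where the broadcast result happens to coincide with the intended one):

```python
# target has shape (3, 1); prediction has shape (3,), mimicking the
# torch.Size([1, 1]) vs torch.Size([1]) mismatch from the warning.
target = [[1.0], [2.0], [3.0]]
pred = [1.5, 2.5, 3.5]

# What broadcasting computes: every target row against EVERY prediction,
# i.e. a (3, 3) matrix of squared differences averaged over 9 entries.
broadcast_mse = sum((t[0] - p) ** 2 for t in target for p in pred) / 9

# What was intended: element-wise pairs, averaged over 3 entries.
paired_mse = sum((t[0] - p) ** 2 for t, p in zip(target, pred)) / 3

print(broadcast_mse)  # ~1.5833 — inflated by the off-diagonal pairs
print(paired_mse)     # 0.25   — the intended per-pair MSE
```

In torch, the usual fix is to align the shapes before the loss call, e.g. `torch.nn.functional.mse_loss(data["target"].view(-1), prediction.view(-1))`; whether silencing the warning changes the reported error here depends on the actual tensor shapes in the batching code, which the log does not show.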
I'm sorry to bother you, but when I tried to replicate your work, I ran into some difficulties. Here is the problem I encountered:
python src/main.py

+---------------------+------------------+
| Batch size          | 128              |
+=====================+==================+
| Bins                | 16               |
+---------------------+------------------+
| Bottle neck neurons | 16               |
+---------------------+------------------+
| Dropout             | 0.500            |
+---------------------+------------------+
| Epochs              | 5                |
+---------------------+------------------+
| Filters 1           | 128              |
+---------------------+------------------+
| Filters 2           | 64               |
+---------------------+------------------+
| Filters 3           | 32               |
+---------------------+------------------+
| Histogram           | 0                |
+---------------------+------------------+
| Learning rate       | 0.001            |
+---------------------+------------------+
| Tensor neurons      | 16               |
+---------------------+------------------+
| Testing graphs      | ./dataset/test/  |
+---------------------+------------------+
| Training graphs     | ./dataset/train/ |
+---------------------+------------------+
| Weight decay        | 0.001            |
+---------------------+------------------+
Enumerating unique labels.
100%|██████████████████████████████████████████████████████████████████████████████████| 100/100 [00:00<00:00, 2533.57it/s]
Model training.
Epoch:   0%| | 0/5 [00:00<?, ?it/s]
/home/jovyan/SimGNN/src/simgnn.py:212: UserWarning: Using a target size (torch.Size([1, 1])) that is different to the input size (torch.Size([1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
  losses = losses + torch.nn.functional.mse_loss(data["target"], prediction)
Epoch (Loss=3.87038): 100%|██████████████████████████████████████████████████████████████████| 5/5 [00:16<00:00, 3.23s/it]
Batches: 100%|███████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.68s/it]
Model evaluation.
100%|█████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:00<00:00, 102.39it/s]
Baseline error: 0.41597.
Model test error: 0.94024.
I found the model test error to be too high! The only thing I changed was the versions of the libraries, which I updated to the latest. Could you help me with this problem?