snap-stanford / ogb

Benchmark datasets, data loaders, and evaluators for graph machine learning
https://ogb.stanford.edu
MIT License

Problems while processing ogbn-papers100M dataset. #407

Closed skdbsxir closed 1 year ago

skdbsxir commented 1 year ago

Hello.

I just finished downloading the ogbn-papers100M dataset with dataset = PygNodePropPredDataset(name='ogbn-papers100M'), but I ran into problems while processing the files.

Processing...
Loading necessary files...
This might take a while.
Processing graphs...
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 31068.92it/s]
Converting graphs into PyG objects...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 4310.69it/s]
Saving...
Killed

I ran dmesg | grep -E -i -B100 'killed process' and found that the process was killed by the OOM killer.

[1210926.911880] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1001.slice/session-685.scope,task=python,pid=107611,uid=1001
[1210926.911942] Out of memory: Killed process 107611 (python) total-vm:159489312kB, anon-rss:126791128kB, file-rss:0kB, shmem-rss:4kB, UID:1001 pgtables:251436kB oom_score_adj:0

I searched other issues and found the answer in https://github.com/snap-stanford/ogb/issues/229. After deleting the processed folder in the dataset directory, I retried with dataset = NodePropPredDataset(name='ogbn-papers100M'), but I hit the same problem (again killed by OOM); roughly what I ran is sketched after the dmesg output below.

Loading necessary files...
This might take a while.
Processing graphs...
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 29959.31it/s]
Saving...
Killed
[1212477.817041] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1001.slice/session-685.scope,task=python,pid=108624,uid=1001
[1212477.817084] Out of memory: Killed process 108624 (python) total-vm:154449520kB, anon-rss:127580292kB, file-rss:0kB, shmem-rss:4kB, UID:1001 pgtables:252488kB oom_score_adj:0
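For reference, the retry looked roughly like the sketch below (the 'dataset/' root is the default and just my assumption here; the framework-agnostic loader still has to hold the whole graph in CPU memory during the 'Saving...' step, which is where it gets killed):

import shutil
from ogb.nodeproppred import NodePropPredDataset

# Remove the partially written processed folder left over from the PyG attempt,
# otherwise the loader tries to reuse it.
shutil.rmtree('dataset/ogbn_papers100M/processed', ignore_errors=True)

# Retry with the framework-agnostic loader; the download is already cached,
# so only the processing/saving step runs again.
dataset = NodePropPredDataset(name='ogbn-papers100M')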

Below is my CPU info.

$ lscpu

Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   48 bits physical, 48 bits virtual
CPU(s):                          32
On-line CPU(s) list:             0-31
Thread(s) per core:              2
Core(s) per socket:              16
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       AuthenticAMD
CPU family:                      25
Model:                           8
Model name:                      AMD Ryzen Threadripper PRO 5955WX 16-Cores
Stepping:                        2
Frequency boost:                 enabled
CPU MHz:                         1800.000
CPU max MHz:                     7031.2500
CPU min MHz:                     1800.0000
BogoMIPS:                        7984.86
Virtualization:                  AMD-V
L1d cache:                       512 KiB
L1i cache:                       512 KiB
L2 cache:                        8 MiB
L3 cache:                        64 MiB
NUMA node0 CPU(s):               0-31
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Not affected
Vulnerability Retbleed:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm

Below is my RAM info.

$ free -g

              total        used        free      shared  buff/cache   available
Mem:            125           2         122           0           0         121
Swap:             1           1           0

And below is my python packages info.

$ pip list
Package                  Version
------------------------ -----------------
certifi                  2022.12.7
charset-normalizer       3.0.1
idna                     3.4
Jinja2                   3.1.2
joblib                   1.2.0
littleutils              0.2.2
MarkupSafe               2.1.2
numpy                    1.24.1
nvidia-cublas-cu11       11.10.3.66
nvidia-cuda-nvrtc-cu11   11.7.99
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cudnn-cu11        8.5.0.96
ogb                      1.3.5
pandas                   1.5.3
pip                      22.3.1
psutil                   5.9.4
pyg-lib                  0.1.0+pt112cu116
pyparsing                3.0.9
python-dateutil          2.8.2
pytz                     2022.7.1
requests                 2.28.2
scikit-learn             1.2.1
scipy                    1.10.0
setuptools               65.6.3
six                      1.16.0
threadpoolctl            3.1.0
torch                    1.12.0+cu116
torch-cluster            1.6.0+pt112cu116
torch-geometric          2.2.0
torch-scatter            2.1.0+pt112cu116
torch-sparse             0.6.16+pt112cu116
torch-spline-conv        1.2.1+pt112cu116
tqdm                     4.64.1
typing_extensions        4.4.0
urllib3                  1.26.14
wheel                    0.38.4

I also checked https://github.com/snap-stanford/ogb/issues/46, but as shown above I have more than 100GB of CPU memory. Could my memory capacity still be the problem?

weihua916 commented 1 year ago

I believe it is a matter of memory capacity. We have not tested exactly how much CPU memory is needed. The saved data itself should be smaller than 100GB, but processing the data may require more CPU memory than that.
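For a rough sense of scale (a sanity-check sketch, not an official requirement; the dmesg lines above show the Python process at roughly 120GB resident memory when it was killed on a 125GB machine):

import psutil

# Compare available CPU memory against the ~120+ GB RSS observed in the dmesg output above.
avail_gb = psutil.virtual_memory().available / 1024**3
print(f"Available CPU memory: {avail_gb:.0f} GB")
if avail_gb < 128:
    # 128 GB is only a guess based on this thread; the exact requirement is untested.
    print("Probably not enough headroom to process ogbn-papers100M.")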

skdbsxir commented 1 year ago

Thanks a lot! I suspected it might be a memory issue, and your answer confirms it.

But is there any other way to process ogbn-papers100M by myself? I'm using ogbn-products now, but I want to use a larger dataset for my experiments.

weihua916 commented 1 year ago

Increasing the CPU memory would be the best way; there is no easy workaround.

skdbsxir commented 1 year ago

I'll look for another way, thank you!

idontkonwher commented 1 year ago

I think it might be caused by multiprocessing: when copy.copy is used rather than copy.deepcopy(), there can be a memory leak.
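A minimal illustration of the difference (a generic sketch, not the actual OGB code path; whether this is what happens inside the loader is only a guess):

from copy import copy, deepcopy

big = {'edge_index': list(range(10_000_000))}  # stand-in for a large graph buffer

shallow = copy(big)          # new dict, but 'edge_index' still references the same list
independent = deepcopy(big)  # fully independent copy of the data

print(shallow['edge_index'] is big['edge_index'])      # True  -> the original buffer stays alive
print(independent['edge_index'] is big['edge_index'])  # False -> the original can be freed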

UTKRISHTPATESARIA commented 1 year ago

@skdbsxir I am also facing the same issue. I tried the approach from the link below: processing the dataset on a machine with larger DRAM and saving it to disk (roughly as in the sketch after the link).

https://discuss.dgl.ai/t/paper100m-download-failed/3287
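A rough sketch of that approach (the root paths here are placeholders, not the real ones; note that even with a pre-built processed/ folder, loading ogbn-papers100M still needs a lot of CPU memory):

from ogb.nodeproppred import PygNodePropPredDataset

# Step 1: on the machine with enough DRAM, download and process once.
dataset = PygNodePropPredDataset(name='ogbn-papers100M', root='/big_machine/ogb')

# Step 2: copy /big_machine/ogb/ogbn_papers100M (including the processed/ folder)
# to the smaller machine. With those files present, the loader skips reprocessing,
# but it still loads the full processed graph into CPU memory.
dataset = PygNodePropPredDataset(name='ogbn-papers100M', root='/smaller_machine/ogb')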

But I'm still facing the OOM issue even after that.

I was curious whether you figured out some other way. Also, is it possible to trim down the dataset?

skdbsxir commented 6 months ago

@UTKRISHTPATESARIA Sorry for my late response.

I tried various approaches (including the DGL link you shared), but I couldn't get the dataset processed.

So I just decided not to use ogbn-papers100M and to stick with ogbn-products only :cry:

Thank you.