Hi, we ran the "rapidsai/notebooks-contrib/../cugraph/multi_gpu_pagerank.ipynb" notebook on the 'twitter-2010.csv' dataset using 4 Tesla T4 GPUs (16 GB each); the GPUs are not connected with NVLink. Our code is the same as the example, but the process emits a warning during the "Read the data from disk" step: "Memory use is high but worker has no data to disk. Perhaps some other process is leaking memory? Process memory: 5.83GB -- Worker memory limit: 8.29GB".
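We are wondering whether simply raising the per-worker host memory limit when starting the cluster would avoid this warning. A rough sketch of what we mean (the `memory_limit` value is only an illustrative guess, and we assume the cluster is started with `dask_cuda` as in the notebook):

```python
from dask.distributed import Client
from dask_cuda import LocalCUDACluster

# By default, host RAM is split evenly across workers, which appears to
# give each worker the ~8.29GB limit from the warning. "16GB" below is
# an illustrative value, not a recommendation.
cluster = LocalCUDACluster(memory_limit="16GB")
client = Client(cluster)
```

Is this the intended way to give each worker more host-RAM headroom, or should we change how the data is partitioned instead?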
Besides, we found that the RAPIDS docs say the needed import is `import dask_cugraph.pagerank as dcg`, not the `import cugraph.dask.pagerank as dcg` used in this example, but we can't find dask_cugraph in the Anaconda repo. Why?
Our environment: RAPIDS 0.12.0, CUDA 10.1, CentOS 7.6, Python 3.7.
Could you please give us some tips on this issue? Sincerely.