🐛 Bug
Users of dgl may not need graphbolt, so import dgl should not automatically import graphbolt. However, in the latest dgl, import dgl automatically imports graphbolt.
To Reproduce
Steps to reproduce the behavior:
python -c "import dgl;print(dgl.__version__)"
And it shows graphbolt's warning:
root@59f2c7030f7b:/opt/dgl/dgl-source/examples/graphbolt/pyg# python -c "import dgl;print(dgl.__version__)"
/usr/local/lib/python3.10/dist-packages/dgl/graphbolt/__init__.py:114: GBWarning:
An experimental feature for CUDA allocations is turned on for better allocation
pattern resulting in better memory usage for minibatch GNN training workloads.
See https://pytorch.org/docs/stable/notes/cuda.html#optimizing-memory-usage-with-pytorch-cuda-alloc-conf,
and set the environment variable `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:False`
if you want to disable it and set it True to acknowledge and disable the warning.
gb_warning(WARNING_STR_TO_BE_SHOWN)
2.4
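As the warning text itself suggests, the message can be silenced by setting the environment variable explicitly. This is only a workaround for the warning; it does not stop graphbolt from being imported when dgl is imported:

```shell
# Acknowledge the experimental CUDA allocator feature so the warning is not
# printed (per the warning text, expandable_segments:False disables the
# feature instead; either value suppresses the message).
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
# then, e.g.:
#   python -c "import dgl; print(dgl.__version__)"
```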
Expected behavior
Do not import graphbolt in this case.
Environment
DGL Version (e.g., 1.0): 2.4
Backend Library & Version (e.g., PyTorch 0.4.1, MXNet/Gluon 1.3): PyTorch
OS (e.g., Linux): Linux
How you installed DGL (conda, pip, source):
Build command you used (if compiling from source):