rusty1s / pytorch_cluster

PyTorch Extension Library of Optimized Graph Cluster Algorithms
MIT License

Still there: Incompatibility with bfloat16 #203

Closed · borisfom closed this issue 3 months ago

borisfom commented 9 months ago

An issue about missing bfloat16 support was filed previously, and it looks like the problem is still there. I am getting the same error, RuntimeError: "_" not implemented for 'BFloat16', when running the code below:

from torch_cluster import radius, radius_graph
import torch
from torch import tensor

# bfloat16 positions on GPU; this dtype triggers the error below.
x = tensor([[  4.0625, -27.3750,  -4.3438],
            [  3.0312, -27.6250,  -3.4844],
            [  5.0312, -28.5000,  -4.2812],
            [ -8.1875, -17.7500,  -2.9062],
            [ -8.1875, -19.0000,  -2.2812],
            [ -8.0625, -20.2500,  -2.9688]], device='cuda:0', dtype=torch.bfloat16)

radius_inp = (
    x,                                            # node positions
    5.0,                                          # radius r
    tensor([0, 0, 0, 0, 0, 0], device='cuda:0'),  # batch vector
    10
)

radius_edges = radius_graph(*radius_inp)  # raises RuntimeError on CUDA

borisfom commented 9 months ago

The same issue occurs with a radius() call. This is with the package built from trunk, on an A6000 box.

rusty1s commented 9 months ago

Currently, bfloat16 support only exists on CPU :(
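
One possible interim workaround (a sketch of my own, not something the library provides): run the neighbor search in float32 and keep the rest of the computation in bfloat16. radius_graph returns an integer edge_index, so the cast only affects the search itself:

from torch_cluster import radius_graph
import torch

x = torch.randn(6, 3, device='cuda:0', dtype=torch.bfloat16)  # stand-in positions
batch = torch.zeros(6, dtype=torch.long, device='cuda:0')

# Cast to float32 just for the search; the returned edge_index
# is an integer tensor and works with the original bfloat16 x.
edge_index = radius_graph(x.float(), r=5.0, batch=batch, max_num_neighbors=10)
row, col = edge_index
rel_pos = x[row] - x[col]  # downstream math stays in bfloat16

Note that the search then runs at float32 precision rather than bfloat16, which may change borderline neighbor decisions.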

borisfom commented 9 months ago

Any plans to add bfloat16 support on GPU soon?

rusty1s commented 9 months ago

Currently no, since this repo is no longer in active development.
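
Since bfloat16 kernels do exist on CPU, a caller-side fallback is another option. A minimal sketch, using a hypothetical wrapper name radius_graph_bf16:

from torch_cluster import radius_graph
import torch

def radius_graph_bf16(x, r, batch=None, max_num_neighbors=32):
    # Per the maintainer, bfloat16 support only exists on CPU, so route
    # the search there and move the integer edge_index back afterwards.
    if x.is_cuda and x.dtype == torch.bfloat16:
        batch_cpu = batch.cpu() if batch is not None else None
        edge_index = radius_graph(x.cpu(), r, batch=batch_cpu,
                                  max_num_neighbors=max_num_neighbors)
        return edge_index.to(x.device)
    return radius_graph(x, r, batch=batch, max_num_neighbors=max_num_neighbors)

The CPU round trip will be slow for large point sets, so the float32 cast above is likely preferable when the precision difference is acceptable.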

github-actions[bot] commented 3 months ago

This issue had no activity for 6 months. It will be closed in 2 weeks unless there is some new activity. Is this issue already resolved?