neuralmagic / deepsparse

Sparsity-aware deep learning inference runtime for CPUs
https://neuralmagic.com/deepsparse/

Assuming I am using a framework like pytorch-geometric or DGL, can deepsparse be applied for accelerating GNNs on CPUs? #823

Closed whiz-Tuhin closed 1 year ago

whiz-Tuhin commented 1 year ago

Discussed in https://github.com/neuralmagic/deepsparse/discussions/822

Originally posted by **whiz-Tuhin** December 20, 2022

Hi deepsparse community, I stumbled on the project through [this](https://www.youtube.com/watch?v=0PAiQ1jTN5k) video. I've recently been learning about Graph Neural Networks (GNNs) and wondered if deepsparse can be used to develop GNNs (learning or inference) on CPUs. Looking forward to your response. Regards
rahul-tuli commented 1 year ago

Hi @whiz-Tuhin, thanks for reaching out. Right now, deepsparse only supports inference. Your best bet would be to convert the final checkpoint to ONNX and then run `deepsparse.benchmark` on it; this prints out `fraction_of_supported_ops`, which is a good indication of how much of the network is supported by deepsparse.