-
### 🚀 The feature, motivation and pitch
# Background
Currently, the project supports various hardware accelerators such as GPUs, but there is no support for NPUs. Adding NPU support could signific…
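As a rough sketch of what user-facing NPU support could look like, assuming an adapter package along the lines of Ascend's `torch_npu` (the package name and the `"npu"` device string are assumptions here, not something the project ships today):

```python
import torch

# Hypothetical adapter package; Ascend's PyTorch plug-in registers the
# "npu" device type when imported. This is an assumption for illustration,
# not part of this project.
import torch_npu  # noqa: F401

# Pick the NPU if the adapter reports one, otherwise fall back to GPU/CPU.
if torch.npu.is_available():
    device = torch.device("npu:0")
elif torch.cuda.is_available():
    device = torch.device("cuda:0")
else:
    device = torch.device("cpu")

x = torch.randn(8, 16, device=device)
print(device, x.sum().item())
```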
-
# **Read before creating a new issue**
- Users who want to use SpikingJelly should first be familiar with how to use PyTorch.
- If you do not know much about PyTorch, we recommend that the user…
-
_via Zendesk https://coiled.zendesk.com/agent/tickets/81_
A user on Zendesk wrote:
> We want to perform inference using PyTorch on GPU workers.
> Can someone suggest the docker images that wou…
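Whichever image ends up being used, a quick sanity check like the sketch below (plain PyTorch only, no Coiled-specific calls) confirms that the worker's build actually sees a GPU:

```python
import torch

# Verify the image ships a CUDA-enabled PyTorch build and that the worker
# can see at least one GPU; torch.version.cuda is None on CPU-only builds.
print("PyTorch:", torch.__version__)
print("CUDA build:", torch.version.cuda)
print("GPU available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```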
-
### Search before asking
- [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no simi…
-
Thank you VISSL team for this phenomenal and timely repo.
I was wondering if you might be able to provide a minimal working example for loading one of your model zoo models from TorchHub (or via s…
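For reference, a rough sketch of the kind of minimal example being asked for, assuming the zoo checkpoint is published as a URL-addressable ResNet-50 trunk; the URL and the key handling below are placeholders, not the actual VISSL zoo layout:

```python
import torch
import torchvision

# Placeholder URL; substitute a real VISSL model zoo checkpoint.
CHECKPOINT_URL = "https://example.com/vissl_resnet50_checkpoint.torch"

# Download (and cache) the checkpoint via torch.hub.
state = torch.hub.load_state_dict_from_url(CHECKPOINT_URL, map_location="cpu")

# VISSL checkpoints typically nest the trunk weights and use different key
# prefixes than torchvision; this unwrapping is illustrative only.
trunk_state = state.get("classy_state_dict", state)

model = torchvision.models.resnet50()
missing, unexpected = model.load_state_dict(trunk_state, strict=False)
print(f"missing={len(missing)} unexpected={len(unexpected)}")
```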
-
Use "--extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/ "
instead of "-f https://developer.intel.com/ipex-whl-stable-xpu"
in the instruction guide for bigdl-llm install…
-
I have noticed that FlashAttention's RMS norm CUDA kernel drastically reduces memory requirements in our codebase compared to the pure PyTorch implementation based on Meta's LLaMA re…
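For context, a pure-PyTorch RMSNorm along the lines of Meta's reference code looks roughly like this sketch (class name and eps default are illustrative); each elementwise op materializes a full-size intermediate, which is what the fused kernel avoids:

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """Pure-PyTorch RMSNorm: x * rsqrt(mean(x^2) + eps) * weight."""

    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each step below allocates an intermediate tensor the size of x,
        # which a fused CUDA kernel avoids.
        norm = x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
        return norm * self.weight
```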
-
We should be able to support Intel GPUs! We are using the Intel Developer Cloud. Please advise.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.4 LTS
Release: 22.04
Codename: …
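A minimal sketch of what an Intel GPU (XPU) path could look like via Intel Extension for PyTorch; the import and device strings follow IPEX conventions, and whether this is the right integration point here is exactly the open question:

```python
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  # registers the "xpu" device type

# Prefer an Intel GPU when IPEX reports one, otherwise fall back to CPU.
device = torch.device("xpu" if torch.xpu.is_available() else "cpu")
print("Device:", device)

model = torch.nn.Linear(128, 64).to(device).eval()
with torch.no_grad():
    out = model(torch.randn(4, 128, device=device))
print(out.shape)
```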
-
Hi, I want to use pyinn when PyTorch is greater than 1.3. Can you tell me how to change it? When PyTorch is greater than 1.3, the `Function` class only supports static methods. I have changed the Im2Col class on…
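For reference, a new-style `torch.autograd.Function` with static `forward`/`backward` looks like the sketch below; the im2col body uses `torch.nn.functional.unfold`/`fold` for illustration rather than pyinn's original CUDA kernel:

```python
import torch
import torch.nn.functional as F
from torch.autograd import Function

class Im2Col(Function):
    # New-style Functions use @staticmethod forward/backward that take `ctx`
    # instead of storing state on `self`, and are invoked via .apply().
    @staticmethod
    def forward(ctx, input, kernel_size, stride, padding):
        ctx.shape = input.shape
        ctx.kernel_size, ctx.stride, ctx.padding = kernel_size, stride, padding
        return F.unfold(input, kernel_size, stride=stride, padding=padding)

    @staticmethod
    def backward(ctx, grad_output):
        # fold is the adjoint of unfold: it sums overlapping patches back
        # into the input shape.
        grad_input = F.fold(grad_output, ctx.shape[-2:], ctx.kernel_size,
                            stride=ctx.stride, padding=ctx.padding)
        # One gradient per forward argument; non-tensor args get None.
        return grad_input, None, None, None

# Call through .apply(); the arguments after `ctx` map one-to-one.
x = torch.randn(1, 3, 8, 8, requires_grad=True)
cols = Im2Col.apply(x, 3, 1, 1)
cols.sum().backward()
print(cols.shape, x.grad.shape)
```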
-
The question has been asked: is there a tutorial on using a custom PyTorch model as a `fiftyone.core.models.Model`?
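Pending an official tutorial, here is a heavily hedged sketch of what such a wrapper might look like; the interface used (`media_type`, `predict()`, `predict_all()`) follows one reading of `fiftyone.core.models.Model` and should be checked against the current FiftyOne docs, which may require additional abstract members:

```python
import fiftyone as fo
import fiftyone.core.models as fom
import numpy as np
import torch

class MyTorchClassifier(fom.Model):
    """Wraps a torchvision-style classifier for use with apply_model().

    The abstract interface sketched here is an assumption based on the
    fiftyone.core.models.Model docs; verify against your FiftyOne version.
    """

    def __init__(self, torch_model, class_names):
        self._model = torch_model.eval()
        self._classes = class_names

    @property
    def media_type(self):
        return "image"

    def predict(self, arg):
        # Images are assumed to arrive as HWC uint8 numpy arrays.
        img = (torch.from_numpy(np.asarray(arg))
               .permute(2, 0, 1).float().unsqueeze(0) / 255.0)
        with torch.no_grad():
            logits = self._model(img)
        idx = int(logits.argmax(dim=1))
        return fo.Classification(label=self._classes[idx])

    def predict_all(self, args):
        return [self.predict(arg) for arg in args]
```

If the interface matches, the wrapper would be applied with something like `dataset.apply_model(MyTorchClassifier(net, classes), label_field="predictions")`.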