-
I've got a 280X with 4 GB of VRAM.
I'm forcing myself to wait for the 16 GB cards that may be announced in the GTX series, but an option could be getting a 4-6 GB CUDA card from eBay and "wait more h…
-
### Description
```shell
Branch: Main
Base Docker Image: nvcr.io/nvidia/tritonserver:23.01-py3 (the image is likely irrelevant here)
System: AGX Orin w/ JetPack 5.1
```
### Reproduced Steps
```sh…
-
This is not an issue, but I couldn't find a better place to share it.
The current README.md has the following code for installing the library:
**Code**
```
git clone https://github.com/facebookresearch/s…
-
# Progress
- [x] Implement TPU executor that works on a single TPU chip (without tensor parallelism) #5292
- [x] Support single-host tensor parallel inference #5871
- [x] Support multi-host ten…
-
**Update: see below for the error on the latest 1.8-beta3**
During installation:
When trying to install the PyTorch deps with GPU support, `pip` kept installing a different PyTorch version in the second com…
-
### What is the issue?
```
ollama run llama3.1 (is ok)
```
switch to a different terminal
```
ollama run yi-coder
Error: llama runner process has terminated: CUDA error
ollama run llama3.1 (is…
-
**Describe the current behavior**
WARNING:jax._src.xla_bridge:CUDA backend failed to initialize: Found CUDA version 12010, but JAX was built against version 12020, which is newer. The copy of CUDA th…
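The version integers in that warning follow CUDA's standard encoding (`major * 1000 + minor * 10`, the same scheme as the `CUDART_VERSION` macro), so `12010` is CUDA 12.1 and `12020` is CUDA 12.2. A minimal sketch of a decoder (the function name is illustrative, not part of JAX):

```python
def decode_cuda_version(v: int) -> str:
    """Decode CUDA's integer version encoding: major * 1000 + minor * 10."""
    major, rest = divmod(v, 1000)
    minor = rest // 10
    return f"{major}.{minor}"

# The two versions from the warning above:
print(decode_cuda_version(12010))  # -> 12.1 (the CUDA found at runtime)
print(decode_cuda_version(12020))  # -> 12.2 (what JAX was built against)
```

Reading the warning through this decoding makes the mismatch concrete: the installed CUDA runtime (12.1) is older than the one the JAX wheel was compiled against (12.2).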
-
I want to use TorchSharp for Android and iOS. How can I do it?
-
## 🐛 Bug
Thanks for your team's great work! While using the PyTorch C++ API on GPU, I ran into some confusing bugs. When I try to load a .pt file as a module and then run a forward operation, I …
-
https://github.com/LuxDL/LuxLib.jl/blob/ef784ed12d473269f3fc4a4d3470d0f447b66ea7/src/impl/dense.jl#L35
I imagine it would be fairly similar to https://github.com/FluxML/NNlib.jl/blob/master/ext/NNl…