-
### Anything you want to discuss about vllm.
I run the model on the server with 4 x NVIDIA GeForce RTX 4090 cards:
CUDA_VISIBLE_DEVICES=4,5,6,7 python -m vllm.entrypoints.openai.api_server --served…
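For reference, a minimal sketch of a comparable four-GPU setup through vLLM's offline `LLM` API, assuming tensor parallelism across the four visible cards; the model name below is a placeholder and the flags cut off in the command above are not reproduced:
```python
from vllm import LLM, SamplingParams

# Hypothetical sketch only: "facebook/opt-125m" stands in for the served model.
# Run under the same CUDA_VISIBLE_DEVICES=4,5,6,7 environment as above.
llm = LLM(
    model="facebook/opt-125m",
    tensor_parallel_size=4,   # shard the weights across the four visible GPUs
)

outputs = llm.generate(["Hello from a four-GPU vLLM instance"],
                       SamplingParams(max_tokens=16))
print(outputs[0].outputs[0].text)
```
The API server command above takes the equivalent setting via `--tensor-parallel-size 4`.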
-
### Description
```python
import jax
import jax.numpy as jnp
mesh = jax.make_mesh((4, 2), ('data', 'model'))
data_sharding = jax.sharding.NamedSharding(
    mesh, jax.sharding.PartitionSpec('d…
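# --- hypothetical continuation for illustration only (the snippet above is
# truncated; none of the names below come from the original report).
# Assuming a host with 8 devices so the (4, 2) mesh can be built, a batch can
# be placed with its leading axis split across the 'data' mesh axis and
# replicated along 'model':
example_spec = jax.sharding.PartitionSpec('data', None)
example_sharding = jax.sharding.NamedSharding(mesh, example_spec)
batch = jax.device_put(jnp.zeros((8, 128)), example_sharding)
jax.debug.visualize_array_sharding(batch)  # inspect the per-device layout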
-
### What happened?
There are no shadows when using Iris 1.8.0 with [Miniature](https://www.curseforge.com/minecraft/shaders/miniature-shader). The shader works fine on Optifine. I tried a few older ver…
-
### Check for existing issues
- [X] Completed
### Describe the bug / provide steps to reproduce it
Zed consistently crashes when I try to paste this large JavaScript into an empty file.
I can uploa…
-
```
[rank1]: Traceback (most recent call last):
[rank1]:   File "/mnt/sdb/humannorm/launch.py", line 237, in <module>
[rank1]:     main(args, extras)
[rank1]:   File "/mnt/sdb/humannorm/launch.py", line 1…
-
We have a workstation driving a total of 10 displays via 3 GPUs:
> nvidia-smi -L
GPU 0: NVIDIA GeForce RTX 4090 (4 displays)
GPU 1: NVIDIA GeForce GTX 1650 (2 displays)
GPU 2: NVIDIA GeForce RTX 4090…
-
If a monitor is disabled when I run GetAllPotentialDisplays, I have no way to enable it from the result. But if I use a value returned by an earlier call, it works.
Here's an easy repro:
```
# display 0 is enabled
$disp…
-
### Your current environment
```text
WARNING 11-14 02:19:07 _custom_ops.py:20] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
Collecting environment information…
-
Hi there,
I use an NVIDIA GeForce RTX 3090 with 24 GB of memory. However, it reports CUDA out of memory while initializing the UNet. I just wonder how to handle that, or what the minimum memory requirement is.
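For context, a minimal, hypothetical sketch of the usual first memory-saving steps in plain PyTorch, half-precision weights plus `torch.inference_mode()`; the `TinyStandIn` module below is only a placeholder, not the project's actual UNet:
```python
import torch
import torch.nn as nn

# Hypothetical sketch (not from the original report): build the weights in
# half precision and avoid keeping autograd state during inference.
class TinyStandIn(nn.Module):           # placeholder for the real UNet
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(4, 4, 3, padding=1)

    def forward(self, x):
        return self.net(x)

unet = TinyStandIn().half().to("cuda")  # fp16 weights use half the memory
with torch.inference_mode():            # no autograd buffers are retained
    x = torch.randn(1, 4, 64, 64, device="cuda", dtype=torch.float16)
    y = unet(x)
torch.cuda.empty_cache()                # release cached blocks back to the driver
```
Casting the weights to half precision roughly halves their memory footprint, which is often the deciding factor at initialization time.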
-
**Describe the bug**
"AlienFX Fan Control" and "AlienFX Monitor" are not showing/getting temperature accurately.
The programs may be getting the temperature from the Intel UHD Graphics card.
**…