-
Hi, since an int4 version of Qwen-VL is available and is friendlier to low-end GPUs, is it a plug-and-play model for clot?
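For reference, a minimal sketch of how the published int4 checkpoint is usually loaded, assuming the project goes through the standard `transformers` loading path (the model id below is the official `Qwen/Qwen-VL-Chat-Int4` GPTQ release):
```python
# Minimal sketch: load the int4 GPTQ checkpoint of Qwen-VL via transformers.
# Assumes the project uses the standard from_pretrained() path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen-VL-Chat-Int4"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",        # place layers on the available (low-end) GPU
    trust_remote_code=True,   # Qwen-VL ships custom modeling code
).eval()
```
As far as I know the int4 GPTQ checkpoint also needs `auto-gptq` and `optimum` installed, so it may not be fully plug-and-play on every setup.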
-
It would be good to support dense retrieval from multiple FAISS index shards.
This would be friendlier to machines with limited RAM (and could further support GPU retrieval). A rough sketch of the idea is below.
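A minimal sketch of what I have in mind, assuming the shards are saved as separate index files (`shard_paths` is a hypothetical list of paths): search each shard on its own, memory-mapping it where the index type supports it, and merge the per-shard top-k results, so only one shard needs to be resident at a time.
```python
import heapq
import faiss
import numpy as np

def search_shards(shard_paths, queries, k=10):
    """Search several FAISS index shards one at a time and merge the top-k hits."""
    results = [[] for _ in range(len(queries))]  # per-query (distance, global_id) candidates
    offset = 0
    for path in shard_paths:
        # mmap keeps RAM usage low for index types that support IO_FLAG_MMAP
        index = faiss.read_index(path, faiss.IO_FLAG_MMAP)
        dist, ids = index.search(np.ascontiguousarray(queries, dtype=np.float32), k)
        for q in range(len(queries)):
            for d, i in zip(dist[q], ids[q]):
                if i != -1:
                    results[q].append((d, int(i) + offset))  # shift ids into a global space
        offset += index.ntotal
        del index  # release the shard before loading the next one
    # keep the k best candidates per query (smallest distance for L2 indexes)
    return [heapq.nsmallest(k, r) for r in results]
```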
-
# Open Grant Proposal: `NGPU -- AI DePin`
**Project Name:** `NGPU`
**Proposal Category:** `Integrations`
**Individual or Entity Name:** `Metadata Labs Inc.`
**Proposer:** `Alain Garner`
…
-
### Describe the Bug
The AI pathfinding/simulation is broken and it's impossible to finish the game.
The most obvious example is Mission 4.2, where dropping friendlies on the battlefield results in …
-
- The deliverable here is to be able to run quantized models with the tinygrad inference engine (a rough sketch of the core idea is below)
- A bonus (+$200) bounty, as an easy follow-up, is to add support for MLX community models: https://github.…
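A rough sketch of the core idea only, not the actual deliverable: dequantize int8 weights with per-channel scales into tinygrad tensors and run an ordinary matmul. The weight and scale arrays below are made-up placeholders, not a real checkpoint format.
```python
import numpy as np
from tinygrad import Tensor

out_ch, in_ch = 256, 256
# placeholder int8 weights and per-output-channel scales (stand-ins for a real quantized checkpoint)
w_q = np.random.randint(-128, 128, size=(out_ch, in_ch), dtype=np.int8)
scales = np.random.rand(out_ch).astype(np.float32)

# dequantize on load: w ≈ w_q * scale, broadcast per output channel
w = Tensor(w_q.astype(np.float32)) * Tensor(scales).reshape(out_ch, 1)

x = Tensor.randn(1, in_ch)
y = x @ w.T  # ordinary matmul once the weights are dequantized
print(y.shape)
```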
-
Thanks for creating the notebooks!
I was interested in trying out the examples with easy access to a GPU. Are there plans to support Colab-friendly notebooks?
I put up a small example using th…
-
At Zcon0, some of us had really great conversations about how to speed up SNARKs, and some (I believe the codaprotocol folks) mentioned using GPUs to parallelize proving. Since SNARKs are highly paralleli…
-
The `Deeplasmid Docker container for GPU` section in the README says to use the following command to run deeplasmid for plasmid identification on a GPU:
```
sudo /usr/bin/docker run -it -…
-
```
Julia 1.9.1
[052768ef] CUDA v4.4.0
[872c559c] NNlib v0.9.1
```
Here is the MWE:
```
using CUDA
using NNlib
function mwe()
channels = 256
x = rand(Float32,1024, channels, …
-
Video preview currently uses the CPU to decode, but this sometimes struggles with HD video. Please add ffdshow support; GPU acceleration would lower CPU usage and improve video quality.