-
Is there any plan to incorporate GPU-based calculations to massively speed up the simulation? This could be applied to conflict detection and resolution, to propagating aircraft dynamics forward, or to allow for…
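Conflict detection is a natural fit for GPU data parallelism because the all-pairs separation check is one big vectorized comparison. A minimal CPU sketch in NumPy (the 5 NM / 1000 ft thresholds and the array layout are illustrative assumptions; the same computation shape maps directly onto CuPy or a CUDA kernel):

```python
import numpy as np

def detect_conflicts(pos, h_sep=5.0, v_sep=1000.0):
    """All-pairs conflict check on an (N, 3) array of [x_nm, y_nm, alt_ft].

    Vectorized over every aircraft pair at once -- the same data-parallel
    shape a GPU kernel (or a CuPy drop-in replacement) would exploit.
    """
    horiz = np.linalg.norm(pos[:, None, :2] - pos[None, :, :2], axis=-1)
    vert = np.abs(pos[:, None, 2] - pos[None, :, 2])
    conflict = (horiz < h_sep) & (vert < v_sep)
    i, j = np.where(np.triu(conflict, k=1))  # each pair once, skip self-pairs
    return list(zip(i.tolist(), j.tolist()))

# Aircraft 0 and 1 are 3 NM apart at the same level -> one conflict pair
pos = np.array([[0.0, 0.0, 30000.0],
                [3.0, 0.0, 30000.0],
                [50.0, 0.0, 30000.0]])
print(detect_conflicts(pos))  # [(0, 1)]
```

The O(N²) pair matrix is exactly the kind of workload where a GPU pays off once N reaches thousands of aircraft; below that, the vectorized CPU version is usually fast enough.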
-
I am still very new to LLMs. I have access to a large number of GPUs, and I would like to train this model across multiple GPUs (though I am not sure whether this is necessary or overkill). Previously…
-
### Discussed in https://github.com/orgs/mfem/discussions/4488
Originally posted by **CINTROINI** September 5, 2024
Dear MFEM community,
We are developing a new code based on MFEM to simu…
-
Explore massive parallelization by attempting a GPU implementation.
-
Be careful with implementation details so we can take advantage of GPU/parallel features.
-
### Description
On `trunk-minor`:
- [x] Deprecate `gpu_ids`.
- [x] Replace with `gpu_id`.
On `trunk-major`:
- [ ] Remove the multi-GPU code.
### Motivation and context
Multi-GPU was specifica…
-
Hi Tim-Oliver,
Many thanks for developing cryoCARE!
Could you implement GPU parallelization over multiple GPUs, and/or an option to choose which GPU to use for the calculation, please (when you have a spa…
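In the meantime, a specific GPU can usually be selected without any code changes through CUDA's `CUDA_VISIBLE_DEVICES` environment variable, which frameworks such as TensorFlow respect as long as it is set before the framework is imported. This is a generic CUDA mechanism rather than a cryoCARE-specific option; a minimal sketch:

```python
import os

# Restrict this process to physical GPU 1 *before* any CUDA-using framework
# is imported; inside the process the device then appears as GPU 0.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# import tensorflow as tf  # framework import must come after the variable is set
```

Setting `CUDA_VISIBLE_DEVICES=1` on the command line before launching the script achieves the same effect.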
-
How can we best support parallelization of ML potentials across GPUs?
We're dealing with models that are small enough to be replicated on each GPU, and only O(N) data (positions, box vectors) needs…
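For models this small, the usual pattern is: replicate the model on every device, broadcast the O(N) positions and box vectors, shard the per-atom energy sum across devices, then all-reduce the partial energies (and forces). A toy CPU sketch of that pattern, where `atom_energy` is a hypothetical stand-in for the ML potential's per-atom readout and threads stand in for GPUs:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def atom_energy(positions, i, cutoff=3.0):
    """Toy per-atom term standing in for a small ML potential's readout.
    The 0.5 factor avoids double-counting each pair across the two atoms."""
    d = np.linalg.norm(positions - positions[i], axis=-1)
    mask = (d > 0) & (d < cutoff)
    return 0.5 * float(np.sum(1.0 / d[mask]))

def replicated_energy(positions, n_devices=2):
    """Replicate the model per 'device', broadcast the full O(N) positions,
    shard the per-atom sum over devices, then reduce the partial energies."""
    shards = np.array_split(np.arange(len(positions)), n_devices)
    with ThreadPoolExecutor(n_devices) as pool:
        partials = pool.map(
            lambda idx: sum(atom_energy(positions, i) for i in idx), shards)
    return sum(partials)

pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
# the sharded result matches the single-device sum
print(abs(replicated_energy(pos, 2) - replicated_energy(pos, 1)) < 1e-12)  # True
```

Because only the positions move between devices each step, communication stays O(N) per device while the per-atom work divides evenly, which is the regime where this scheme scales well.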
-
* face_recognition version: Latest
* Python version: 2.7
* Operating System: Ubuntu 16.04
### Description
Hi.
When I extract faces from a video file, it takes a lot of time.
I use dlib and cud…
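Two generic speed-ups for video pipelines, independent of the detector, are to run detection only on every Nth frame and to downscale frames before detecting. A sketch with a dummy detector standing in for the dlib/CNN face locator (the function and parameter names here are illustrative, not part of the face_recognition API):

```python
import numpy as np

def extract_faces(frames, detect, stride=5, scale=0.5):
    """Run the (expensive) `detect` callable only on every `stride`-th frame,
    after a cheap nearest-neighbour downscale, then map boxes back to the
    original frame's coordinates."""
    step = int(1 / scale)
    results = []
    for t in range(0, len(frames), stride):
        small = frames[t][::step, ::step]          # downscaled copy for detection
        boxes = detect(small)
        rescaled = [tuple(int(v / scale) for v in b) for b in boxes]
        results.append((t, rescaled))
    return results

# 20 dummy 100x100 grayscale frames and a dummy detector
frames = [np.zeros((100, 100), dtype=np.uint8) for _ in range(20)]
detected = extract_faces(frames, detect=lambda img: [(10, 10, 30, 30)])
print(len(detected))  # 4 frames actually processed: t = 0, 5, 10, 15
```

If the CNN model is in use on a CUDA-enabled dlib build, face_recognition's `batch_face_locations` can additionally batch several frames onto the GPU per call, which tends to help more than per-frame detection.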
-
```
$> codon build -release mandelbrot_gpu.codon
/usr/bin/ld: mandelbrot_gpu.o: in function `main':
mandelbrot_gpu.codon:(.text+0x2ae): undefined reference to `seq_nvptx_load_module'
/usr/bin/ld: …
```