-
### Description
As the project develops, many of our tools work with lists of Audio objects, with the goal that they can be optimized into Pydra workflows and have easy-to-use pipelines, especia…
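The fan-out over a list input that Pydra's splitter mechanism provides can be sketched with just the standard library; `process_audio` below is a hypothetical per-item step, not one of the project's actual tools:

```python
from concurrent.futures import ThreadPoolExecutor

def process_audio(path):
    # hypothetical per-item step; a real pipeline would construct an
    # Audio object from `path` and run the tool on it
    return path.endswith(".wav")

def run_over_list(paths):
    # fan each list element out to its own worker, which is roughly
    # what a Pydra splitter does with a list input
    with ThreadPoolExecutor() as pool:
        return list(pool.map(process_audio, paths))

print(run_over_list(["a.wav", "b.mp3"]))  # [True, False]
```

In Pydra itself the same shape is expressed declaratively (a task with `.split()` over the list) rather than with an explicit executor, which is what makes whole pipelines composable.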
-
We currently use CUDA for GPU parallelization, but OpenCL is more widely supported. Perhaps we should look into adding OpenCL support.
-
Hi!
Thanks so much for the helpful training code and documentation. Apologies in advance for the naive question--I'm pretty new to machine learning.
I'm trying to train my own watermarking model…
-
When I try to run the model on several GPUs, I get a numerical error:
```
Warning: NaN or Inf found in input tensor.
Warning: NaN or Inf found in input tensor.
Warning: NaN or Inf found in i…
-
## Info
HPhi asked me to run this script (↓) in expert mode
```shell
root@MyComputerName:/home/HPhi-3.5.2/src# ./HPhi -s stan.in
(HPhi logo omitted)
##### Parallelization Info. ##…
-
The witness generation for the slots involving Poseidon hashing could be performed on GPU, which would probably provide further improvements on top of the parallelization implemented in #766.
-
### System Info
- transformers version: 4.28.1
- Platform: Linux-3.10.0-1160.95.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- Safetensors version…
-
In particular, are we leveraging the graph-execution optimizations (e.g., parallelization, memory management, GPU usage) of TensorFlow and PyTorch, or do we need to do more to get that?
-
### The problem
Currently, `WolframModel` and related functions always run sequentially.
While it might be tricky to parallelize the symbolic code, it should be reasonably straightforward …
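One low-risk starting point is to parallelize independent evaluations rather than the symbolic rewriting itself. A minimal sketch of that idea in Python (the function and argument names are illustrative stand-ins, not the actual `WolframModel` API):

```python
from multiprocessing import Pool

def evolve(seed):
    # stand-in for one independent model evolution; the real work
    # would call the sequential engine on this initial condition
    total = 0
    for _ in range(1000):
        seed = (seed * 1103515245 + 12345) % (2 ** 31)
        total += seed % 7
    return total

def evolve_many(seeds):
    # independent evolutions share no state, so they parallelize
    # trivially across worker processes
    with Pool() as pool:
        return pool.map(evolve, seeds)
```

Because each run is embarrassingly parallel, this gains near-linear speedup without touching the symbolic code at all.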
-
### The problem
This is similar to #155, but for GPUs rather than CPUs. We need both because some users might not have access to a GPU, especially if we don't support all of them.
This issue speci…
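A common pattern for supporting both backends is runtime dispatch with a CPU fallback; a sketch, where probing for `cupy` is just an assumed example of a GPU check, not the project's actual dependency:

```python
def select_backend(prefer_gpu=True):
    # fall back to CPU when no GPU stack is importable, so users
    # without a (supported) GPU still get a working code path
    if prefer_gpu:
        try:
            import cupy  # assumption: any GPU array library would do here
            return "gpu"
        except ImportError:
            pass
    return "cpu"
```

Keeping the probe in one place means every downstream function can ask `select_backend()` instead of re-implementing its own GPU detection.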