-
## Context:
I recently raised this question: https://github.com/hrcorval/behavex/issues/157.
Basically, I want a way to run scenarios in parallel at the `Scenario` level only, and not at the `Example` leve…
-
Hi, thanks for the package! I went through the source code in all branches, but I couldn't find any CUDA-related kernels, device/host functions, or CUDA includes/macros. From what I observed, the compu…
-
I am trying to load a large number of small image files from S3 (each around 100 kB – 1 MB) for model training. Currently I achieve 100 images/sec loading from a single-AZ bucket using 32 da…
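A minimal sketch of the concurrent-fetch pattern for this workload, assuming boto3 and hypothetical bucket/key names: because each object is small, throughput is latency-bound, so overlapping many in-flight requests with a thread pool usually helps more than adding dataloader processes.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(keys, fetch_one, max_workers=32):
    """Download many small objects concurrently.

    fetch_one(key) -> bytes is supplied by the caller; with boto3 it
    could be (bucket name hypothetical):
        s3 = boto3.client("s3")
        fetch_one = lambda k: s3.get_object(Bucket="my-bucket", Key=k)["Body"].read()
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves input order, so results line up with keys
        return list(pool.map(fetch_one, keys))
```

Raising `max_workers` well above the CPU count is reasonable here, since the threads spend almost all their time waiting on the network rather than computing.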
-
### Your current environment
I have a server with only one NVLink connection, so I need to use pipeline parallelism and tensor parallelism within a single node to improve its performance. I would lik…
-
See https://pytorch.org/docs/stable/distributed.tensor.parallel.html
llama 405b paper discusses using FSDP, pipeline parallelism, context parallelism, and tensor parallelism
It'd be relatively s…
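To make the composition concrete, here is a small sketch of how a global rank could factor into the four parallel dimensions the 405B paper combines (the sizes and the innermost-to-outermost ordering are assumptions for illustration, not the paper's actual layout):

```python
def factor_rank(rank, dp, pp, cp, tp):
    """Map a flat global rank to (dp, pp, cp, tp) coordinates,
    with TP innermost (fastest-varying) in this hypothetical layout."""
    tp_rank = rank % tp
    cp_rank = (rank // tp) % cp
    pp_rank = (rank // (tp * cp)) % pp
    dp_rank = rank // (tp * cp * pp)
    return dp_rank, pp_rank, cp_rank, tp_rank
```

With sizes dp=pp=cp=tp=2 this covers a 16-rank world; frameworks like PyTorch's DeviceMesh perform essentially this factorization when composing FSDP, pipeline, context, and tensor parallelism.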
-
Hi, thanks for your great work! The issue I am concerned about is deployment parallelism compared to Lookahead. As far as I know, Lookahead currently does not support tensor parallelism, which i…
-
I've been using `atq.INT4_AWQ_CFG` and observing a performance drop when quantizing a Llama 70B model with tensor parallelism via `atq.quantize(model, quant_cfg, forward_loop=calibrate_loop)`.
Quan…
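For context, a hedged sketch of what a `forward_loop` callback in this style of API typically looks like; the batch source and model below are hypothetical stand-ins, not the modelopt API itself:

```python
def make_calibrate_loop(batches):
    """Build a forward_loop callback: it just runs forward passes so the
    quantizer can observe activation ranges (sketch; no gradients needed)."""
    def calibrate_loop(model):
        for batch in batches:
            model(batch)  # forward only; outputs are discarded
    return calibrate_loop

# Hypothetical usage, mirroring the call in the report above:
# atq.quantize(model, quant_cfg, forward_loop=make_calibrate_loop(calib_batches))
```

Under tensor parallelism, each shard sees the same calibration batches, which is one place where per-rank calibration statistics could plausibly diverge.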
-
Jira Link: [DB-13522](https://yugabyte.atlassian.net/browse/DB-13522)
Relevant config options in ora2pg for Table Level Parallelism
```
# This configuration directive adds multiprocess support…
```
-
This blog post provides a good background / introduction: https://threedots.tech/post/go-test-parallelism/
When I ran the following command
```shell
GOMAXPROCS=1 go test -parallel 128 -p 16 -json ./.…
```
-
I see the feature described in https://github.com/vllm-project/vllm/issues/7519. I'd like to know when this feature will be available to try.