-
Hey, I've fine-tuned MeloTTS for an Indian accent and a few Indian languages. I wanted to use the weights in the tone converter, but realized voice_conversion expects the averaged tensor values for sour…
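In case it helps, averaging per-utterance speaker embeddings into a single reference tensor can be sketched as follows. This is a minimal NumPy sketch; the function name, embedding dimension, and layout are assumptions and may not match the tone converter's actual format:

```python
import numpy as np

def average_speaker_embedding(embeddings):
    """Average per-utterance speaker embeddings into one reference tensor.

    `embeddings` is a list of 1-D arrays of equal length (hypothetical
    shape; the real tone-converter embedding layout may differ).
    """
    stacked = np.stack(embeddings, axis=0)  # (num_utterances, dim)
    return stacked.mean(axis=0)             # (dim,)

# usage: three fake 256-dim utterance embeddings
utts = [np.random.randn(256) for _ in range(3)]
avg = average_speaker_embedding(utts)
print(avg.shape)  # (256,)
```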
-
### Describe the bug
Code:
```python
# _t1.py
from TTS.api import TTS
tts = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=False).to("cuda:2")
tts.voice_conversion_to_…
```
-
I can see that optimum-quanto provides several external (weight-only) quantization algorithms such as SmoothQuant and AWQ [here](https://github.com/huggingface/optimum-quanto/tree/main/external).
…
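For context, the basic weight-only idea those external algorithms build on is round-to-nearest quantization of the weight matrix with a per-channel scale; SmoothQuant and AWQ add activation-aware scaling on top. A minimal NumPy sketch of the plain scheme (not quanto's actual implementation; function names are illustrative):

```python
import numpy as np

def quantize_weights_per_channel(w, num_bits=8):
    """Symmetric per-output-channel weight-only quantization (a sketch of
    the basic round-to-nearest idea, not quanto's code)."""
    qmax = 2 ** (num_bits - 1) - 1                      # 127 for int8
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)            # avoid div-by-zero
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4, 16).astype(np.float32)
q, s = quantize_weights_per_channel(w)
# round-to-nearest bounds the per-entry error by half a scale step
err = np.abs(dequantize(q, s) - w).max()
```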
-
## ❓ Question
I'm able to `torch.export` and generate an ExportedProgram with no issues for my model. Upon compiling with `torch_tensorrt`...
```python
ep = torch.export.load("...")
example_inpu…
```
-
To improve accessibility and collaboration, we can upload the CAM model and its weights to the Hugging Face Hub. This requires converting the model weights into PyTorch's `.pt` format to ensure com…
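A minimal sketch of the save/verify/upload flow. The model here is a toy stand-in for the CAM model, and the repo id is a placeholder; `huggingface_hub.upload_file` is the usual upload entry point:

```python
import torch

# toy model standing in for the CAM model (hypothetical architecture)
model = torch.nn.Linear(8, 2)

# 1. dump the weights in PyTorch's native format
torch.save(model.state_dict(), "cam_model.pt")

# 2. reload to verify the round trip
state = torch.load("cam_model.pt")
model.load_state_dict(state)

# 3. upload to the Hub (requires `huggingface_hub` and a login token;
#    the repo id below is a placeholder)
# from huggingface_hub import upload_file
# upload_file(path_or_fileobj="cam_model.pt",
#             path_in_repo="cam_model.pt",
#             repo_id="your-org/cam-model")
```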
-
May I ask why tinygrad is needed for the weight conversion? The script seems to dump the weights with np, which tinygrad then reads afterwards.
-
Hello,
We converted the paxml checkpoint and resumed training with following config:
```
base_config: "base.yml"
tokenizer_path: "/dockerx/vocab/c4_en_301_5Mexp2_spm.model"
dataset_type: "tfds"
…
```
-
I have fine-tuned the "meta-llama-3.1-8b-bnb-4bit" model using Unsloth. I have downloaded the LoRA weights and am able to run inference with them on a Colab GPU.
But I want to use this fine-tuned model for …
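For deployment outside Colab, a common step is to fold the LoRA deltas into the base weights (peft's `merge_and_unload()` does this for you; with a 4-bit bnb base, the weights are dequantized before merging). The underlying arithmetic is W' = W + (alpha/r)·B·A, shown here as a NumPy sketch with toy shapes:

```python
import numpy as np

def merge_lora(w_base, lora_a, lora_b, alpha, r):
    """Fold a LoRA adapter into a base weight: W' = W + (alpha/r) * B @ A.

    Shapes (toy sizes for illustration): W (out, in), A (r, in), B (out, r).
    """
    return w_base + (alpha / r) * (lora_b @ lora_a)

out_dim, in_dim, r, alpha = 6, 4, 2, 16
w = np.random.randn(out_dim, in_dim)
a = np.random.randn(r, in_dim)
b = np.random.randn(out_dim, r)

w_merged = merge_lora(w, a, b, alpha, r)

# merged weight gives the same result as base + adapter applied separately
x = np.random.randn(in_dim)
assert np.allclose(w_merged @ x, w @ x + (alpha / r) * (b @ (a @ x)))
```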
-
Our ONNX converters could be improved.
For converting from PyTorch, [ONNX dynamo](https://pytorch.org/docs/stable/onnx_dynamo.html) looks promising.
-
It is possible to convert GPTQ models without act_order (i.e. when g_idx is not used) to an AWQ gemv-compatible format, since AWQ gemv changed the pack order to a natural order.
GPTQ storage format:
```
q…
```