-
Hi,
Great Job!
I want to fine-tune the model using my own data. However, I can't find the Semantic-Aware Speech Tokenizer in your open-source code. Do you plan to open-source the Semantic-Aware S…
-
```
Traceback (most recent call last):
File "D:\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=exec…
-
I fine-tuned an LLM based on the Llama skeleton and used convert_hf_checkpoint and quantize to complete the quantization. However, when generating, the tokenizer.model file is missing. How can I ope…
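One common workaround (a sketch, not confirmed for this exact setup): `tokenizer.model` is the SentencePiece file shipped with the base Llama checkpoint, and the generate step expects it next to the converted `model.pth`, so copying it over from the original Hugging Face download is usually enough. The directory names and the helper below are hypothetical placeholders.

```python
import shutil
from pathlib import Path


def ensure_tokenizer_model(hf_dir, gptfast_dir):
    """Copy tokenizer.model from the original HF checkpoint directory
    (hypothetical paths) next to the converted/quantized model.pth,
    where the generate script looks for it. Returns True on success."""
    src = Path(hf_dir) / "tokenizer.model"
    if not src.exists():
        # The base model repo may only ship tokenizer.json; in that case
        # the SentencePiece file has to come from the upstream Llama release.
        return False
    shutil.copy(src, Path(gptfast_dir) / "tokenizer.model")
    return True
```

If the fine-tuned repo never contained `tokenizer.model` (only `tokenizer.json`), the file has to be taken from the base model's original distribution instead.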
-
In `lavis/models/blip_models/blip_rel_det.py`
```python
tokenizer = StageBertTokenizer.from_pretrained(
"/public/home/lirj2/projects/LAVIS_GITM/data/bert-base-uncased",
loc…
-
I have set up the config file as described in the README,
but there is an error when loading the tokenizer, as shown in the figure.
So I can't run inference with your pretrained weights.
Can you help fix this bug…
-
```
tokenizer = Tokenizer.from_file(str(tokenizer_path))
Exception: data did not match any variant of untagged enum PyNormalizerTypeWrapper at line 49 column 3
```
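This untagged-enum error typically means the `tokenizer.json` declares a normalizer variant that the installed `tokenizers` version doesn't know, i.e. the file was written by a newer release; the usual fix is upgrading the `tokenizers` package. A small stdlib-only sketch (the helper name is hypothetical) for checking which normalizer the file declares before blaming anything else:

```python
import json
from pathlib import Path


def normalizer_type(tokenizer_json_path):
    """Return the `type` of the normalizer declared in a tokenizer.json,
    or None if no normalizer is set. If the declared type is unknown to
    your installed `tokenizers`, Tokenizer.from_file raises the
    untagged-enum error above."""
    spec = json.loads(Path(tokenizer_json_path).read_text(encoding="utf-8"))
    norm = spec.get("normalizer")
    return norm.get("type") if isinstance(norm, dict) else None
```

If the reported type looks valid, upgrading `tokenizers` in the environment that loads the file is the first thing to try.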
-
### System Info
There was a regression in commit b4727a1216bb21df2795e973063ed07202235d7e that prevents loading of some tokenizers.
### Who can help?
@ArthurZucker
### Information
- …
-
🔴 If you have installed AllTalk in a custom Python environment, I will only be able to provide limited assistance/support. AllTalk draws on a variety of scripts and libraries that are not written or m…
-
I get this error:
```
Traceback (most recent call last):
File "/home/denis/Documents/ai/unsloth/llama3-chat-template.py", line 20, in
model, tokenizer = FastLanguageModel.from_pretrained(…
-
Hey,
I want to train a Tokenizer that uses a custom PreTokenizer. I tried combining [this documentation post](https://huggingface.co/docs/tokenizers/pipeline) and [this example](https://githu…
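For reference, a minimal sketch of plugging a Python object into the pipeline via `pre_tokenizers.PreTokenizer.custom` (the pipe-splitting class, the vocab, and the input string are made-up illustrations; any real pre-tokenization logic goes in `pre_tokenize`):

```python
from tokenizers import Tokenizer, models, pre_tokenizers


class PipePreTokenizer:
    """Hypothetical custom pre-tokenizer: split each piece on '|'
    and drop the delimiter ('removed' behavior)."""

    def pre_tokenize(self, pretok):
        # pretok is a PreTokenizedString; split() takes a callable
        # receiving (index, NormalizedString) for each current piece.
        pretok.split(lambda i, normalized: normalized.split("|", "removed"))


# A fixed WordLevel vocab keeps the example deterministic.
vocab = {"[UNK]": 0, "hello": 1, "world": 2}
tok = Tokenizer(models.WordLevel(vocab, unk_token="[UNK]"))
tok.pre_tokenizer = pre_tokenizers.PreTokenizer.custom(PipePreTokenizer())

enc = tok.encode("hello|world")
print(enc.tokens)  # ['hello', 'world']
```

One caveat worth knowing: a tokenizer holding a custom (pure-Python) pre-tokenizer generally cannot be serialized with `tokenizer.save()`, so the custom component has to be re-attached after loading.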