Closed jtsai-quid closed 8 months ago
Indeed. Could you try with the latest release? Otherwise I'll have a look at what I can do!
I just tried version 0.14.1, and the error still occurs. 😞
hi @ArthurZucker , Would this PR fix this issue? https://github.com/huggingface/hf-hub/pull/34
Ah! Yeah, most probably: we now use the hf-hub API to load files, so if the proxy is an issue there, it will affect us.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
hi @ArthurZucker , I have noticed hf-hub has fixed this issue. https://github.com/huggingface/hf-hub/pull/34 Would it be possible to use the latest version of hf-hub in the tokenizer? Thanks~
Hi hf,
I encountered an issue where I couldn't load the tokenizer using from_pretrained via the http_proxy in version 0.14.0, while it worked successfully in version 0.13.3. This caused the fast tokenizer initialization issue in TGI 1.1.0. https://github.com/huggingface/text-generation-inference/issues/1108
Here is the code snippet that I used for testing.
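The original snippet was not preserved in this thread; a minimal reproduction under the stated assumptions might look like the sketch below. The proxy address is hypothetical, and the failing `Tokenizer.from_pretrained` call is shown commented out since it requires network access through a proxy. It relies only on the standard proxy environment variables, which a well-behaved HTTP client (e.g. Python's urllib, or the client used by tokenizers 0.13.3) picks up automatically.

```python
import os
import urllib.request

# Route HTTP(S) traffic through a proxy (the address here is hypothetical).
os.environ["HTTP_PROXY"] = "http://proxy.example.com:8080"
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8080"

# Python's standard library honors these variables, as did tokenizers 0.13.3:
print(urllib.request.getproxies())

# Under tokenizers 0.14.0 the underlying hf-hub client appears to ignore them,
# so the following download fails behind a proxy (commented out here because
# it needs network access):
# from tokenizers import Tokenizer
# tokenizer = Tokenizer.from_pretrained("bert-base-uncased")
```

If the sketch is accurate, any fix would need hf-hub to read the same `HTTP_PROXY`/`HTTPS_PROXY` environment variables, which is what https://github.com/huggingface/hf-hub/pull/34 addresses.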
Error output:
I suspect this is related to the client refactoring here.
Thanks, and I appreciate any help from you!