huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

Bad request: Can't load config for 'None'. Make sure that: - 'None' is a correct model identifier listed on 'https://huggingface.co/models' - or 'None' is the correct path to a directory containing a config.json file #30470

Closed: KaifAhmad1 closed this issue 6 months ago

KaifAhmad1 commented 6 months ago

System Info

CUDA: 12.1
OS: Windows x64
pip: 24.0
Python: 3.10.10
transformers: 4.40.0
bitsandbytes: 0.43.1

Who can help?

Hey @ArthurZucker @younesbelkada, I am getting a config.json file error.

Information

Tasks

Reproduction

from transformers import TrainingArguments

# Training Arguments
training_arguments = TrainingArguments(
    output_dir='Phi-3-hindi-3.4k-history',
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    optim='paged_adamw_32bit',
    learning_rate=2e-4,
    lr_scheduler_type='cosine',
    save_strategy='epoch',
    logging_steps=10,
    save_steps=10,
    num_train_epochs=10,
    max_steps=200,
    fp16=True,
    warmup_ratio=0.05,
    push_to_hub=True,
)

from trl import SFTTrainer

# SFTTrainer Arguments
trainer = SFTTrainer(
    model=model,
    train_dataset=train_dataset,
    peft_config=peft_config,
    dataset_text_field='text',
    args=training_arguments,
    tokenizer=tokenizer,
    packing=False,
    max_seq_length=512
)

trainer.train()
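
Since push_to_hub=True is set in the TrainingArguments above, the trained adapter could also be uploaded straight from the trainer. A minimal sketch (the Hub repo name defaults to output_dir):

# Optional: with push_to_hub=True, Trainer.push_to_hub uploads the trained
# weights to the Hub; the repo name defaults to output_dir.
trainer.push_to_hub()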

from huggingface_hub import HfApi

username = "kaifahmad"
MODEL_NAME = "microsoft/Phi-3-mini-128k-instruct"
api = HfApi(token="")

output_model_dir = "/content/Phi-3-hindi-3.4k-history"
trainer.model.save_pretrained(output_model_dir)
tokenizer.save_pretrained(output_model_dir)

('/content/Phi-3-hindi-3.4k-history/tokenizer_config.json',
 '/content/Phi-3-hindi-3.4k-history/special_tokens_map.json',
 '/content/Phi-3-hindi-3.4k-history/tokenizer.model',
 '/content/Phi-3-hindi-3.4k-history/added_tokens.json',
 '/content/Phi-3-hindi-3.4k-history/tokenizer.json')
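
Note that the api client above is constructed but never used in this snippet; presumably the saved directory was then uploaded to the Hub. A minimal sketch of that step, assuming the target repo already exists:

# Hypothetical upload step (not shown in the original snippet): push the
# locally saved adapter and tokenizer files to the Hub repository.
api.upload_folder(
    folder_path=output_model_dir,
    repo_id=f"{username}/Phi-3-hindi-3.4k-history",
    repo_type="model",
)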

Expected behavior

An error occurs when starting the Hugging Face Space using Streamlit or Gradio.

Logs


===== Application Startup at 2024-04-25 06:52:09 =====

Collecting usage statistics. To deactivate, set browser.gatherUsageStats to false.

  You can now view your Streamlit app in your browser.

  Network URL: http://10.19.42.211:8501
  External URL: http://34.197.127.12:8501

Fetching model from: https://huggingface.co/kaifahmad/Phi-3-hindi-3.4k-history
Caching examples at: '/home/user/app/gradio_cached_examples/16'
Caching example 1/1
2024-04-25 08:52:29.557 Uncaught app exception
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 304, in hf_raise_for_status
    response.raise_for_status()
  File "/usr/local/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://api-inference.huggingface.co/models/kaifahmad/Phi-3-hindi-3.4k-history

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 584, in _run_script
    exec(code, module.__dict__)
  File "/home/user/app/app.py", line 3, in <module>
    gr.load("models/kaifahmad/Phi-3-hindi-3.4k-history").launch()
  File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 60, in load
    return load_blocks_from_repo(
  File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 99, in load_blocks_from_repo
    blocks: gradio.Blocks = factory_methods[src](name, hf_token, alias, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 373, in from_model
    interface = gradio.Interface(**kwargs)
  File "/usr/local/lib/python3.10/site-packages/gradio/interface.py", line 515, in __init__
    self.render_examples()
  File "/usr/local/lib/python3.10/site-packages/gradio/interface.py", line 861, in render_examples
    self.examples_handler = Examples(
  File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 74, in create_examples
    examples_obj.create()
  File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 307, in create
    self._start_caching()
  File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 358, in _start_caching
    client_utils.synchronize_async(self.cache)
  File "/usr/local/lib/python3.10/site-packages/gradio_client/utils.py", line 858, in synchronize_async
    return fsspec.asyn.sync(fsspec.asyn.get_loop(), func, *args, **kwargs)  # type: ignore
  File "/usr/local/lib/python3.10/site-packages/fsspec/asyn.py", line 103, in sync
    raise return_result
  File "/usr/local/lib/python3.10/site-packages/fsspec/asyn.py", line 56, in _runner
    result[0] = await coro
  File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 479, in cache
    prediction = await Context.root_block.process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1788, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1340, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 759, in wrapper
    response = f(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 357, in query_huggingface_inference_endpoints
    data = fn(*data)  # type: ignore
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/inference/_client.py", line 1208, in question_answering
    response = self.post(
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/inference/_client.py", line 267, in post
    hf_raise_for_status(response)
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 358, in hf_raise_for_status
    raise BadRequestError(message, response=response) from e
huggingface_hub.utils._errors.BadRequestError:  (Request ID: dvWXT1tk921n2FIRNSwUp)

Bad request:
Can't load config for 'None'. Make sure that:

- 'None' is a correct model identifier listed on 'https://huggingface.co/models'

- or 'None' is the correct path to a directory containing a config.json file
KaifAhmad1 commented 6 months ago

Here is the model link: https://huggingface.co/kaifahmad/Phi-3-hindi-3.4k-history
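
A quick way to see why the Inference API reports 'None' is to check whether the uploaded repo actually contains a config.json. A sketch:

from huggingface_hub import HfApi

# List the files in the uploaded repo. A PEFT/LoRA checkpoint saved with
# save_pretrained typically contains only adapter and tokenizer files
# (adapter_config.json, adapter_model.safetensors), not the full config.json
# that the hosted Inference API needs to load the model.
files = HfApi().list_repo_files("kaifahmad/Phi-3-hindi-3.4k-history")
print("config.json" in files)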

vasqu commented 6 months ago

This has nothing to do with the error, but remove the API key from the description and revoke it from your account. API keys are not meant to be shared publicly; they can easily be used maliciously.
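
As an aside, a safer pattern is to read the token from the environment instead of hardcoding it. A sketch (HF_TOKEN is the conventional variable name, which huggingface_hub also picks up automatically):

import os
from huggingface_hub import HfApi

# Read the token from an environment variable rather than embedding it in code.
api = HfApi(token=os.environ.get("HF_TOKEN"))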

amyeroberts commented 6 months ago

@KaifAhmad1 In addition to the wise words of @vasqu, I'd suggest regenerating your API key. Although it's now removed from the message displayed here, it is still accessible through the issue's history.