allenai / cached_path

A file utility for accessing both local and remote files through a unified interface.
https://cached-path.readthedocs.io/
Apache License 2.0

Update huggingface-hub requirement from <0.24.0,>=0.8.1 to >=0.8.1,<0.25.0 #239

Open dependabot[bot] opened 2 months ago


Updates the requirements on huggingface-hub to permit the latest version.
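The effect of the specifier change can be sanity-checked with a small sketch of `>=lower,<upper` range matching. This is a simplification using plain numeric tuples; real resolvers use PEP 440 semantics via the `packaging` library, which also handles pre-releases and epochs.

```python
# Simplified check of which versions a ">=lower,<upper" specifier permits.
# Full PEP 440 matching is provided by the `packaging` library.

def parse(version: str) -> tuple:
    """Turn '0.24.0' into (0, 24, 0) for ordered comparison."""
    return tuple(int(part) for part in version.split("."))

def permits(version: str, lower: str, upper: str) -> bool:
    """True if version satisfies '>=lower,<upper' (inclusive lower, exclusive upper)."""
    return parse(lower) <= parse(version) < parse(upper)

# The old requirement >=0.8.1,<0.24.0 excluded the new release...
print(permits("0.24.0", "0.8.1", "0.24.0"))  # False
# ...while the updated >=0.8.1,<0.25.0 admits the 0.24.x series.
print(permits("0.24.0", "0.8.1", "0.25.0"))  # True
```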

Release notes

Sourced from huggingface-hub's releases.

v0.24.0: Inference, serialization and optimizations

⚡️ OpenAI-compatible inference client!

The InferenceClient's chat completion API is now fully compliant with the OpenAI client, which means it's a drop-in replacement in your script:

```diff
- from openai import OpenAI
+ from huggingface_hub import InferenceClient

- client = OpenAI(
+ client = InferenceClient(
      base_url=...,
      api_key=...,
  )
```

```python
output = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Count to 10"},
    ],
    stream=True,
    max_tokens=1024,
)

for chunk in output:
    print(chunk.choices[0].delta.content)
```

Why switch to InferenceClient if you already use OpenAI? Because it's better integrated with HF services, such as the Serverless Inference API and Dedicated Endpoints. Check out the more detailed answer in this HF Post.

For more details about OpenAI compatibility, check out this guide's section.

(other) InferenceClient improvements

Some new parameters have been added to the InferenceClient, following the latest changes in our Inference API:

  • prompt_name, truncate and normalize in feature_extraction
  • model_id and response_format in chat_completion
  • adapter_id in text_generation
  • hypothesis_template and multi_labels in zero_shot_classification

Of course, all of those changes are also available in the AsyncInferenceClient async equivalent 🤗

... (truncated)

Commits
  • 44b6e0f Release: v0.24.0
  • 83b5d7f Merge branch 'main' into v0.24-release
  • 1f9dbf8 fix: Handle single return value. (#2396)
  • b5e0d76 Update _async_client.py
  • 7fe286b Merge branch 'main' into v0.24-release
  • 9c98a4b Do not mention gitalyUid in expand parameter (#2395)
  • d8e92aa Release: v0.24.0.rc0
  • 36396f1 [InferenceClient] Add support for adapter_id (text-generation) and `respons...
  • 6ddaf44 Fix list_accepted_access_requests if grant user manually (#2392)
  • e370fa6 Prevent empty commits if files did not change (#2389)
  • Additional commits viewable in compare view


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)