v0.16.2: Inference, CommitScheduler and Tensorboard
Inference
Introduced in the v0.15 release, the InferenceClient got a major update in v0.16. The client is now reaching a stable point in terms of features, and upcoming releases will focus on adding support for more tasks.
Async client
Asyncio calls are supported thanks to AsyncInferenceClient. Based on asyncio and aiohttp, it lets you make efficient concurrent calls to the Inference endpoint of your choice. Every task supported by InferenceClient is also available in the async version. Method inputs, outputs, and logic are strictly the same, except that you must await the coroutine.
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> image = await client.text_to_image("An astronaut riding a horse on the moon.")
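Since the client is built on asyncio, several requests can also be dispatched concurrently with standard asyncio tooling. A minimal sketch using asyncio.gather (the prompts are illustrative, and the snippet assumes an async context such as `python -m asyncio`, which allows top-level await):
>>> import asyncio
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> # Dispatch both requests at once and wait for all results (plain asyncio, nothing hub-specific)
>>> images = await asyncio.gather(
...     client.text_to_image("An astronaut riding a horse on the moon."),
...     client.text_to_image("A cat playing chess, oil painting."),
... )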
Support asyncio with AsyncInferenceClient by @Wauplin in #1524
Text-generation
Support for the text-generation task has been added. It is focused on fully supporting endpoints running on the text-generation-inference (TGI) framework. In fact, the code is heavily inspired by TGI's Python client, initially implemented by @OlivierDehaene.
Text generation has four modes depending on the values of details (bool) and stream (bool). By default, a raw string is returned. If details=True, more information about the generated tokens is returned. If stream=True, generated tokens are returned one by one as soon as the server generates them. For more information, check out the documentation.
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
# stream=False, details=False
>>> client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'
# stream=True, details=True
>>> for details in client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True, stream=True):
...     print(details)
TextGenerationStreamResponse(token=Token(id=1425, text='100', logprob=-1.0175781, special=False), generated_text=None, details=None)
...
TextGenerationStreamResponse(token=Token(
id=25,
text='.',
logprob=-0.5703125,
special=False),
generated_text='100% open source and built to be easy to use.',
details=StreamDetails(finish_reason=<FinishReason.Length: 'length'>, generated_tokens=12, seed=None)
)
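The remaining modes behave analogously. For instance, details=True without streaming returns a single response object carrying the full generated text plus token-level details. A rough sketch (the attribute values shown are illustrative, not real model output):
# stream=False, details=True
>>> output = client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True)
>>> output.generated_text
'100% open source and built to be easy to use.'
>>> output.details.generated_tokens
12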
Of course, the async client also supports text-generation (see docs):
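A minimal sketch of the async variant, reusing the same prompt and assuming an async context (as in the AsyncInferenceClient example above):
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'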