Bumps the pip group with 1 update in the /LLM/autogen/autogen_fastapi_ex directory: idna.
Bumps the pip group with 1 update in the /LLM/llama_index/samples/llama-index-milvus-example directory: idna.
Bumps the pip group with 2 updates in the /LLM/llama_index/samples/mixtral_ollama directory: idna and transformers.
Bumps the pip group with 1 update in the /LLM/src/observe_with_langfuse directory: idna.
Bumps the pip group with 1 update in the /data_management/dvc directory: idna.
Bumps the pip group with 2 updates in the /ml-serving/custom-serving/fastapi/local_streaming_llm directory: idna and transformers.
Bumps the pip group with 2 updates in the /ml-serving/custom-serving/fastapi/ray/ray_distilbert directory: idna and transformers.
Bumps the pip group with 2 updates in the /ml-serving/custom-serving/fastapi/ray/ray_stablediffusion directory: idna and transformers.
Bumps the pip group with 2 updates in the /ml-serving/custom-serving/fastapi/ray/ray_yolov5s directory: idna and transformers.
Bumps the pip group with 1 update in the /model-vcs/mlflow/simple_mlflow_fastapi_k8s directory: idna.
Bumps the pip group with 1 update in the /model-vcs/mlflow/sklearn_mlflow directory: idna.
Bumps the pip group with 2 updates in the /ray/ray-air-with-gpt-j-6b directory: idna and transformers.
Bumps the pip group with 2 updates in the /ray/zerocopy_loading directory: idna and transformers.
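Across all of these directories the change itself is the same pin bump; a minimal sketch of the resulting requirements.txt entries (exact file contents vary per directory, and transformers only appears where it was already pinned):

```
idna==3.7             # previously 3.4 or 3.6, depending on the directory
transformers==4.39.3  # previously 4.38.0; only in the directories listed with 2 updates
```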
3.7 (2024-04-11)
++++++++++++++++
Fix issue where specially crafted inputs to encode() could
take exceptionally long amount of time to process. [CVE-2024-3651]
Thanks to Guido Vranken for reporting the issue.
3.6 (2023-11-25)
++++++++++++++++
Fix regression to include tests in source distribution.
3.5 (2023-11-24)
++++++++++++++++
Update to Unicode 15.1.0
String codec name is now "idna2008" as overriding the system codec
"idna" was not working.
Fix typing error for codec encoding
"setup.cfg" has been added for this release due to some downstream
lack of adherence to PEP 517. Should be removed in a future release
so please prepare accordingly.
Removed reliance on a symlink for the "idna-data" tool to comport
with PEP 517 and the Python Packaging User Guide for sdist archives.
Added security reporting protocol for project
Thanks Jon Ribbens, Diogo Teles Sant'Anna, Wu Tingfeng for contributions
to this release.
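The CVE-2024-3651 entry above concerns idna.encode(); a minimal sketch of the affected call follows. The length guard is an illustrative assumption on our part, not part of the upstream fix.

```python
# Minimal sketch of the API affected by CVE-2024-3651. idna.encode()
# converts a Unicode hostname to its ASCII (Punycode) form; before 3.7,
# overly long crafted inputs could make this call take an exceptionally
# long time.
import idna

print(idna.encode("ドメイン.テスト"))  # b'xn--eckwd4c7c.xn--zckzah'

def safe_encode(hostname: str, max_len: int = 253) -> bytes:
    # Defensive assumption: DNS names are capped at 253 characters, so
    # reject anything longer before handing it to the encoder.
    if len(hostname) > max_len:
        raise ValueError("hostname too long")
    return idna.encode(hostname)
```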
A series of fixes for backwards compatibility (AutoAWQ and other quantization libraries, imports from trainer_pt_utils) and functionality (LLaMA tokenizer conversion).
The Llama, Cohere, and Gemma models no longer cache the triangular causal mask unless the static cache is used. The caching was reverted in #29753, which fixes the backwards-compatibility issues with respect to speed and memory consumption while still supporting compile and the static cache. Note that fx is not yet supported for these models; a patch will follow very soon.
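As a hedged illustration of the compile-friendly static-cache path mentioned above (the checkpoint id and prompt are placeholders, not taken from the release notes):

```python
# Sketch: opting into the static KV cache path that remains supported after
# the mask-caching revert. Any Llama/Gemma/Cohere checkpoint should work;
# the id below is a placeholder assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("The static cache keeps shapes fixed so that", return_tensors="pt").to(model.device)
# cache_implementation="static" requests the static cache used with torch.compile.
output = model.generate(**inputs, max_new_tokens=32, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))
```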
New model addition
Cohere open-source model
Command-R is a generative model optimized for long context tasks such as retrieval augmented generation (RAG) and using external APIs and tools. It is designed to work in concert with Cohere's industry-leading Embed and Rerank models to provide best-in-class integration for RAG applications and excel at enterprise use cases. As a model built for companies to implement at scale, Command-R boasts:
Strong accuracy on RAG and Tool Use
Low latency, and high throughput
Longer 128k context and lower pricing
Strong capabilities across 10 key languages
Model weights available on HuggingFace for research and evaluation
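A hedged loading sketch for the new Cohere architecture; the checkpoint id and the chat-template usage below are assumptions based on the Hub release, not part of these notes:

```python
# Sketch: loading Command-R through the newly added Cohere model support.
# The checkpoint id "CohereForAI/c4ai-command-r-v01" is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CohereForAI/c4ai-command-r-v01"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize retrieval augmented generation in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```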
LLaVA-NeXT is the next version of LLaVA, with better support for non-padded images, improved reasoning, OCR, and world knowledge. LLaVA-NeXT even exceeds Gemini Pro on several benchmarks.
Compared with LLaVA-1.5, LLaVA-NeXT has several improvements:
Increased input image resolution to 4x more pixels, allowing it to grasp more visual details. It supports three aspect ratios, at up to 672x672, 336x1344, and 1344x336 resolution.
Better visual reasoning and OCR capability with an improved visual instruction tuning data mixture.
Better visual conversation for more scenarios, covering different applications.
Better world knowledge and logical reasoning.
Along with performance improvements, LLaVA-NeXT maintains the minimalist design and data efficiency of LLaVA-1.5. It reuses the pretrained connector of LLaVA-1.5 and still uses less than 1M visual instruction tuning samples. The largest 34B variant finishes training in ~1 day with 32 A100s.
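A hedged usage sketch for the LLaVA-NeXT classes added in this release; the checkpoint id, image URL, and prompt format are illustrative assumptions:

```python
# Sketch: running LLaVA-NeXT via the new LlavaNextProcessor /
# LlavaNextForConditionalGeneration classes. Checkpoint id, image URL, and
# prompt template are assumptions for illustration only.
import requests
from PIL import Image
from transformers import LlavaNextForConditionalGeneration, LlavaNextProcessor

model_id = "llava-hf/llava-v1.6-mistral-7b-hf"  # assumed checkpoint
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(model_id, device_map="auto")

image = Image.open(requests.get("https://example.com/cat.png", stream=True).raw)
prompt = "[INST] <image>\nWhat is shown in this image? [/INST]"
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```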
Updates idna from 3.4 to 3.7
Release notes
Sourced from idna's releases.
Changelog
Sourced from idna's changelog.
Commits
1d365e1 Release v3.7
c1b3154 Merge pull request #172 from kjd/optimize-contextj
0394ec7 Merge branch 'master' into optimize-contextj
cd58a23 Merge pull request #152 from elliotwutingfeng/dev
5beb28b More efficient resolution of joiner contexts
1b12148 Update ossf/scorecard-action to v2.3.1
d516b87 Update Github actions/checkout to v4
c095c75 Merge branch 'master' into dev
60a0a4c Fix typo in GitHub Actions workflow key
5918a0e Merge branch 'master' into dev
Updates idna from 3.6 to 3.7
Release notes
Sourced from idna's releases.
Changelog
Sourced from idna's changelog.
Commits
1d365e1 Release v3.7
c1b3154 Merge pull request #172 from kjd/optimize-contextj
0394ec7 Merge branch 'master' into optimize-contextj
cd58a23 Merge pull request #152 from elliotwutingfeng/dev
5beb28b More efficient resolution of joiner contexts
1b12148 Update ossf/scorecard-action to v2.3.1
d516b87 Update Github actions/checkout to v4
c095c75 Merge branch 'master' into dev
60a0a4c Fix typo in GitHub Actions workflow key
5918a0e Merge branch 'master' into dev
Updates transformers from 4.38.0 to 4.39.3
Release notes
Sourced from transformers's releases.
... (truncated)
Commits
09f9f56 add version 4.39.3
ac6a350 [generate] fix breaking change for patch (#29976)
839c2a1 [BC] Fix BC for AWQ quant (#29965)
97c00cd Release: v4.39.2
e40fe39 [LlamaSlowConverter] Slow to Fast better support (#29797)
02b1012 [BC] Fix BC for other libraries (#29934)
1b6d501 Safe import of LRScheduler (#29919)
cbe58b4 Release: v4.39.1
056df1d [SuperPoint] Fix doc example (#29816)
e49ebae [cleanup] vestiges of causal mask (#29806)