Querent-ai / querent-python-research


Bump the pip group with 3 updates #313

Open · dependabot[bot] opened 3 months ago

dependabot[bot] commented 3 months ago

Bumps the pip group with 3 updates: langchain-community, requests and transformers.

Updates langchain-community from 0.0.25 to 0.2.5

Release notes

Sourced from langchain-community's releases.

langchain-community==0.2.5

Release langchain-community==0.2.5

Changes since langchain-community==0.2.4

  • community: release 0.2.5 (#22923)
  • docs: Fix wrongly referenced class name in confluence.py (#22879)
  • community[minor]: Fix long_context_reorder.py async (#22839)
  • community[major], experimental[patch]: Remove Python REPL from community (#22904)
  • community[patch]: SitemapLoader restrict depth of parsing sitemap (CVE-2024-2965) (#22903)
  • core[patch]: fix validation of @deprecated decorator (#22513)
  • [Community]: HuggingFaceCrossEncoder score accounting for pairs. (#22578)
  • community[minor]: add chat model llamacpp (#22589)
  • community[minor]: Prem Templates (#22783)
  • community[minor]: Implement ZhipuAIEmbeddings interface (#22821)
  • docs, cli[patch]: document loaders doc template (#22862)
  • ci: Add script to check for pickle usage in community (#22863)
  • community[patch]: FAISS VectorStore deserializer should be opt-in (#22861) (a sketch follows this list)
  • [docs]: added info for TavilySearchResults (#22765)
  • minor functionality change: adding API functionality to tavilysearch (#22761)
  • docs: improved recursive url loader docs (#22648)
  • ci: add testing with Python 3.12 (#22813)
  • community[patch]: fix database uri type in SQLDatabase (#22661)
  • community[patch]: Update root_validators embeddings: llamacpp, jina, dashscope, mosaicml, huggingface_hub, Toolkits: Connery, ChatModels: PAI_EAS (#22828)
  • community[minor]: implement huggingface show_progress consistently (#22682)
  • community[patch]: fix hunyuan message include chinese signature error (#22795) (#22796)
  • community[patch]: bugfix for YoutubeLoader's LINES format (#22815)
  • langchain[minor]: Make EmbeddingsFilters async (#22737)
  • community[patch]: fix hunyuan client json analysis (#22452) (#22767)
  • community[patch]: Support for old clients (Thin and Thick) Oracle Vector Store (#22766)
  • community[patch]: Load YouTube transcripts (captions) as fixed-duration chunks with start times (#21710)
  • community[minor]: Adds a vector store for Azure Cosmos DB for NoSQL (#21676)
  • [Community]: Added Metadata filter support for DocumentDB Vector Store (#22777)
  • Ollama vision support (#22734)
  • community[minor]: fix redis store docstring and streamline initialization code (#22730)
  • community[patch]: Kinetica Integrations handled error in querying; quotes in table names; updated gpudb API (#22724)
  • community[minor]: Add support for OVHcloud AI Endpoints Embedding (#22667)
  • community[patch]: Add missing type annotations (#22758)
  • community[patch]: fix WandbTracer to work with new "RunV2" API (#22673)
  • community[patch]: fix deepinfra inference (#22680)
  • community[patch]: Add function response to graph cypher qa chain (#22690)
  • community[minor]: add Volcengine Rerank (#22700)
  • community[patch]: Small Fix in OutlookMessageLoader (Close the Message once Open) (#22744)
  • Community[minor]: Add language parser for Elixir (#22742)
  • community[patch]: Use Custom Logger Instead of Root Logger in get_user_agent Function (#22691)
  • community[minor]: Add SQL storage implementation (#22207)
  • couchbase: Add the initial version of Couchbase partner package (#22087)
  • community[minor]: Add UpstashRatelimitHandler (#21885)
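As an aside on the FAISS item above (a minimal sketch, not part of the upstream release notes): in these langchain-community releases, loading a locally saved FAISS index involves pickle deserialization and must be opted into explicitly. The index folder and the embeddings class below are placeholder choices.

```python
from langchain_community.embeddings import HuggingFaceEmbeddings  # placeholder embeddings backend
from langchain_community.vectorstores import FAISS

embeddings = HuggingFaceEmbeddings()  # any Embeddings implementation works here

# load_local reads a pickled index from disk, so it now refuses to run unless
# the caller explicitly acknowledges the deserialization risk.
store = FAISS.load_local(
    "faiss_index",  # hypothetical folder written earlier by store.save_local(...)
    embeddings,
    allow_dangerous_deserialization=True,
)
print(store.similarity_search("example query", k=2))
```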

langchain-community==0.2.4

Release langchain-community==0.2.4

... (truncated)

Commits


Updates requests from 2.31.0 to 2.32.2

Release notes

Sourced from requests's releases.

v2.32.2

2.32.2 (2024-05-21)

Deprecations

  • To provide a more stable migration for custom HTTPAdapters impacted by the CVE changes in 2.32.0, we've renamed _get_connection to a new public API, get_connection_with_tls_context. Existing custom HTTPAdapters will need to migrate their code to use this new API. get_connection is considered deprecated in all versions of Requests>=2.32.0.

    A minimal (2-line) example has been provided in the linked PR to ease migration, but we strongly urge users to evaluate if their custom adapter is subject to the same issue described in CVE-2024-35195. (#6710)
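For orientation, a hedged sketch of what such a migration can look like (this is not the example from the linked PR; the subclass and its logging are hypothetical): logic that previously overrode the private _get_connection hook moves to the public get_connection_with_tls_context method, which also receives the TLS-relevant arguments.

```python
import requests
from requests.adapters import HTTPAdapter


class LoggingAdapter(HTTPAdapter):
    """Hypothetical custom adapter updated for requests >= 2.32.0."""

    # Code that used to live in an overridden _get_connection() moves here.
    # The public hook also receives verify/proxies/cert so the pool it
    # returns carries the correct TLS context for this request.
    def get_connection_with_tls_context(self, request, verify, proxies=None, cert=None):
        conn = super().get_connection_with_tls_context(
            request, verify, proxies=proxies, cert=cert
        )
        print(f"pool selected for {request.url}: {conn}")
        return conn


session = requests.Session()
session.mount("https://", LoggingAdapter())
```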

v2.32.1

2.32.1 (2024-05-20)

Bugfixes

  • Add missing test certs to the sdist distributed on PyPI.

v2.32.0

2.32.0 (2024-05-20)

🐍 PYCON US 2024 EDITION 🐍

Security

Improvements

  • verify=True now reuses a global SSLContext which should improve request time variance between first and subsequent requests. It should also minimize certificate load time on Windows systems when using a Python version built with OpenSSL 3.x. (#6667)
  • Requests now supports optional use of character detection (chardet or charset_normalizer) when repackaged or vendored. This enables pip and other projects to minimize their vendoring surface area. The Response.text() and apparent_encoding APIs will default to utf-8 if neither library is present. (#6702)
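A small illustration of the second point (a sketch, not taken from the release notes; the URL is a placeholder): when neither chardet nor charset_normalizer is installed, encoding detection degrades to a utf-8 default instead of failing.

```python
import requests

resp = requests.get("https://example.com")  # placeholder URL

# With both chardet and charset_normalizer absent (e.g. a slimmed, vendored
# copy of requests), apparent_encoding falls back to "utf-8" and resp.text
# decodes with that default rather than raising.
print(resp.apparent_encoding)
print(resp.text[:80])
```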

Bugfixes

  • Fixed bug in length detection where emoji length was incorrectly calculated in the request content-length. (#6589)
  • Fixed deserialization bug in JSONDecodeError. (#6629)
  • Fixed bug where an extra leading / (path separator) could lead urllib3 to unnecessarily reparse the request URI. (#6644)

... (truncated)

Changelog

Sourced from requests's changelog.

2.32.2 (2024-05-21)

Deprecations

  • To provide a more stable migration for custom HTTPAdapters impacted by the CVE changes in 2.32.0, we've renamed _get_connection to a new public API, get_connection_with_tls_context. Existing custom HTTPAdapters will need to migrate their code to use this new API. get_connection is considered deprecated in all versions of Requests>=2.32.0.

    A minimal (2-line) example has been provided in the linked PR to ease migration, but we strongly urge users to evaluate if their custom adapter is subject to the same issue described in CVE-2024-35195. (#6710)

2.32.1 (2024-05-20)

Bugfixes

  • Add missing test certs to the sdist distributed on PyPI.

2.32.0 (2024-05-20)

Security

Improvements

  • verify=True now reuses a global SSLContext which should improve request time variance between first and subsequent requests. It should also minimize certificate load time on Windows systems when using a Python version built with OpenSSL 3.x. (#6667)
  • Requests now supports optional use of character detection (chardet or charset_normalizer) when repackaged or vendored. This enables pip and other projects to minimize their vendoring surface area. The Response.text() and apparent_encoding APIs will default to utf-8 if neither library is present. (#6702)

Bugfixes

  • Fixed bug in length detection where emoji length was incorrectly calculated in the request content-length. (#6589)
  • Fixed deserialization bug in JSONDecodeError. (#6629)
  • Fixed bug where an extra leading / (path separator) could lead urllib3 to unnecessarily reparse the request URI. (#6644)

Deprecations

... (truncated)

Commits
  • 88dce9d v2.32.2
  • c98e4d1 Merge pull request #6710 from nateprewitt/api_rename
  • 92075b3 Add deprecation warning
  • aa1461b Move _get_connection to get_connection_with_tls_context
  • 970e8ce v2.32.1
  • d6ebc4a v2.32.0
  • 9a40d12 Avoid reloading root certificates to improve concurrent performance (#6667)
  • 0c030f7 Merge pull request #6702 from nateprewitt/no_char_detection
  • 555b870 Allow character detection dependencies to be optional in post-packaging steps
  • d6dded3 Merge pull request #6700 from franekmagiera/update-redirect-to-invalid-uri-test
  • Additional commits viewable in compare view


Updates transformers from 4.36.0 to 4.38.0

Release notes

Sourced from transformers's releases.

v4.38: Gemma, Depth Anything, Stable LM; Static Cache, HF Quantizer, AQLM

New model additions

💎 Gemma 💎

Gemma is a new open-source language model series from Google AI that comes in 2B and 7B variants. The release includes both pre-trained and instruction fine-tuned versions, and you can use them via the AutoModelForCausalLM, GemmaForCausalLM or pipeline interfaces!

Read more about it in the Gemma release blogpost: https://hf.co/blog/gemma

import torch  # needed for the float16 dtype below
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.float16)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)

You can use the model with Flash Attention, SDPA, Static cache and the quantization API for further optimizations! (An SDPA sketch follows the examples below.)

  • Flash Attention 2
import torch  # needed for the float16 dtype below
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",
    device_map="auto",
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)

  • bitsandbytes-4bit
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",
    device_map="auto",
    load_in_4bit=True,  # requires the bitsandbytes package
)
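
Beyond the two quoted variants, the SDPA path mentioned above is selected the same way; the following is an editor-added sketch, not part of the upstream release notes.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Illustrative sketch: PyTorch's scaled-dot-product attention backend is
# chosen via the same attn_implementation switch used for Flash Attention.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",
    device_map="auto",
    torch_dtype=torch.float16,
    attn_implementation="sdpa",
)

input_ids = tokenizer("Write me a poem about Machine Learning.", return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```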

... (truncated)

Commits
  • 08ab54a [ gemma] Adds support for Gemma 💎 (#29167)
  • 2de9314 [Maskformer] safely get backbone config (#29166)
  • 476957b 🚨 Llama: update rope scaling to match static cache changes (#29143)
  • 7a4bec6 Release: 4.38.0
  • ee3af60 Add support for fine-tuning CLIP-like models using contrastive-image-text exa...
  • 0996a10 Revert low cpu mem tie weights (#29135)
  • 15cfe38 [Core tokenization] add_dummy_prefix_space option to help with latest is...
  • efdd436 FIX [PEFT / Trainer ] Handle better peft + quantized compiled models (#29...
  • 5e95dca [cuda kernels] only compile them when initializing (#29133)
  • a7755d2 Generate: unset GenerationConfig parameters do not raise warning (#29119)
  • Additional commits viewable in compare view


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will remove the specified ignore condition of the specified dependency

You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/Querent-ai/querent/network/alerts).