microsoft / rag-experiment-accelerator

The RAG Experiment Accelerator is a versatile tool designed to expedite and facilitate the process of conducting experiments and evaluations using Azure Cognitive Search and the RAG pattern.
https://github.com/microsoft/rag-experiment-accelerator

Bump sentence-transformers from 3.0.0 to 3.0.1 #586

Closed · dependabot[bot] closed this 5 months ago

dependabot[bot] commented 5 months ago

Bumps sentence-transformers from 3.0.0 to 3.0.1.

Release notes

Sourced from sentence-transformers's releases.

v3.0.1 - Patch introducing new Trainer features, model card improvements and evaluator fixes

This patch release introduces some improvements for the SentenceTransformerTrainer, as well as some updates for the automatic model card generation. It also patches some minor evaluator bugs and a bug with MatryoshkaLoss. Lastly, every single Sentence Transformer model can now be saved and loaded with the safer model.safetensors files.

Install this version with

# Full installation:
pip install sentence-transformers[train]==3.0.1

# Inference only:
pip install sentence-transformers==3.0.1

SentenceTransformerTrainer improvements

  • Implement gradient checkpointing for lower memory usage during training (#2717)
  • Implement support for push_to_hub=True Training Argument, also implement trainer.push_to_hub(...) (#2718)
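
A minimal training sketch, for context, showing where the two new options plug in (the base model, dataset, and Hub IDs are illustrative placeholders, not part of this release):

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Placeholder base model and dataset, for illustration only.
model = SentenceTransformer("microsoft/mpnet-base")
train_dataset = load_dataset("sentence-transformers/all-nli", "pair", split="train")
loss = MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="output/my-model",
    gradient_checkpointing=True,  # lower memory usage during training (#2717)
    push_to_hub=True,             # now supported as a training argument (#2718)
    hub_model_id="your-username/my-model",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
trainer.push_to_hub()  # also available as a direct call (#2718)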

Model Cards

This patch release improves on the automatically generated model cards in several ways:

  • Your training datasets are now automatically linked if they're on Hugging Face (#2711)
  • A new generated_from_trainer tag is now also added (#2710)
  • The automatically included widget examples are now improved, especially for question-answering. Previously, the widget could give examples of comparing two questions with each other (#2713)
  • If you save a model locally, then load it again and upload it, the generated usage snippet would previously still show:
...
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
...

This placeholder now gets replaced with your new model ID on Hugging Face (#2714); the save, reload, and upload flow is sketched after this list.

  • The exact training dataset size is now included in the model metadata, rather than as a bucket of e.g. 1K<n<10K (#2728)
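
A minimal sketch of that save, reload, and upload flow (the base model, local path, and Hub repository ID are placeholders, not part of this release):

from sentence_transformers import SentenceTransformer

# Placeholder model, path, and Hub repository ID, for illustration only.
model = SentenceTransformer("microsoft/mpnet-base")
model.save("local-model-dir")

reloaded = SentenceTransformer("local-model-dir")
# The generated model card now references the repository you push to,
# rather than the "sentence_transformers_model_id" placeholder (#2714).
reloaded.push_to_hub("your-username/my-model")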

Evaluators fixes

  • Fix the primary metric of evaluators in SequentialEvaluator being ignored in the scores calculation (#2700)
  • Fix confusing print statement in TranslationEvaluator when using print_wrong_matches=True (#1894)
  • Fix bug that prevents you from customizing the primary_metric in InformationRetrievalEvaluator (#2701)
  • Allow passing a list of evaluators to the STTrainer rather than a SequentialEvaluator (#2717)
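
A sketch of the last two fixes, reusing model, args, train_dataset, and loss from the training example above (the query/corpus data and the exact metric key are hypothetical assumptions):

from sentence_transformers import SentenceTransformerTrainer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Tiny hypothetical retrieval dev set: IDs mapped to text, plus relevance judgments.
queries = {"q1": "What does the RAG pattern combine?"}
corpus = {"d1": "Retrieval-augmented generation combines document search with a language model."}
relevant_docs = {"q1": {"d1"}}

ir_evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dev")
# #2701: the headline metric reported by the evaluator can now be customized;
# the metric key below is an assumption and depends on the evaluator's naming scheme.
ir_evaluator.primary_metric = "cosine_ndcg@10"

# #2717: a plain list of evaluators can be passed instead of wrapping them in SequentialEvaluator.
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
    evaluator=[ir_evaluator],
)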

Losses fixes

  • Fix MatryoshkaLoss crash if the first dimension is not the biggest (#2719)
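
A minimal sketch of the configuration that previously crashed (base model and dimensions are illustrative; the largest dimension is intentionally not listed first):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss, MatryoshkaLoss

model = SentenceTransformer("microsoft/mpnet-base")  # placeholder 768-dimensional base model
base_loss = MultipleNegativesRankingLoss(model)

# Before 3.0.1, listing the largest dimension last (i.e. not first) could crash MatryoshkaLoss (#2719).
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[64, 128, 256, 768])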

Security

  • Integrate safetensors with all modules, including Dense, LSTM, CNN, etc. to prevent needing pickled pytorch_model.bin anymore (#2722)
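
For illustration, a sketch of a pipeline with a Dense module that now saves to model.safetensors (model name, projection size, and output path are placeholders):

from sentence_transformers import SentenceTransformer, models

# Placeholder base model and projection size, for illustration only.
transformer = models.Transformer("microsoft/mpnet-base")
pooling = models.Pooling(transformer.get_word_embedding_dimension())
dense = models.Dense(in_features=transformer.get_word_embedding_dimension(), out_features=256)

model = SentenceTransformer(modules=[transformer, pooling, dense])
# Each module, including Dense, is now stored as model.safetensors rather than a pickled pytorch_model.bin (#2722).
model.save("local-mpnet-dense")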

All changes

... (truncated)

Commits
  • 8a02e45 Merge branch 'master' into v3.0-release
  • f012ab3 typo: SentenceTransformersTrainingArguments -> SentenceTransformerTrainingArg...
  • d079878 Specify the exact dataset size as a tag, will be bucketized by HF eventually ...
  • 6ea9903 [feat] Integrate safetensors with Dense, etc. modules too. (#2722)
  • 8ded768 Merge pull request #2727 from tomaarsen/can_return_loss
  • d9c2b0c Merge pull request #2726 from tomaarsen/fix/no_evaluator
  • 08b340b Set can_return_loss=True globally, instead of via the data collator
  • 4d3e357 Fix edge case with evaluator being None
  • b5e98e1 Merge pull request #2724 from tomaarsen/improve_typing
  • 936f283 Add py.typed to satisfy mypy (etc.) requirements
  • Additional commits viewable in compare view


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

dependabot[bot] commented 5 months ago

Looks like sentence-transformers is up-to-date now, so this is no longer needed.