atong01 / conditional-flow-matching

TorchCFM: a Conditional Flow Matching library
https://arxiv.org/abs/2302.00482
MIT License
1.25k stars · 101 forks

Bump torchmetrics from 0.11.0 to 1.1.2 #52

Closed · dependabot[bot] closed this 1 year ago

dependabot[bot] commented 1 year ago

Bumps torchmetrics from 0.11.0 to 1.1.2.

Release notes

Sourced from torchmetrics's releases.

Weekly patch release

[1.1.2] - 2023-09-11

Fixed

  • Fixed tie breaking in ndcg metric (#2031)
  • Fixed bug in BootStrapper that could lead to a crash when very few samples were evaluated (#2052)
  • Fixed bug when creating multiple plots that led to not all plots being shown (#2060)
  • Fixed performance issues in RecallAtFixedPrecision for large batch sizes (#2042)
  • Fixed bug related to MetricCollection used with custom metrics that have prefix/postfix attributes (#2070)

Contributors

@GlavitsBalazs, @SkafteNicki

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Weekly patch release

[1.1.1] - 2023-08-29

Added

  • Added average argument to MeanAveragePrecision (#2018)

Fixed

  • Fixed bug in PearsonCorrCoef when updated on single samples at a time (#2019)
  • Fixed support for pixel-wise MSE (#2017)
  • Fixed bug in MetricCollection when used with multiple metrics that return dicts with the same keys (#2027)
  • Fixed bug in detection intersection metrics resulting in wrong values when class_metrics=True (#1924)
  • Fixed missing attributes higher_is_better and is_differentiable for some metrics (#2028)

Contributors

@adamjstewart, @SkafteNicki

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Into Generative AI

In v1.1 of TorchMetrics, five new metrics have been added in total, bringing the number of metrics up to 128! In particular, we have two exciting new metrics for evaluating your favorite generative models for images.

Perceptual Path Length

Introduced in the famous StyleGAN paper back in 2018, the perceptual path length (PPL) metric quantifies how smoothly a generator interpolates between points in its latent space. Why does the smoothness of your generative model's latent space matter? Suppose you find a point in latent space that generates an image you like, and you want to see whether slightly changing that latent point yields an even better image. If your latent space is not smooth, this becomes very hard, because even small changes to the latent point can lead to large changes in the generated image.
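
To make this concrete, here is a minimal sketch of the computation, not the torchmetrics API (the library's ready-made metric is the PerceptualPathLength class listed in the v1.1.0 changelog below). Both `generator` (latents to images) and `perceptual_dist` (a perceptual distance such as LPIPS) are hypothetical stand-ins, and the spherical interpolation StyleGAN uses in Z space is simplified here to linear interpolation:

```python
import torch

def perceptual_path_length(generator, perceptual_dist,
                           num_samples=1024, latent_dim=512, eps=1e-4):
    """Illustrative PPL: mean perceptual distance between images generated
    from latent pairs an epsilon apart, scaled by 1/eps^2. A smooth latent
    space yields small image changes per step, hence a low score."""
    z0 = torch.randn(num_samples, latent_dim)
    z1 = torch.randn(num_samples, latent_dim)
    t = torch.rand(num_samples, 1)
    # A random point on each interpolation path, plus a point eps further along.
    za = torch.lerp(z0, z1, t)
    zb = torch.lerp(z0, z1, t + eps)
    # Perceptual change between the two generated images, per unit step.
    d = perceptual_dist(generator(za), generator(zb)) / eps**2
    return d.mean()
```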

CLIP image quality assessment

CLIP image quality assessment (CLIPIQA) is a very recently proposed metric, introduced in this paper. The metric builds on the OpenAI CLIP model, a multi-modal model for connecting text and images. The core idea is that different properties of an image can be assessed by measuring how similar the CLIP embedding of the image is to the CLIP embeddings of a positive and a negative prompt for that property.
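
The scoring rule itself is simple enough to sketch; the snippet below is illustrative only (torchmetrics ships this as CLIPImageQualityAssessment, per the v1.1.0 changelog below) and assumes you already have L2-normalized CLIP embeddings for the image and for a positive/negative prompt pair:

```python
import torch

def clip_iqa_score(img_emb, pos_emb, neg_emb, logit_scale=100.0):
    """Illustrative CLIPIQA for one property: softmax over the image's
    scaled cosine similarity to a positive vs. a negative prompt embedding.
    All inputs are assumed to be L2-normalized CLIP embeddings; logit_scale
    approximates CLIP's learned temperature."""
    sims = logit_scale * torch.stack([img_emb @ pos_emb, img_emb @ neg_emb])
    # Probability mass on the positive prompt is the score in [0, 1].
    return torch.softmax(sims, dim=0)[0]

# For the "quality" property, the prompt pair could be e.g.
# "Good photo." (positive) vs. "Bad photo." (negative).
```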

... (truncated)

Changelog

Sourced from torchmetrics's changelog.

[1.1.2] - 2023-09-11

Fixed

  • Fixed tie breaking in ndcg metric (#2031)
  • Fixed bug in BootStrapper that could lead to a crash when very few samples were evaluated (#2052)
  • Fixed bug when creating multiple plots that led to not all plots being shown (#2060)
  • Fixed performance issues in RecallAtFixedPrecision for large batch sizes (#2042)
  • Fixed bug related to MetricCollection used with custom metrics that have prefix/postfix attributes (#2070)

[1.1.1] - 2023-08-29

Added

  • Added average argument to MeanAveragePrecision (#2018)

Fixed

  • Fixed bug in PearsonCorrCoef when updated on single samples at a time (#2019)
  • Fixed support for pixel-wise MSE (#2017)
  • Fixed bug in MetricCollection when used with multiple metrics that return dicts with the same keys (#2027)
  • Fixed bug in detection intersection metrics resulting in wrong values when class_metrics=True (#1924)
  • Fixed missing attributes higher_is_better and is_differentiable for some metrics (#2028)

[1.1.0] - 2023-08-22

Added

  • Added source aggregated signal-to-distortion ratio (SA-SDR) metric (#1882)
  • Added VisualInformationFidelity to image package (#1830)
  • Added EditDistance to text package (#1906)
  • Added top_k argument to RetrievalMRR in retrieval package (#1961)
  • Added support for evaluating "segm" and "bbox" detection in MeanAveragePrecision at the same time (#1928)
  • Added PerceptualPathLength to image package (#1939)
  • Added support for multioutput evaluation in MeanSquaredError (#1937)
  • Added argument extended_summary to MeanAveragePrecision such that precision, recall, iou can be easily returned (#1983)
  • Added warning to ClipScore if long captions are detected and truncated (#2001)
  • Added CLIPImageQualityAssessment to multimodal package (#1931)
  • Added new property metric_state to all metrics for users to investigate currently stored tensors in memory (#2006)

[1.0.3] - 2023-08-08

Added

  • Added warning to MeanAveragePrecision if too many detections are observed (#1978)

Fixed

... (truncated)

Commits
  • 520625c releasing 1.1.2
  • 813f3c0 Bugfix for custom prefix/postfix and metric collection (#2070)
  • 5c2db0b Fix tie breaking in ndcg metric (#2031)
  • 725f493 Clarify language about one-vs-rest for classification metrics (#2051)
  • 7325d6f Fix bootstrapping with few samples (#2052)
  • 10243d7 Fix plot splitter (#2060)
  • 3f15c0d build(deps): bump pytest from 7.4.0 to 7.4.1 in /requirements (#2055)
  • d13d637 build(deps): update jiwer requirement from <=3.0.2,>=2.3.0 to >=2.3.0,<3.1.0 ...
  • cfc6ca1 Docs: fix code reference links (#2044)
  • 7f25079 build(deps): bump pypa/gh-action-pypi-publish from 1.8.8 to 1.8.10 (#2043)
  • Additional commits viewable in compare view


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
dependabot[bot] commented 1 year ago

Superseded by #58.