-
### We would like to learn about your use case. For example, if this feature is needed to adopt Narwhals in an open source project, could you please enter the link to it below?
I am abstracting …
-
Hi,
I am looking for some guidance on how to compute ranking metrics such as mean average precision. I am trying to achieve this through "[RankingMetrics](https://spark.apache.org/docs/latest/api/j…
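Since the Spark docs link above is truncated, here is a minimal pure-Python sketch of mean average precision as ranking-evaluation APIs such as Spark's `RankingMetrics` typically define it. The function names and signatures here are illustrative, not part of any library:

```python
def average_precision(recommended, relevant):
    """AP for one query: `recommended` is a ranked list of item ids,
    `relevant` is the set of ground-truth relevant item ids."""
    if not relevant:
        return 0.0
    hits = 0
    precision_sum = 0.0
    for rank, item in enumerate(recommended, start=1):
        if item in relevant:
            hits += 1
            precision_sum += hits / rank  # precision at this cut-off
    return precision_sum / len(relevant)

def mean_average_precision(queries):
    """`queries` is an iterable of (recommended, relevant) pairs."""
    queries = list(queries)
    return sum(average_precision(r, g) for r, g in queries) / len(queries)
```

For example, `average_precision([1, 2, 3], {1, 3})` averages the precision at the two hit positions (1/1 and 2/3) over the two relevant items.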
-
This issue is to acknowledge and respond to a use case for changing the way we present results to users in the ICEES KG. Specifically, ARAs are interested in considering how to use the various statist…
-
Adding a feature that compares feature importance or SHAP/LIME results across multiple models.
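As an illustration of what such a comparison might look like (the function name and data layout are assumptions, not an existing API), a sketch that aligns per-model importance scores on a shared feature axis so disagreements are easy to scan:

```python
def compare_importances(models):
    """`models`: {model_name: {feature_name: importance_score}}.
    Returns one row per feature with each model's score (0.0 if the
    model did not report that feature)."""
    features = sorted({f for scores in models.values() for f in scores})
    return [
        {"feature": f, **{m: scores.get(f, 0.0) for m, scores in models.items()}}
        for f in features
    ]
```

The importance dicts could come from `model.feature_importances_`, mean absolute SHAP values, or LIME weights; the alignment step is the same regardless of the source.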
-
Create a usage example showing how these ranking metrics can be used:
```
+------------------------------------------------------------+-----------------------------------------------------------------…
```
-
### URL Hash
`#/sklearn/sklearn.metrics._scorer/get_scorer/scoring`
### Actual Annotation Type
`@optional`
### Actual Annotation Inputs
```json5
{
"target": "sklearn/sklearn.metrics._scorer…
-
Implement a custom loss function that is a weighted combination of metrics,
something like
loss = 0.8 * diceCE + 0.1 * hausdorff + 0.1 * other
Implement:
https://lightning.ai/docs/torchmetrics/stab…
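A minimal sketch of the weighted combination described above, assuming the three component losses (e.g. Dice+CE, a Hausdorff-distance term, and some third metric) have already been computed as scalars by the underlying library; the function name and signature are illustrative:

```python
def combined_loss(dice_ce, hausdorff, other, weights=(0.8, 0.1, 0.1)):
    """Weighted sum of precomputed scalar loss components.
    Weights are checked to sum to 1 so the combined loss stays on a
    scale comparable to its components."""
    w_dice, w_hd, w_other = weights
    if abs(w_dice + w_hd + w_other - 1.0) > 1e-6:
        raise ValueError("weights should sum to 1")
    return w_dice * dice_ce + w_hd * hausdorff + w_other * other
```

With tensor-valued components (e.g. from torchmetrics or MONAI) the same arithmetic works unchanged, since scalar tensors support `*` and `+` and the result remains differentiable.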
-
About Hacktoberfest contributions: https://github.com/evidentlyai/evidently/wiki/Hacktoberfest-2024
**Description**
The ROUGE (Recall-Oriented Understudy for Gisting Evaluation) metric evaluates…
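To make the definition concrete, a minimal sketch of ROUGE-1 recall (clipped unigram overlap divided by reference length) in plain Python, assuming simple whitespace tokenization; production implementations add stemming, n-gram variants, and F-measures:

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    """ROUGE-1 recall: fraction of reference unigrams that also appear
    in the candidate, with counts clipped to the candidate's counts."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(count, cand[tok]) for tok, count in ref.items())
    total = sum(ref.values())
    return overlap / total if total else 0.0
```

For example, the candidate "the cat sat" recovers 3 of the 6 reference tokens in "the cat sat on the mat", giving a recall of 0.5.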
-
I'd like to ask about using the `'sample_weight'` argument for both the Retrieval and the Ranking Tasks' call methods.
[Retrieval docs](https://www.tensorflow.org/recommenders/api_docs/python/tfrs…
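Independent of the TFRS specifics (the docs links above are truncated), `sample_weight` arguments conventionally apply a per-example weighting when reducing a metric or loss, which typically amounts to a weighted mean. A minimal sketch of that reduction in plain Python (names here are illustrative, not the TFRS API):

```python
def weighted_mean(values, sample_weight=None):
    """Reduce per-example metric values with optional per-example
    weights; with no weights this is the plain mean."""
    if sample_weight is None:
        sample_weight = [1.0] * len(values)
    total_weight = sum(sample_weight)
    if total_weight == 0:
        return 0.0
    return sum(v * w for v, w in zip(values, sample_weight)) / total_weight
```

Whether the Retrieval and Ranking tasks thread the weight all the way into their internal metrics the same way is exactly the question the issue raises, so the library docs remain authoritative.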