-
**Your Question**
I am using the following code to evaluate my dataset. I recently upgraded from 0.1.13 to 0.1.18 to use the new metrics (noise_sensitivity_relevant, noise_sensitiv…
-
### Is your feature request related to a problem? Please describe.
Upon reviewing the Boolean distance metrics in `scipy.spatial.distance`, I find a number of inconsistencies with respect to non-Bo…
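For reference, the Boolean definition that `scipy.spatial.distance.jaccard` is documented against can be sketched in pure Python (a hypothetical helper for illustration, not scipy's actual implementation — the inconsistencies in question arise when inputs are not Boolean):

```python
def jaccard_bool(u, v):
    """Boolean Jaccard distance: number of disagreeing positions divided
    by the number of positions where at least one vector is True.
    Two all-False vectors are treated as identical (distance 0)."""
    union = sum(a or b for a, b in zip(u, v))
    if union == 0:
        return 0.0
    mismatches = sum(bool(a) != bool(b) for a, b in zip(u, v))
    return mismatches / union

# [True, False, True] vs [True, True, False]: 2 disagreements over a
# union of 3 active positions.
print(jaccard_bool([True, False, True], [True, True, False]))  # 0.666...
```

With non-Boolean input, each metric must decide whether to threshold, coerce, or reject — which is where definitions across the module can diverge.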
-
Can you please disclose the ground truth (GT) of the test set and how the test metrics are calculated? When I use semantic similarity for scoring, the results of the eval set on Obj An and Com An are very different…
-
# Summary
[summary]: #summary
This RFC proposes adding support for the latest `pgvector` features into the `vecs` Python client. These include new vector types (`halfvec`, `sparsevec`), enhanced i…
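For context, the two new column types named above are declared in SQL like any other pgvector column (pgvector ≥ 0.7.0; the dimensions here are illustrative only):

```sql
-- half-precision (2-byte float) embeddings
CREATE TABLE docs_half (id bigserial PRIMARY KEY, embedding halfvec(1536));

-- sparse vectors: only non-zero elements are stored
CREATE TABLE docs_sparse (id bigserial PRIMARY KEY, embedding sparsevec(30000));
```

Supporting these in `vecs` would mean surfacing the type choice at collection creation time and mapping it to the corresponding column type.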
-
from datasets import Dataset

questions = ["恐龙是怎么被命名的?",  # "How were dinosaurs named?"
             "恐龙怎么分类的?",  # "How are dinosaurs classified?"
             "体型最大的是哪种恐龙?",  # "Which dinosaur was the largest?"
             "体型最长的是哪种恐龙?它在哪里被发现?",  # "Which dinosaur was the longest, and where was it found?"
             "恐龙采样什么样的方式繁殖?",  # "How did dinosaurs reproduce?"
             "恐…
-
#### Introduction
Vector databases have gained significant importance due to the rise of AI, machine learning, and deep learning applications. These databases store high-dimensional vectors repre…
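The core operation behind this kind of similarity search can be illustrated with a plain cosine-similarity computation (a generic sketch, independent of any particular database):

```python
from math import sqrt

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: dot(u, v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Parallel vectors score 1.0 (up to float error); orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))
```

A vector database's job is to answer "which stored vectors are most similar to this query?" efficiently at scale, typically via approximate nearest-neighbour indexes rather than the brute-force loop above.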
-
To keep things a bit more organized, here is a single list of the metrics we are currently missing.
## Cell-aggregation metrics
- [ ] https://github.com/r-spatialecology/landscapemetrics/issues/288
…
-
Hi - The current code does not seem to cover the proposed evaluation portion. Would the authors consider sharing their evaluation pipeline? More specifically, the implementations behind AI…
-
Hey,
This is not an actual issue with the project, but I didn't know where else to ask. I have a dataframe that contains m/z and intensity arrays as well as cluster labels. I want to eva…
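One generic way to evaluate a labelling like this without ground truth is an internal validity index such as the silhouette coefficient. A minimal pure-Python sketch follows (scikit-learn's `silhouette_score` does the same job on real data; it assumes at least two clusters and Euclidean distance over whatever feature representation you choose for the spectra):

```python
from math import dist  # Euclidean distance between coordinate tuples

def silhouette(points, labels):
    """Mean silhouette coefficient: per point, (b - a) / max(a, b), where
    a = mean distance to the point's own cluster and b = mean distance
    to the nearest other cluster. Scores near 1 mean tight, well-separated
    clusters; scores near 0 or below suggest overlapping clusters."""
    clusters = {}
    for p, label in zip(points, labels):
        clusters.setdefault(label, []).append(p)
    scores = []
    for p, label in zip(points, labels):
        own = clusters[label]
        if len(own) == 1:
            scores.append(0.0)  # common convention for singleton clusters
            continue
        # dist(p, p) == 0, so summing over the whole cluster and dividing
        # by (size - 1) excludes the point itself.
        a = sum(dist(p, q) for q in own) / (len(own) - 1)
        b = min(
            sum(dist(p, q) for q in other) / len(other)
            for m, other in clusters.items() if m != label
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two well-separated toy clusters score close to 1.
print(silhouette([(0, 0), (0, 1), (10, 0), (10, 1)], [0, 0, 1, 1]))
```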
-
After looking over its documentation in more detail, I've seen that splink has some nice utilities for [visualising string comparison metrics](https://moj-analytical-services.github.io/splink/topic_gu…
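As background on the kind of metric those utilities visualise, the Levenshtein edit distance can be computed with the standard dynamic-programming recurrence (a generic sketch, not splink's implementation):

```python
def levenshtein(s, t):
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn s into t."""
    # prev[j] holds the distance between the first i-1 chars of s and
    # the first j chars of t; the table is built one row at a time.
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, start=1):
        curr = [i]
        for j, ct in enumerate(t, start=1):
            curr.append(min(
                prev[j] + 1,               # delete cs
                curr[j - 1] + 1,           # insert ct
                prev[j - 1] + (cs != ct),  # substitute (free on a match)
            ))
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```

Comparison-level charts like splink's essentially show how thresholds on a score like this partition real record pairs.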