-
Why can we only test data that is included in the keys of `odometry_benchmark`? As the code below shows:
```
if dataset_name not in dataset.odometry_benchmark.keys():
co…
-
I wonder what to do with the following observation: future LLMs will see our repo during training and will thus probably be very good at passing our tests. Is that a good or a bad thing?
-
# Crosslingual phonological feature discrimination
Similar to existing SSLR classification probes ([Cormac English et al., 2022](https://aclanthology.org/2022.sigmorphon-1.9.pdf)), we evaluate whethe…
-
Train and store a random-forest (RF) classifier on a sample of data pooled from many countries, then compare its performance on a benchmark dataset against a classifier trained on data from that country alone.
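A minimal sketch of that comparison using scikit-learn. The features, labels, country split, and benchmark subset below are synthetic stand-ins, not the actual data pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in: per-country features and binary labels.
countries = ["A", "B", "C", "D"]
data = {c: (rng.normal(loc=i, size=(200, 8)), rng.integers(0, 2, 200))
        for i, c in enumerate(countries)}

target = "A"  # the country whose benchmark set we evaluate on
X_bench, y_bench = data[target][0][:50], data[target][1][:50]

# Pooled model: training data sampled from every country.
X_pool = np.vstack([data[c][0][50:] for c in countries])
y_pool = np.concatenate([data[c][1][50:] for c in countries])
pooled = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_pool, y_pool)

# Local model: trained only on the target country's data.
local = RandomForestClassifier(n_estimators=100, random_state=0).fit(
    data[target][0][50:], data[target][1][50:])

acc_pooled = accuracy_score(y_bench, pooled.predict(X_bench))
acc_local = accuracy_score(y_bench, local.predict(X_bench))
```

Comparing `acc_pooled` against `acc_local` on the same held-out benchmark is the core of the experiment; both models should also be serialized (e.g. with `joblib`) so the comparison is reproducible.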
-
We've been asked by several folks for a matrix of benchmarks across different algorithms and different dataset scales. We should go ahead and build one, published in RAFT's documentatio…
-
This is the fine-tuning config script I modified:
```python
# Copyright (c) OpenMMLab. All rights reserved.
import torch
from datasets import load_dataset
from mmengine.dataset import DefaultSampler
from mmengine.hooks import (Checkpoi…
```
-
The most frequently used benchmark in time series classification/regression is the [UCR datasets](https://timeseriesclassification.com/), which consists of 128 time-series datasets. Both…
-
```
Traceback (most recent call last):
  File "tools/train_net.py", line 156, in <module>
    main()
  File "tools/train_net.py", line 152, in main
    model = train(cfg, args.local_rank, args.distributed)
…
```
-
Thank you for your solid work.
Does the repo implement functionality that picks the model weights that perform best on the validation dataset and evaluates them on the test dataset?
From the code below, it seems that the …
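Generically, best-on-validation checkpoint selection looks like the sketch below; `select_and_test` and its callable arguments are hypothetical placeholders, not this repo's API:

```python
# Hypothetical sketch of "pick best-on-val weights, evaluate on test once".
def select_and_test(model_states, evaluate_val, evaluate_test):
    """model_states: candidate weight snapshots (e.g. one per epoch).
    evaluate_val / evaluate_test: callables returning a scalar metric."""
    best_state = max(model_states, key=evaluate_val)  # highest val metric wins
    return evaluate_test(best_state)                  # test set touched only once

# Toy usage: three epochs, val metric grows with epoch number.
states = [{"epoch": e} for e in range(3)]
score = select_and_test(states, lambda s: s["epoch"], lambda s: 100 + s["epoch"])
```

The key property is that the test set is consulted exactly once, after selection, so the reported test metric is not inflated by checkpoint shopping.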
-
- [ ] [blog/mteb.md at main · huggingface/blog](https://github.com/huggingface/blog/blob/main/mteb.md?plain=1)
# Title: blog/mteb.md at main · huggingface/blog
**Description:**
"---
title: "MTEB: …