-
Dear Authors,
Thank you for releasing the evaluation code.
Could you provide details on how to set up the training and evaluation datasets/benchmarks? Including the Twitter dataset you mention in…
-
Hi, I'd like to run the SPFresh code on a billion-scale dataset.
In this [document](https://github.com/SPFresh/SPFresh/tree/main/Script_AE), it says that "we strongly recommend the reviewers to us…
-
Hi All,
Thank you for your excellent work!
I have two questions:
1. You request that the dataset be organized as follows:
```
ONCE_Benchmark
├── data
│   ├── once
│   │   ├── ImageSets
│   │   │   ├── …
-
Hi, I want to reproduce the results of Visualized BGE, but the zero-shot benchmarks, such as WebQA, are not clearly specified. Could you provide the evaluation datasets and code for the zero-shot benchmarks? Thanks!
-
One can [(re)write a dataset (partitioned or not) without reading the full thing into memory with pyarrow](https://arrow.apache.org/docs/python/dataset.html#writing-datasets). We currently have a benc…
-
# RFC133: Creating a benchmark dataset for OCR
## Named Concepts
## Summary
Create a benchmark dataset for OCR from all the transcribed line images; use them to filter out line images randomly …
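One way to hold out a random benchmark subset of line images could look like the sketch below. The function name, the list-of-paths input, and the hold-out fraction are all hypothetical; the RFC's actual filtering rule is not shown in the excerpt.

```python
import random


def split_line_images(paths, holdout_frac=0.1, seed=0):
    """Randomly hold out a fraction of transcribed line images as a benchmark set.

    Returns (train, benchmark) lists of paths. A fixed seed makes the
    split reproducible across runs. All names here are illustrative.
    """
    rng = random.Random(seed)
    shuffled = list(paths)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * holdout_frac)
    return shuffled[cut:], shuffled[:cut]


train, bench = split_line_images(
    [f"line_{i}.png" for i in range(100)], holdout_frac=0.2
)
print(len(train), len(bench))  # 80 20
```

Seeding the split is what lets the benchmark stay stable while the rest of the pipeline changes.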
-
Hello, when reading your code, specifically for all benchmarks, the data used for both evaluation and training is the same dataset, and the original paper does not mention the split ratio or where …
-
We can use:
https://github.com/tiago4orion/DataGen
To generate lots of data and then we can evaluate how fast we really are :-)
-
I was wondering if you still have the dataset you used to create the deep learning graphs in your paper. I think these datasets can be a very interesting benchmark. The deep offline RL space is curren…
-
## Description
When requesting tokens-per-second benchmark metrics (`-t` option specified) while providing the path to the tokenizer.json file as well as a payloads dataset, aws curl returns the wa…