-
### Data Owner Name
ONSE MEDIA
### Data Owner Country/Region
Korea, Republic of
### Data Owner Industry
Arts & Recreation
### Website
https://www.onsemedia.com/
### Social Media Handle
filsog…
-
Hello, I have run the training and embedding extraction, and I'm wondering how I can see examples of the text that the model retrieved.
The embeddings and h5 files seem to be mostly numeric. How do …
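The numeric files alone can't show retrieved text; the usual approach is to keep the caption list that produced the embeddings, in the same row order, and map retrieved indices back to it. A minimal sketch of that idea, with made-up vectors and captions (the variable names and file layout here are assumptions, not this repo's actual format):

```python
import numpy as np

def retrieve_texts(query_vec, caption_embs, captions, k=3):
    """Return the k captions whose embeddings are most similar to query_vec."""
    # cosine similarity between the query and every caption embedding
    q = query_vec / np.linalg.norm(query_vec)
    c = caption_embs / np.linalg.norm(caption_embs, axis=1, keepdims=True)
    sims = c @ q
    top = np.argsort(-sims)[:k]          # indices of the best-matching rows
    return [(captions[i], float(sims[i])) for i in top]

# toy demo: captions[i] is the text behind embedding row i
captions = ["a dog runs", "a cat sleeps", "a man cooks"]
embs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
print(retrieve_texts(np.array([0.9, 0.1]), embs, captions, k=2))
```

With real data, the embedding rows would come from the h5 file and `captions` from whatever text file was used at extraction time.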
-
I have a question regarding the text queries in the DiDeMo test set.
In the provided file `data/datasets/didemo/QuerySet/test-query.txt`, I see that only a single caption appears for each video. I…
-
### Version
1
### DataCap Applicant
BitsAndBytes
### Project ID
1
### Data Owner Name
Human Pangenome Reference Consortium
### Data Owner Country/Region
United States
### Data Owner Industry…
-
### Feature Request
As demonstrated in this video https://youtu.be/QMaWfbosR_E, there is potential to use Retrieval-Augmented Generation to redirect LLM calls to our own functions for specific use c…
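The routing idea can be sketched in a few lines: register each of our own functions under a short description, retrieve the description that best matches the user's request, and dispatch to that function instead of letting the LLM answer. Everything below is hypothetical (the function names, descriptions, and the keyword-overlap stand-in for real embedding similarity):

```python
def get_weather(city):       # hypothetical business function
    return f"weather for {city}"

def get_invoice(customer):   # hypothetical business function
    return f"invoice for {customer}"

# description -> callable; in a real system the descriptions would be embedded
REGISTRY = {
    "current weather forecast city": get_weather,
    "billing invoice payment customer": get_invoice,
}

def route(request: str):
    words = set(request.lower().split())
    # pick the description sharing the most words with the request
    best = max(REGISTRY, key=lambda desc: len(words & set(desc.split())))
    return REGISTRY[best]

handler = route("what is the weather in Berlin")
print(handler("Berlin"))
```

A production version would replace the word overlap with vector similarity over the description embeddings, which is exactly where the RAG machinery plugs in.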
-
Not as crazy as this: http://www.youtube.com/watch?v=kv_uyUTx5Po, but in the same direction: podfilter.de could analyze topics, speakers, show notes, links, texts, comments, whatever, to analyze the…
-
My goal is to build a unique multimodal WooCommerce search experience with Vespa multivectors and a hybrid ranking over text BM25, text vectors, and image vectors.
For instance, e-commerce sites can use:
…
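The hybrid ranking can be illustrated as a weighted sum of a min-max-normalised BM25 score and a cosine similarity per modality. This is a plain-Python sketch, not Vespa rank-profile syntax, and the weights and BM25 bounds are made-up placeholders to be tuned on real relevance judgements:

```python
def hybrid_score(bm25, text_sim, image_sim,
                 bm25_min=0.0, bm25_max=25.0,
                 w_bm25=0.4, w_text=0.4, w_image=0.2):
    # bring BM25 onto the same [0, 1] scale as the cosine similarities
    norm_bm25 = (bm25 - bm25_min) / (bm25_max - bm25_min)
    return w_bm25 * norm_bm25 + w_text * text_sim + w_image * image_sim

# a product matching on text and image vectors outranks a keyword-only hit
print(hybrid_score(bm25=20.0, text_sim=0.9, image_sim=0.8))
print(hybrid_score(bm25=22.0, text_sim=0.3, image_sim=0.1))
```

In Vespa, the same combination would live in a rank profile's first-phase expression rather than in application code, but the scoring logic is the same.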
-
Hi Team,
First of all, I want to thank the authors for such great work.
Regarding the Video-to-Text (V2T) retrieval demo code provided (https://github.com/OpenGVLab/InternVideo/tree/main/In…
-
I am trying to verify/reproduce your paper's validation results **without training** the model myself, and I expected 42.6% R@1 on MSR-VTT.
But when I follow the instructions from [TRAIN_AND_VALID…
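For reference, R@1 for this kind of retrieval benchmark is usually computed from a query-by-video similarity matrix where query i's ground-truth match is video i: it is the fraction of queries whose top-ranked video is the correct one. A minimal sketch (the repo's actual evaluation script may differ in detail):

```python
import numpy as np

def recall_at_k(sim, k=1):
    """sim[i, j] = similarity of query i and video j; ground truth is the diagonal."""
    ranks = np.argsort(-sim, axis=1)            # videos sorted best-first per query
    correct = np.arange(sim.shape[0])[:, None]  # index of each query's matching video
    hits = (ranks[:, :k] == correct).any(axis=1)
    return hits.mean() * 100.0

# toy matrix: query 2 ranks video 0 above its own video, so 2 of 3 queries hit at k=1
sim = np.array([[0.9, 0.1, 0.2],
                [0.3, 0.8, 0.1],
                [0.6, 0.2, 0.4]])
print(recall_at_k(sim, k=1))
```

Comparing a number computed this way against the paper's 42.6% helps separate a checkpoint problem from an evaluation-script problem.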