-
Thank you for your excellent work! I noticed that in the `eval_moment_retrieval` function https://github.com/jayleicn/moment_detr/blob/main/standalone_eval/eval.py#L136, there are four predefined time…
-
Hello. We'd like to introduce our CVPR 2023 paper, "Query-Dependent Video Representation for Moment Retrieval and Highlight Detection", on cross-modal moment retrieval.
Code : https://…
-
Thanks for the impressive work. I tested some videos in the QVHighlights val split and followed the prompt: "Find the video segment that corresponds to the given textual query '{}' and determine its start a…
-
### API Spec link
https://github.com/Azure/azure-rest-api-specs/tree/main/specification/cognitiveservices/data-plane/ComputerVision/stable
### API Spec version
2024-02-01
### Please descri…
-
# Paper Information
[yamada_paper_reading_2020_4_28.pdf](https://github.com/naoymd/paper_reading/files/5011686/yamada_paper_reading_2020_4_28.pdf)
### Authors
Zhu Zhang, Zhijie Lin, Zhou Zhao, Zhenxin Xiao
### …
-
I'm a current user of tsunami-udp and I'm wondering: does this re-implementation add subdirectory support? I'd like to preserve the directory structure while also grabbing all files in the tree.
-
Nice work! I have a question, though. When I have a video and want to extract the highlight clip, what text do I need to input? For example, "Man and women are dancing together" in Figure 2 — how is that query obtained?
-
Glancing at the Parameterised Exercises, the random values are currently generated with OJS. Is there a way to generate a value in R or Python instead and pass it forward?
I remember we briefly chatted about the syn…
-
### Description
Learned sparse vectors claim to combine the benefits of sparse (i.e., lexical) and dense (i.e., vector) representations.
From https://en.wikipedia.org/wiki/Learned_sparse_retrieval:…
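To make the idea concrete, here is a minimal, hypothetical sketch (not any particular library's API): a learned sparse model maps text to a high-dimensional term-weight vector where most entries are zero, so documents can live in an inverted index like lexical BM25 weights, while the weights themselves come from a neural model. Scoring then reduces to a dot product over the shared non-zero terms. The toy weights below are made up for illustration.

```python
def sparse_dot(query: dict, doc: dict) -> float:
    """Score a query/document pair as a dot product over shared non-zero terms."""
    # Iterate over the smaller vector; only overlapping terms contribute.
    small, large = (query, doc) if len(query) <= len(doc) else (doc, query)
    return sum(w * large.get(term, 0.0) for term, w in small.items())

# Toy term -> weight vectors; a real model (e.g. SPLADE-style) would
# produce these weights, possibly including expansion terms not in the text.
q = {"moment": 1.2, "retrieval": 0.9}
d = {"moment": 0.8, "video": 0.5, "retrieval": 0.4}
print(sparse_dot(q, d))  # 1.2*0.8 + 0.9*0.4 = 1.32
```

Because the vectors are sparse, the same posting-list machinery used for lexical search applies, while the learned weights capture semantics a plain term-frequency score would miss.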
-
### TL;DR
Spark is growing in a few ways at the moment.
1. It is moving beyond being just a measurement protocol for Filecoin retrieval into being a proof of retrievability protocol.
2. It is …