-
Hi, I can't figure out these features.
The training, validation, and test features, both the visual "_resnet.npy" and the motion "_bn.npy" files, are the same ones used in [densecap](https://github.com/Luo…
-
Apologies in advance if this is not the right place.
So the paper this code is based on states that the model was pretrained using cross entropy (XE) and then after a certain number of epochs switche…
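For concreteness, here is a minimal sketch of the two-stage schedule as I understand it, assuming a standard PyTorch training loop; the second-stage objective is left as a generic placeholder since the sentence above is cut off before naming it, and `second_loss_fn` is a hypothetical argument:

```python
import torch

def train(model, loader, optimizer, num_epochs, xe_epochs, second_loss_fn):
    # Stage 1 objective: token-level cross entropy (XE).
    xe_loss_fn = torch.nn.CrossEntropyLoss()
    for epoch in range(num_epochs):
        use_xe = epoch < xe_epochs  # switch objectives after xe_epochs
        for inputs, targets in loader:
            logits = model(inputs)  # assumed shape: (batch, seq_len, vocab)
            if use_xe:
                loss = xe_loss_fn(logits.reshape(-1, logits.size(-1)),
                                  targets.reshape(-1))
            else:
                # Stage 2: whatever objective the paper switches to.
                loss = second_loss_fn(logits, targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

Is that roughly how the switch is implemented in this repo, or does the second stage use a separate loop entirely?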
-
### Issue description
Hello, I have a problem. On a Huawei P9, simply playing an H.265 video with ExoPlayer runs into problems.
Using the old version 2.10.8, it plays normally, while using the new …
-
Hi, is there any pre-trained transformer model available for the unsupervised mode? Thanks.
-
I am having a very strange issue in that I cannot reference a .NET SDK project from inside a .Web project in .NET 6. This worked before in .NET 5. This first project layout is my library project, and the …
-
What should we do with voice?
Relevant CG(s):
* [Speech API](https://www.w3.org/community/speech-api/)
* [Voice Interaction CG](https://www.w3.org/community/voiceinteraction/)
[MS input (Novem…
-
Hi hobincar, thanks for your excellent work!
I wonder how you extract the 3D-ResNeXt features in your [video-feature-extractor](https://github.com/hobincar/pytorch-video-feature-extractor).
I have tri…
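For reference, here is a minimal sketch of the kind of clip-level extraction I had in mind, using torchvision's r3d_18 purely as a stand-in for 3D-ResNeXt; the clip shape, normalization, and output path below are my own assumptions, not the repo's actual pipeline:

```python
import numpy as np
import torch
from torchvision.models.video import r3d_18

# r3d_18 is a stand-in backbone; the real 3D-ResNeXt weights and
# preprocessing in the repo above may differ.
model = r3d_18(weights="DEFAULT")
model.fc = torch.nn.Identity()  # drop the classifier head, keep pooled features
model.eval()

# Assume the video is already split into 16-frame clips, resized and
# normalized to shape (num_clips, 3, 16, 112, 112).
clips = torch.randn(8, 3, 16, 112, 112)  # dummy tensor standing in for real clips

with torch.no_grad():
    features = model(clips)  # (num_clips, 512)

np.save("video_features.npy", features.numpy())  # hypothetical output file
```

Could you share how your actual extraction differs from this, e.g. the clip length and sampling stride you used?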
-
arXiv paper tracking
-
Hi there,
Thanks for providing this wonderful codebase. So I'm using a customized video captioning model (it's trained end-to-end together with the feature extractor, so I cannot use the OpenNMT cod…
-
# For the latest MTL Video links, please:
## 1) go to https://mtl.how/documents
## 2) find the depend_products column on the far right
## 3) click on the relevant session & the video link will be a…