-
Hi, thanks for the great work.
I wanted to test the model on a 4-minute video that is not from any of the stated datasets. Is there a demo interface or any instructions on how to upload a single vi…
-
Hi there, I'm sorry in advance that I have to open a feature thread to ask for help. I installed the script correctly by following the information in this GitHub repo, but I'm getting 2 errors when trying to capti…
-
Can this model only be used to calculate fractions? How do I train it to generate new captions?
-
Hi, thank you for sharing this nice work.
I am new to video2dataset, and I do not fully understand what a subsampler is in video2dataset.
Currently, my understanding is that a subsampler processes somethin…
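To illustrate the general pattern, here is a conceptual sketch of what a subsampler looks like: an object configured once, then called on each sample to produce a transformed sample. The `ClipSubsampler` class and the simplified dict-of-frames sample below are purely illustrative assumptions; video2dataset's real subsamplers operate on encoded video bytes (typically via ffmpeg) and their exact interface may differ.

```python
class ClipSubsampler:
    """Illustrative subsampler: keeps only the first `max_seconds` of a clip.

    This mirrors the general pattern (configure once, call per sample);
    it is NOT the real video2dataset API, just a conceptual example.
    """

    def __init__(self, max_seconds):
        self.max_seconds = max_seconds

    def __call__(self, sample):
        # `sample` here is a simplified dict with one entry per second of video.
        sample = dict(sample)  # copy so the input is not mutated
        sample["frames"] = sample["frames"][: self.max_seconds]
        return sample


subsampler = ClipSubsampler(max_seconds=3)
out = subsampler({"frames": [0, 1, 2, 3, 4], "caption": "a cat"})
# out["frames"] is now [0, 1, 2]
```

The key idea is that the subsampler is a per-sample transformation applied inside the dataset pipeline, so it can clip, resize, or re-encode each video as it streams through.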
-
**Reported by nvdakor on 2012-11-15 06:59**
Hi,
A number of users use a screen reader which reads subtitles on videos (for example, language subtitles for foreign film videos). Currently, NVDA does no…
-
I ran the quick demo and got:
FileNotFoundError: [Errno 2] No such file or directory: './models/table1/vatex/best_checkpoint/training_args.bin'
Please help me. Thank you.
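A quick way to narrow this down is to check whether the expected checkpoint files actually exist before the demo tries to load them. The path and file name below come straight from the traceback; `find_missing_checkpoint_files` is just an illustrative helper, not part of the project's code.

```python
import os


def find_missing_checkpoint_files(checkpoint_dir, expected=("training_args.bin",)):
    """Return the expected checkpoint files that are absent from checkpoint_dir."""
    return [
        name
        for name in expected
        if not os.path.isfile(os.path.join(checkpoint_dir, name))
    ]


# Path taken from the error message above.
missing = find_missing_checkpoint_files("./models/table1/vatex/best_checkpoint")
if missing:
    print("Checkpoint incomplete; missing files:", missing)
```

If the list is non-empty, the checkpoint was likely never downloaded or extracted to that location, so the fix is to fetch the released checkpoint (or point the demo's checkpoint argument at wherever it actually lives).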
-
## 💻 What to do
Create a tutorial on using [Chirp](https://lablab.ai/tech/google/chirp) on Google Cloud.
**_Please avoid using ChatGPT for the text. Include code samples. If needed, add graphs._**
…
-
Thanks for the great work! I have a question about how you evaluate the model on paragraph captioning: do you fine-tune the pre-trained checkpoint on the paragraph captioning task, or just remove t…
-
Hi folks!
Impressive work :) As I really liked this work, I decided to contribute it to 🤗 Transformers.
Documentation can be found here: https://huggingface.co/docs/transformers/main/en/model_doc…
-
# New ticket as of 2023-06-22
This update follows client feedback that closed captions do not need to display in the UV.
## Summary
UTK would like their videos to have the ability to turn on closed captions…