-
I've installed it and tried to generate subtitles for a movie that has English audio, and I want the subtitles in Romanian, but it only does this:
[2023-06-16 21:49:07,634] {model.py:130} INFO - Transcrib…
-
The code you provided is not complete; does it really implement the experiment?
-
Hi, thanks for your work and for publicly releasing the code.
I have checked your code and could not find the generate function of your model when using the VQA model. I want to be able to input new q…
-
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like Salesforce/blip-image-captioning-large is not the path to a directory…
-
### New Feature Summary
With a number of recent developments, I'd like to propose more vocab types that are subcategories of `TextDocument` (all names in this proposal are tentative):
- `Transcript`:…
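As a rough sketch of what the proposed hierarchy could look like (the class names mirror the tentative proposal, and `TextDocument` here is a stand-in for the existing base type, with a single `text` field assumed for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class TextDocument:
    # Stand-in for the existing base type: a document with raw text content.
    text: str

@dataclass
class Transcript(TextDocument):
    # Tentative subtype: text aligned to spoken audio/video.
    # speaker_turns is a hypothetical field pairing speaker labels with utterances.
    speaker_turns: list = field(default_factory=list)

doc = Transcript(text="hello world", speaker_turns=[("A", "hello world")])
print(isinstance(doc, TextDocument))  # subtypes stay usable wherever TextDocument is expected
```

Subtyping rather than adding flags to `TextDocument` keeps existing code that accepts a `TextDocument` working unchanged.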
-
after a first pass by many --
* vidlets are fast and complete. pausing is necessary, but it's nice to be able to skip
* some of the output / script was covered by video
I used streamlabs OBS an…
-
**[ ID ]** ead18f07-4fcb-41d6-baa3-87ae84bcd3c7
**[ Submitter's Name ]** Joanna Kao
**[ Submitter's Affiliated Organisation ]** Financial Times
**[ Submitter's Twitter ]** @joannaskao
**[ Space ]** …
-
## ❓ Questions and Help
In bottom-up features, every image contains a 2048-dim feature vector and a five-tuple of bounding-box features (x1, y1, x2, y2, w*h). In [Auto-Encoding Scene Graphs for Ima…
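For reference, a minimal sketch of how such a five-tuple is commonly derived from a detected box, assuming coordinates are normalized by image width and height (the exact normalization used by any particular feature release may differ):

```python
def box_geometry(x1, y1, x2, y2, img_w, img_h):
    """Return the (x1, y1, x2, y2, w*h) tuple with coordinates normalized to [0, 1]."""
    nx1, ny1 = x1 / img_w, y1 / img_h
    nx2, ny2 = x2 / img_w, y2 / img_h
    area = (nx2 - nx1) * (ny2 - ny1)  # normalized width * height, i.e. relative box area
    return (nx1, ny1, nx2, ny2, area)

print(box_geometry(0, 0, 50, 50, 100, 100))  # (0.0, 0.0, 0.5, 0.5, 0.25)
```

This geometric tuple is typically concatenated to the 2048-dim appearance feature per region.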
-
Change the custom loader you are using to the Dataset API, keeping the same functionality (e.g. data augmentation).
Change the rest of the code to integrate it.
https://www.tensorflow.org/tutorials/e…
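A minimal sketch of what that migration could look like: the existing augmentation logic moves into a `map` step of a `tf.data` pipeline (the in-memory tensors and the `augment` body are placeholders; a real loader would build the dataset from file paths or TFRecords):

```python
import tensorflow as tf

def augment(image, label):
    # Placeholder for the existing augmentation logic, now a tf.data map step.
    image = tf.image.random_flip_left_right(image)
    return image, label

# Placeholder in-memory data standing in for the custom loader's source.
images = tf.zeros([8, 32, 32, 3])
labels = tf.zeros([8], dtype=tf.int32)

ds = (tf.data.Dataset.from_tensor_slices((images, labels))
      .shuffle(8)
      .map(augment, num_parallel_calls=tf.data.AUTOTUNE)
      .batch(4)
      .prefetch(tf.data.AUTOTUNE))

for batch_images, batch_labels in ds:
    print(batch_images.shape)  # (4, 32, 32, 3)
```

The rest of the code then iterates over `ds` (or passes it to `model.fit`) instead of calling the custom loader.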
-
Do you support generating captions with a model trained on bottom-up features?
If not, can you give me a hint on how to predict captions from our raw images using that BU model?
Thanks a lot for your …