-
Hello, when I run `python coco_scripts/train.py --exp_name captioning_model --batch_size 100 --lr 5e-4`, I get the following error: `ValueError: not enough values to unpack (expected 7, got 6)`. Have you…
-
TensorFlow and Keras have metrics that can be used to evaluate how well a model is performing. In our application, the metric should evaluate how good the generated captions are.
More info on tensorflow m…
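Worth noting that the built-in Keras metrics (accuracy and friends) don't really capture caption quality; captioning work usually scores generated text against reference captions with n-gram overlap metrics such as BLEU or CIDEr. As a rough, self-contained sketch of the idea (the function name here is ours, not a library API), clipped unigram precision, the BLEU-1 building block, looks like this:

```python
from collections import Counter

def unigram_precision(candidate, reference):
    # Clipped unigram precision: the fraction of candidate words that
    # also appear in the reference, where each reference word can be
    # matched at most as many times as it occurs in the reference.
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    matched = sum(min(count, ref[word]) for word, count in cand.items())
    return matched / max(sum(cand.values()), 1)

print(unigram_precision("a dog runs", "a dog runs fast"))  # 1.0
```

A real evaluation would combine higher-order n-grams, a brevity penalty, and multiple references (as BLEU does), but this is the core quantity being averaged.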
-
Hello, thanks for open-sourcing your code; this is very inspiring work. I had a question: do you perform or experiment with bootstrapping continuously? As in, currently it is trained on pre-trained data…
-
Hi Yihui!
I noticed recently that there is captioning available within `kable`. Awesome!
I can't seem to find a way to change the position of the caption; it was mentioned [here](https://github.com/…
-
**Is your feature request related to a problem? Please describe.**
Add live lyrics in the lyrics embed.
**Describe the solution you'd like**
Live lyrics with the player embed changing based on wh…
-
In line 94 in caption.py you use:
`scores = F.log_softmax(scores, dim=1)`
Could you explain the reason for `log_softmax` here? You did not use it in the `forward()` method.
More than that, I tried …
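For context on why a decoder might apply it only at inference time: `log_softmax` turns scores into log-probabilities, so beam search can *add* per-step scores instead of multiplying many small probabilities, which is numerically much safer. A minimal pure-Python sketch (not the repo's code) of a numerically stable log-softmax:

```python
import math

def log_softmax(scores):
    # Stable log-softmax: subtract the max before exponentiating so the
    # sum in log(sum_j exp(s_j)) cannot overflow.
    m = max(scores)
    log_sum = m + math.log(sum(math.exp(s - m) for s in scores))
    return [s - log_sum for s in scores]

logits = [2.0, 1.0, 0.1]
logp = log_softmax(logits)
# Exponentiating the log-softmax recovers the ordinary softmax
# probabilities, which sum to 1.
probs = [math.exp(x) for x in logp]
```

During training, `nn.CrossEntropyLoss` in PyTorch already applies log-softmax internally, which is a common reason `forward()` returns raw logits while the decoding path calls `F.log_softmax` explicitly.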
-
The Insertable Streams API provides access to the RTP payload, which has generated considerable interest. I have heard suggestions that it might be used to implement some of the following:
* Supp…
-
Why do I always encounter a CUDA out-of-memory error when I call the load_model_process function? Can the RTX 3090 be used for the BLIP-2 model?
-
The results look promising to me: http://arxiv.org/pdf/1506.03099v3.pdf, plus the future directions pointed out by the authors. Any thoughts or discussions you've had about applying this to NMT?
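For readers following the link: that paper (scheduled sampling) gradually replaces teacher forcing with the model's own predictions during training, decaying the probability of feeding the gold token according to a schedule such as inverse-sigmoid decay. A small illustrative sketch (function names are ours, not from any codebase):

```python
import math
import random

def teacher_forcing_prob(step, k=100.0):
    # Inverse-sigmoid decay from the scheduled-sampling paper:
    # epsilon_i = k / (k + exp(i / k)); starts near 1, decays toward 0.
    return k / (k + math.exp(step / k))

def choose_input(gold_token, model_token, step, rng=random.random):
    # With probability epsilon feed the gold token (teacher forcing);
    # otherwise feed back the model's own previous prediction.
    return gold_token if rng() < teacher_forcing_prob(step) else model_token
```

Early in training the decoder almost always sees gold tokens; late in training it mostly conditions on its own outputs, narrowing the train/inference mismatch, which is exactly the property that seems attractive for NMT.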
-
The goal is to build an image captioning model that generates descriptive captions for images using a combination of Convolutional Neural Networks (CNNs) for feature extraction and Long Short-Term Mem…
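To make the decoding side of that pipeline concrete, here is a toy greedy-decoding loop in the shape such CNN+LSTM captioners usually take: the CNN encodes the image into a feature vector, and the LSTM emits one word per step until an end token. The stub `step` function and tiny vocabulary are purely illustrative stand-ins for a trained LSTM step, not any particular implementation:

```python
def step(image_feature, prev_token, state):
    # Stand-in for one LSTM step conditioned on the image feature:
    # lstm(embed(prev_token), state) -> (argmax token, new state).
    # A fixed lookup fakes the learned next-word distribution.
    order = {"<start>": "a", "a": "dog", "dog": "on", "on": "grass"}
    return order.get(prev_token, "<end>"), state

def generate_caption(image_feature, max_len=10):
    # Greedy decoding: feed each predicted word back in as the next
    # input until <end> is produced or the length cap is hit.
    token, state, words = "<start>", None, []
    for _ in range(max_len):
        token, state = step(image_feature, token, state)
        if token == "<end>":
            break
        words.append(token)
    return " ".join(words)
```

In a real model, `step` would embed the previous word, run the LSTM cell, and take an argmax (or beam search) over vocabulary logits; the control flow, however, is exactly this loop.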