-
The plots that I do by hand look like this:
![Electricity Consumption - 000 - Original](https://github.com/ruiking04/COCA/assets/22840510/d16dd822-3f49-4c07-8abd-48bf5bbd6811)
The plot generated b…
-
### 🚀 The feature, motivation and pitch
Thank you for your awesome work!
I have some questions about the CoCa model implementation.
1)
In */multimodal/torchmultimodal/models/coca/coca_model.py, it…
-
Hello,
Currently, COCA only accepts univariate datasets. Would it be possible to extend it to support multivariate data?
Thanks,
Marcia
-
We currently have a Pepsi brand; it'd be great if we could add Coca-Cola to the mix for all the Coke fans.
-
It would be nice to let users put more information on their profile pages.
Maybe a short "about" text, possibly links to social media accounts.
And a location of the form city + country. That wo…
-
![20230603_211147](https://github.com/Amo0303/Bala/assets/135475726/4538fea4-d91e-4e8d-8b8e-76def41d4f50)
Take away coca cola
-
First Coca-Cola was created, then Pepsi, and they have been in a rivalry ever since.
-
- [ ] Use the attention module from modalities.nn in GPT-2
-
Research and try to find well-known items using Wikidata queries.
-
Hi, thanks for your CoCa implementation! I have a question on the multimodal transformer: typically in a decoder layer I would expect to see self-attention, then cross-attention, then an MLP. But it s…