-
I keep getting a NaN contrastive loss. Is there a specific format in which the labels and features have to be passed? I tried both:
```py
features.shape = B * num_features # num_features is 640
l…
```
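A NaN contrastive loss usually comes from unnormalized features (overflow inside `exp`) or from an anchor that has no positive in the batch (a 0/0 division). Below is a minimal sketch of a numerically stable supervised contrastive loss for `(B, num_features)` features and integer class labels; the function name, signature, and defaults are illustrative, not any particular library's API:

```python
import torch
import torch.nn.functional as F

def stable_contrastive_loss(features, labels, temperature=0.1):
    """features: (B, D) float tensor; labels: (B,) int tensor of class ids."""
    # L2-normalize so dot products stay in [-1, 1] and exp() cannot overflow
    features = F.normalize(features, dim=1)
    logits = features @ features.T / temperature           # (B, B) similarity matrix
    eye = torch.eye(len(features), dtype=torch.bool, device=features.device)
    logits = logits.masked_fill(eye, float('-inf'))        # a sample is never its own positive
    log_prob = F.log_softmax(logits, dim=1)                # stable form of log(exp / sum exp)
    pos_mask = (labels[:, None] == labels[None, :]) & ~eye # positives: same label, excluding self
    pos_counts = pos_mask.sum(1)
    valid = pos_counts > 0                                 # anchors with no positive would give 0/0 = NaN
    per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(1)
    return (per_anchor[valid] / pos_counts[valid]).mean()
```

Skipping anchors without positives, rather than dividing by a zero count, is what keeps the result finite even when some classes appear only once in the batch.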
-
The source code for "Masked Vision and Language Pre-training with Unimodal and Multimodal Contrastive Losses for Medical Visual Question Answering" is available at https://github.com/pengfeiliHEU/MUMC
-
Selected Weibo content
-
Context on how the loss works is in https://github.com/vitalwarley/research/issues/33#issuecomment-1705631136.
My idea:
```python
def contrastive_loss(x1, x2, ages_x1, ages_x2, …
```
-
I see that the code base has two methods, passage_embed and embed, but upon inspecting the code I think they are essentially the same. Is there any difference between them, or is it intended…
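One quick way to settle this is to run both methods on the same batch and compare the outputs. The helper below is only a sketch: `embed` and `passage_embed` are the method names mentioned above, and the assumption that they accept the same batch format is unverified.

```python
import torch

def methods_agree(model, batch, atol=1e-6):
    """Check whether `embed` and `passage_embed` return the same vectors
    for the same input batch (hypothetical usage, assuming identical signatures)."""
    with torch.no_grad():
        a = model.embed(batch)
        b = model.passage_embed(batch)
    return a.shape == b.shape and torch.allclose(a, b, atol=atol)
```

If this returns True on a few representative batches, the two methods are likely aliases kept for API symmetry (e.g. query vs. passage encoders that happen to share weights).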
-
### System Info
transformers version: 4.17.0
Python version: 3.7.0
torch version: 1.10.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My o…
-
Dear author,
Thanks for your great work.
I want to test your method in my experiments and would like to know when the code will be available. Also, does the proposed method take a similar time a…
-
Need to set up BevoEncoder for MLM, or use an off-the-shelf encoder.
[CLS] good . [CLS] prompt > [CLS]. bad [CLS].prompt
On a per-token basis like ELECTRA?
What corruptions are allowed? As long as we u…
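For reference, plain BERT-style MLM corrupts tokens with the fixed 80/10/10 rule rather than ELECTRA's generator-produced replacements. A minimal sketch of that corruption, assuming the usual conventions (`mask_id` is the tokenizer's [MASK] id, and -100 is PyTorch cross-entropy's default ignore index):

```python
import torch

def mlm_corrupt(input_ids, vocab_size, mask_id, mask_prob=0.15):
    """Return (corrupted_ids, labels); labels are -100 wherever no prediction is needed."""
    labels = input_ids.clone()
    # Select ~15% of positions as prediction targets
    selected = torch.rand(input_ids.shape) < mask_prob
    labels[~selected] = -100                         # ignored by cross-entropy
    roll = torch.rand(input_ids.shape)
    corrupted = input_ids.clone()
    corrupted[selected & (roll < 0.8)] = mask_id     # 80%: replace with [MASK]
    random_pos = selected & (roll >= 0.8) & (roll < 0.9)
    corrupted[random_pos] = torch.randint(vocab_size, (int(random_pos.sum()),))  # 10%: random token
    # remaining 10%: keep the original token unchanged
    return corrupted, labels
```

ELECTRA's replaced-token detection differs in that every token gets a binary real/replaced label, so the loss is computed on a per-token basis over the whole sequence, not only on the ~15% selected here.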
-
### System Info
transformers v4.33.0
### Who can help?
@ArthurZucker @younesbelkada @amyeroberts
### Information
- [X] The official example scripts
- [X] My own modified scripts
##…
-
The foggyspace code can train, but when I run VOC2clipar my command is: CUDA_VISIBLE_DEVICES=0,1,2,3 python train_net.py --num-gpus 4 --config configs/faster_rcnn_R101_cross_clipart_b4.yaml OUT…