zhihou7 / BatchFormer

CVPR2022, BatchFormer: Learning to Explore Sample Relationships for Robust Representation Learning, https://arxiv.org/abs/2203.01522

doubt about the DeformableTransformerDecoder of batchformerv2 #17

Open wanliliuxiansen opened 1 year ago

wanliliuxiansen commented 1 year ago

I found that the two parameters `bbox_embed` and `class_embed` of DeformableTransformerDecoder are not set. Can you give me the complete code of DeformableTransformerDecoder? Thank you very much!

zhihou7 commented 1 year ago

Hi @wanliliuxiansen, I do not get your point. Does Deformable-DETR require setting the two parameters by hand, or are they set inside DeformableTransformerDecoder? The directory batchformer-v2 contains the complete code.

Regards, Zhi Hou
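For context, upstream Deformable-DETR declares these two attributes as placeholders on the decoder and only assigns them from the detection head when iterative box refinement or two-stage mode is enabled. The sketch below is a hedged paraphrase of that pattern, not verbatim code from either repository:

```python
import copy
import torch.nn as nn

class DeformableTransformerDecoder(nn.Module):
    """Hedged paraphrase of the upstream Deformable-DETR decoder skeleton."""

    def __init__(self, decoder_layer, num_layers):
        super().__init__()
        self.layers = nn.ModuleList(
            [copy.deepcopy(decoder_layer) for _ in range(num_layers)]
        )
        # Placeholders: these stay None unless the detection head assigns
        # real heads to them (e.g. for iterative bounding box refinement).
        self.bbox_embed = None
        self.class_embed = None

# Sketch of the external wiring in DeformableDETR.__init__:
#   if with_box_refine:
#       self.transformer.decoder.bbox_embed = self.bbox_embed
#   if two_stage:
#       self.transformer.decoder.class_embed = self.class_embed
```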

wanliliuxiansen commented 1 year ago

OK, thank you very much! I'm currently working on Image Harmony, a topic that belongs to the problem of image generation. I want to use BatchFormer to address this problem, but I don't know how to set the two parameters. Image Harmony is also limited by the features of a single image, so it needs the features of other images. I have read your paper, which covers related details. However, I do not understand the two shared classifiers, and I don't know whether BatchFormer is helpful for solving Image Harmony.


zhihou7 commented 1 year ago

Hi @wanliliuxiansen, thanks for your interest. You do not need to set `bbox_embed` and `class_embed`. I implement BatchFormerV2 with a two-stream pipeline. Specifically, I copy the original batch, apply BatchFormerV2 to the new batch, and then concatenate the two batches as input to the following modules (cf. https://github.com/zhihou7/BatchFormer/blob/e22f5ff895b04d10f3aa4e745f9a226f3d9b7641/batchformer-v2/models/deformable_transformer.py#L278). Meanwhile, I also repeat the labels to match the dimension of the features (cf. https://github.com/zhihou7/BatchFormer/blob/e22f5ff895b04d10f3aa4e745f9a226f3d9b7641/batchformer-v2/engine.py#L43).

I have added more comments to the corresponding parts of the code. Feel free to ask if you have further questions.

Regards,
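Below is a minimal, self-contained sketch of the two-stream pipeline described in the comment above. The module name BatchFormerV2Block, the tensor shapes, and the use of nn.TransformerEncoderLayer are illustrative assumptions, not the repository's exact code:

```python
import torch
import torch.nn as nn

class BatchFormerV2Block(nn.Module):
    """Illustrative sketch of the two-stream BatchFormerV2 idea (an
    assumption for exposition, not the repository's implementation)."""

    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        # A plain transformer encoder layer; attention will run over the
        # batch axis so that samples can exchange information.
        self.batch_attn = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads)

    def forward(self, x):
        # x: (B, N, C) -- B images, N tokens per image, C channels.
        # With the default batch_first=False, the layer expects
        # (seq, batch, embed), so feeding (B, N, C) directly makes
        # attention run across the B samples at every token position.
        y = self.batch_attn(x)
        # Two streams: the untouched batch and the batch-attended copy,
        # concatenated along the batch dimension (2*B samples in total).
        return torch.cat([x, y], dim=0)

block = BatchFormerV2Block(dim=256, num_heads=8)
x = torch.randn(4, 100, 256)   # 4 images, 100 tokens each
out = block(x)                 # shape: (8, 100, 256)

# Because the following modules now see 2*B samples, the ground-truth
# targets are repeated to match, e.g. for DETR-style per-image target
# dicts stored in a list:
#   targets = targets * 2
```

As described in the paper, sharing the downstream modules between both streams is what allows the batch attention to be dropped at inference time, so single-image prediction is unaffected.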

wanliliuxiansen commented 1 year ago

Thank you for your help!
