dvlab-research / Mask-Attention-Free-Transformer

Official Implementation for "Mask-Attention-Free Transformer for 3D Instance Segmentation"

Reproduction of the experimental results in Fig. 1 #9

Open Wang-pengfei opened 10 months ago

Wang-pengfei commented 10 months ago

Hi, how can I reproduce the experimental results shown in Fig. 1? I tried modifying the epoch parameter directly in the configs, but the results were significantly different. Could you help me with this?

Linsanity1 commented 9 months ago

> Hi, how can I reproduce the experimental results shown in Fig. 1? I tried modifying the epoch parameter directly in the configs, but the results were significantly different. Could you help me with this?

Hi, were you able to reproduce the 58.4 mAP? The result on my machine is 58.0 mAP. Do I need to modify hyperparameters to reach such a high mAP?

wdczz commented 9 months ago

> Hi, how can I reproduce the experimental results shown in Fig. 1? I tried modifying the epoch parameter directly in the configs, but the results were significantly different. Could you help me with this?
>
> Hi, were you able to reproduce the 58.4 mAP? The result on my machine is 58.0 mAP. Do I need to modify hyperparameters to reach such a high mAP?

I also got 58.0 mAP, unfortunately. I think it's difficult to reach 58.4 mAP just by modifying hyperparameters.
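For what it's worth, a gap like 58.0 vs. 58.4 mAP can fall within run-to-run variance. One common thing to check is whether all random seeds are pinned before training; the snippet below is a minimal sketch (hypothetical, not taken from this repo's code) of how that is usually done in PyTorch:

```python
# Hypothetical sketch: pin the RNG seeds before building the dataloaders and model
# to reduce run-to-run variance. Not part of the Mask-Attention-Free-Transformer code.
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    """Seed Python, NumPy, and PyTorch RNGs and make cuDNN deterministic."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Deterministic cuDNN kernels trade some speed for reproducibility.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


set_seed(42)
```

Even with seeding, some sparse-convolution and scatter ops are non-deterministic on GPU, so a few tenths of mAP difference between runs is not unusual.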

Wang-pengfei commented 8 months ago

> Hi, how can I reproduce the experimental results shown in Fig. 1? I tried modifying the epoch parameter directly in the configs, but the results were significantly different. Could you help me with this?
>
> Hi, were you able to reproduce the 58.4 mAP? The result on my machine is 58.0 mAP. Do I need to modify hyperparameters to reach such a high mAP?

I can only get 57.8 mAP...