lichengunc / MAttNet

MAttNet: Modular Attention Network for Referring Expression Comprehension
http://vision2.cs.unc.edu/refer
MIT License

Demo website not working #12

Closed mees closed 6 years ago

mees commented 6 years ago

Hi, the demo website appears to be down: http://vision2.cs.unc.edu/refer/comprehension. Could you check it, please?

lichengunc commented 6 years ago

I intentionally shut it down for the coming CVPR deadline (we are running out of GPUs). It will be back up in a few days (early next week). Thanks for your interest.

lichengunc commented 6 years ago

Back now.

mees commented 6 years ago

The demo is online and works with the predefined images, but it throws a 'server error 500' when you try to upload an image.

lichengunc commented 6 years ago

Fixed :-) Thanks for the reminder.

mees commented 6 years ago

Quick question, which pretrained model is running on the web demo?

lichengunc commented 6 years ago

The pretrained model running on the demo is not released. The demo model was trained on a combination of RefCOCO, RefCOCO+, RefCOCOg, and a subset of Visual Genome.

mees commented 6 years ago

Cool! Do you plan on releasing that model and/or instructions to train it? I am interested in how you incorporate Visual Genome, since there are no segmentation masks there.


lichengunc commented 6 years ago

Hm.. we don't have plans to release the demo model. As for the Genome data, we use only the objects in Visual Genome that belong to the 80 COCO categories, so the label set is still the 80 COCO categories. Our training now uses more (though noisier) data, as Genome's expressions are not strictly referring expressions. For some categories, this strategy helps.
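For readers curious how such a filter might look: below is a minimal, hypothetical sketch of the idea of keeping only Visual Genome regions whose object name falls in the 80 COCO categories. The names `COCO_CATEGORIES`, `regions`, and `filter_regions` are illustrative only and are not from the MAttNet codebase.

```python
# Hypothetical sketch: keep only Visual Genome region descriptions whose
# annotated object name is one of the 80 COCO categories.
# COCO_CATEGORIES is truncated to a few entries for brevity.

COCO_CATEGORIES = {"person", "dog", "cat", "car", "chair"}

def filter_regions(regions):
    """Return the regions whose object name is a COCO category."""
    return [r for r in regions if r.get("name", "").lower() in COCO_CATEGORIES]

regions = [
    {"name": "dog", "phrase": "the brown dog on the left"},
    {"name": "tree", "phrase": "a tall tree"},
    {"name": "car", "phrase": "red car parked nearby"},
]

kept = filter_regions(regions)
print([r["phrase"] for r in kept])
# → ['the brown dog on the left', 'red car parked nearby']
```

This keeps the label space fixed at the 80 COCO categories while adding extra (noisier) training phrases.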

mees commented 6 years ago

I see, but are there ground-truth masks for the 80 COCO classes in the Visual Genome dataset? I think some Visual Genome images are also part of COCO, but I am not sure. Where can one see the split of Visual Genome that has ground-truth mask data?


Shivanshmundra commented 5 years ago

Hey, the website is still not working. Can you check it, please?