spandanagella / verse

Visual Verb Sense Disambiguation

supervised VSD setting #2

Open siinem opened 4 years ago

siinem commented 4 years ago

Hi Spandana! For the supervised Visual Verb Sense Disambiguation setting, at what ratio do you split the data into training and test sets? I could not find that information in your PAMI 2019 paper. Am I missing something?

Cheers, Sinem

spandanagella commented 4 years ago

Hey,

It should be an 80/10/10 split on all 19 qualified verbs. We did try to collect more data for the supervised experiments, but we always ended up getting more images for the dominant sense, so we discontinued that effort. The supervised results are presented only to show how well we could do on the task if we had supervised data (not a realistic scenario in this case). Let me know if you have any more questions.

Spandana

siinem commented 4 years ago

Thanks, Spandana! Could you please share the .csv file of the subset of VerSe used in this experiment, with the particular training/val/test split?

spandanagella commented 4 years ago

Hi Sinem, I am afraid I don't have them. Can you reach out to Carina? See Section 5.2 of https://www.aclweb.org/anthology/D18-1282.pdf. I remember sharing the splits with them for their work.

siinem commented 4 years ago

Ok, thank you! I will try to reach them.

siinem commented 4 years ago

Hi Spandana, just one more thing: was the 80/10/10 split done over the images per verb, or over the whole dataset?

For example, say each of the 19 verbs has at least 20 images. Did you take 80% of the images per verb into the training set (so a verb with 20 images contributes 16 images to training), or did you sample 80% across the whole dataset?

spandanagella commented 4 years ago

It's been done per verb.
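For anyone trying to reproduce this setup, a per-verb 80/10/10 split (each verb's images partitioned independently, as described above) can be sketched as follows. This is a minimal illustration, not the authors' original script; the function name, seed, and example image lists are hypothetical.

```python
import random

def split_per_verb(images_by_verb, ratios=(0.8, 0.1, 0.1), seed=0):
    """Split each verb's image list independently into train/val/test."""
    rng = random.Random(seed)
    train, val, test = [], [], []
    for verb, images in images_by_verb.items():
        imgs = list(images)
        rng.shuffle(imgs)
        n_train = int(ratios[0] * len(imgs))
        n_val = int(ratios[1] * len(imgs))
        # Each verb contributes ~80/10/10 of its own images to each split.
        train += [(verb, img) for img in imgs[:n_train]]
        val += [(verb, img) for img in imgs[n_train:n_train + n_val]]
        test += [(verb, img) for img in imgs[n_train + n_val:]]
    return train, val, test

# Example: a verb with 20 images contributes 16/2/2 to train/val/test.
data = {"ride": [f"ride_{k}.jpg" for k in range(20)],
        "play": [f"play_{k}.jpg" for k in range(30)]}
train, val, test = split_per_verb(data)
print(len(train), len(val), len(test))  # 40 5 5
```

Splitting per verb (rather than sampling 80% over the pooled dataset) guarantees every verb appears in all three splits, which matters for per-verb evaluation.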


siinem commented 4 years ago

Thanks a lot!