astra-vision / ManiFest

Few-shot image translation method for unstructured environments. ECCV 2022
Apache License 2.0

How to train our own dataset? #3

Closed mengjingyouling closed 1 year ago

mengjingyouling commented 2 years ago

Very solid work! I have some questions.

1. How do we train on our own dataset? Can you give some examples, or a training script? For example, we have some normal images and some foggy images. How do we train on them?

2. If we only have a 2080 Ti GPU, can we finish the training?

Thank you.

mengjingyouling commented 2 years ago

Another question:

We noticed that in your experiment you state: "Unless mentioned otherwise, the (synthetic) anchor domains from VIPER are “night” for Day → Night and Day → Twilight, and “day” for Clear → Fog."

Can the “day” (synthetic) anchor domain use source-domain data instead?

fabvio commented 1 year ago

I apologize for the late reply, but it was a very intense period.

In order to train on your own dataset, you can refer to the structure in the data/anchor_dataset.py file. Basically, all you have to do is create a dedicated folder for your dataset with three subfolders inside it: trainS, trainA and trainT, holding the source, anchor and few-shot target images, respectively (see the sketch below). The training usually fits on a single GPU.
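As a quick sanity check before launching training, a minimal sketch like the following can verify that the layout is in place. The root path datasets/my_fog_dataset is only a placeholder; the three subfolder names are the ones described above:

```python
from pathlib import Path

# Hypothetical dataset root; pick any name you like. The three
# subfolder names below are the ones the dataset structure in
# data/anchor_dataset.py expects.
root = Path("datasets/my_fog_dataset")

splits = {
    "trainS": "source images (e.g. your normal images)",
    "trainA": "anchor images",
    "trainT": "few-shot target images (e.g. your foggy images)",
}

# Count the files found in each subfolder and report its role.
for name, role in splits.items():
    files = list((root / name).glob("*"))
    print(f"{name}: {len(files)} files ({role})")
```

If all three folders report a non-zero file count, the dataset should be laid out the way the loader expects.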

As regards using source data as anchors, we didn't investigate this, but I believe some diversity between source and anchor is required in order to span the initial manifold; otherwise you would simply get an identity transformation, which would not be informative enough to be used for feature transfer.