roimehrez / contextualLoss

The Contextual Loss
http://cgm.technion.ac.il/Computer-Graphics-Multimedia/Software/Contextual/

Code for Unpaired domain transfer #6

Closed cchen156 closed 6 years ago

cchen156 commented 6 years ago

Is it possible to provide the code for unpaired domain transfer (Fig. 11)? In each iteration, do you minimize the loss between a random input and a random style image?

roimehrez commented 6 years ago

It is almost the same code; just change the data to celebA. At each iteration the loss is computed between a random pair of input and style images.
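
For concreteness, here is a minimal sketch of what one such iteration could look like. Everything in it (generator, extract_features, contextual_loss, optimizer) is a hypothetical placeholder rather than a function from this repo, and the autograd calls assume a PyTorch-style setup:

import random

def unpaired_training_step(source_images, target_images, generator,
                           extract_features, contextual_loss, optimizer):
    # Hypothetical sketch: at every iteration, draw one random image from each
    # unpaired set and minimize the contextual loss between deep features of
    # the generated output and of the randomly chosen target image.
    x = random.choice(source_images)   # random input image
    y = random.choice(target_images)   # random image from the target domain
    out = generator(x)
    # The contextual loss compares unordered feature sets, so x and y do not
    # need to be spatially aligned or otherwise related.
    loss = contextual_loss(extract_features(out), extract_features(y))
    optimizer.zero_grad()
    loss.backward()                    # assumes PyTorch-style tensors
    optimizer.step()
    return loss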


roimehrez commented 6 years ago

You can also use the following code from https://github.com/sagiebenaim/DistanceGAN to build the two unpaired celebA sets:

import os

import pandas as pd


def read_attr_file(attr_path, image_dir):
    # Parse celebA's list_attr_celeba.txt: line 0 is the image count, line 1 the
    # attribute names, and the rest one row per image with '1'/'-1' attribute values.
    with open(attr_path) as f:
        lines = [line.strip() for line in f.readlines()]
    columns = ['image_path'] + lines[1].split()
    lines = lines[2:]

    items = [line.split() for line in lines]
    df = pd.DataFrame(items, columns=columns)
    df['image_path'] = df['image_path'].map(lambda x: os.path.join(image_dir, x))

    return df


def get_celebA_files(style_A, style_B, constraint, constraint_type, test=False, n_test=200):
    # Split celebA into two unpaired sets of image paths: images where style_A is '1'
    # versus images where style_B is '1' (or where style_A is '-1' if style_B is None).
    # config.celebA_path comes from DistanceGAN's config module.
    attr_file = os.path.join(config.celebA_path, 'list_attr_celeba.txt')
    image_dir = os.path.join(config.celebA_path, 'img_align_celeba')
    image_data = read_attr_file(attr_file, image_dir)

    # Optionally keep only images that satisfy an extra attribute constraint.
    if constraint:
        if type(constraint_type) == int:
            constraint_type = str(constraint_type)
        image_data = image_data[image_data[constraint] == constraint_type]

    style_A_data = image_data[image_data[style_A] == '1']['image_path'].values
    if style_B:
        style_B_data = image_data[image_data[style_B] == '1']['image_path'].values
    else:
        style_B_data = image_data[image_data[style_A] == '-1']['image_path'].values

    # The last n_test paths of each set are held out for testing.
    if test:
        return style_A_data[-n_test:], style_B_data[-n_test:]
    return style_A_data[:-n_test], style_B_data[:-n_test]
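
As a usage sketch (assuming a split on the 'Male' attribute, which is a real celebA attribute name, though whether it is the split used for Fig. 11 is only my guess), the two unpaired sets can then be paired randomly at each iteration:

import random

# Split celebA by one binary attribute; passing style_B=None makes set B the
# complement of set A (attribute value '-1').
train_A, train_B = get_celebA_files('Male', None, constraint=None, constraint_type=None)
test_A, test_B = get_celebA_files('Male', None, constraint=None,
                                  constraint_type=None, test=True)

# Per-iteration random pairing for the unpaired contextual loss.
input_path = random.choice(train_A)
style_path = random.choice(train_B)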
cchen156 commented 6 years ago

Thanks! I tried the code on another dataset, for example converting GTA5 images to Cityscapes images, as in CycleGAN. But the results converged to a local minimum where all the outputs are identical, no matter what the input is. Did you try any complex datasets other than faces?