Closed — crwsr124 closed this issue 2 years ago
1) For our A (and also B) batch, it is simply one single image augmented many times (refer to the paper). ori_A contains a batch of different images; I choose one of them, repeat it multiple times into a batch, and then augment each copy separately.
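As a concrete sketch of that batch construction (a NumPy stand-in, not the repo's code; `aug` here is a hypothetical per-sample augmentation):

```python
import numpy as np

def make_single_image_batch(ori_A, aug, rng=np.random):
    """Pick one image from the batch at random, repeat it along the
    batch dimension, then augment each copy independently."""
    idx = rng.randint(ori_A.shape[0])                    # choose one image
    A = np.broadcast_to(ori_A[idx], ori_A.shape).copy()  # repeat it B times
    return aug(A)                                        # per-copy augmentation
```

In the actual train.py this is the one-liner `ori_A[[np.random.randint(args.batch)]].expand_as(ori_A)` followed by `aug(...)` on GPU tensors.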
2) Since every A2B_style should be the same, I can shuffle freely. This is another form of regularization: I want all A2B_style codes in a batch to be exactly the same. It follows from our definition that the style code is invariant to augmentations.
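A minimal sketch of that shuffle (a hypothetical NumPy stand-in for the repo's `shuffle_batch`): because every sample in the batch is an augmentation of one image, the style codes should be identical, so a permutation is a no-op in the ideal case.

```python
import numpy as np

def shuffle_batch(style, rng=np.random):
    """Randomly permute style codes along the batch dimension."""
    perm = rng.permutation(style.shape[0])
    return style[perm]

# If the encoder is truly augmentation-invariant, all rows are equal
# and shuffling changes nothing; any deviation between samples is what
# the shuffle implicitly regularizes against.
```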
Thanks! I got it — A is augmented from the same image. By the way, will you continue to improve this method? Maybe A->B and B->A could share the same encoder and decoder.
I am working on something related now, but it's not ready yet. Sharing the encoder and decoder could work, maybe something similar to what StarGAN v2 has done. Feel free to try things out!
Excellent work! I would like to make it run on a mobile phone. But when I read the code (train.py), I had two questions:
1. Why does the original image batch need to be shuffled here?

```python
A = aug(ori_A[[np.random.randint(args.batch)]].expand_as(ori_A))
B = aug(ori_B[[np.random.randint(args.batch)]].expand_as(ori_B))
```
2. Won't shuffling the style codes lead to a mismatch? If fake_A2B2A is decoded from (c1, s2) while A corresponds to (c1, s1), the cycle consistency loss may not be satisfied:

```python
fake_A2B2A = G_B2A.decode(A2B2A_content, shuffle_batch(A2B_style))
fake_B2A2B = G_A2B.decode(B2A2B_content, shuffle_batch(B2A_style))
```
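To make the concern concrete, here is a toy numeric sketch (`decode` is a hypothetical stand-in, not the repo's decoder): if the style codes in a batch were all equal, shuffling them would be harmless, but if they differed, shuffling would pair content c1 with another sample's style s2 and the reconstruction target would no longer match.

```python
import numpy as np

def decode(content, style):
    # toy decoder: output depends on both content and style
    return content + style

content = np.array([[1.0], [2.0]])

# Identical style codes -> shuffling is a no-op for the cycle loss.
equal_styles = np.array([[5.0], [5.0]])
assert np.allclose(decode(content, equal_styles),
                   decode(content, equal_styles[::-1]))

# Different style codes -> the shuffled decode diverges from the original.
mixed_styles = np.array([[10.0], [20.0]])
print(np.allclose(decode(content, mixed_styles),
                  decode(content, mixed_styles[::-1])))  # prints False
```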