Bouncer51 closed this issue 4 years ago
When you increase the grid size, the computation increases, so we keep the same grid size as CP-VTON. The (2, 3) parameters are for an affine transform, not TPS. We also tried affine, and affine + TPS, but they were not good. With grid size = 5, the number of vertices (V) is 5^2 = 25; each vertex coordinate is (x, y), so we need 25 x 2 numbers. The output of the regression is the new coordinate of each grid vertex. One thing to note: the coordinate values are in the range (-1, 1). You can read the STN reference paper for more detail, and also `grid_sample` from the PyTorch library. Regards,
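A quick sketch of the arithmetic above (variable names here are illustrative, not from the repo): the regression head must output one (x, y) pair per grid vertex, which is where `2 * grid_size**2` comes from.

```python
# Why FeatureRegression's output_dim is 2 * grid_size**2 rather than 6:
grid_size = 5                   # CP-VTON / CP-VTON+ default
num_vertices = grid_size ** 2   # 5^2 = 25 TPS control points
output_dim = 2 * num_vertices   # an (x, y) coordinate per vertex -> 50

affine_dim = 2 * 3              # a (2, 3) matrix -> 6, affine only
print(output_dim, affine_dim)   # 50 6
```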
Thanks for the reply. Could you please tell me which paper TpsGridGen is based on?
Convolutional neural network architecture for geometric matching: https://arxiv.org/abs/1703.05593
Spatial Transformer Networks: https://papers.nips.cc/paper/5854-spatial-transformer-networks.pdf
PyTorch STN tutorial: https://pytorch.org/tutorials/intermediate/spatial_transformer_tutorial.html
Could you please tell me a way to measure the accuracy of GMM during training, per display step?
Thank you
When training GMM, the current implementation already writes the loss and intermediate results into a TensorBoard event file. You can check the loss at every step using this file.
I am trying to estimate SSIM to evaluate GMM, comparing the generated warped clothes against the ground-truth image parse (cloth on the wearer). Can you please tell me whether I can use the following to evaluate SSIM, or is there another way you calculated SSIM? Could you share the code if so? I would also like to know how to calculate IoU for the GMM model. Thank you
We didn't calculate this as intermediate output at each training step. After training finishes, we generate test results where the in-shop cloth and the cloth on the reference image are the same cloth. Then you can use any Python library to calculate SSIM(warped cloth, cloth extracted from the model image) or IoU(warped cloth mask, cloth mask extracted from the segmentation ground truth), and take the mean over all test images.
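The IoU half of the evaluation described above can be sketched in a few lines of NumPy (the helper `cloth_iou` is hypothetical, not shipped with the repo; SSIM would come from a library such as scikit-image):

```python
import numpy as np

def cloth_iou(warped_mask, gt_mask):
    """IoU between a warped cloth mask and the ground-truth cloth mask
    (both binary arrays of the same shape)."""
    warped = warped_mask.astype(bool)
    gt = gt_mask.astype(bool)
    inter = np.logical_and(warped, gt).sum()
    union = np.logical_or(warped, gt).sum()
    return inter / union if union > 0 else 1.0

# Toy example: two overlapping 4x4 masks (8 pixels each, 4 shared).
a = np.zeros((4, 4)); a[:2, :] = 1   # top two rows
b = np.zeros((4, 4)); b[1:3, :] = 1  # middle two rows
print(cloth_iou(a, b))  # 4 / 12 = 0.333...
```

In the real evaluation you would compute this per test image and average the results.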
I found good material for understanding what exactly thin-plate-spline deformation is; it explains TPS for 2D deformations.
Could you please explain what is analogous to the target points and source points we compute in this project, as described in: https://www.cse.wustl.edu/~taoju/cse554/lectures/lect10_Deformation2.pdf
Thank you
These are the source points with grid size = 5 x 5 (in CP-VTON+, grid size = 5 x 5).
These are the source points with grid size = 3 x 4.
Then GMM tries to estimate the new location of each source point; for a 3 x 4 grid that is 12 target points, i.e. 2 x 12 = 24 regressed values.
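Generating such a regular grid of source points can be sketched as follows (the helper `source_grid` is made up for illustration; the (-1, 1) range matches the `grid_sample` convention mentioned earlier):

```python
import numpy as np

def source_grid(h, w):
    """Regular h x w grid of control points with coordinates in [-1, 1]."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                         indexing="ij")
    return np.stack([xs, ys], axis=-1).reshape(-1, 2)

pts = source_grid(3, 4)
print(pts.shape)  # (12, 2): 12 source points
print(pts.size)   # 24: the number of values the regression must predict
```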
To understand this better, you should read the class TPSGridGen (or a TPS implementation in NumPy or MATLAB) and run it step by step. Regards
Hi, can you please tell me why the grid size is 5 and what the 5 indicates? Also, in FeatureRegression the output_dim is 2*opt.grid_size**2, whereas generally it should be 6, right, because of the (2, 3) parameters in a TPS transform? Could you please elaborate on grid size and output_dim in FeatureRegression? Thank you