mvasil / fashion-compatibility

Learning Type-Aware Embeddings for Fashion Compatibility
BSD 3-Clause "New" or "Revised" License

How to train a model that gets the same results as the pre-trained model #7

Closed DaPenggg closed 5 years ago

DaPenggg commented 5 years ago

@BryanPlummer Your pre-trained model provides a compatibility AUC of 0.88 and a fill-in-the-blank accuracy of 57.6. Was it trained with the following command?

    python main.py --name {your experiment name} --learned --l2_embed

But I can't reproduce this result. When I set embed_size=128, the result at the first epoch is: test set: Compat AUC: 0.87, FITB: 57.0.

Also, I have found that the image similarity loss looks wrong:

    # calculate image similarity loss on the general embedding
    disti_p = F.pairwise_distance(general_y, general_z, 2)
    disti_n1 = F.pairwise_distance(general_y, general_x, 2)
    disti_n2 = F.pairwise_distance(general_z, general_x, 2)
    loss_sim_i1 = self.criterion(disti_p, disti_n1, target)
    loss_sim_i2 = self.criterion(disti_p, disti_n2, target)

I think it should be like this, since samples y and z are of the same type:

    # calculate image similarity loss on the general embedding
    disti_p = F.pairwise_distance(general_y, general_z, 2)
    disti_n1 = F.pairwise_distance(general_y, general_x, 2)
    disti_n2 = F.pairwise_distance(general_z, general_x, 2)
    loss_sim_i1 = self.criterion(disti_n1, disti_p, target)
    loss_sim_i2 = self.criterion(disti_n2, disti_p, target)
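
For reference, a minimal sketch of why the argument order matters, assuming self.criterion is torch.nn.MarginRankingLoss (as set up in main.py) and target is a tensor of ones; the distance values below are made-up illustrations:

    import torch

    # MarginRankingLoss(x1, x2, y) = max(0, -y * (x1 - x2) + margin).
    # With y = 1 the loss is zero only when x1 >= x2 + margin,
    # i.e. it pushes x1 to be LARGER than x2.
    criterion = torch.nn.MarginRankingLoss(margin=0.2)

    dist_same_type = torch.tensor([0.1])   # e.g. dist(y, z): y and z share a type
    dist_cross_type = torch.tensor([0.9])  # e.g. dist(y, x): x has another type
    target = torch.ones(1)

    # original order: penalized unless dist(y, z) > dist(y, x) + margin,
    # i.e. same-type images get pushed APART in the general embedding
    print(criterion(dist_same_type, dist_cross_type, target))  # tensor(1.)

    # swapped order: zero loss, since dist(y, x) > dist(y, z) + margin,
    # i.e. same-type images get pulled together
    print(criterion(dist_cross_type, dist_same_type, target))  # tensor(0.)

Under that reading, the original argument order pushes the same-type pair (y, z) apart rather than together, which is what the swap proposed above would fix.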
DaPenggg commented 5 years ago

@mvasil