sjtuplayer / anomalydiffusion

[AAAI 2024] AnomalyDiffusion: Few-Shot Anomaly Image Generation with Diffusion Model
MIT License

Anomaly Embedding ea #9

Closed · TerryMelody closed this issue 6 months ago

TerryMelody commented 6 months ago

Hello dear authors! I couldn't find the code for the Anomaly Embedding e_a, although I did find the implementation of the Spatial Embedding e_s in embedding_manager2.py. Could you please give me some hints? Thank you so much.

TerryMelody commented 6 months ago

Is this the related code part? But if you set self.spatial_encoder_and_embedding=True as in the example, masked_img contains no information from the original image. How can the Anomaly Embedding e_a be learned from the mask only?

    if self.spatial_encoder_and_embedding:
        tmp_mask = mask.unsqueeze(1)
        tmp_mask[tmp_mask < 0.5] = 0
        tmp_mask[tmp_mask >= 0.5] = 1
        if self.data_enhance is not None:
            if random.random() < 0.5:
                x, tmp_mask = self.data_enhance(x, tmp_mask)
        x = self.resize256(x)
        tmp_mask = self.resize256(tmp_mask)
        total_dict['mask'] = tmp_mask
        masked_img = (x + 1) / 2 * tmp_mask + torch.zeros_like(x) * (1 - tmp_mask)
        masked_img = tmp_mask.clone()
        total_dict['masked_img'] = masked_img
    elif self.spatial_encoder:
        tmp_mask = mask.unsqueeze(1)
        tmp_mask[tmp_mask < 0.5] = 0
        tmp_mask[tmp_mask >= 0.5] = 1
        masked_img = (x + 1) / 2 * tmp_mask + torch.zeros_like(x) * (1 - tmp_mask)
        total_dict['mask'] = tmp_mask
        total_dict['masked_img'] = masked_img
sjtuplayer commented 6 months ago

> Hello dear authors! I couldn't find the code for the Anomaly Embedding e_a, although I did find the implementation of the Spatial Embedding e_s in embedding_manager2.py. Could you please give me some hints? Thank you so much.

You can find e_a in Lines 112-113 of embedding_manager2.py; e_a is assigned according to the anomaly name.
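
For readers tracing the code, below is a minimal, hypothetical sketch of what "e_a is assigned by the anomaly name" can look like in a textual-inversion-style embedding manager; the class and attribute names are illustrative assumptions, not the actual code in embedding_manager2.py.

    import torch
    import torch.nn as nn

    # Hypothetical sketch (not the actual embedding_manager2.py code):
    # the anomaly embedding e_a is a learnable vector that is simply looked
    # up by the anomaly name, in the spirit of textual inversion.
    class AnomalyEmbeddingSketch(nn.Module):
        def __init__(self, anomaly_names, embed_dim=768):
            super().__init__()
            # one learnable token embedding per anomaly type, keyed by its name
            self.e_a = nn.ParameterDict({
                name: nn.Parameter(torch.randn(embed_dim) * 0.01)
                for name in anomaly_names
            })

        def forward(self, anomaly_name):
            # e_a is selected purely by the anomaly name (e.g. "crack", "hole")
            return self.e_a[anomaly_name]

    manager = AnomalyEmbeddingSketch(["crack", "hole", "scratch"])
    e_a = manager("crack")  # shape: (768,), later placed into the text conditioning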

sjtuplayer commented 6 months ago

> tmp_mask = mask.unsqueeze(1)

The real input is z in Line 758 of ddpm.py, which is the embedded training image.
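
To unpack what "the real input is z" means, here is a rough sketch of how a latent-diffusion training step typically obtains z from the image; the function name, the AutoencoderKL-style encode() call, and the scale factor are assumptions rather than the exact ddpm.py code.

    import torch

    # Rough sketch: the training image x (not the mask) is encoded by the
    # first-stage autoencoder into a latent z, and the diffusion model is
    # trained on this z; mask / masked_img only feed the spatial-embedding
    # branch. Assumes an AutoencoderKL-style first stage as in latent diffusion.
    @torch.no_grad()
    def encode_to_latent(first_stage_model, x, scale_factor=0.18215):
        # x: image batch in [-1, 1], shape (B, 3, H, W)
        posterior = first_stage_model.encode(x)   # returns a Gaussian posterior
        z = posterior.sample() * scale_factor     # z = the "embedded training image"
        return z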

TerryMelody commented 6 months ago

> Hello dear authors! I couldn't find the code for the Anomaly Embedding e_a, although I did find the implementation of the Spatial Embedding e_s in embedding_manager2.py. Could you please give me some hints? Thank you so much.
>
> You can find e_a in Lines 112-113 of embedding_manager2.py; e_a is assigned according to the anomaly name.

Thanks for your instant reply! I have another question: is it possible to turn the learned text embeddings, such as e_s and e_a, into natural text descriptions, or do they exist only in the latent space?

sjtuplayer commented 6 months ago

e_a and e_s are in the textual space. If you want to turn them into natural text descriptions, you may refer to related work on CLIP or simply search for the natural text descriptions nearest to the learned embeddings.
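
As a rough illustration of the nearest-neighbour idea, the sketch below looks up the CLIP vocabulary tokens closest to a learned embedding; the model name is an assumption and the random vector merely stands in for an actual learned e_a, so treat it as a starting point rather than a faithful decoding.

    import torch
    from transformers import CLIPTokenizer, CLIPTextModel

    # Hedged sketch: approximate a learned embedding (e_a or e_s) with natural
    # text by finding its nearest neighbours in the CLIP token-embedding table.
    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
    text_model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

    vocab_emb = text_model.get_input_embeddings().weight.detach()  # (vocab_size, 768)
    learned = torch.randn(vocab_emb.shape[1])                      # stand-in for a learned e_a

    sims = torch.nn.functional.cosine_similarity(vocab_emb, learned.unsqueeze(0), dim=-1)
    top = sims.topk(5).indices.tolist()
    print([tokenizer.decode([i]) for i in top])                    # closest vocabulary tokens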

TerryMelody commented 6 months ago

> e_a and e_s are in the textual space. If you want to turn them into natural text descriptions, you may refer to related work on CLIP or simply search for the natural text descriptions nearest to the learned embeddings.

Okay, thanks for your reply. Hope you get good results in your future works! =)