Closed pfeducode closed 2 years ago
I wrote it by myself. You can refer to the following code:
This is my modified code. The result is shown below, but it seems different from yours. Is my result correct?
```python
import cv2
import numpy as np


def visualize_attention2(self, source_image, attention):
    # Move both tensors to CPU numpy; source_image is expected
    # in [0, 255] with shape (1, 3, H, W).
    amap = attention.detach().cpu().numpy()
    img = source_image.detach().cpu().numpy().copy()
    img = np.squeeze(img)            # (3, H, W)
    img = img.transpose((1, 2, 0))   # (H, W, 3)

    # Collapse the attention over all query positions into one 64x64 map.
    mask = amap.reshape(1, 4096, 64, 64)
    mask = np.mean(mask, axis=1)     # (1, 64, 64)
    mask = np.squeeze(mask, axis=0)  # (64, 64)

    # Min-max normalise to [0, 1], then upsample to the image resolution.
    mask = (mask - mask.min()) / (mask.max() - mask.min())
    mask = cv2.resize(mask, (img.shape[1], img.shape[0]))

    # Overlay a JET heatmap on the image and rescale back to uint8.
    img = np.float32(img) / 255
    heatmap = cv2.applyColorMap(np.uint8(255 * mask), cv2.COLORMAP_JET)
    heatmap = np.float32(heatmap) / 255
    cam = heatmap + np.float32(img)
    cam = cam / np.max(cam)
    img = np.uint8(255 * cam)
    return img
```
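The mask-reduction steps in the middle of the function can be checked on their own. Here is a minimal sketch with a dummy attention map (random data, hypothetical shapes; 4096 = 64 × 64 spatial positions), showing that averaging over the query axis and min-max normalising yields a single 64×64 map in [0, 1]:

```python
import numpy as np

# Dummy attention of shape (1, 4096, 4096): a 64x64 feature grid
# attending to itself. Purely illustrative data.
attention = np.random.rand(1, 4096, 4096).astype(np.float32)

# Reshape so the last two axes form the 64x64 spatial grid of keys.
mask = attention.reshape(1, 4096, 64, 64)

# Average over all query positions -> one saliency map per image.
mask = np.mean(mask, axis=1)       # (1, 64, 64)
mask = np.squeeze(mask, axis=0)    # (64, 64)

# Min-max normalise to [0, 1] so it can be fed to a colormap.
mask = (mask - mask.min()) / (mask.max() - mask.min())

print(mask.shape, float(mask.min()), float(mask.max()))
```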
If I use opencv-python to save the image, the result is as follows. If I use imageio.imsave to save the image, the result is as follows. As the number of iterations increases, the face disappears.
You should make sure that your input `source_image` ranges from 0 to 255. It seems that you put this code in generator.py, and the source_image in generator.py is normalised to [0, 1]. You should check that.
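A minimal sketch of that fix, assuming the generator outputs an image in [0, 1] (variable names hypothetical; a torch tensor would need `.detach().cpu().numpy()` first): rescale to [0, 255] before calling `visualize_attention2`, since the function itself divides by 255.

```python
import numpy as np

# Image as produced inside generator.py: float values in [0, 1].
source_image = np.random.rand(3, 64, 64).astype(np.float32)

# Fed as-is, the later `img / 255` step would crush it towards zero,
# so the heatmap dominates and the face fades out. Rescale first:
source_image_255 = source_image * 255.0

print(float(source_image_255.min()), float(source_image_255.max()))
```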
I have got the correct picture, thank you for your reply and guidance, and thank you again for your excellent work.
Is there any code for visualizing the attention mechanism? Or can you recommend a repository for reference? Thank you.