gitabcworld / MatchingNetworks

This repo provides PyTorch code that replicates the results of the Matching Networks for One Shot Learning paper on the Omniglot and MiniImageNet datasets.

Why are support set images fed into convolutional neural networks, and what role does the bi-LSTM play? #10

Open Hjwinging opened 1 year ago

Hjwinging commented 1 year ago

Thank you for sharing your code. I have a question about the Omniglot test: the support set images are encoded with a convolutional neural network, and then the target (query) set images are encoded with the same convolutional network. When I print the outputs at this point, the support set embeddings contain specific fractional values, while the target image embeddings are integer-like values such as 0, -1, 2, etc., with only zeros after the decimal point. The embedding vectors are then fed into the bi-LSTM, and I don't see what the bi-LSTM is doing there.
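For context, the bi-LSTM here corresponds to the "full context embedding" g(.) from the Matching Networks paper: it runs over the CNN embeddings of the whole support set so that each support embedding is refined in the context of the other support examples, with a residual connection back to the raw CNN embedding. Below is a minimal, hypothetical sketch of that idea (module and variable names are illustrative, not this repo's exact classes):

```python
import torch
import torch.nn as nn

class FullContextEmbedding(nn.Module):
    """Bi-LSTM over the support-set CNN embeddings (the g(.) FCE from the
    Matching Networks paper): each support embedding is conditioned on the
    whole support set, with a residual connection to the CNN output."""
    def __init__(self, embed_dim):
        super().__init__()
        # One bidirectional layer; hidden size = embed_dim per direction.
        self.lstm = nn.LSTM(embed_dim, embed_dim, num_layers=1,
                            bidirectional=True, batch_first=True)

    def forward(self, support_embeddings):
        # support_embeddings: (batch, n_support, embed_dim) from the CNN encoder
        outputs, _ = self.lstm(support_embeddings)
        # Split the concatenated forward / backward hidden states.
        forward_out, backward_out = outputs.chunk(2, dim=-1)
        # Residual sum, as in the paper: g(x_i, S) = h_fwd_i + h_bwd_i + g'(x_i)
        return forward_out + backward_out + support_embeddings


if __name__ == "__main__":
    # Hypothetical shapes: 5-way 1-shot support set, 64-d CNN embeddings.
    cnn_embeddings = torch.randn(1, 5, 64)
    fce = FullContextEmbedding(64)
    refined = fce(cnn_embeddings)  # same shape, now support-set-aware
    print(refined.shape)  # torch.Size([1, 5, 64])
```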
