I am afraid not yet. NNoM only supports a single input currently, but a Siamese network has 2 inputs. If there is a strong need for multiple inputs, I can plan for it in the next version. Image processing is not the main focus of NNoM, but I am happy to discuss.
I am trying to do fingerprint comparison on a very low-cost MCU that has only 70KB of RAM. So far this Siamese network is the only one I have found with good accuracy, so I need to quantize the network parameters for that limited MCU. I have only one year of experience with neural networks; can you recommend a better network structure that is smaller or tolerates lower precision?
Fingerprint comparison sounds quite interesting. If you separate the whole Siamese network into 3 models, you should be able to run it with the current nnom.
Process the 2 images with the identical ConvNet model, then subtract their outputs manually, and pass the result to another NN for ranking.
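For illustration, a rough sketch of that split inference flow in Python (feature_model and ranking_model are placeholder names for the shared ConvNet and the ranking net):

```python
def siamese_score(feature_model, ranking_model, img1, img2):
    """Run the split Siamese: shared ConvNet twice, manual subtract, then rank."""
    feat1 = feature_model.predict(img1)   # e.g. shape (1, 7, 7, 32)
    feat2 = feature_model.predict(img2)
    diff = feat1 - feat2                  # the manual 'subtract' step
    return ranking_model.predict(diff)    # similarity score, e.g. shape (1, 1)
```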
Yes, that's the essence of a Siamese network. Thank you for drawing such a detailed picture; I'm going to try it. I have also seen your octave conv example, which is very enlightening, and I want to try it to see whether it can match or surpass the classic SIFT algorithm.
When we train the Siamese network, the inputs are img1 and img2, and the label is 1 or 0, marked manually in advance. If we split it into two models at the subtract layer and train them separately, I don't know how to set the labels for the first model?
One would train the Siamese network jointly, but use the 3 individual pieces when doing inference. You might want to prototype/test it in Python before translating it to C with nnom.
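A minimal Keras sketch of that joint training setup, for reference only (the layer sizes follow the summaries posted later in this thread; the activations, optimizer and loss are assumptions):

```python
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, Subtract

# Shared ConvNet branch, applied identically to both images.
branch_in = Input(shape=(30, 30, 1))
x = Conv2D(32, 3, padding='same', activation='relu')(branch_in)
x = MaxPooling2D()(x)
x = Conv2D(32, 3, padding='same', activation='relu')(x)
x = MaxPooling2D()(x)
branch = Model(branch_in, x)

# Two-input Siamese: subtract the shared features, then rank the difference.
img1, img2 = Input(shape=(30, 30, 1)), Input(shape=(30, 30, 1))
diff = Subtract()([branch(img1), branch(img2)])
y = Conv2D(32, 3, padding='same', activation='relu')(diff)
y = MaxPooling2D()(y)
y = Flatten()(y)
y = Dense(64, activation='relu')(y)
out = Dense(1, activation='sigmoid')(y)      # 1 = same finger, 0 = different

siamese = Model([img1, img2], out)
siamese.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# siamese.fit([img1_train, img2_train], labels, ...)
```

The whole thing is trained end-to-end with the (img1, img2) pairs and the 0/1 labels; the split into pieces only happens afterwards, for inference.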
You may try this method:
Build and train the model as you did in Keras.
After that, you can create 2 new models out of the one you trained: one for the image-processing part and the other for the ranking.
layer_model = Model(inputs=model.input, outputs=layer.output)
This is a reference for how a sub-model is created out of an existing model: https://github.com/majianjia/nnom/blob/47be90f5d905f78cc5fe58b0b54a7b9e1ba87be9/scripts/nnom.py#L505
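A rough sketch of that splitting for this particular Siamese (the layer names 'model_17', 'subtract_8', 'conv2d_24', etc. are taken from the summaries posted later in this thread and must be adapted to the actual model):

```python
from keras.models import Model
from keras.layers import Input

# 'siamese' is the trained two-input model.

# 1) Image-processing part: the shared ConvNet branch is already a nested
#    sub-model, so it can be pulled out directly.
feature_model = siamese.get_layer('model_17')

# 2) Ranking part: create a fresh Input where the Subtract output used to be,
#    then re-apply the trained layers that follow it (weights are shared).
diff_in = Input(shape=siamese.get_layer('subtract_8').output_shape[1:])
x = diff_in
for name in ['conv2d_24', 'max_pooling2d_24', 'flatten_6', 'dense_12', 'dense_13']:
    x = siamese.get_layer(name)(x)
ranking_model = Model(diff_in, x)
```

feature_model and ranking_model can then be converted to nnom one at a time, with the subtraction done manually between them at inference.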
This has been answered, and should be closed. CC @majianjia
I have a Siamese model and tried to use generate_model(model, x_val, name=weights), but I don't know how to transform my data into x_val. My model input is ([img1, img2], label), where img1 and img2 have shape (1, 30, 30, 1) and the label is [1, 0].
Model: "model_17"
Layer (type) Output Shape Param #
input_24 (InputLayer) (None, 30, 30, 1) 0
conv2d_22 (Conv2D) (None, 30, 30, 32) 320
max_pooling2d_22 (MaxPooling (None, 15, 15, 32) 0
conv2d_23 (Conv2D) (None, 15, 15, 32) 9248
max_pooling2d_23 (MaxPooling (None, 7, 7, 32) 0
Total params: 9,568 Trainable params: 9,568 Non-trainable params: 0
Model: "model_18"
Layer (type) Output Shape Param # Connected to
input_22 (InputLayer) (None, 30, 30, 1) 0
input_23 (InputLayer) (None, 30, 30, 1) 0
model_17 (Model) (None, 7, 7, 32) 9568 input_22[0][0]
input_23[0][0]
subtract_8 (Subtract) (None, 7, 7, 32) 0 model_17[1][0]
model_17[2][0]
conv2d_24 (Conv2D) (None, 7, 7, 32) 9248 subtract_8[0][0]
max_pooling2d_24 (MaxPooling2D) (None, 3, 3, 32) 0 conv2d_24[0][0]
flatten_6 (Flatten) (None, 288) 0 max_pooling2d_24[0][0]
dense_12 (Dense) (None, 64) 18496 flatten_6[0][0]
dense_13 (Dense) (None, 1) 65 dense_12[0][0]
Total params: 37,377 Trainable params: 37,377 Non-trainable params: 0
Cut off the '.txt' suffix to get the real file name: siamese-0.981.h5 (attached as siamese-0.981.h5.txt).
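Regarding how to prepare x_val: since nnom's generate_model expects a single-input model plus calibration data shaped like that model's input, one option (a rough sketch, assuming the Siamese has already been split into feature_model and ranking_model as described above, and that img1_val / img2_val are validation image arrays of shape (N, 30, 30, 1)) is to calibrate each piece separately:

```python
import numpy as np
from nnom import generate_model   # scripts/nnom.py must be on the Python path

# Calibration data for the shared ConvNet branch: the raw validation images.
x_val_feature = np.concatenate([img1_val, img2_val], axis=0)
generate_model(feature_model, x_val_feature, name='weights_feature.h')

# Calibration data for the ranking head: the feature differences, i.e. what
# the Subtract layer would have produced on the validation pairs.
x_val_ranking = feature_model.predict(img1_val) - feature_model.predict(img2_val)
generate_model(ranking_model, x_val_ranking, name='weights_ranking.h')
```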