Closed vladmandic closed 3 years ago
@vladmandic Thank you for reporting this issue. There is no good way to track the original signature to the optimized nodes. Have you tried the tfhub model conversion? I think the signature will be preserved that way.
@pyu10055
Currently, fewer than half of the models posted on https://tfhub.dev/ have JS links (e.g. none of EfficientNet or EfficientDet, or anything trained on OpenImages, ...), but they all have links to a saved model.
Or is there a way to trigger TFHub model conversion that I'm missing?
yes, you can convert the TFHub module directly using the converter.
tensorflowjs_converter \
--input_format=tf_hub \
'https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/1' \
/mobilenet/web_model
@pyu10055 I always considered that the same as downloading the saved_model and then running the TFJS converter on it. Just did a quick try and it produces a tfjs_graph_model that is identical (and has the same signature) - meaning no real signature.
tensorflowjs_converter --input_format tf_hub --signature_name serving_default https://tfhub.dev/tensorflow/efficientdet/d0/1 .
{
'Identity_5:0': { name: 'Identity_5:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, [length]: 1 ] } },
'Identity_7:0': { name: 'Identity_7:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '49104' }, { size: '90' }, [length]: 3 ] } },
'Identity_2:0': { name: 'Identity_2:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '100' }, [length]: 2 ] } },
'Identity_6:0': { name: 'Identity_6:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '49104' }, { size: '4' }, [length]: 3 ] } },
'Identity:0': { name: 'Identity:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '100' }, [length]: 2 ] } },
'Identity_4:0': { name: 'Identity_4:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '100' }, [length]: 2 ] } },
'Identity_1:0': { name: 'Identity_1:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '100' }, { size: '4' }, [length]: 3 ] } },
'Identity_3:0': { name: 'Identity_3:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '100' }, { size: '90' }, [length]: 3 ] } }
}
@vladmandic Thank you for trying, we will try to find a way that allows us to track the original signature name to the graph nodes after optimization. Will post any findings here.
@pyu10055
no need to track them during optimization - just re-map them correctly as a last step before writing model.json.
graph_model output nodes are named Identity_{index}, where index follows exactly the same order as the output nodes of the saved_model.
so in this example,
Identity:0 => detection_anchor_indices
Identity_1 => detection_boxes
Identity_2 => detection_classes
...
example saved_model output nodes:
outputs: {
detection_anchor_indices: { dtype: 'float32', name: 'StatefulPartitionedCall:0', shape: [Array] },
detection_boxes: { dtype: 'float32', name: 'StatefulPartitionedCall:1', shape: [Array] },
detection_classes: { dtype: 'float32', name: 'StatefulPartitionedCall:2', shape: [Array] },
detection_multiclass_scores: { dtype: 'float32', name: 'StatefulPartitionedCall:3', shape: [Array] },
detection_scores: { dtype: 'float32', name: 'StatefulPartitionedCall:4', shape: [Array] },
num_detections: { dtype: 'float32', name: 'StatefulPartitionedCall:5', shape: [Array] },
raw_detection_boxes: { dtype: 'float32', name: 'StatefulPartitionedCall:6', shape: [Array] },
raw_detection_scores: { dtype: 'float32', name: 'StatefulPartitionedCall:7', shape: [Array] }
}
example graph_model output nodes:
outputs: {
'Identity_6:0': { name: 'Identity_6:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '49104' }, { size: '4' }, [length]: 3 ] } },
'Identity_1:0': { name: 'Identity_1:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '100' }, { size: '4' }, [length]: 3 ] } },
'Identity_3:0': { name: 'Identity_3:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '100' }, { size: '90' }, [length]: 3 ] } },
'Identity_2:0': { name: 'Identity_2:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '100' }, [length]: 2 ] } },
'Identity_5:0': { name: 'Identity_5:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, [length]: 1 ] } },
'Identity_7:0': { name: 'Identity_7:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '49104' }, { size: '90' }, [length]: 3 ] } },
'Identity_4:0': { name: 'Identity_4:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '100' }, [length]: 2 ] } },
'Identity:0': { name: 'Identity:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '100' }, [length]: 2 ] } }
}
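Given that index-order correspondence, matching graph_model outputs back to saved_model names can be sketched in plain Node. This only assumes the converter keeps the Identity_{index} ordering observed above; the helper names are mine, not tfjs API:

```javascript
// Sketch: recover saved_model output names for tfjs graph_model outputs by
// index order. Assumes Identity_{i} follows the saved_model output order.

// saved_model outputs in their original order (from the listing above)
const savedOutputs = [
  'detection_anchor_indices', 'detection_boxes', 'detection_classes',
  'detection_multiclass_scores', 'detection_scores', 'num_detections',
  'raw_detection_boxes', 'raw_detection_scores',
];

// 'Identity:0' -> 0, 'Identity_1:0' -> 1, ...
function identityIndex(name) {
  const m = name.match(/^Identity(?:_(\d+))?(?::\d+)?$/);
  if (!m) return -1;
  return m[1] ? parseInt(m[1], 10) : 0;
}

function mapOutputs(graphOutputNames, savedNames) {
  const mapping = {};
  for (const n of graphOutputNames) {
    const i = identityIndex(n);
    if (i >= 0 && i < savedNames.length) mapping[n] = savedNames[i];
  }
  return mapping;
}

const mapping = mapOutputs(['Identity:0', 'Identity_1:0', 'Identity_5:0'], savedOutputs);
console.log(mapping);
// { 'Identity:0': 'detection_anchor_indices',
//   'Identity_1:0': 'detection_boxes',
//   'Identity_5:0': 'num_detections' }
```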
@vladmandic @pyu10055 any comments on this conversion for inputs? See issue https://github.com/tensorflow/tfjs/issues/4861
@rohanmuplara yup, it's the same issue and it's a really annoying one.
i work around it by manually editing the resulting model.json to include the correct output nodes, which i get from my script that analyzes the original saved model and the resulting graph model (some converted models are even worse and include no signature at all, and then you have to look at the executor workflow to see them):
https://github.com/vladmandic/node-detector-test/blob/main/signature.js
@vladmandic can you describe a little more what the executor versus signature stuff is? Also, can you explain how the code above edits the script in a way that is helpful? To me, it seems you are just iterating over the input and outputting it back out.
the signature is the signature section of model.json, but some models do not contain a properly filled-out signature section, so the next option is to look at the actual model execution via the executor.
yes, it's just iterating and outputting - but it allows matching of saved vs graph model input/output nodes. with a touch more work, it could rewrite the actual model.json signature section - i'll do that soon.
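A minimal sketch of what that model.json rewrite could look like, assuming the Identity index-order convention shown earlier; patchSignature is a hypothetical helper, not part of tfjs tooling:

```javascript
// Sketch: rewrite the signature.outputs section of a parsed model.json so
// entries are keyed by the original saved_model output names instead of
// Identity_{i}. patchSignature is a hypothetical helper, not tfjs API.

function patchSignature(modelJson, savedOutputNames) {
  const outputs = modelJson.signature && modelJson.signature.outputs;
  if (!outputs) return modelJson;
  const patched = {};
  for (const [key, info] of Object.entries(outputs)) {
    const m = key.match(/^Identity(?:_(\d+))?(?::\d+)?$/);
    const i = m ? (m[1] ? Number(m[1]) : 0) : -1;
    // keep the tensor info (including its real tensor name), but key the
    // entry by the logical name when the index is known
    patched[i >= 0 && savedOutputNames[i] ? savedOutputNames[i] : key] = info;
  }
  return { ...modelJson, signature: { ...modelJson.signature, outputs: patched } };
}

// usage (hypothetical paths):
// const fs = require('fs');
// const mj = JSON.parse(fs.readFileSync('model.json', 'utf8'));
// fs.writeFileSync('model.json', JSON.stringify(patchSignature(mj, names)));
```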
Thanks, makes sense. So you are saying you are going to manually copy the saved_model signature from the saved json files so the order stays the same. Another issue I had is that the input names in the signature are different from the names defined in the code, but the model still works when I pass in the old names. https://share.descript.com/view/OX82xb6lY7q
@rohanmuplara that's exactly why signature part exists - so tensor names can be associated with logical names without changing the model.
I have two follow up questions.
This is IMO, as there are no good docs :(

- an input must be a Placeholder op in the topology or it cannot be used
- predict() or execute() do not look at the signature
- the signature can be used to map tensors before/after inference
- i edit model.json according to the signature from the saved model after conversion

See this example:
{
"signature": {
"inputs": { "input_1:0": {"name":"input_1:0","dtype":"DT_FLOAT","tensorShape":{"dim":[{"size":"1"},{"size":"3"},{"size":"416"},{"size":"416"}]}}},
},
"modelTopology": {
"node": [
{"name":"input_1","op":"Placeholder","attr":{"shape":{"shape":{"dim":[{"size":"1"},{"size":"3"},{"size":"416"},{"size":"416"}]}},"dtype":{"type":"DT_FLOAT"}}},
]
}
}
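As a rough way to sanity-check that Placeholder requirement, one could scan a parsed model.json for signature inputs without a matching Placeholder node. This is a sketch (unusableInputs is a hypothetical helper; the model.json fragment mirrors the example above):

```javascript
// Sketch: flag signature inputs that have no matching Placeholder node in
// modelTopology - per the note above, such inputs cannot be fed to the model.

const modelJson = {
  signature: {
    inputs: { 'input_1:0': { name: 'input_1:0', dtype: 'DT_FLOAT' } },
  },
  modelTopology: {
    node: [{ name: 'input_1', op: 'Placeholder' }],
  },
};

function unusableInputs(mj) {
  const placeholders = new Set(
    mj.modelTopology.node.filter((n) => n.op === 'Placeholder').map((n) => n.name));
  return Object.values((mj.signature && mj.signature.inputs) || {})
    .map((i) => i.name.replace(/:\d+$/, '')) // strip the ':0' tensor suffix
    .filter((name) => !placeholders.has(name));
}

console.log(unusableInputs(modelJson)); // [] -> every input has a Placeholder
```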
I really wish the TFJS team fixed the tensorflowjs_converter tool...
Makes sense. thanks so much.
hey @vladmandic https://github.com/tensorflow/tfjs/issues/4861 - great suggestion by the tfjs team: if you add names to input layers, it seems to work. I also think most of these problems are actually on the regular TensorFlow serializer side.
@rohanmuplara good stuff, but doesn't help when i'm converting a pretrained model.
Models converted from saved_model to tfjs_graph_model lose output signature information. This is not specific to any single model - it's a generic converter issue.
a) Saved model from https://tfhub.dev/tensorflow/efficientdet/d0/1?tf-hub-format=compressed
b) Same model converted to TFJS Graph model using
tensorflowjs_converter --input_format=tf_saved_model --output_format=tfjs_graph_model . graph
This is cosmetic, as all outputs are still present, but it makes converted models extremely difficult to use.
Environment: Ubuntu 20.04 with NodeJS 14.11.0 and TFJS 2.4.0