Thanks for your question, @WrathofBhuvan11.
The error occurs because Keras doesn't know which feature in the input dictionary to feed to the model. One solution is to name the input placeholder during model construction:
x_in = tf.keras.Input(shape=..., dtype=..., name='feature')
y_pred = ...
model = tf.keras.Model(inputs=x_in, outputs=y_pred)
With the name argument, Keras will use the corresponding feature in the input dictionary for follow-up computation.
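For reference, a minimal runnable sketch of this pattern (the input shape, layer, and data below are illustrative assumptions, not from the original code):

import tensorflow as tf

# Name the input so it matches the 'feature' key in the input dictionary.
x_in = tf.keras.Input(shape=(4,), dtype=tf.float32, name='feature')
y_pred = tf.keras.layers.Dense(1)(x_in)
model = tf.keras.Model(inputs=x_in, outputs=y_pred)
model.compile(optimizer='adam', loss='mse')

# Batches are dictionaries; Keras routes the 'feature' entry to x_in.
dataset = tf.data.Dataset.from_tensor_slices(
    ({'feature': tf.random.normal((8, 4))}, tf.random.normal((8, 1)))).batch(4)
model.fit(dataset, epochs=1)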
Thank you for your response. It worked perfectly. Thank you once again.
I have a quick question for you, sir, regarding NSL. In theory, how effective would NSL be for hierarchical multi-label classification? Would it address hierarchical structure better than regular DNN classifiers?
If we have information about the hierarchical structure of the labeling space, we should be able to leverage 'nearby' labels (for example, labels under the same parent class) to enhance the learning. But how to do this effectively remains an open research question. My intuition is that by leveraging the structure in the labeling space, the model should learn more effectively than regular DNN classifiers.
Let's use semantic segmentation as an example. Assume 'beach' and 'sea' are under the same parent category 'scenery', while 'people' and 'pets' are under another 'portrait' category. In this hierarchy, the distance between 'beach' and 'sea' should be shorter (since they share a parent class) than the distance between 'beach' and 'pet' (under different parent classes). The idea is similar to graph regularization, which leverages structure in the feature space; in this case, the structure is in the labeling space.
This means that during training we tell the model it's incorrect to segment 'beach' as 'sea', but not as wrong as segmenting 'beach' as 'pet'. Maybe the illustration in Fig. 1(a)(c) of this paper can provide some intuition for this explanation.
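To make the intuition concrete, here is a minimal sketch of such a hierarchy-aware loss. This is not an NSL feature; the four classes and the 0.5/1.0 cost values are illustrative assumptions:

import tensorflow as tf

# Assumed hierarchy: {beach, sea} under 'scenery', {people, pets} under 'portrait'.
# cost[i][j] = penalty for predicting class j when the true class is i;
# siblings (same parent) cost 0.5, classes under different parents cost 1.0.
cost = tf.constant([
    [0.0, 0.5, 1.0, 1.0],  # beach
    [0.5, 0.0, 1.0, 1.0],  # sea
    [1.0, 1.0, 0.0, 0.5],  # people
    [1.0, 1.0, 0.5, 0.0],  # pets
])

def hierarchy_aware_loss(y_true, y_pred):
  # y_true: integer class ids; y_pred: softmax probabilities over 4 classes.
  labels = tf.cast(tf.reshape(y_true, [-1]), tf.int32)
  # Expected misclassification cost sum_j p(j) * cost[label, j] under y_pred:
  # putting probability mass on a sibling is penalized less than on a distant class.
  expected_cost = tf.reduce_sum(y_pred * tf.gather(cost, labels), axis=-1)
  ce = tf.keras.losses.sparse_categorical_crossentropy(labels, y_pred)
  return ce + expected_cost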
Happy to discuss further.
Thank you for sharing your insights. I'll go through the paper.
Hello sir, I have one follow-up question.
Let's say I have 13 classes, and it's a hierarchical multi-label classification problem. If I have an embedding matrix of shape (13, 100), where 100 is the word-embedding dimension, how can I use this matrix to enhance a traditional CNN-based image classifier? Is there a way to use the Neural Structured Learning graph API to include this embedding matrix?
This Stack Overflow question explains it in depth:
https://stackoverflow.com/questions/65025374/how-to-use-word-embeddings-matrix-of-labels-output-to-get-better-results-for-hie
How to effectively leverage structured information in the labeling space remains an open research question. Some research works have been proposed toward this goal (one example work). Currently, NSL does not have direct support for using structured signals in the labeling space.
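That said, one common workaround outside NSL is to project the image features into the label-embedding space and score classes by similarity to the fixed (13, 100) label embeddings, in the spirit of DeViSE-style joint embedding models. A minimal sketch, where the CNN backbone and input size are illustrative assumptions:

import numpy as np
import tensorflow as tf

NUM_CLASSES, EMBED_DIM = 13, 100
# Placeholder for the precomputed (13, 100) label-embedding matrix.
label_embeddings = tf.math.l2_normalize(
    tf.constant(np.random.rand(NUM_CLASSES, EMBED_DIM), dtype=tf.float32), axis=-1)

inputs = tf.keras.Input(shape=(64, 64, 3), name='image')
x = tf.keras.layers.Conv2D(32, 3, activation='relu')(inputs)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(EMBED_DIM)(x)  # project image features into label space
x = tf.keras.layers.Lambda(lambda v: tf.math.l2_normalize(v, axis=-1))(x)

# Cosine similarity to each label embedding serves as the per-class logit.
logits = tf.keras.layers.Lambda(
    lambda v: tf.matmul(v, label_embeddings, transpose_b=True))(x)
outputs = tf.keras.layers.Activation('sigmoid')(logits)  # multi-label probabilities

model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam', loss='binary_crossentropy')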
Creating a new issue for this comment: https://github.com/tensorflow/neural-structured-learning/issues/73#issuecomment-731274617
Using fit_generator and NSL to train an image multi-label classifier
I have already split full_df into train_df and valid_df dataframes.
For the above functions nsl_train_generator and nsl_valid_generator, I referred to #3 (comment).
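(The generator bodies were not captured in this thread. For context, a minimal sketch of the pattern such generators typically follow, in which each batch is a dictionary carrying both features and labels as nsl.keras.AdversarialRegularization expects; the names, shapes, and model below are assumptions:)

import numpy as np
import tensorflow as tf
import neural_structured_learning as nsl

BATCH, NUM_CLASSES = 32, 13  # assumed values

def nsl_train_generator(images, labels):
  # AdversarialRegularization consumes dictionaries holding features and labels.
  while True:
    idx = np.random.choice(len(images), BATCH)
    yield {'feature': images[idx], 'label': labels[idx]}

base_model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3), name='feature'),
    tf.keras.layers.Conv2D(16, 3, activation='relu'),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation='sigmoid'),
])
adv_model = nsl.keras.AdversarialRegularization(
    base_model, label_keys=['label'],
    adv_config=nsl.configs.make_adv_reg_config(multiplier=0.2))
adv_model.compile(optimizer='adam', loss='binary_crossentropy')
# adv_model.fit(nsl_train_generator(train_images, train_labels),
#               steps_per_epoch=len(train_images) // BATCH, epochs=5)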
I'm getting this error,