Closed jadhavsaurabh closed 5 years ago
Hope my question is clear, as I don't have much knowledge of machine learning in C++; I'm more comfortable with machine learning in Python.
There is no tool for this AFAIK. I am currently working on a .h5 -> dlib script, which mimics the dlib serialization in Python and thus generates the .dat file, as well as a .hpp with the network definition.
To go the other way, you would probably have to use h5py or a wrapper around it (Keras's H5Dict). You can see how Keras does it in _serialize_model (https://github.com/keras-team/keras/blob/master/keras/engine/saving.py).
You could either use some kind of visitor directly in C++ (I think dlib's net_to_xml does this), or copy the deserialization code to Python, mirror the dlib classes, load the .dat file in Python, and generate a .h5 file from that internal representation. You will be reading through Keras and dlib source code frequently if you try this approach, and I'm not sure it would actually work.
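To get a feel for the Keras side of such a conversion, here is a minimal, self-contained sketch of how you could walk an .h5 weights file with h5py. The file name and the layer/dataset names (`dense_1`, `kernel:0`, `bias:0`) are placeholders; a tiny stand-in file is created first so the script runs on its own, and the layout only loosely mimics what Keras actually writes:

```python
import h5py
import numpy as np

# Create a minimal stand-in .h5 file (layout loosely mimics Keras:
# one group per layer, datasets for kernel/bias).
with h5py.File("model.h5", "w") as f:
    g = f.create_group("dense_1")
    g.create_dataset("kernel:0", data=np.zeros((4, 2), dtype="float32"))
    g.create_dataset("bias:0", data=np.zeros((2,), dtype="float32"))

def list_weights(path):
    """Walk every dataset in the file, collecting (path, shape) pairs.
    This is the kind of traversal you'd use to mirror _serialize_model."""
    names = []
    with h5py.File(path, "r") as f:
        f.visititems(
            lambda name, obj: names.append((name, obj.shape))
            if isinstance(obj, h5py.Dataset) else None
        )
    return names

for name, shape in list_weights("model.h5"):
    print(name, shape)
```

Note that `visititems` traverses in HDF5's default (alphabetical) order, not insertion order, which matters if you need to reconstruct a serialization sequence.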
OK, I will try. Thanks, btw.
It might be more difficult going from dlib to Keras, because with dlib you need to know the order in which the layers were serialized, which requires the C++ network type (e.g. using a visitor on your network). So I could imagine it is easier to write the dlib2keras conversion in C++ with an HDF5 C/C++ library, while keras2dlib is probably easier to do in Python with h5py or the Keras wrapper H5Dict (this is just my assumption).
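Since the serialization order is the crux here, the Python-side h5py mechanics of such a bridge might look like the sketch below: given an ordered list of (group, name, array) tuples (the order you would recover from a dlib visitor), write them into an .h5 file and read them back in that same order. All names are hypothetical, and this is not the real Keras weight format (Keras also stores `layer_names`/`weight_names` attributes, among other things); it only illustrates preserving order, which HDF5 does not do for you by default:

```python
import h5py
import numpy as np

def write_ordered(path, tensors):
    """tensors: ordered list of (group_name, dataset_name, ndarray)."""
    with h5py.File(path, "w") as f:
        order = []
        for group, name, arr in tensors:
            f.require_group(group).create_dataset(name, data=arr)
            order.append(f"{group}/{name}")
        # Record the serialization order explicitly in an attribute,
        # since HDF5 iteration order is alphabetical by default.
        f.attrs["weight_order"] = order

def read_ordered(path):
    """Yield (path, array) pairs in the recorded serialization order."""
    out = []
    with h5py.File(path, "r") as f:
        for name in f.attrs["weight_order"]:
            # h5py may return attribute strings as bytes on older versions.
            name = name if isinstance(name, str) else name.decode()
            out.append((name, f[name][()]))
    return out

# Example: two tensors for a hypothetical conv layer.
tensors = [
    ("conv_1", "kernel:0", np.ones((3, 3, 1, 8), dtype="float32")),
    ("conv_1", "bias:0", np.zeros((8,), dtype="float32")),
]
write_ordered("bridge.h5", tensors)
for name, arr in read_ordered("bridge.h5"):
    print(name, arr.shape)
```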
Instead, I should go for fine-tuning in C++.
@jadhavsaurabh This may help you - https://github.com/ksachdeva/dlib-to-tf-keras-converter
@ksachdeva Thank you so much, sir. This helped me a lot through my project!
Warning: this issue has been inactive for 30 days and will be automatically closed on 2019-04-07 if there is no further activity.
If you are waiting for a response but haven't received one it's possible your question is somehow inappropriate. E.g. it is off topic, you didn't follow the issue submission instructions, or your question is easily answerable by reading the FAQ, dlib's official compilation instructions, dlib's API documentation, or a Google search.
Notice: this issue has been closed because it has been inactive for 45 days. You may reopen this issue if it has been closed in error.
Do you have the Keras-to-dlib model converter now?
Well, there never was a finished keras2dlib converter; it was just an experimental tool. I honestly don't even remember now how far it got, or whether it fully supported CNNs.
I found my old code, but it's not polished and, IIRC, it unfortunately doesn't work for CNNs (I had some kind of issue with the layer parameters, maybe). I don't even know if my approach is any good. I slapped the same license dlib uses on it and uploaded it to GitHub; if it helps you in any way, cool. Currently I'm occupied with other things, so I haven't spent any time on it during the past 7 months.
Can anyone explain how to use the pre-trained model 'dlib_face_recognition_resnet_model_v1.dat' in Keras or TensorFlow? Or how to convert its weights and biases to an .h5 file (e.g. with h5py) so the model can be further fine-tuned in Keras?