Rutvik21 closed this issue 4 years ago
I only have images and points saved in .pts files. I do not have a CSV file or a JSON file. So how do I use them?
Also, I have manually annotated all the images with bounding boxes, so I have separate XML files for the boxes.
Hey, I am currently trying to make a TFRecord using the files I mentioned above, and I am generating JSON files for the points. But I have a doubt. I have read this script: https://github.com/yinguobing/image_utility/blob/master/extract_face_from_ibug.py There is one part I do not understand: after extracting the face from the image, we resize it to the target size, but we are still using the box coordinates of the original extracted face. So how will those coordinates match the resized image after calling the get_valid_points method, since the valid_box coordinates we pass were computed before resizing?
Thank you.
Hi Rutvik21, in the function get_valid_points, the locations of the points are normalized.
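To illustrate the idea, here is a minimal sketch (using made-up point data, not code from the repo) of how landmark coordinates can be normalized to the crop box, so they become independent of the crop's pixel resolution:

```python
import numpy as np

def normalize_points(points, box):
    """Map absolute (x, y) landmark coordinates into the [0, 1] range
    relative to a crop box given as (left, top, right, bottom)."""
    left, top, right, bottom = box
    points = np.asarray(points, dtype=np.float64)
    width, height = right - left, bottom - top
    return (points - [left, top]) / [width, height]

# A point at the exact center of a 100x100 box maps to (0.5, 0.5),
# no matter what size the cropped image is later resized to.
box = (50, 50, 150, 150)
print(normalize_points([[100, 100]], box))  # -> [[0.5 0.5]]
```

Because the points are stored as fractions of the box, resizing the crop to any target size does not invalidate them.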
Yeah, that's right. But my question is: how are the normalized points converted back for display on the original image?
And please tell me how the landmark points are predicted. Is it regression, and if so, how is it calculated?
This repo shows how to use the trained model for prediction: https://github.com/yinguobing/head-pose-estimation
Yes, this project treats the landmark prediction as a regression problem.
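As a sketch of that regression view (assuming 68 points predicted as a flat vector of 136 normalized values, which matches this repo's setup; the function name here is made up): the network output is reshaped into (x, y) pairs and scaled back to the face crop box to draw on the original image.

```python
import numpy as np

def marks_to_image_coords(flat_output, box):
    """Reshape a flat 136-value regression output into 68 (x, y) pairs
    and map them from normalized [0, 1] space back onto the original
    image, given the face crop box (left, top, right, bottom)."""
    left, top, right, bottom = box
    marks = np.array(flat_output, dtype=np.float64).reshape(-1, 2)
    marks[:, 0] = marks[:, 0] * (right - left) + left
    marks[:, 1] = marks[:, 1] * (bottom - top) + top
    return marks

# A normalized point (0.5, 0.5) inside a box at (50, 50)-(150, 150)
# lands at (100, 100) on the original image.
fake_output = np.full(136, 0.5)
print(marks_to_image_coords(fake_output, (50, 50, 150, 150))[0])  # -> [100. 100.]
```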
Hi, I have read your script: here
Can you please tell me why you divide the points by 4.5 at line number 20?
That's for scaling. You can change it and see what happens.
Okay, I will try it. Thanks!
Okay, thanks for the help you have provided. This will probably be my last question: I am currently working on head pose estimation, and I have used your head pose estimation script plus one other script to get the angles on the X, Y, and Z axes. My question is: for the 68 landmarks, where did you get the model points (the 3D points)? And how can I get such points for any other set of landmarks?
Thanks.
You are welcome!
The 3D facial model points are not used in this project. I believe you are referring to the head-pose-estimation project, in which you can find the model's source in the README.md file.
It's from another open source project, you can find it here: https://github.com/TadasBaltrusaitis/OpenFace/blob/master/lib/local/LandmarkDetector/model/pdms/In-the-wild_aligned_PDM_68.txt
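For reference, here is a minimal sketch of loading such a model file into 68 (x, y, z) rows. This assumes the file is a plain-text list of 204 numbers stacked as all X values, then all Y values, then all Z values, which is how OpenFace PDM text files are commonly laid out; check the actual file before relying on this.

```python
import numpy as np

def load_3d_model(path, n_points=68):
    """Read a plain-text file of 3 * n_points numbers and return an
    (n_points, 3) array of (x, y, z) model coordinates.
    Assumes the values are stacked as all X, then all Y, then all Z."""
    values = np.loadtxt(path)
    assert values.size == 3 * n_points, "unexpected number of values"
    return values.reshape(3, n_points).T

# An (n, 3) array like this is the objectPoints argument that
# cv2.solvePnP expects when estimating head pose from 2D landmarks.
```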
Okay. Again, thank you so much. And if you know how to get these 3D points for custom landmarks other than these 68, please let me know. Bye.
One more question: I have trained the model, and it is saved as a .pb file. Now, when I try to make a frozen graph from it with the arguments specified in the README.md file, I get an error saying that logits is not in the graph. Do you know what the problem could be?
Try final_dense. It seems logits is no longer a valid name for the output node.
I have already tried that, but it gives the same error.
Can you locate the output node in Tensorboard?
How do I do that?
Also, I am attaching the events file for reference: drive link for file
Update: I am able to execute the script with the output node name layer6/final_dense. Thanks. Closing this issue.
Just want to add one more thing: the arguments to run demo.py may have changed, as I am able to execute it using the following command:

!python3 demo.py \
    --model frozen_graph.pb \
    --input_node butterfly/encoded_image_string_tensor \
    --output_node butterfly/layer6/final_dense \
    --image img_15.jpg
But I still get an error: ValueError: Cannot feed value of shape (1, 128, 128, 3) for Tensor 'butterfly/encoded_image_string_tensor:0', which has shape '(?,)'
What should the input node be? All available nodes are as follows: butterfly/encoded_image_string_tensor butterfly/map/Shape butterfly/map/strided_slice/stack butterfly/map/strided_slice/stack_1 butterfly/map/strided_slice/stack_2 butterfly/map/strided_slice butterfly/map/TensorArray butterfly/map/TensorArrayUnstack/Shape butterfly/map/TensorArrayUnstack/strided_slice/stack butterfly/map/TensorArrayUnstack/strided_slice/stack_1 butterfly/map/TensorArrayUnstack/strided_slice/stack_2 butterfly/map/TensorArrayUnstack/strided_slice butterfly/map/TensorArrayUnstack/range/start butterfly/map/TensorArrayUnstack/range/delta butterfly/map/TensorArrayUnstack/range butterfly/map/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3 butterfly/map/Const butterfly/map/TensorArray_1 butterfly/map/while/iteration_counter butterfly/map/while/Enter butterfly/map/while/Enter_1 butterfly/map/while/Enter_2 butterfly/map/while/Merge butterfly/map/while/Merge_1 butterfly/map/while/Merge_2 butterfly/map/while/Less/Enter butterfly/map/while/Less butterfly/map/while/Less_1 butterfly/map/while/LogicalAnd butterfly/map/while/LoopCond butterfly/map/while/Switch butterfly/map/while/Switch_1 butterfly/map/while/Switch_2 butterfly/map/while/Identity butterfly/map/while/Identity_1 butterfly/map/while/Identity_2 butterfly/map/while/add/y butterfly/map/while/add butterfly/map/while/TensorArrayReadV3/Enter butterfly/map/while/TensorArrayReadV3/Enter_1 butterfly/map/while/TensorArrayReadV3 butterfly/map/while/DecodeJpeg butterfly/map/while/resize/ExpandDims/dim butterfly/map/while/resize/ExpandDims butterfly/map/while/resize/size butterfly/map/while/resize/ResizeBilinear butterfly/map/while/resize/Squeeze butterfly/map/while/TensorArrayWrite/TensorArrayWriteV3/Enter butterfly/map/while/TensorArrayWrite/TensorArrayWriteV3 butterfly/map/while/add_1/y butterfly/map/while/add_1 butterfly/map/while/NextIteration butterfly/map/while/NextIteration_1 butterfly/map/while/NextIteration_2
butterfly/map/while/Exit_2 butterfly/map/TensorArrayStack/TensorArraySizeV3 butterfly/map/TensorArrayStack/range/start butterfly/map/TensorArrayStack/range/delta butterfly/map/TensorArrayStack/range butterfly/map/TensorArrayStack/TensorArrayGatherV3 butterfly/layer1/conv2d/kernel butterfly/layer1/conv2d/kernel/read butterfly/layer1/conv2d/bias butterfly/layer1/conv2d/bias/read butterfly/layer1/conv2d/Conv2D butterfly/layer1/conv2d/BiasAdd butterfly/layer1/conv2d/Relu butterfly/layer1/max_pooling2d/MaxPool butterfly/layer2/conv2d/kernel butterfly/layer2/conv2d/kernel/read butterfly/layer2/conv2d/bias butterfly/layer2/conv2d/bias/read butterfly/layer2/conv2d/Conv2D butterfly/layer2/conv2d/BiasAdd butterfly/layer2/conv2d/Relu butterfly/layer2/conv2d_1/kernel butterfly/layer2/conv2d_1/kernel/read butterfly/layer2/conv2d_1/bias butterfly/layer2/conv2d_1/bias/read butterfly/layer2/conv2d_1/Conv2D butterfly/layer2/conv2d_1/BiasAdd butterfly/layer2/conv2d_1/Relu butterfly/layer2/max_pooling2d/MaxPool butterfly/layer3/conv2d/kernel butterfly/layer3/conv2d/kernel/read butterfly/layer3/conv2d/bias butterfly/layer3/conv2d/bias/read butterfly/layer3/conv2d/Conv2D butterfly/layer3/conv2d/BiasAdd butterfly/layer3/conv2d/Relu butterfly/layer3/conv2d_1/kernel butterfly/layer3/conv2d_1/kernel/read butterfly/layer3/conv2d_1/bias butterfly/layer3/conv2d_1/bias/read butterfly/layer3/conv2d_1/Conv2D butterfly/layer3/conv2d_1/BiasAdd butterfly/layer3/conv2d_1/Relu butterfly/layer3/max_pooling2d/MaxPool butterfly/layer4/conv2d/kernel butterfly/layer4/conv2d/kernel/read butterfly/layer4/conv2d/bias butterfly/layer4/conv2d/bias/read butterfly/layer4/conv2d/Conv2D butterfly/layer4/conv2d/BiasAdd butterfly/layer4/conv2d/Relu butterfly/layer4/conv2d_1/kernel butterfly/layer4/conv2d_1/kernel/read butterfly/layer4/conv2d_1/bias butterfly/layer4/conv2d_1/bias/read butterfly/layer4/conv2d_1/Conv2D butterfly/layer4/conv2d_1/BiasAdd butterfly/layer4/conv2d_1/Relu 
butterfly/layer4/max_pooling2d/MaxPool butterfly/layer5/conv2d/kernel butterfly/layer5/conv2d/kernel/read butterfly/layer5/conv2d/bias butterfly/layer5/conv2d/bias/read butterfly/layer5/conv2d/Conv2D butterfly/layer5/conv2d/BiasAdd butterfly/layer5/conv2d/Relu butterfly/layer6/flatten/Shape butterfly/layer6/flatten/strided_slice/stack butterfly/layer6/flatten/strided_slice/stack_1 butterfly/layer6/flatten/strided_slice/stack_2 butterfly/layer6/flatten/strided_slice butterfly/layer6/flatten/Reshape/shape/1 butterfly/layer6/flatten/Reshape/shape butterfly/layer6/flatten/Reshape butterfly/layer6/dense/kernel butterfly/layer6/dense/kernel/read butterfly/layer6/dense/bias butterfly/layer6/dense/bias/read butterfly/layer6/dense/MatMul butterfly/layer6/dense/BiasAdd butterfly/layer6/dense/Relu butterfly/layer6/logits/kernel butterfly/layer6/logits/kernel/read butterfly/layer6/logits/bias butterfly/layer6/logits/bias/read butterfly/layer6/logits/MatMul butterfly/layer6/logits/BiasAdd butterfly/layer6/final_dense
It seems you have missed the --raw_input argument while exporting the model.
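The two export modes expect different feeds. Judging from the error above, without --raw_input the encoded_image_string_tensor node takes a 1-D batch of serialized JPEG byte strings of shape (?,), whereas a raw-input export takes a decoded image batch. A rough NumPy-only sketch of the two shapes (the byte string here is a stand-in, not real JPEG data):

```python
import numpy as np

# Raw-input export: feed a decoded image batch of shape (1, 128, 128, 3).
raw_batch = np.zeros((1, 128, 128, 3), dtype=np.uint8)

# Default export: feed a batch of encoded JPEG byte strings, shape (?,).
# (fake_jpeg_bytes stands in for the contents of an actual .jpg file.)
fake_jpeg_bytes = b"\xff\xd8\xff\xe0...jpeg data..."
encoded_batch = np.array([fake_jpeg_bytes], dtype=object)

print(raw_batch.shape)      # (1, 128, 128, 3)
print(encoded_batch.shape)  # (1,)
```

Feeding the first shape into a graph exported for the second is exactly what produces the ValueError above.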
Okay, that was the problem. Got it. Thanks. Now I can execute it successfully.
Hi, I want to train using my own dataset. I have images and .pts files for it. How do I convert these files into a TFRecord?
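As a starting point, here is a minimal sketch of parsing an IBUG-style .pts file (the usual layout: a version line, an n_points line, then x y pairs between braces). The function name is a placeholder; the resulting point list is what you would then normalize to the face box and serialize into a TFRecord, e.g. via tf.train.Example.

```python
def read_pts(path):
    """Parse an IBUG-style .pts annotation file and return a list of
    (x, y) landmark tuples. Expected layout:
        version: 1
        n_points: 68
        {
        x1 y1
        ...
        }
    """
    points = []
    in_body = False
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line == "{":
                in_body = True
            elif line == "}":
                break
            elif in_body and line:
                x, y = map(float, line.split())
                points.append((x, y))
    return points
```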