hosnasattar opened 3 years ago
So I managed to run the inference code after several modifications:

1. I had to move `inference.py` from the `fashionpedia` folder to the `detection` folder; otherwise it tried to import `mode_keys` from `dataloader` relative to the `fashionpedia` folder and could not find it.
2. I had to add `__init__.py` to the `projects` and `fashionpedia` folders to make them packages, so that the line `from projects.fashionpedia.configs import factory as config_factory` works.
3. Somewhere in the code the parameter `params.eval.skip_eval_loss` was not set, so in `detection/modeling/base_model.py` (around line 170) I had to change

```python
self._skip_eval_loss = params.eval.skip_eval_loss
```

to

```python
try:
    self._skip_eval_loss = params.eval.skip_eval_loss
except KeyError:
    self._skip_eval_loss = False
```
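The fallback works because the repo's `ParamsDict` raises `KeyError` (not `AttributeError`) for missing keys. A minimal, self-contained sketch of the same pattern, using a hypothetical stand-in for the real `ParamsDict` class:

```python
class ParamsDict:
    """Hypothetical stand-in for tpu/models/hyperparameters/params_dict.py."""

    def __init__(self, **kwargs):
        self._params = dict(kwargs)

    def __getattr__(self, k):
        # Mimics the real class: missing keys raise KeyError.
        try:
            return self._params[k]
        except KeyError:
            raise KeyError('The key `{}` does not exist.'.format(k))


eval_params = ParamsDict(batch_size=8)  # note: no skip_eval_loss key

# The patched lookup falls back to False when the key is absent.
try:
    skip_eval_loss = eval_params.skip_eval_loss
except KeyError:
    skip_eval_loss = False

print(skip_eval_loss)  # False
```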
Hi. I just want to know what your final command line to run this was... I have something like this:

```shell
export PYTHONPATH="$PYTHONPATH:./tpu/models/official/detection/dataloader/"
export PYTHONPATH="$PYTHONPATH:./tpu/models/official/efficientnet/"
export PYTHONPATH="$PYTHONPATH:./tpu/models/hyperparameters/"

python inference_fashionpedia.py \
  --model="attribute_mask_rcnn" \
  --image_size=${IMAGE_SIZE?} \
  --checkpoint_path="/fashionpedia-spinenet-143/model.ckpt" \
  --label_map_file="tpu/models/official/detection/projects/fashionpedia/dataset/fashionpedia_label_map.csv" \
  --image_file_pattern="input.jpg" \
  --output_html="${OUTPUT_HTML?}" \
  --max_boxes_to_draw=8 \
  --min_score_threshold=0.05 \
  --config_file="/tpu/models/official/detection/projects/fashionpedia/configs/yaml/spinenet143_amrcnn.yaml" \
  --output_file="output.npy"
```
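If the `PYTHONPATH` exports don't take effect in your shell, the same three paths can be added programmatically at the top of the script instead. A sketch, assuming the `tpu` repo is cloned under the current working directory (adjust `REPO_ROOT` if not):

```python
import os
import sys

# Hypothetical clone location; mirrors the three PYTHONPATH exports above.
REPO_ROOT = os.path.abspath("./tpu")

for rel in ("models/official/detection/dataloader",
            "models/official/efficientnet",
            "models/hyperparameters"):
    path = os.path.join(REPO_ROOT, rel)
    if path not in sys.path:
        sys.path.insert(0, path)
```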
You need to clone the whole repo; it basically works like the object detection code.
Thanks for your quick reply.
From which directory do you run this command? I received an error like: `tensorflow.python.framework.errors_impl.NotFoundError: ; No such file or directory`
You need to go to `tpu/models/official/detection/projects/fashionpedia/`. You should also do what I wrote as steps above; otherwise, directly cloning the repo will not work. Also look into `tpu/models/official/detection/` and install all of its requirements.
Finally I made it... It was so painful... Many thanks for your help!
Hi, I'm a student who wants to train on Fashionpedia with Attribute-Mask R-CNN. I want to know how to convert the Fashionpedia data to TFRecord files...
OK, I'm going to throw my 2c in for getting this to run. I followed the instructions by @hosnasattar and they worked decently. The `PYTHONPATH` approach didn't work for me, so I added this to the top of `inference.py`:

```python
import site
site.addsitedir('full_path_to/tpu/models')
```
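`site.addsitedir` appends a directory to `sys.path` (and also processes any `.pth` files in it), which makes modules in that directory importable. A self-contained sketch using a throwaway directory and a made-up module name:

```python
import os
import site
import sys
import tempfile

# Create a temporary directory containing a dummy module.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "demo_module.py"), "w") as f:
    f.write("ANSWER = 42\n")

# Same mechanism as site.addsitedir('full_path_to/tpu/models'):
# the directory is appended to sys.path.
site.addsitedir(tmp)

import demo_module
print(demo_module.ANSWER)  # 42
```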
Then I needed to update an import in `efficientnet_builder.py`:

```python
from official.efficientnet import efficientnet_model
```

and in `efficientnet_model.py`:

```python
from official.efficientnet.condconv import condconv_layers
```

Finally, I had to update `models/hyperparameters/params_dict.py` to use `yaml.safe_load` instead of `yaml.load` in two places (near lines 414 and 419).

Lastly, I was running everything from the `detection` folder.
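For context on the `yaml.safe_load` change: newer PyYAML versions require an explicit `Loader` argument to `yaml.load`, and `safe_load` is the drop-in replacement when the input is plain YAML (as these config files are). A sketch, assuming PyYAML is installed; the config snippet below is made up:

```python
import yaml

# Hypothetical fragment in the style of the repo's YAML configs.
config_text = """
eval:
  skip_eval_loss: false
train:
  batch_size: 8
"""

# yaml.load(config_text) warns or fails on modern PyYAML without a
# Loader argument; safe_load parses plain YAML without constructing
# arbitrary Python objects.
params = yaml.safe_load(config_text)
print(params["eval"]["skip_eval_loss"])  # False
```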
Here is the commit with my changes: https://github.com/zoso95/tpu/commit/2befedd7b8ad58aa3e17710b27866bf328f1c630
Has anyone successfully managed to extract attributes? I'm able to extract the main categories, but I do not see the attributes.
Yeah, but you need to hack around a little bit. There should be an `'attributes'` entry in the returned dictionary that is an unlabeled vector. The categories are here: https://github.com/KMnP/fashionpedia-api/blob/master/data/demo/category_attributes_descriptions.json
Thanks @zoso95. I tried accessing the attributes from the output, but it's giving me an array of floats instead of ints.
@frederick0291 I think this is correct, actually. To the best of my knowledge, it's predicting the probability that that particular object has each attribute. So if you want a binary value, you'll need to pick a probability threshold and apply it.
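The thresholding step can be sketched like this; the attribute scores and the 0.5 cutoff are made up for illustration:

```python
# Hypothetical per-detection attribute scores, as in the 'attributes'
# entry of the inference output (probabilities in [0, 1]).
attribute_scores = [0.91, 0.03, 0.40, 0.75, 0.12]

THRESHOLD = 0.5  # a tunable choice, ideally picked on a validation set

# Binary attribute vector: 1 if the score clears the threshold.
binary_attributes = [1 if s >= THRESHOLD else 0 for s in attribute_scores]

# Indices of the attributes considered present.
present = [i for i, s in enumerate(attribute_scores) if s >= THRESHOLD]

print(binary_attributes)  # [1, 0, 0, 1, 0]
print(present)            # [0, 3]
```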
Thanks @zoso95, I see it now. I am detecting 294 possible attributes, but the JSON file has 341 total attribute labels. There are 46 main categories.

OK, I get it now: 47 labels were removed from the attributes, but the IDs were not renumbered, so it looks like there are 341.
@frederick0291 That file has both the categories and attributes. You want to look at just the attributes.
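One way to reconcile the contiguous prediction vector with the sparse attribute IDs is to build an index-to-attribute map from the JSON file. A sketch with made-up data, assuming (as the discussion above suggests) that position i of the output corresponds to the i-th surviving attribute in file order:

```python
# Hypothetical excerpt of the attributes list from
# category_attributes_descriptions.json: IDs are sparse because some
# labels were removed without renumbering.
attributes = [
    {"id": 0, "name": "asymmetric"},
    {"id": 2, "name": "peplum"},   # id 1 was removed
    {"id": 5, "name": "ruffle"},   # ids 3-4 were removed
]

# Map the model's contiguous output indices to attribute records.
index_to_attr = {i: a for i, a in enumerate(attributes)}

scores = [0.9, 0.1, 0.7]  # made-up prediction vector
for i, s in enumerate(scores):
    if s >= 0.5:
        a = index_to_attr[i]
        print(a["id"], a["name"], round(s, 2))
```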
@hosnasattar I want to run the inference code to segment garment parts, but I'm running into many difficulties.

You said "tpu/models/official/detection/projects/fashionpedia/inference_fashionpedia.py"; is that the same file as inference.py in the fashionpedia folder? In inference.py in that folder, `from dataloader import mode_keys` is on line 39 (not line 36).

I followed your directions (1. move the inference file from fashionpedia to the detection folder, 2. add `__init__.py` to the projects and fashionpedia folders, 3. change some code in base_model.py), set

```shell
export PYTHONPATH="$PYTHONPATH:./tpu/models/official/detection/dataloader/"
export PYTHONPATH="$PYTHONPATH:./tpu/models/official/efficientnet/"
export PYTHONPATH="$PYTHONPATH:./tpu/models/hyperparameters/"
```

and then ran the command below (using the inference.py originally in the fashionpedia folder) from tpu/models/official/detection (is that right?):

```shell
python inference.py \
  --model="attribute_mask_rcnn" \
  --image_size=${IMAGE_SIZE?} \
  --checkpoint_path="/fashionpedia-spinenet-143/model.ckpt" \
  --label_map_file="tpu/models/official/detection/projects/fashionpedia/dataset/fashionpedia_label_map.csv" \
  --image_file_pattern="input.jpg" \
  --output_html="${OUTPUT_HTML?}" \
  --max_boxes_to_draw=8 \
  --min_score_threshold=0.05 \
  --config_file="/tpu/models/official/detection/projects/fashionpedia/configs/yaml/spinenet143_amrcnn.yaml" \
  --output_file="output.npy"
```

The error is

```
ModuleNotFoundError: No module named 'hyperparameters'
```

and I haven't solved it yet.
You can check out my attempt at it. If you're using Docker, this should be easy to run: https://github.com/manuEbg/fashionpediaBenchmark/tree/getitrunning
I am a beginner and want to know how to apply this model's inference to custom images. I have already installed the Fashionpedia API, but I don't understand how to evaluate the model on custom images.
Has anyone managed to export this model as ONNX/tflite?
> u can check out my try on it. if you are using docker this should be easy to run https://github.com/manuEbg/fashionpediaBenchmark/tree/getitrunning
You saved my day ! Thank you !
> Anyone who successfully got to extract attributes? I'm able to extract the main categories but I do not see the attributes.
Still same issue
> Anyone who successfully got to extract attributes? I'm able to extract the main categories but I do not see the attributes.

> yeah, but you need to hack around a little bit. There should be an 'attributes' in the return dictionary that is an unlabeled vector. But the categories are here https://github.com/KMnP/fashionpedia-api/blob/master/data/demo/category_attributes_descriptions.json
Could you tell me the steps to solve this issue? I am only able to get the main categories; there is nothing about attributes in out.npy.
I am trying to run the inference code. If I run it from the detection folder I get

```
File "tpu/models/official/detection/projects/fashionpedia/inference_fashionpedia.py", line 36, in <module>
    from dataloader import mode_keys
ImportError: cannot import name mode_keys
```
and if I copy the code to the detection folder and then run it I get

```
File "detection/modeling/base_model.py", line 171, in __init__
    self._skip_eval_loss = params.eval.skip_eval_loss
File "/Users/hsattar/Desktop/tpu/models/hyperparameters/params_dict.py", line 120, in __getattr__
    raise KeyError('The key `{}` does not exist. '.format(k))
KeyError: 'The key `skip_eval_loss` does not exist. '
```

I am running the code as

```shell
python tpu/models/official/detection/projects/fashionpedia/inference_fashionpedia.py \
  --model="attribute_mask_rcnn" \
  --checkpoint_path="/fashionpedia-spinenet-143/model.ckpt" \
  --label_map_file="${LABEL_MAP_FILE?}" \
  --image_file_pattern="${IMAGE_FILE_PATTERN?}" \
  --output_html="${OUTPUT_HTML?}" \
  --max_boxes_to_draw=10 \
  --min_score_threshold=0.05 \
  --config_file="/Users/hsattar/Desktop/tpu/models/official/detection/projects/fashionpedia/configs/yaml/spinenet143_amrcnn.yaml"
```

where `LABEL_MAP_FILE` is `tpu/models/official/detection/projects/fashionpedia/dataset/fashionpedia_label_map.csv`.

Could you please let me know what the problem is here? I also tried importing the dataloader in Python to check that my Python path is working, and it worked fine. The detection inference code also works fine.