xwjabc / hed

A PyTorch reimplementation of Holistically-Nested Edge Detection

load pretrained model : pickle.UnpicklingError: invalid load key, '\x0a' #27

Open CoachingJane opened 3 years ago

CoachingJane commented 3 years ago

Traceback (most recent call last):
  File "hed.py", line 302, in <module>
    main()
  File "hed.py", line 156, in main
    load_vgg16_caffe(net, args.vgg16_caffe)
  File "D:\Application\Object_Detection\hed-master\utils.py", line 87, in load_vgg16_caffe
    load_pretrained_caffe(net, path, only_vgg=True)
  File "D:\Application\Object_Detection\hed-master\utils.py", line 94, in load_pretrained_caffe
    pretrained_params = pickle.load(f)
_pickle.UnpicklingError: invalid load key, '\x0a'.

xwjabc commented 3 years ago

Hi @CoachingJane, could you provide your Python version? I suspect this is a compatibility issue caused by the Python version. This repo runs well on Python 3.6; perhaps you can create a Python 3.6 environment with conda to run the code.
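As a quick diagnostic, one can check the file's first byte before unpickling (a hypothetical helper, not part of the repo's utils.py): pickle streams never begin with a newline (0x0a), so that load key usually means the file is not a Python pickle at all, or was corrupted or truncated during download.

```python
import pickle

def load_params(path):
    # Hypothetical diagnostic loader (not from the repo's utils.py).
    # "invalid load key, '\x0a'" means pickle.load hit a leading newline
    # byte: the file is likely not a Python pickle (e.g. a raw .caffemodel
    # protobuf) or was corrupted/truncated during download.
    with open(path, "rb") as f:
        if f.read(1) == b"\n":
            raise ValueError(path + " does not look like a pickle file")
        f.seek(0)
        return pickle.load(f)
```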

CoachingJane commented 3 years ago

> Hi @CoachingJane, could you provide your Python version? I suspect this is a compatibility issue caused by the Python version. This repo runs well on Python 3.6; perhaps you can create a Python 3.6 environment with conda to run the code.

My Python version is 3.7.7. I will retry with Python 3.6, thank you.

CoachingJane commented 3 years ago

> Hi @CoachingJane, could you provide your Python version? I suspect this is a compatibility issue caused by the Python version. This repo runs well on Python 3.6; perhaps you can create a Python 3.6 environment with conda to run the code.

I changed my Python version to 3.6.12 and then it worked. But when I run `(echo "data_dir = '../output-officialpretrain/test'"; cat eval_edge.m) | matlab -nodisplay -nodesktop -nosplash`, it just pops up a MATLAB Command Window with nothing in it (see screenshot). What should I do next?

xwjabc commented 3 years ago

Hi @CoachingJane, it seems you are evaluating the code on Windows, but the repo was originally evaluated on Ubuntu. Thus, I suspect the command `(echo "data_dir = '../output-officialpretrain/test'"; cat eval_edge.m) | matlab -nodisplay -nodesktop -nosplash` is not running correctly there.

However, it might be possible to run with MATLAB on Windows: modify `eval/eval_edge.m` by adding one line `data_dir = 'YOUR PATH/output-officialpretrain/test'` at the top of the file, then run it with MATLAB.

CoachingJane commented 3 years ago

> Hi @CoachingJane, it seems you are evaluating the code on Windows, but the repo was originally evaluated on Ubuntu. Thus, I suspect the command `(echo "data_dir = '../output-officialpretrain/test'"; cat eval_edge.m) | matlab -nodisplay -nodesktop -nosplash` is not running correctly there.
>
> However, it might be possible to run with MATLAB on Windows: modify `eval/eval_edge.m` by adding one line `data_dir = 'YOUR PATH/output-officialpretrain/test'` at the top of the file, then run it with MATLAB.

Thank you for your reply! It works! The code generates files like those in the screenshot, but I don't know what their contents represent. Could you tell me what they mean? Thank you!

xwjabc commented 3 years ago

Yes, those are the evaluated scores (see https://github.com/xwjabc/hed/blob/master/eval/edges/edgesEvalDir.m for details). Ideally, in your MATLAB output you should see something like ODS=... OIS=... AP=... R50=..., which report the evaluated performance.

CoachingJane commented 3 years ago

> Yes, those are the evaluated scores (see https://github.com/xwjabc/hed/blob/master/eval/edges/edgesEvalDir.m for details). Ideally, in your MATLAB output you should see something like ODS=... OIS=... AP=... R50=..., which report the evaluated performance.

Thank you. It also generates a picture (see screenshot). I don't understand what the thick red line, thin green lines, and green point mean. Could you tell me what they represent? Thank you!

xwjabc commented 3 years ago

@CoachingJane The figure shows a precision-recall curve (red). There are several online tutorials that explain the precision-recall curve well (e.g. link). The green curves are contour lines of the F-measure (i.e. the harmonic mean of precision and recall): every (recall, precision) point on the same green curve has the same F-measure.
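The relationship can be sketched numerically (plain Python, just to illustrate the definition, not code from the repo): F = 2PR / (P + R), and solving for P gives the iso-F contour P = F·R / (2R − F) that each green curve traces.

```python
def f_measure(precision, recall):
    # Harmonic mean of precision and recall; each green contour in the
    # plot connects all (recall, precision) pairs sharing one F value.
    return 2 * precision * recall / (precision + recall)

def precision_on_contour(f, recall):
    # Solve F = 2PR/(P+R) for P: a point on the iso-F (green) curve.
    return f * recall / (2 * recall - f)
```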

CoachingJane commented 3 years ago

Thank you so much!!

CoachingJane commented 3 years ago

It occurs again... and my Python version is still 3.6.12. I don't know why.

xwjabc commented 3 years ago

Hi @CoachingJane, you mean your problem was previously solved but the error appears again? Could you find any difference this time?

CoachingJane commented 3 years ago

Yes. I downloaded the code again and used the environments shown in the screenshots. Only the pretrained model `epoch-19-checkpoint.pt` is OK; both Caffe models give the error `_pickle.UnpicklingError: invalid load key, '\x0a'.`

CoachingJane commented 3 years ago

My CUDA version is 10.1. I also changed the environment to PyTorch 1.4 and Python 3.6.13, and it still raises the error.

xwjabc commented 3 years ago

Hi @CoachingJane, I just ran an evaluation myself and it works fine (my environment now: PyTorch 1.6.0 and Python 3.8.3). Could you show the size and SHA-256 checksum of the Caffe model `hed_pretrained_bsds.py36pickle` you used? (For the checksum, on a Linux shell you can simply run `sha256sum hed_pretrained_bsds.py36pickle`.) The size should be 58873022 bytes and the checksum should be 89c0d990879ed42f334f9a9a1ea80275fb9fd0ad54e1ad632312f4ff2c3a1f59.
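If `sha256sum` is unavailable (e.g. on Windows), both numbers can be computed with a small sketch like this (hypothetical helper, standard library only):

```python
import hashlib
import os

def file_fingerprint(path):
    # Return (size in bytes, SHA-256 hex digest), streaming in 1 MiB
    # chunks so a large model file is not read into memory at once.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return os.path.getsize(path), digest.hexdigest()
```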

CoachingJane commented 3 years ago

Well, I know why this happened. Actually, I used the pretrained model `5stage-vgg.caffemodel` that I downloaded from https://github.com/s9xie/hed — I got the wrong model. Now it's OK; sorry for wasting your time with that mistake. So if I want to train on my own dataset, I just use `5stage-vgg.py36pickle`, right?

xwjabc commented 3 years ago

I am glad that you found out the reason! Yes, you can use `5stage-vgg.py36pickle` as initial weights and then fine-tune HED on your dataset. You can also try `hed_pretrained_bsds.py36pickle` or other HED models (already pretrained on the BSDS dataset) and then fine-tune on your own dataset.

CoachingJane commented 3 years ago

I use the VGG pretrained model to train on the VOC obj-counter dataset, but the output (see screenshots) is all white or black... Do you know what could cause this?

CoachingJane commented 3 years ago

I set the initial lr to 1e-5 (from 1e-4), but the results still look a little strange (see screenshots). Or maybe I should just train for a while longer?

xwjabc commented 3 years ago

Hi @CoachingJane, I think you may try the following steps: (1) Train the model for a few more epochs and see if the output becomes meaningful. (2) Use `hed_pretrained_bsds.py36pickle` as initial weights and then fine-tune the model on your dataset. (3) If (2) still fails, you may need to check the loaded dataset (check the values of image and label and compare them with the BSDS dataset).
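For step (3), a minimal sketch of the kind of value check meant here (hypothetical helper, not from the repo): print a sample's statistics and compare them against a BSDS sample loaded through the same pipeline.

```python
import numpy as np

def summarize(name, arr):
    # Print min/max/mean so a sample from the custom dataset can be
    # compared against a BSDS sample loaded through the same pipeline
    # (e.g. labels should be binary 0/1, images in the expected range).
    arr = np.asarray(arr, dtype=np.float64)
    stats = (float(arr.min()), float(arr.max()), float(arr.mean()))
    print("%s: min=%.3f max=%.3f mean=%.3f" % ((name,) + stats))
    return stats
```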

CoachingJane commented 3 years ago

Well, I think I found the reason: the lr. I set the initial lr to 1e-6 (from 1e-4), and then it worked. Thank you!

xwjabc commented 3 years ago

No worries! I am happy that you solved this issue by yourself!

CoachingJane commented 3 years ago

Hello, do you know the function of this parameter (see screenshot)? In the paper Weakly Supervised Object Boundaries, the authors set it to 0.01, and that seems to improve the final score. Does this parameter have a standard value? Can it be changed? Thank you.

xwjabc commented 3 years ago

Hi @CoachingJane, `maxDist` stands for the "localization tolerance, which controls the maximum allowed distance in matches between predicted edges and ground truth" (quoted from the RCF paper). It can be changed depending on the dataset (see RCF).

CoachingJane commented 3 years ago

There is a question that confuses me a lot: when the code loads the edge ground truth, you do this op: `edge[edge < 127.5] = 0.0; edge[edge >= 127.5] = 1.0`. But I found that many pixel values were between 0 and 127.5, and you set them all to 0. Won't that affect the training result? Thank you.

xwjabc commented 3 years ago

Hi @CoachingJane, for this setting I simply follow the original HED's approach. Feel free to try utilizing the information in those intermediate pixel values; perhaps it may generate better results.
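For reference, a standalone sketch of the thresholding in question (NumPy, mirroring the op quoted above): grayscale annotations in [0, 255] are hard-binarized at 127.5, so weak-consensus pixels below the threshold become non-edges.

```python
import numpy as np

# Ground-truth edge map with grayscale values in [0, 255].
edge = np.array([0, 60, 127, 128, 200, 255], dtype=np.float32)

# The loader's hard binarization: below 127.5 -> 0 (non-edge),
# at or above 127.5 -> 1 (edge). Intermediate values are discarded.
edge[edge < 127.5] = 0.0
edge[edge >= 127.5] = 1.0
```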