CoachingJane opened this issue 3 years ago
Hi @CoachingJane, could you provide your Python version? I feel this could be a compatibility issue caused by the Python version. This repo runs well on Python 3.6, so perhaps you can create a Python 3.6 environment with conda to run the code.
My Python version is 3.7.7. I will retry with Python 3.6, thank you.
I changed my Python version to 3.6.12 and then it worked. But when I run (echo "data_dir = '../output-officialpretrain/test'"; cat eval_edge.m)|matlab -nodisplay -nodesktop -nosplash, it just pops up a MATLAB Command Window with nothing in it. What should I do next?
Hi @CoachingJane, it seems you are evaluating the code on the Windows platform, but this code repo was originally evaluated on Ubuntu. Thus, I feel the command
(echo "data_dir = '../output-officialpretrain/test'"; cat eval_edge.m)|matlab -nodisplay -nodesktop -nosplash
is not running correctly. However, it might be possible to run with MATLAB on Windows: you may need to modify eval/eval_edge.m by adding one line
data_dir = 'YOUR PATH/output-officialpretrain/test'
at the top of the eval_edge.m file and then use MATLAB to run it.
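If the shell pipe trick is awkward on Windows, here is a minimal Python sketch (not part of the repo) that does the same thing: it prepends the data_dir assignment to eval_edge.m and writes a new script you can open and run directly from the MATLAB GUI. The path and output file name are placeholders.

```python
# Minimal sketch (not part of the repo): mimic the shell pipeline above by
# prepending the data_dir assignment to eval_edge.m, producing a new script
# that can be opened and run directly from the MATLAB GUI on Windows.
# The path below is a placeholder -- change it to your own output folder.

data_dir = "D:/YOUR_PATH/output-officialpretrain/test"
src = "eval_edge.m"
dst = "eval_edge_win.m"  # new script to run in MATLAB

with open(src, "r") as f:
    body = f.read()

with open(dst, "w") as f:
    f.write("data_dir = '{}';\n".format(data_dir))  # same line the echo | cat trick adds
    f.write(body)

print("Wrote", dst, "- open it in MATLAB and run it.")
```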
Thank you for your reply! It works! The code generates several files, but I don't know what the contents of these files represent. Could you tell me what they mean? Thank you!
Yes, those are evaluated scores (see https://github.com/xwjabc/hed/blob/master/eval/edges/edgesEvalDir.m for details). Ideally, in the output of your MATLAB, you should observe something like ODS=... OIS=... AP=... R50=..., which is the evaluated performance.
Thank you, and it also generates a picture. I don't understand what the thick red line, the thin green lines, and the green point mean. Could you tell me what they mean? Thank you!
@CoachingJane The figure shows a precision-recall curve (red). There are several online tutorials that explain the precision-recall curve well (e.g. link). The green curves are the contour lines for the F-measure (i.e. the harmonic mean of precision and recall), which means that every (recall, precision) point on the same green curve has the same F-measure.
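If it helps to see where those green curves come from, here is a small stand-alone matplotlib sketch (illustration only, not the repo's plotting code) that draws iso-F-measure contours:

```python
import numpy as np
import matplotlib.pyplot as plt

# Every (recall, precision) point on one green curve shares the same
# F-measure F = 2 * P * R / (P + R), i.e. the curves are level sets of F.
r = np.linspace(0.01, 1.0, 200)
p = np.linspace(0.01, 1.0, 200)
R, P = np.meshgrid(r, p)
F = 2 * P * R / (P + R)

plt.contour(R, P, F, levels=np.arange(0.1, 1.0, 0.1), colors="green")
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.title("Iso-F-measure contours (the green curves in the eval figure)")
plt.show()
```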
Thank you so much!!
It occurs again... and my Python version is also 3.6.12... I don't know why.
Hi @CoachingJane, you mean that your problem was previously solved but now it appears again? Could you find any difference this time?
Yes, and I downloaded the code again and used the same envs. Only the pretrained model epoch-19-checkpoint.pt is OK; the caffe models both give the error "_pickle.UnpicklingError: invalid load key, '\x0a'."
My CUDA version is 10.1, and I changed the env to PyTorch 1.4 and Python 3.6.13; it also throws the error.
Hi @CoachingJane, I just ran an evaluation by myself and it works fine (my environment now: PyTorch 1.6.0 and Python 3.8.3). I am wondering if you can show the size and sha256 checksum of the caffe model hed_pretrained_bsds.py36pickle you used? (For the sha256 checksum, on a Linux shell you can simply use sha256sum hed_pretrained_bsds.py36pickle to get it.) The ideal size should be 58873022 and the checksum should be 89c0d990879ed42f334f9a9a1ea80275fb9fd0ad54e1ad632312f4ff2c3a1f59.
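If sha256sum is not available (e.g. on Windows), a quick Python equivalent that checks both the size and the checksum could look like this (the path is assumed to point at the file you downloaded):

```python
import hashlib
import os

path = "hed_pretrained_bsds.py36pickle"  # adjust to your local path
expected_size = 58873022
expected_sha256 = "89c0d990879ed42f334f9a9a1ea80275fb9fd0ad54e1ad632312f4ff2c3a1f59"

size = os.path.getsize(path)
with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("size  :", size, "OK" if size == expected_size else "MISMATCH")
print("sha256:", digest, "OK" if digest == expected_sha256 else "MISMATCH")
```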
Well, I know why this happened. Actually I used the pretrained model 5stage-vgg.caffemodel that I downloaded from https://github.com/s9xie/hed. I got the model wrong; now it's OK. Sorry for that mistake and for wasting your time. So if I want to train on my dataset, I just use 5stage-vgg.py36pickle, right?
I am glad that you found out the reason! Yes, you can use 5stage-vgg.py36pickle as initial weights and then finetune the HED with your dataset. You can also try to use hed_pretrained_bsds.py36pickle or other HED models (already pretrained on the BSDS dataset) and then finetune with your own dataset.
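A rough sketch of that finetuning setup is below. It is based on the loader calls that appear in the traceback at the end of this thread (load_vgg16_caffe / load_pretrained_caffe); the network constructor, the only_vgg argument value, and the optimizer settings are placeholders and assumptions, not the repo's exact code.

```python
import torch
# Loader helpers from this repo's utils.py (their calls appear in the traceback below).
from utils import load_vgg16_caffe, load_pretrained_caffe

net = build_hed_network()  # placeholder: build the network the same way hed.py does

# Start from the ImageNet-pretrained VGG16 backbone:
load_vgg16_caffe(net, "5stage-vgg.py36pickle")
# ...or from an HED model already pretrained on BSDS (only_vgg=False is my assumption):
# load_pretrained_caffe(net, "hed_pretrained_bsds.py36pickle", only_vgg=False)

# Then finetune on your own dataset, e.g. with SGD and a small learning rate.
optimizer = torch.optim.SGD(net.parameters(), lr=1e-6, momentum=0.9)
```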
I used the VGG pretrained model to train on the VOC obj-counter dataset, but the output is all white or black... Do you know what problem could make this happen?
I set the init lr to 1e-5 (from 1e-4), but the result still looks a little strange. Or maybe I should train for a while longer?
Hi @CoachingJane, I think you may try the following steps:
(1) Train the model for a few more epochs and see if the output becomes meaningful.
(2) Use hed_pretrained_bsds.py36pickle as initial weights and then finetune the model on your dataset.
(3) If (2) still fails, you may need to check the loaded dataset (check the values of image and label and compare them with the BSDS dataset), as in the sketch below.
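For step (3), a quick sanity check could look like the following; train_loader and the (image, label) batch format are placeholders for however the repo's dataset class actually returns data, and the goal is just to compare the value ranges with what you get on BSDS.

```python
import torch

def inspect_batches(loader, num_batches=3):
    """Print shapes and value ranges of images/labels to compare with BSDS."""
    for i, (image, label) in enumerate(loader):  # assumes (image, label) batches
        print("batch", i,
              "| image", tuple(image.shape), float(image.min()), float(image.max()),
              "| label", tuple(label.shape), float(label.min()), float(label.max()),
              "| label uniques", torch.unique(label)[:10].tolist())
        if i + 1 >= num_batches:
            break

# inspect_batches(train_loader)  # train_loader: however hed.py builds its DataLoader
```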
Well, I think I found the reason: the lr. I set the init lr to 1e-6 (from 1e-4), and then it worked. Thank you!
No worries! I am happy that you solved this issue by yourself!
Hello, do you know the function of this parameter? In the paper Weakly Supervised Object Boundaries, the authors set it to 0.01, and this operation seems to improve the final score. Does this parameter have a standard value? Can it be changed? Thank you.
Hi @CoachingJane, the maxDist stands for "localization tolerance, which controls the maximum allowed distance in matches between predicted edges and ground truth" (quoted from the RCF paper). It can be changed depending on the dataset (see RCF).
There is a question that confuses me a lot: when the code loads the edge GT, you do this op: edge[edge < 127.5] = 0.0; edge[edge >= 127.5] = 1.0. But I found that many pixel values are between 0 and 127.5, and they all get set to 0. Won't that affect the training result? Thank you.
Hi @CoachingJane, for this setting I simply follow the original HED's way. Feel free to try utilizing the information in those intermediate pixel values; perhaps it may generate better results.
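For reference, the binarization being discussed, plus one possible alternative that keeps the low-consensus pixels out of the loss instead of forcing them to 0, might look like the sketch below. The alternative is only an experiment to try (RCF ignores such ambiguous pixels during training), not this repo's default.

```python
import numpy as np

def binarize_edge_gt(edge):
    """The HED-style preprocessing discussed above; edge is a uint8 array in [0, 255]."""
    out = np.zeros(edge.shape, dtype=np.float32)
    out[edge >= 127.5] = 1.0  # edge[edge < 127.5] = 0.0; edge[edge >= 127.5] = 1.0
    return out

def soft_edge_gt(edge, pos_thresh=0.5):
    """Alternative to experiment with: mark low-consensus pixels (here as -1)
    so the loss can skip them, instead of setting them to 0."""
    edge = edge.astype(np.float32) / 255.0
    label = np.zeros_like(edge)
    label[edge >= pos_thresh] = 1.0
    label[(edge > 0) & (edge < pos_thresh)] = -1.0  # e.g. mask these out in the loss
    return label
```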
File "hed.py", line 302, in
main()
File "hed.py", line 156, in main
load_vgg16_caffe(net, args.vgg16_caffe)
File "D:\Application\Object_Detection\hed-master\utils.py", line 87, in load_vgg16_caffe
load_pretrained_caffe(net, path, only_vgg=True)
File "D:\Application\Object_Detection\hed-master\utils.py", line 94, in load_pretrained_caffe
pretrained_params = pickle.load(f)
_pickle.UnpicklingError: invalid load key, '\x0a'.
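For reference, the traceback above is the symptom discussed earlier in this thread: pickle.load was pointed at the raw 5stage-vgg.caffemodel (a Caffe protobuf file) instead of the converted .py36pickle. A small, hypothetical sanity check before loading (assuming the converted file was written with pickle protocol 2 or higher, which starts with the byte 0x80) could look like:

```python
import pickle

def load_converted_weights(path):
    """Refuse to load files that do not start with a pickle protocol header.
    A raw .caffemodel is a protobuf file, so pickle.load fails on it with
    errors like: _pickle.UnpicklingError: invalid load key, '\\x0a'."""
    with open(path, "rb") as f:
        head = f.read(2)
    if not head.startswith(b"\x80"):
        raise ValueError(
            "{!r} does not look like a converted .py36pickle file (first bytes: {!r}). "
            "Did you download the raw .caffemodel instead?".format(path, head))
    with open(path, "rb") as f:
        return pickle.load(f)

# params = load_converted_weights("5stage-vgg.py36pickle")
```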