NarenBabuR opened this issue 6 years ago
You can use the forward.py script provided in pyffe:
```
python pyffe/forward.py
usage: forward.py [-h] [-mf MEAN_FILE] [-mp MEAN_PIXEL] [--nogpu]
                  [-rf ROOT_FOLDER]
                  deploy_file caffemodel image_list output_file
[...]
positional arguments:
  deploy_file   Path to the deploy file
  caffemodel    Path to a .caffemodel
  image_list    Path to an image list
  output_file   Name of output file
```
Ignore the mean_file and mean_pixel arguments (they are not used in the deep-parking experiments). You just need to provide the four positional arguments: deploy_file, caffemodel, image_list, and output_file.
Example:

```
python pyffe/forward.py path/to/deploy.prototxt path/to/snapshot_iter_xxx.caffemodel images.txt predictions.npy
```
where an example of images.txt is:

```
/path/to/image1.png
/path/to/image2.png
...
```
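Once the command finishes, here is a minimal sketch for inspecting the saved predictions, assuming forward.py writes one row of class scores per listed image and that class index 1 means "busy"; both are assumptions, so verify against your pyffe version and label definitions:

```python
# Load the saved predictions and pair them with the input paths.
# The (num_images, num_classes) layout and the free/busy class order
# are assumptions; check forward.py and your labels before trusting this.
import numpy as np

preds = np.load('predictions.npy')
print(preds.shape)  # expected: (num_images, num_classes)

with open('images.txt') as f:
    paths = f.read().splitlines()

for path, scores in zip(paths, preds):
    print(path, 'busy' if scores.argmax() == 1 else 'free')
```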
Thank you very much for the detailed reply.
Can you give an example for the above?
Mainly, I need to work with a video file as input (like your YouTube video sample). Can you please tell me how to proceed with this?
Since I'm new to deep learning, I don't know much about it. Thanks in advance.
I updated the first answer with an example. About videos: our model only works on pre-extracted image patches. The visualization you see on YouTube uses our model and is implemented in Java + OpenCV. Unfortunately, we were not responsible for that part, and we do not have any code to share. However, I think you can easily reimplement it with newer versions of OpenCV (>= 3.3), which added support for Caffe models in the DNN module.
Some guides for Python:
- https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_core/py_basic_ops/py_basic_ops.html#basic-ops
- https://docs.opencv.org/3.0-beta/modules/videoio/doc/reading_and_writing_video.html
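For reference, a minimal sketch of that OpenCV DNN route. The video filename, the hard-coded ROI, the 224x224 input size, the mean values, and the class ordering are all assumptions here, not part of the original answer; take the real values from your deploy.prototxt and training setup:

```python
# Sketch: classify a parking-space region from video frames with the
# OpenCV DNN module (OpenCV >= 3.3). The ROI below is a hard-coded
# stand-in for a real pre-extracted parking-space patch.
import cv2

net = cv2.dnn.readNetFromCaffe('deploy.prototxt', 'snapshot_iter_xxx.caffemodel')
cap = cv2.VideoCapture('parking_lot.mp4')  # hypothetical input video

while True:
    ok, frame = cap.read()
    if not ok:
        break
    patch = frame[100:324, 100:324]  # placeholder parking-space ROI
    # Input size and mean values are assumptions; read the real ones
    # from the deploy.prototxt / training transform_param.
    blob = cv2.dnn.blobFromImage(patch, 1.0, (224, 224), (104, 117, 123))
    net.setInput(blob)
    scores = net.forward()
    print('busy' if scores[0].argmax() == 1 else 'free')  # class order assumed

cap.release()
```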
Can you just give the exact command for testing the images with the pretrained model?
Also, can you give me an example of the image_list file?
Sorry for the trouble and for wasting your time! This is my last query :))
I ran pyffe/forward.py:

```
python3 pyffe/forward.py ~/Downloads/CNRPark+EXT_Trained_Models_mAlexNet/mAlexNet-on-UFPR05/deploy.prototxt ~/Downloads/CNRPark+EXT_Trained_Models_mAlexNet/mAlexNet-on-UFPR05/snapshot_iter_16170.caffemodel images.txt prediction.npy
```

Here is the content of images.txt:

Here is the output of prediction.npy:

I think the output is not what I expected. Any help on this, @fabiocarrara, please?
Did you solve your problem, @ahadafzal? I'm also having the same issue.
@nikola310 Nope, I didn't use this later; I opted for a VGG16 model instead. I also recently published a paper in an IEEE Scopus-indexed venue. 🙂
@ahadafzal I see. I'll have to check it out then :smiley:
In case anyone stumbles upon this problem: since I was trying to test on the same datasets used during training, my solution was to use the appropriate patch images for each model. So if you're trying to run a model trained on CNRPark, you have to use the CNRPark patch images (see the sketch below for building the image list from a patch folder).
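Building on that, a hypothetical snippet to generate images.txt from a folder of dataset patches; the root path and the .jpg extension are placeholders, so adapt them to wherever your patches actually live:

```python
# Hypothetical helper: list every patch image under a dataset folder
# and write the paths to images.txt for forward.py. The root path and
# the .jpg extension are assumptions about your download's layout.
import glob

patches = sorted(glob.glob('/path/to/CNRPark-Patches/**/*.jpg', recursive=True))
with open('images.txt', 'w') as f:
    f.write('\n'.join(patches) + '\n')
```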
Can you please tell me what changes need to be made in main.py to run only the trained model?