jessiffmm opened this issue 5 years ago
Hi, I am assuming that you are using generator.cpp and have both RGB and depth images in Recorder format; if not, please let me know your image format and I will write a new reader for it.
I would also like to see the config file you passed to sampleGenerator.
Also, @jmplaza told me that you are using the Darknet inferencer with YOLO.
Please also post the output you get after running it.
Hi,
I used dl-DetectionSuite/sampleGenerator/simpleSampleGenerator/main.py. How can I use sampleGenerator? I haven't found any information about how to use it, and I don't know which configuration file I have to use.
Yes, I'm trying to train a yolo network with my dataset.
Thanks!
Did you get any output after running it? Also, do you only have color images, or does your dataset also contain depth images?
Are they in a particular format, like JdeRobot Recorder? I think the tool you are looking for is dl-DetectionSuite/DeepLearningSuite/SampleGenerationApp/.
Hi,
I understand that I need a config file, but I don't know which configuration file I have to use.
I want to label my images manually. My images are RGB; I don't have depth images. And I want to get an XML file with the tags.
Hi @jessiffmm,
I have pushed a hotfix for this functionality to the `label` branch.
Below is a sample yml config file you can use to run the SampleGenerationApp present in `build/SampleGenerationApp`.
```yaml
outputPath: /opt/datasets/sample/output
detector: deepLearning
inferencerImplementation: yolo
inferencerNames: /opt/datasets/names/voc.names
inferencerConfig: /opt/datasets/cfg/yolo-voc.cfg
inferencerWeights: /opt/datasets/weights/yolo-voc.weights
reader: directory
dataPath: /opt/datasets/sample_images
```
Please edit `outputPath`, `inferencerNames`, `inferencerConfig`, `inferencerWeights` and `dataPath` (the directory containing all the images) according to your needs.
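Before launching the app, a config like the one above can be sanity-checked with a short Python sketch. This is not part of DetectionSuite — it is just a simplified parser for the flat `key: value` layout shown above (a real check should use a YAML library), with the required keys taken from the sample:

```python
# Minimal sanity check for a flat "key: value" config file.
# Hypothetical helper, not part of SampleGenerationApp.
REQUIRED_KEYS = {
    "outputPath", "detector", "inferencerImplementation",
    "inferencerNames", "inferencerConfig", "inferencerWeights",
    "reader", "dataPath",
}

def parse_flat_config(text):
    """Parse 'key: value' lines into a dict, skipping blanks and comments."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        config[key.strip()] = value.strip()
    return config

def missing_keys(config):
    """Return the required keys absent from the parsed config, sorted."""
    return sorted(REQUIRED_KEYS - config.keys())

sample = """\
outputPath: /opt/datasets/sample/output
detector: deepLearning
inferencerImplementation: yolo
reader: directory
dataPath: /opt/datasets/sample_images
"""
cfg = parse_flat_config(sample)
print(missing_keys(cfg))  # the three inferencer* entries are missing here
```

Running this before the app makes "file not found" style failures show up as a readable list instead of a crash later on.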
After running, it will automatically detect some objects, and you can add additional objects by clicking and dragging.
Then press the space bar.
Now the current object will be selected in blue, and you can change its position by dragging the edges. After that, press the number of the class it belongs to, like 1, 2, 3.
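The keypress step above can be sketched as a small helper that maps a digit key to a class label. The class list here is purely illustrative (the app currently hard-codes its own), and the real app reads keys inside its GUI loop:

```python
def class_for_key(key_char, class_names):
    """Map a digit keypress ('1', '2', ...) to a class name.

    Returns None for non-digit keys or digits outside the
    1..len(class_names) range. Hypothetical helper for illustration.
    """
    if not key_char.isdigit():
        return None
    index = int(key_char) - 1  # key '1' selects the first class
    if 0 <= index < len(class_names):
        return class_names[index]
    return None

classes = ["person", "car", "traffic light"]  # illustrative class list
print(class_for_key("2", classes))  # -> car
```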
Fran wrote it specifically for person labelling, and the class list is currently hard-coded.
Also, it outputs JSON files in the output path provided.
I understand that it's not very user-friendly currently, but making it user-friendly is an entire project in itself.
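Since the app writes JSON annotations and an XML file with tags was requested earlier in the thread, a rough converter to Pascal VOC-style XML could look like the sketch below. The detection field names (`name`, `xmin`, `ymin`, `xmax`, `ymax`) are assumptions for illustration, not the app's actual JSON schema:

```python
import xml.etree.ElementTree as ET

def detections_to_voc_xml(filename, width, height, detections):
    """Build a Pascal VOC-style XML annotation string.

    `detections` is assumed to be a list of dicts with 'name' and
    'xmin'/'ymin'/'xmax'/'ymax' keys; adapt to the real JSON layout.
    """
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    for det in detections:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = det["name"]
        box = ET.SubElement(obj, "bndbox")
        for key in ("xmin", "ymin", "xmax", "ymax"):
            ET.SubElement(box, key).text = str(det[key])
    return ET.tostring(root, encoding="unicode")

xml_text = detections_to_voc_xml(
    "frame_0001.png", 640, 480,
    [{"name": "person", "xmin": 10, "ymin": 20, "xmax": 110, "ymax": 220}],
)
print(xml_text)
```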
Hi @vinay0410,
Perfect, I will try it. One other thing: do you know of any website with information about how I can build my own YOLO network?
Thanks!!
Hi @vinay0410,
I tried to use SampleGenerationApp but I got some errors.
1- I have this config.yml:

```yaml
outputPath: /home/vanejessi/dl-DetectionSuite/DeepLearningSuite/build/SampleGenerationApp/annotations/
detector: deepLearning
inferencerImplementation: yolo
inferencerNames: /opt/datasets/names/label_yolo.names
inferencerConfig: /opt/datasets/cfg/yolov3-voc.cfg
inferencerWeights: /opt/datasets/weights/yolov3-voc_17000.weights
reader: directory
dataPath: /home/vanejessi/dl-DetectionSuite/DeepLearningSuite/build/SampleGenerationApp/images
```
I understand that I have to use `reader: directory` if I just have a dataPath, and `recorder-rgbd` if I have depth and RGB images, but it seems to be reversed (line 106, generator.cpp):

```cpp
RecorderReaderPtr converter;
if (reader.as
```
2- RecorderReader.cpp only admits png images (line 40):

```cpp
if (boost::filesystem::is_regular_file(*dir_itr) && dir_itr->path().extension() == ".png")
```

Also, if I leave the next part, it does not read the images (line 42):

```cpp
if (not sufix.empty()) {
    std::string filename = dir_itr->path().stem().string();
    if (!boost::algorithm::ends_with(filename, sufix)) {
        continue;
    }
    onlyIndexFilename = dir_itr->path().filename().stem().string();
    boost::erase_all(onlyIndexFilename, sufix);
}
```
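To see why that suffix filter can end up skipping every file, here is the same logic sketched in Python. The file stems and the `sufix` value are made up for illustration; the point is that any stem not ending in the suffix is dropped, exactly like the `continue` above:

```python
def index_for(filename_stem, sufix):
    """Mimic RecorderReader's filter (simplified sketch): keep only stems
    ending in `sufix`, then strip it to recover the frame index."""
    if sufix and not filename_stem.endswith(sufix):
        return None  # file is skipped, like the `continue` in the C++ code
    return filename_stem.replace(sufix, "") if sufix else filename_stem

stems = ["0001-rgb", "0002-rgb", "0003-depth"]  # hypothetical file stems
indices = [index_for(s, "-rgb") for s in stems]
print(indices)  # the depth frame is filtered out
```

So if your image names do not carry the expected suffix at all, the reader silently skips everything, which looks like "it does not read the images".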
3- I get the following error:

```
terminate called after throwing an instance of 'cv::Exception'
  what():  OpenCV(3.4.3-dev) /home/vanejessi/opencv/modules/imgproc/src/color.cpp:181: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
Abortado (`core' generado)
```

I think it doesn't open the image correctly because the directory of images is wrong.
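That `!_src.empty()` assertion typically means the image was never loaded (an empty `cv::Mat` was passed to `cvtColor`), which is consistent with a wrong images directory. A quick stdlib-only way to verify the directory before running the app (this diagnostic helper is not part of DetectionSuite):

```python
import os
import tempfile

def list_images(data_path, extensions=(".png", ".jpg", ".jpeg")):
    """Return the image files found under data_path; an empty result
    explains an empty image later in the pipeline."""
    if not os.path.isdir(data_path):
        return []
    return sorted(
        f for f in os.listdir(data_path)
        if f.lower().endswith(extensions)
    )

# Demo with a throwaway directory: one png, one unrelated file.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "0001.png"), "w").close()
    open(os.path.join(d, "notes.txt"), "w").close()
    print(list_images(d))  # -> ['0001.png']
```

If this prints an empty list for your `dataPath`, the crash above is expected: there is nothing for the reader to decode.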
In the AutoEvaluator tool I needed to change (line 114, ImageNetDatasetReader.cpp):

```cpp
if (imagesRequired) {
    std::string imgPath = img_dir.string() + "/" + m_filename + ".JPEG";
    sample.setColorImage(imgPath);
}
```

And I put:

```cpp
if (imagesRequired) {
    //std::string imgPath = img_dir.string() + "/" + m_filename + ".JPEG";
    std::string imgPath = "/home/vanejessi/dl-DetectionSuite/DeepLearningSuite/build/Tools/AutoEvaluator/images/" + m_filename;
    sample.setColorImage(imgPath);
}
```

because it didn't find the images otherwise.
Regards
Hi,
I'm trying to use the sampleGenerator tool for labelling traffic images. I have seen that a file with tags is automatically generated and an image is saved with the tagged object. But the results I get are pretty bad, because the vehicles are not labeled. I have another question: how do you indicate which class each object belongs to?
Regards!