Then my first question is: what's the task of the ssd_input_encoder?
SSD outputs a large tensor containing thousands of predictions for every image in the batch. During training there needs to be a ground truth target for every one of those thousands of boxes. That's what the SSDInputEncoder is for: it produces the large ground truth tensor that is the counterpart to the model's output tensor. It does so by matching the ground truth boxes to the model's anchor boxes. In addition, the SSDInputEncoder also performs some coordinate transformations on the ground truth boxes without which the model wouldn't be able to learn. That's it in a nutshell.
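To make the coordinate transformation part concrete, here is a minimal sketch of the standard SSD box encoding. This is not the repository's actual code; `encode_box` and the variance values are illustrative, and boxes are assumed to be in (cx, cy, w, h) format:

```python
import numpy as np

def encode_box(gt_box, anchor_box, variances=(0.1, 0.1, 0.2, 0.2)):
    """Encode one ground truth box relative to the anchor it was matched to.

    Both boxes are (cx, cy, w, h). The returned offsets are what the
    model is trained to regress, SSD-style.
    """
    gcx, gcy, gw, gh = gt_box
    acx, acy, aw, ah = anchor_box
    return np.array([
        (gcx - acx) / aw / variances[0],  # x-offset, scaled by anchor width
        (gcy - acy) / ah / variances[1],  # y-offset, scaled by anchor height
        np.log(gw / aw) / variances[2],   # log-scale width ratio
        np.log(gh / ah) / variances[3],   # log-scale height ratio
    ])

# A ground truth box that sits exactly on its anchor encodes to all zeros:
print(encode_box((0.5, 0.5, 0.2, 0.3), (0.5, 0.5, 0.2, 0.3)))  # [0. 0. 0. 0.]
```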
I set the batch size to 16. Does that mean there are 16 images that train the ssd7 during one epoch?
No. It means that each weight update is computed from 16 input images, but those 16 images are different for every training step. This is a general machine learning question that would be better suited for StackExchange or some machine learning forum.
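For the numbers in this thread (1500 training images, batch size 16), the arithmetic looks like this; the batch size only fixes how many images contribute to each single weight update:

```python
import math

n_train_images = 1500  # the training split mentioned in this thread
batch_size = 16

# One epoch = one full pass over the training set, so the number of
# weight updates (training steps) per epoch is:
steps_per_epoch = math.ceil(n_train_images / batch_size)
print(steps_per_epoch)  # 94 -> 94 updates, each computed from 16 images
```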
Does ssd7 check the training progress after each epoch with an image from the validation dataset by making a prediction?
Not with "an image", but with the entire validation dataset. This, too, is a general machine learning question.
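Schematically, one epoch looks like the loop below. This is a sketch using generic Keras-style `train_on_batch`/`test_on_batch` calls, not the tutorial's actual code; the point is that the validation loss is averaged over every validation batch, not taken from a single image:

```python
def run_one_epoch(model, train_batches, val_batches):
    # Training phase: one weight update per mini-batch.
    for images, targets in train_batches:
        model.train_on_batch(images, targets)

    # Validation phase: evaluate on the *entire* validation split and
    # average the per-batch losses into the epoch's val_loss.
    losses = [model.test_on_batch(images, targets)
              for images, targets in val_batches]
    return sum(losses) / len(losses)
```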
If I use the data generator and create my own data augmentation chain, are there still 1500 images, or does the data generator create new images, so that as a result I have the 1500 original images plus the modified ones from the data generator?
You can answer this one yourself if you think about it. Let's say you use a random image transformation that is being applied with probability 50%. Then, in roughly half of all cases, instead of including the original image from your dataset in the batch, the data generator will instead include the transformed version of this image. Is this transformed image part of your original dataset? Of course not. So, does the generator produce new images? Yes. You can't generally say how many images there are after the random image transformations. The images that end up being fed to the model are whatever the output of the random image transformations is. This is also a general machine learning question.
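As a toy illustration of the point above (this is not the repository's DataGenerator; `maybe_flip` is a made-up transform that assumes numpy images and boxes in relative corner coordinates), the generator creates transformed images on the fly instead of enlarging the dataset:

```python
import random

def maybe_flip(image, boxes, p=0.5):
    """With probability p, replace the original image by a horizontally
    flipped copy and mirror the box x-coordinates accordingly."""
    if random.random() < p:
        image = image[:, ::-1]  # flip the pixel columns of an (H, W, C) array
        boxes = [(1.0 - xmax, ymin, 1.0 - xmin, ymax)
                 for (xmin, ymin, xmax, ymax) in boxes]
    return image, boxes

def generate_batches(dataset, batch_size=16):
    """Yields batches forever. The dataset on disk never grows: each epoch
    still covers the same samples, but roughly half of the images in every
    batch are transformed variants created on the fly."""
    while True:
        random.shuffle(dataset)
        for i in range(0, len(dataset), batch_size):
            yield [maybe_flip(image, boxes)
                   for image, boxes in dataset[i:i + batch_size]]
```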
Ok, thank you very much.
Hey pierluigiferrari, you said my questions are general machine learning questions. Can you recommend some literature where I can easily read up on these basics?
Thanks a lot.
There is a countless number of resources online. Just to name a few:
There is a lot more. Whatever you want to learn, just Google it and you'll find a ton of stuff. Since some of your questions above were about how ad-hoc data augmentation, training on mini-batches, and evaluation on the validation dataset work, I'd start out with something very basic like the first two resources above. And, of course, if you have questions, just Google them. Chances are someone else has already asked your question on the Data Science Stack Exchange or on StackOverflow, and if not, ask it there.
Great! Thank you.
Hello, first of all thank you for your great work and the very good documentation. But there are some points I don't understand yet. I'll try to explain them in my own words.
I work with your ssd7 training tutorial. To train the ssd7 I have my own image dataset consisting of 2000 images.
First the parser combines the images with the labels. Then my first question is: what's the task of the ssd_input_encoder? The training follows as the next step. I set the batch size to 16. Does that mean there are 16 images that train the ssd7 during one epoch? Does ssd7 check the training progress after each epoch with an image from the validation dataset by making a prediction? Another question is about the data generator. In total I've got 2000 images: 1500 for training and 500 for validation. If I use the data generator and create my own data augmentation chain, are there still 1500 images, or does the data generator create new images, so that as a result I have the 1500 original images plus the modified ones from the data generator?
Best regards, ChefterP