facebookresearch / maskrcnn-benchmark

Fast, modular reference implementation of Instance Segmentation and Object Detection algorithms in PyTorch.

Step-by-step tutorial - How to train your own dataset #521

Open AdanMora opened 5 years ago

AdanMora commented 5 years ago

🚀 Feature: Training a custom dataset

In order to consolidate all the issues about how to train a model on a custom dataset, here you will find the basic steps to do it. Keep in mind that this implementation is a recent release, so these steps may change in the future. I'm requesting feedback: these are the steps I followed to train my own models, and they work perfectly.

In my case, the context of my problem is completely different from the COCO dataset, and I have only 4 classes.

Related issues -> #15, #297, #372, etc.

Steps

1) COCO format

The easiest way to use this model is to label your dataset in the official COCO format, according to the problem you have. In my case it's instance segmentation, so I used the detection format.
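For reference, here is a minimal sketch of what a detection-style annotation file contains. The field names follow the official COCO spec, but every value below (file names, ids, coordinates, paths) is just a made-up example:

    import json

    # Skeleton of a COCO-style annotation file for detection / instance segmentation.
    # All values are illustrative placeholders.
    coco_annotations = {
        "images": [
            {"id": 1, "file_name": "000001.jpg", "width": 1280, "height": 720},
        ],
        "categories": [
            {"id": 1, "name": "class_1", "supercategory": "none"},
        ],
        "annotations": [
            {
                "id": 1,
                "image_id": 1,
                "category_id": 1,
                "bbox": [100.0, 200.0, 50.0, 80.0],  # [x, y, width, height]
                "segmentation": [[100.0, 200.0, 150.0, 200.0, 150.0, 280.0, 100.0, 280.0]],
                "area": 4000.0,
                "iscrowd": 0,
            },
        ],
    }

    with open("train.json", "w") as f:
        json.dump(coco_annotations, f)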

2) Creating a Dataset class for your data

Following the example of coco.py, create a new class extending torchvision.datasets.coco.CocoDetection (you can find other classes in the official docs); this class encapsulates the pycocotools methods to manage your COCO dataset.

This class has to be created in the maskrcnn-benchmark/maskrcnn_benchmark/data/datasets folder, next to coco.py, and registered in the __init__.py.
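As a rough illustration, a stripped-down version of such a class could look like the sketch below. The class name MyDataset is a placeholder, and the exact BoxList / SegmentationMask calls should be double-checked against the version of the repo you are using; this mirrors what the repo's COCODataset does, minus its convenience helpers:

    import torch
    from torchvision.datasets.coco import CocoDetection
    from maskrcnn_benchmark.structures.bounding_box import BoxList
    from maskrcnn_benchmark.structures.segmentation_mask import SegmentationMask


    class MyDataset(CocoDetection):
        def __init__(self, ann_file, root, transforms=None):
            super(MyDataset, self).__init__(root, ann_file)
            self.ids = sorted(self.ids)
            # map the (possibly sparse) COCO category ids to contiguous labels, 0 = background
            self.json_category_id_to_contiguous_id = {
                v: i + 1 for i, v in enumerate(self.coco.getCatIds())
            }
            self._transforms = transforms

        def __getitem__(self, idx):
            img, anno = super(MyDataset, self).__getitem__(idx)
            anno = [obj for obj in anno if obj["iscrowd"] == 0]

            boxes = torch.as_tensor([obj["bbox"] for obj in anno]).reshape(-1, 4)
            target = BoxList(boxes, img.size, mode="xywh").convert("xyxy")

            labels = [self.json_category_id_to_contiguous_id[obj["category_id"]] for obj in anno]
            target.add_field("labels", torch.tensor(labels))

            masks = SegmentationMask([obj["segmentation"] for obj in anno], img.size)
            target.add_field("masks", masks)

            target = target.clip_to_image(remove_empty=True)
            if self._transforms is not None:
                img, target = self._transforms(img, target)
            return img, target, idx

        def get_img_info(self, index):
            img_id = self.ids[index]
            return self.coco.imgs[img_id]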

3) Adding dataset paths

This class needs as parameters the path to the JSON file containing your dataset's metadata in COCO format, and the path to the folder where the images are stored. The engine automatically looks these parameters up in paths_catalog.py; the easiest way is to add your paths to the DATASETS dict, following the existing format of course, and then add an elif statement in the get method.
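Something along these lines, where the dataset names and directory layout are only placeholders for your own:

    import os

    class DatasetCatalog(object):
        DATA_DIR = "datasets"
        DATASETS = {
            # ... existing COCO / VOC entries ...
            "my_dataset_train_cocostyle": {
                "img_dir": "my_dataset/train_images",
                "ann_file": "my_dataset/annotations/train.json",
            },
            "my_dataset_val_cocostyle": {
                "img_dir": "my_dataset/val_images",
                "ann_file": "my_dataset/annotations/val.json",
            },
        }

        @staticmethod
        def get(name):
            if "my_dataset" in name:
                data_dir = DatasetCatalog.DATA_DIR
                attrs = DatasetCatalog.DATASETS[name]
                args = dict(
                    root=os.path.join(data_dir, attrs["img_dir"]),
                    ann_file=os.path.join(data_dir, attrs["ann_file"]),
                )
                # "COCODataset" (or your own class name) must be importable from
                # maskrcnn_benchmark/data/datasets/__init__.py
                return dict(factory="COCODataset", args=args)
            # ... existing elif branches ...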

4) Evaluation file

Here is where using the COCO format pays off: if your dataset has the same structure, you can reuse the evaluation file used for the COCODataset class; in the evaluation module's __init__.py file, just add an if statement like the one for COCODataset.

This evaluation file follows the COCO evaluation standard, using the pycocotools evaluation methods. You can create your own evaluation file, and you have to do so if your dataset has a different structure.
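Roughly, the dispatch in maskrcnn_benchmark/data/datasets/evaluation/__init__.py would be extended like this (MyDataset is the hypothetical class from step 2):

    from maskrcnn_benchmark.data import datasets
    from .coco import coco_evaluation

    def evaluate(dataset, predictions, output_folder, **kwargs):
        args = dict(dataset=dataset, predictions=predictions,
                    output_folder=output_folder, **kwargs)
        # reuse the COCO-style evaluation for any dataset exposing a COCO-like structure
        if isinstance(dataset, (datasets.COCODataset, datasets.MyDataset)):
            return coco_evaluation(**args)
        raise NotImplementedError(
            "Unsupported dataset type {}.".format(dataset.__class__.__name__))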

5) Training script

Here you can find the standard training and testing scripts for the model; add your own arguments, change the output dir (this one is very important), etc.

6) Changing the hyper-parameters

The engine uses yacs config files; in the repo you can find different ways to change the hyper-parameters.

If you are using a single GPU, look at the README; there is a section for this case. You have to change some hyper-parameters because the defaults were written for multi-GPU training (8 GPUs). I trained my model using a single one with no problem at all; just change SOLVER.IMS_PER_BATCH and adjust the other SOLVER params.

Update DATASETS.TRAIN and DATASETS.TEST with the names that you used in paths_catalog.py. Also consider changing the min/max input size hyper-parameters.
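As an illustration only, a single-GPU setup could override the defaults like this; the SOLVER numbers are the ones suggested in the README's single-GPU section, and the config file and dataset names are placeholders:

    from maskrcnn_benchmark.config import cfg

    cfg.merge_from_file("configs/e2e_mask_rcnn_R_50_FPN_1x.yaml")
    cfg.merge_from_list([
        "SOLVER.IMS_PER_BATCH", 2,            # instead of 16 on 8 GPUs
        "SOLVER.BASE_LR", 0.0025,             # linear scaling rule
        "SOLVER.MAX_ITER", 720000,
        "SOLVER.STEPS", "(480000, 640000)",
        "TEST.IMS_PER_BATCH", 1,
        "DATASETS.TRAIN", ("my_dataset_train_cocostyle",),
        "DATASETS.TEST", ("my_dataset_val_cocostyle",),
    ])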

7) Finetuning the model

Issue #15 has the full explanation.

Now everything is ready for training!!

These are the general modifications to the code for a custom dataset; I made more changes according to my needs.

Visualizing the results

Once the model finishes training, the weights are saved; you can use the Mask_R-CNN_demo.ipynb notebook to visualize the results of your model on the test dataset, but you have to change the class names in predictor.py (it has the COCO classes by default), putting them in the same order used for the annotations.
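For example, in demo/predictor.py the change amounts to replacing the class-name list with your own (the names below are placeholders; keep "__background" first and use the same order as the category ids in your annotation file):

    CATEGORIES = [
        "__background",
        "class_1",
        "class_2",
        "class_3",
        "class_4",
    ]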

LeviViana commented 5 years ago

Related to #372 as well.

Idolized22 commented 5 years ago

Thank you very much for this guide.

2) Creating a Dataset class for your data

Following the example of coco.py, create a new class extending torchvision.datasets.coco.CocoDetection (you can find other classes in the official docs); this class encapsulates the pycocotools methods to manage your COCO dataset.

This class has to be created in the maskrcnn-benchmark/maskrcnn_benchmark/data/datasets folder, next to coco.py, and registered in the __init__.py.

Regarding the above, I used the already-written COCO class by adding my dataset to the paths catalog with the following name: coco_mydatasetsName. I placed it in datasets/coco/Mydataset.

5) Training script

Here you can find the standard training and testing scripts for the model; add your own arguments, change the output dir (this one is very important), etc.

How do you change the output dir?
I tried to change the output dir in defaults.py and it failed; the output dir is the same as the environment dir, and I fail to resume the training of my model. It does evaluation over and over again. Did you just specify a different path at line 47 in the train.py file?

Visualizing the results

Once the model finishes training, the weights are saved; you can use the Mask_R-CNN_demo.ipynb notebook to visualize the results of your model on the test dataset.

I think you would have to change the categories in predictor.py to be the same as in your dataset in order to be able to see the predictions.

AdanMora commented 5 years ago

Regarding the above, I used the already-written COCO class by adding my dataset to the paths catalog with the following name: coco_mydatasetsName. I placed it in datasets/coco/Mydataset.

Yes, the class can be created inside coco.py or in a new .py file (that's what I did).

5) Training script

Here you can find the standard training and testing scripts for the model; add your own arguments, change the output dir (this one is very important), etc.

How do you change the output dir? I tried to change the output dir in defaults.py and it failed; the output dir is the same as the environment dir, and I fail to resume the training of my model. It does evaluation over and over again. Did you just specify a different path at line 47 in the train.py file?

Same process as for the other parameters in the training or test script:

    options = ['OUTPUT_DIR', args.output_dir]
    cfg.merge_from_list(options)

I think you would have to change the categories in predictor.py to be the same as in your dataset in order to be able to see the predictions.

Oh yes, for sure! You have to personalize predictor.py, at least the class names; in my case I added parameters to control the thickness and color of the objects, among other things. I'll put that in the tutorial, thanks.

maxsenh commented 5 years ago

@AdanMora Is it really necessary to create your own dataset-loader class when your data and annotation files are already in COCO format? As I understood it, @Idolized22 did not write any new .py file with a dataset class.

botcs commented 5 years ago

Hi @AdanMora,

This is a very helpful, valuable tutorial! Among the many things that need to be modified to train the network on your dataset, I think the most crucial are the data loading and the evaluation. Both are quite strongly hardcoded at the moment to fit the requirements of training and validating on COCO; however, I am now refactoring a few parts to make custom data training more feasible / convenient. In a week or two I am going to push these as PRs.

Datasets

First of all, here is a Dataset with all the necessary fields and methods implemented and documented. You could also use this dataset to make unit tests on your training / validation: https://gist.github.com/botcs/72a221f8a95471155b25a9e655a654e1 Basically, for compatibility with the training script, you just need 4 things to be implemented, of which the first one takes most of the effort; the rest are quite trivial to have:

  1. __getitem__, a function returning the input image and the target BoxList with additional fields like masks and labels
  2. __len__, which returns the number of entries in your dataset
  3. classid_to_name, a mapping between integers and strings
  4. get_img_info, which returns a dict of the input image metadata, at least the width and height

This could also help in understanding which methods and fields are essential for training (since the COCODataset has many convenience functions implemented as well).
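To make the four requirements concrete, here is a tiny, self-contained dataset that satisfies the same interface (an illustrative stand-in, not the linked gist; it skips the masks field for brevity):

    import torch
    from PIL import Image
    from maskrcnn_benchmark.structures.bounding_box import BoxList

    class DebugDataset(torch.utils.data.Dataset):
        classid_to_name = {1: "object"}

        def __init__(self, num_images=10, size=(320, 240)):
            self.num_images = num_images
            self.width, self.height = size

        def __getitem__(self, idx):
            # a blank image with a single fixed box labelled "object"
            img = Image.new("RGB", (self.width, self.height))
            target = BoxList(torch.tensor([[10.0, 10.0, 100.0, 100.0]]),
                             img.size, mode="xyxy")
            target.add_field("labels", torch.tensor([1]))
            return img, target, idx

        def __len__(self):
            return self.num_images

        def get_img_info(self, idx):
            return {"width": self.width, "height": self.height}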

Evaluation

Currently only Pascal-VOC and COCO evaluation are supported, and a cityscapes -> coco-style converter script is available. However, making the evaluation work is way trickier than you might think. Both evaluation scripts make tremendous amounts of assumptions, even though a simple mAP evaluation could be made with a dataset that only implements the bare minimum provided in debugdataset.py.

COCO style

The major issue with the COCO evaluation implemented in the current version is that it requires your dataset to have a field called coco, imported from the official pycocotools, that actually does all the dirty work behind the scenes. The problem with this approach is that the lib assumes your data structure is the same as COCO's, your class labels will be ignored, etc. To avoid this I have implemented a COCOWrapper which handles all the requirements of COCOeval while working with a generic dataset like debugdataset. I have made attempts at validating this approach, which seems to be working fine but has a few issues with the unit test. What remains is to find out whether the original evaluation script returns the same answers for this unit test.

CityScapes style

I would like to call attention to the CityScapes instance-level evaluation script, which is quite amazing and way better documented than the COCO script. Similarly, it requires your data to be organized in the same directory structure as the original dataset, and each predicted instance has to go into a different binary mask file, which again causes a lot of headache. However, you can hijack the evaluation script from the point where all the annotations and predictions are loaded. Thankfully, it has passed the perfect-score unit test (visualized in this notebook), in which I feed the annotations as predictions. I am currently validating whether this approach gives the same / similar score as COCOeval's.


My plan is to submit some Pull Requests as soon as I am finished cleaning up these abstractions, which should help with customizing the maskrcnn_benchmark lib (which is AFAIK the best performing implementation available).

Once it is out there, would you like to help make a step-by-step tutorial just like the one the Matterport Mask-RCNN implementation has, Splash of Color?

AdanMora commented 5 years ago

@maxsenh It depends on the data: if you have to do extra steps to get your data into the COCO format, then you have to create another class (I always do one file per class, clean-code principles of mine). Otherwise you can use the same class.

@botcs That's really nice work; I think that's the milestone: creating a customizable framework for every context for this model. Yes, I have my doubts about using the same evaluation script for every dataset because of the hardcoding, so it is better to have a general script. I'll take a look at the CityScapes style and the code that you have at this moment and give you some feedback.

Once it is out there, would you like to help make a step-by-step tutorial just like the one the Matterport Mask-RCNN implementation has, Splash of Color?

Yeah, for sure! Actually I was thinking of making this tutorial just like Splash of Color, but now, with a more robust implementation, it's even better!! If you need or want some help, let me know; it would be a pleasure to help you.

Thanks.

botcs commented 5 years ago

@AdanMora Awesome! Currently I have a pending PR (#473) on which I am adding these new features and refactors. Please check it out, especially this short-term roadmap, and in the meantime I will open a new branch on my fork, on which I will push the new commits.

fmassa commented 5 years ago

Wow, this is very nice @botcs ! Looking forward to your PR improving Dataset support!

And having a tutorial explaining how to add new Dataset types would be awesome @AdanMora !

jerpint commented 5 years ago

@botcs @AdanMora I've personally found that converting my entire dataset to be as close as possible to the original coco dataset was the least painful way to get training to work on this model. As you've mentioned, there are too many hard-coded assumptions about the coco dataset to do otherwise. I have been able to train end to end on my own data, but haven't yet proceeded to evaluating how well it has performed. The loss keeps going down, so that's a good sign!

The coco annotation format is decent and popular, so maybe more blog posts and utility tools to convert any dataset to coco style would be more useful. +1 to more blog posts on usage of the repo. Maybe I'll convince myself to write one up at some point as well!

Thanks for the amazing work!

AdanMora commented 5 years ago

@jerpint I agree, I converted my dataset to coco style and the training process went really well, but we can't assume that every kind of data can be treated like coco: in coco you have, e.g., both large and small objects and a typical image size, and not all contexts have the characteristics that are hardcoded in the cocoapi. So I think we should have a general way to adapt any kind of data.

But yes, for now the coco style works well; if your data has this behaviour, you should use it.

botcs commented 5 years ago

@jerpint I think a simple interface results in flexibility for the lib, which attracts more and more people, and finally it promotes world peace.

Just a quick example: COCO annotations only allow polygons (AFAIK, but correct me), which cannot represent masks with holes in them. If you have something like the GTA dataset, you will have binary masks only, with ridiculously non-smooth boundaries that translate to crazy complex polygons.

  1. So you should first be familiar with the COCO annotation format
  2. Write a script that translates binary masks to polygons
  3. Decide how you would treat corner cases, compromise
  4. Save the whole data in json (a few GB)

More on the simplicity, just a historical comparison: when I switched from TF to PyTorch, TF could do multithreaded fetching only if the type was an image, or was in their weird .tfrecord format (I don't know if this is still the case). Now compare it with PyTorch's Dataset and DataLoader, which are extremely simple: implement a __getitem__ and a __len__ function... and there you go. No need to convert from one format to another, no need to get familiar with an irrelevant task.

Idolized22 commented 5 years ago

@AdanMora
I have written a script for converting polygons labeled with labelme, based on that repo's labelme2coco.py script, and I have uploaded it to GitHub: Coco-llike-dataset-creator. If you would like to use it while creating your guide, you are welcome to.

AdanMora commented 5 years ago

@Idolized22 Thank you; actually I'm using other tools. I thought you were talking about the original LabelMe tool; I made a script to convert LabelMe annotations to coco format.

bernhardschaefer commented 5 years ago

7) Finetuning the model

Issue #15 has the full explanation.

  • Download the official weights for the model that you want to set up.
  • In the config file, change MODEL.ROI_BOX_HEAD.NUM_CLASSES = your_classes + background.
  • Use trim_detectron_model.py to remove the layers that are set up for the coco dataset; if you run the training before this, there will be trouble with the layers that expect the 81 classes (80 coco classes + background). Those are the layers you have to remove.
  • This script saves the new weights; point the MODEL.WEIGHT hyper-parameter to that path.

Isn't it more straightforward to use a pretrained model from this repo's Model Zoo instead of taking them from the detectron repo? They also have slightly higher reported accuracies.

The approach for trimming those models is very similar: https://gist.github.com/bernhardschaefer/01905b0fe83615f79e2928a2a10b6f28
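For context, the trimming in both gists essentially boils down to deleting the class-dependent weights from the checkpoint before loading it. A rough sketch of that idea (the key prefixes below are assumed for the R-50-FPN Mask R-CNN checkpoints; verify them against the checkpoint you actually use):

    import torch

    checkpoint = torch.load("e2e_mask_rcnn_R_50_FPN_1x.pth", map_location="cpu")
    state_dict = checkpoint["model"] if "model" in checkpoint else checkpoint

    # layers whose shapes depend on NUM_CLASSES (81 for COCO)
    class_dependent = (
        "roi_heads.box.predictor.cls_score",
        "roi_heads.box.predictor.bbox_pred",
        "roi_heads.mask.predictor.mask_fcn_logits",
    )
    trimmed = {
        k: v for k, v in state_dict.items()
        if not any(key in k for key in class_dependent)
    }
    torch.save({"model": trimmed}, "mask_rcnn_R_50_FPN_trimmed.pth")
    # then point MODEL.WEIGHT in your config to the trimmed file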

AdanMora commented 5 years ago

Yeah, you can use either, and yes, the pretrained models from the zoo are slightly better. Thanks.

bernhardschaefer commented 5 years ago

@botcs

Just a quick example: COCO annotations only allow polygons (AFAIK, but correct me), which cannot represent masks with holes in them. If you have something like the GTA dataset, you will have binary masks only, with ridiculously non-smooth boundaries that translate to crazy complex polygons.

The coco documentation states:

The segmentation format depends on whether the instance represents a single object (iscrowd=0 in which case polygons are used) or a collection of objects (iscrowd=1 in which case RLE is used)

So in theory COCO supports RLE, i.e. we could hijack the iscrowd flag to pass in datasets that require RLEs. However, in the current implementation they are actually not considered, see COCODataset implementation:

    # filter crowd annotations
    # TODO might be better to add an extra field
    anno = [obj for obj in anno if obj["iscrowd"] == 0]

Maybe this filter could be removed with not too much effort once your PR #473 on supporting binary masks has landed. :-)

Although I'm not sure what other side effects using this iscrowd flag has, maybe there is a different evaluation path for those.

botcs commented 5 years ago

@bernhardschaefer if you look into the cocoapi you can see that the iscrowd instances are ignored completely, which is outside the scope of this lib (hopefully :D)

About the PR: I am waiting for @fmassa's approval.

bernhardschaefer commented 5 years ago

I see, thanks for the clarification. In that case I think it's even more important to be able to train on your own dataset without using the coco format.

Idolized22 commented 5 years ago

@Idolized22 Thank you; actually I'm using other tools. I thought you were talking about the original LabelMe tool; I made a script to convert LabelMe annotations to coco format.

@AdanMora

As you wish. The script is written in a way that it only takes a few more small changes to be usable with any tool that provides the polygon points as [x1 y1 x2 y2 ... xn yn]. Currently it uses labelme only to calculate the area and bbox, which could be calculated with another segmentation library.

On another matter,

Have you managed to create rotated bounding boxes which are not necessarily parallel to the image axes?

AdanMora commented 5 years ago

@Idolized22

As you wish. The script is written in a way that it only takes a few more small changes to be usable with any tool that provides the polygon points as [x1 y1 x2 y2 ... xn yn]. Currently it uses labelme only to calculate the area and bbox, which could be calculated with another segmentation library.

Excellent, good to know.

On another matter,

Have you managed to create rotated bounding boxes which are not necessarily parallel to the image axes?

I didn't get the question, can you explain further?

Idolized22 commented 5 years ago

@Idolized22 Have you managed to create rotated bounding boxes which are not necessarily parallel to the image axes? @AdanMora I didn't get the question, can you explain further?

@AdanMora

I am wondering how to produce a bounding box for objects which are rotated in the image; therefore I need the bounding box to be rotated and not parallel to the image.

AdanMora commented 5 years ago

Mmmm sorry, I don't know if I got it, but using the polygons you can generate the bounding box no matter the position of the polygon; you just have to play with the minimum and maximum values of the polygon coordinates, which allow you to draw any (axis-aligned) bounding box.
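Something like this (a tiny sketch; poly is assumed to be a flat [x1, y1, x2, y2, ...] list, the way COCO segmentations are stored):

    def poly_to_xywh(poly):
        # axis-aligned bounding box from a flat list of polygon coordinates
        xs, ys = poly[0::2], poly[1::2]
        x_min, y_min = min(xs), min(ys)
        return [x_min, y_min, max(xs) - x_min, max(ys) - y_min]

    poly_to_xywh([100.0, 200.0, 150.0, 200.0, 150.0, 280.0])  # -> [100.0, 200.0, 50.0, 80.0]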

madurner commented 5 years ago

Hey guys, first of all thanks to @AdanMora for this nice tutorial! I want to fine-tune the network on a challenging dataset (industrial objects, textureless, and all objects look similar) containing around 80000 training samples. I went through all the steps and trained the network. However, the results on my test data are not what I want them to be :P there are still samples in the test scenes where objects without any occlusion etc. are not even detected. I am still getting into the code, but I was wondering if you have some advice concerning the hyperparameters.

6) Changing the hyper-parameters

The engine uses yacs config files; in the repo you can find different ways to change the hyper-parameters.

If you are using a single GPU, look at the README; there is a section for this case. You have to change some hyper-parameters because the defaults were written for multi-GPU training (8 GPUs). I trained my model using a single one with no problem at all; just change SOLVER.IMS_PER_BATCH and adjust the other SOLVER params.

Update DATASETS.TRAIN and DATASETS.TEST with the names that you used in paths_catalog.py. Also consider changing the min/max input size hyper-parameters.

At the moment I am training on one GPU. Some questions that came up: do we really need that number of iterations (720000) for fine-tuning? I mean, those parameters are for training on the coco dataset again, right? What does SOLVER.STEPS "(480000, 640000)" do?

thx for your help

AdanMora commented 5 years ago

@maedmaex Maybe you have to apply some augmentation techniques or change general hyperparameters:

There are other hyperparameters for the bounding box and mask layers, but I don't have the knowledge to tell you how they work and how they affect the model. Maybe the problem is the size of the objects in the images, e.g. too big or too small, so these hyperparameters could help.

About SOLVER.STEPS, I don't know what they do; they're related to the training, not the architecture.

botcs commented 5 years ago

@maedmaex

SOLVER.STEPS is for reducing the base lr by a factor of 0.1 at the given iterations, as far as I know.
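In other words (a rough illustration, ignoring the warmup phase; 0.1 is the default gamma in the config):

    def lr_at(iteration, base_lr=0.0025, steps=(480000, 640000), gamma=0.1):
        # the learning rate is multiplied by gamma every time a step boundary is passed
        return base_lr * gamma ** sum(iteration >= s for s in steps)

    lr_at(100000)  # 0.0025
    lr_at(500000)  # 0.00025
    lr_at(700000)  # 0.000025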

madurner commented 5 years ago

@botcs @AdanMora thx for the hints :)

Maybe you have to apply some augmentation techniques or change general hyperparameters:

what kind of augmentation do you mean?

AdanMora commented 5 years ago

By default there is only one data augmentation step, horizontal flips. You can add more data augmentation steps, like playing with the image's saturation, contrast or brightness, making random (or not) crops, rotations, etc.; these depend on your data and increase the robustness of your model. But, like I said, only the flip operation is supported, because all of these techniques require changing the coco annotations for every object in the image, so you have to handle that yourself; search for what people have done in this case, there are some tools for it.
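As a small example of what such a step looks like, here is a sketch of a vertical flip modeled on the repo's RandomHorizontalFlip (it assumes the BoxList / mask structures support a top-bottom transpose, which they appear to; you would still have to wire it into build_transforms in maskrcnn_benchmark/data/transforms/build.py):

    import random
    from torchvision.transforms import functional as F

    FLIP_TOP_BOTTOM = 1  # same convention as PIL.Image.FLIP_TOP_BOTTOM

    class RandomVerticalFlip(object):
        def __init__(self, prob=0.5):
            self.prob = prob

        def __call__(self, image, target):
            if random.random() < self.prob:
                # flip the image and let the target flip its boxes / masks itself
                image = F.vflip(image)
                target = target.transpose(FLIP_TOP_BOTTOM)
            return image, target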

AdanMora commented 5 years ago

Now I'm working on it, so if I have a good implementation, I'll make a pull request with other augmentation steps.

Fenix0817 commented 5 years ago

Hello @AdanMora,

I am new to this topic, and I want to know: is it possible to use only step seven of your tutorial to implement a detector with my categories in predictor.py?

Thank you so much.

AdanMora commented 5 years ago

Hello @AdanMora,

I am new to this topic, and I want to know: is it possible to use only step seven of your tutorial to implement a detector with my categories in predictor.py?

Thank you so much.

@Fenix0817 Hi, that depends on your data: if you have 81 classes (including the background), you can do it from step 3 onwards; you have to hardcode the paths of your dataset, etc., and in predictor.py you have to put your classes in the same order used in the annotation file.

Fenix0817 commented 5 years ago

Thank you for your fast answer @AdanMora.

I don't have 81 classes; I only have six. In conclusion, is it better to build a dataset with my own images and follow your whole tutorial?

Best regards.

AdanMora commented 5 years ago

@Fenix0817 That's correct, you have to; the default configuration is for coco.

Fenix0817 commented 5 years ago

Thank you so much @AdanMora.

Have a good day.

VincentGogo commented 5 years ago

Hello, I am a new beginner; I want to ask where I can find the best model after training?

madurner commented 5 years ago

@VincentGogo The trained models (for the 81 classes of COCO) can be found in the model zoo here: https://github.com/facebookresearch/maskrcnn-benchmark/blob/master/MODEL_ZOO.md How to trim these models for fine-tuning on your own dataset is explained in step 7) of this tutorial by @AdanMora.

VincentGogo commented 5 years ago

@maedmaex Thanks for the reply. I mean there is no val loss during training; the model is just saved every checkpoint_period. Which model is best? Or do I just take "model_final"? Another question: how do I set the value of "SOLVER.MAX_ITER"? Is it related to the training dataset size?

    if iteration % checkpoint_period == 0:
        checkpointer.save("model_{:07d}".format(iteration), **arguments)
    if iteration == max_iter:
        checkpointer.save("model_final", **arguments)
AdanMora commented 5 years ago

@VincentGogo When you set checkpoints every 1000 iterations, the checkpointer writes the model's state at that moment, not the best one.

It's not directly related, but you have to take into account that if your dataset is tiny and you set thousands of iterations, the model will see the same images plenty of times.

madurner commented 5 years ago

Adding something to the post of @AdanMora: if I understand you correctly, @VincentGogo, you want to see the loss on the validation data. This is AFAIK not implemented. As @fmassa states here https://github.com/facebookresearch/maskrcnn-benchmark/issues/445:

The total loss is a good proxy, but overall what you ultimately want is the mAP on the validation set.

tools/train_net.py outputs the losses in a text file, you'd need to parse it or adapt the code to plot straight away to TensorBoard / Visdom / etc.

tools/train_net.py will store model checkpoints every x iterations; you can evaluate the quality of a checkpoint on the validation set using tools/test_net.py. Here https://github.com/facebookresearch/maskrcnn-benchmark/issues/348 you can read more about getting mAP on your validation set during training.

Concerning your question about SOLVER.MAX_ITER, as @AdanMora stated earlier:

you decide the number of iterations of your model, could be any amount, each iteration is a batch, so 10 iterations equals 10 batches to process.

So the number of iterations you choose depends on your SOLVER.IMS_PER_BATCH as well. Furthermore, as already said, if you have a small dataset, a high number of iterations might lead to overfitting.
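As a back-of-the-envelope illustration (all numbers below are made up):

    dataset_size = 8000     # images referenced by DATASETS.TRAIN
    ims_per_batch = 2       # SOLVER.IMS_PER_BATCH
    max_iter = 90000        # SOLVER.MAX_ITER

    # each iteration consumes one batch, so the number of passes over the data is
    epochs = max_iter * ims_per_batch / dataset_size   # 22.5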

VincentGogo commented 5 years ago

@maedmaex @AdanMora Thanks a lot.

Fenix0817 commented 5 years ago

Hello @AdanMora,

Please, I need your help. I am training on my dataset based on your tutorial, and at this moment I have the following error:

    RuntimeError: Error(s) in loading state_dict for GeneralizedRCNN:
        size mismatch for roi_heads.box.feature_extractor.fc6.weight: copying a param with shape torch.Size([1024, 12544]) from checkpoint, the shape in current model is torch.Size([1024, 20736]).

Can you give me some idea?

Best regards.

madurner commented 5 years ago

@Fenix0817 could you please add which pre-trained model and config you are using?

Fenix0817 commented 5 years ago

Thank you for your fast answer @maedmaex. My pre-trained model is coco (mask_rcnn_R-50-FPN_1x_detectron_no_last_layers.pth) and my config is:

    MODEL:
      META_ARCHITECTURE: "GeneralizedRCNN"
      BACKBONE:
        CONV_BODY: "R-50-FPN"
      RESNETS:
        BACKBONE_OUT_CHANNELS: 256
      RPN:
        USE_FPN: True
        ANCHOR_STRIDE: (4, 8, 16, 32, 64)
        PRE_NMS_TOP_N_TRAIN: 2000
        PRE_NMS_TOP_N_TEST: 1000
        POST_NMS_TOP_N_TEST: 1000
        FPN_POST_NMS_TOP_N_TEST: 1000
      ROI_HEADS:
        USE_FPN: True
      ROI_BOX_HEAD:
        POOLER_RESOLUTION: 9
        POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)
        POOLER_SAMPLING_RATIO: 2
        FEATURE_EXTRACTOR: "FPN2MLPFeatureExtractor"
        PREDICTOR: "FPNPredictor"
        NUM_CLASSES: 6
      ROI_MASK_HEAD:
        POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)
        FEATURE_EXTRACTOR: "MaskRCNNFPNFeatureExtractor"
        PREDICTOR: "MaskRCNNC4Predictor"
        POOLER_RESOLUTION: 14
        POOLER_SAMPLING_RATIO: 2
        RESOLUTION: 28
        SHARE_BOX_FEATURE_EXTRACTOR: False
      MASK_ON: True
    DATASETS:
      TEST: ("carla_val_cocostyle",)
    DATALOADER:
      SIZE_DIVISIBILITY: 32

Thanks a lot.

madurner commented 5 years ago

@Fenix0817 And when does the error occur? Training or testing? Because you do not define a DATASETS.TRAIN in this config file, and you are missing a definition for MODEL.WEIGHT. Or do you define them in defaults.py or pass them via the command line?

AdanMora commented 5 years ago

Maybe you are trying to load the wrong weights; verify that your weights file matches your model config file, you can't use just any.

Fenix0817 commented 5 years ago

Good morning @maedmaex and @AdanMora,

Thank you for your fast answers and useful help. I have now found my error: in ROI_BOX_HEAD I had changed POOLER_RESOLUTION from 7 to 9, which is why the fc6 shapes didn't match (256 x 7 x 7 = 12544 vs. 256 x 9 x 9 = 20736). It was my fault, sorry.

Following the advice from @maedmaex, I defined DATASETS.TRAIN in the config file. My script ran without problems, but it doesn't detect anything. I was looking for a problem in predictor.py, and from the method compute_prediction() I obtained this:

    prediction BoxList(num_boxes=100, image_width=1280, image_height=720, mode=xyxy)

But in the method overlay_class_names(), when I printed the variables scores and labels, I obtained this:

    scores [ ]
    labels [ ]

Please, Do you have some idea?

Thanks a lot.

Fenix0817 commented 5 years ago

At the moment, I think that my model is not training on my dataset. How can I check whether it is training, using the Mask_R-CNN_demo.ipynb example?

AdanMora commented 5 years ago

You can use the test script to get the metrics of your model; if all the scores are zero, your model doesn't work, and maybe you have mistakes in your annotations or something wrong in the config file.

Fenix0817 commented 5 years ago

It is alive........

Thank you so much @AdanMora and @maedmaex for your useful help and excellent contribution.

I want to make just two suggestions:

* For the first step in the tutorial, I want to suggest the following two links: http://www.immersivelimit.com/tutorials/create-coco-annotations-from-scratch/#coco-dataset-format
  https://github.com/jsbroks/coco-annotator

* I think that it will be better to change the order of steps 5 and 7. In my opinion, step 7 has to be step number 5, and the current step 5 has to be the last step. It was my conclusion based on the previous answer of @AdanMora to my problem.

For the current step 5, it would also be good to mention that you need to run training and testing before running your application.

Best regards.

AdanMora commented 5 years ago

It is alive........

Thank you so much @AdanMora and @maedmaex for your useful help and excellent contribution.

You're welcome!

I want to make just two suggestions:

* For the first step in the tutorial, I want to suggest the following two links: http://www.immersivelimit.com/tutorials/create-coco-annotations-from-scratch/#coco-dataset-format
  https://github.com/jsbroks/coco-annotator

Yeah, I used the first one, and I'm using the COCO Annotator; brilliant tool, I have some contributions in that repo.

* I think that it will be better to change the order of steps 5 and 7. In my opinion, step 7 has to be step number 5, and the current step 5 has to be the last step. It was my conclusion based on the previous answer of @AdanMora to my problem.

I disagree; the order is: you get the training script (the shell), then you change the hyperparameters and set up the finetuning, and then you can test whether there are more layers to be trimmed.

For the current step 5, it would also be good to mention that you need to run training and testing before running your application.

It's not necessary; e.g. you can test the checkpoints every 1000 iterations, it's up to you how to check whether the model is working.

Fenix0817 commented 5 years ago

OK @AdanMora, Thank you so much for your feedback.

Best regards.