Closed: pandamax closed this issue 3 years ago
When using the model on a different dataset, make sure that the dataloader is modified accordingly. In particular, you need to ensure that the height and width of the image are divisible by 32 (required by DLA-34), and that the aspect ratio is maintained. You can see examples of this in `datasets/tusimple.py` and `datasets/culane.py`.
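As a minimal sketch of the divisibility requirement, the snippet below zero-pads an image so that both spatial dimensions become multiples of 32. Note this is an illustration, not the repo's actual preprocessing; the dataloaders in `datasets/tusimple.py` and `datasets/culane.py` may resize instead of pad, and the helper name here is hypothetical.

```python
import numpy as np

def pad_to_multiple_of_32(img: np.ndarray) -> np.ndarray:
    """Zero-pad an HxWxC image so H and W are divisible by 32
    (required by backbones like DLA-34 that downsample in powers of 2)."""
    h, w = img.shape[:2]
    new_h = ((h + 31) // 32) * 32  # round up to next multiple of 32
    new_w = ((w + 31) // 32) * 32
    # pad on the bottom/right so the original content keeps its position
    return np.pad(img, ((0, new_h - h), (0, new_w - w), (0, 0)), mode="constant")

img = np.zeros((590, 1640, 3), dtype=np.uint8)  # CULane frame size
padded = pad_to_multiple_of_32(img)
print(padded.shape)  # (608, 1664, 3)
```

Padding (rather than rescaling to a multiple of 32) preserves the aspect ratio exactly, at the cost of a few rows/columns of empty pixels.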
To change the backbone of the model, you can replace DLA-34 with your backbone of choice in the `train_*.py` and `infer_*.py` scripts. Just make sure that you have outputs corresponding to the binary heatmap, HAF, and VAF respectively. Also, make sure to change `self.output_scale` in your dataloader depending on the factor by which the backbone downsamples the output; for example, DLA-34 downsamples the output by a factor of 4, hence `self.output_scale = 0.25`. To keep things simple, I would highly suggest maintaining the same downsampling factor as DLA-34.
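To illustrate what "outputs corresponding to the binary heatmap, HAF and VAF" means, here is a hedged PyTorch sketch that attaches three 1x1-conv heads to an arbitrary backbone. The class name, channel counts, and the toy backbone are assumptions for illustration, not the repo's actual API; the point is that a stride-4 backbone pairs with `self.output_scale = 0.25` in the dataloader.

```python
import torch
import torch.nn as nn

class LaneHeads(nn.Module):
    """Hypothetical wrapper: attach the three outputs LaneAF-style training
    expects (binary heatmap, HAF, VAF) to a feature-extracting backbone."""
    def __init__(self, backbone: nn.Module, backbone_channels: int):
        super().__init__()
        self.backbone = backbone
        self.heatmap = nn.Conv2d(backbone_channels, 1, kernel_size=1)  # binary segmentation
        self.haf = nn.Conv2d(backbone_channels, 1, kernel_size=1)      # horizontal affinity field
        self.vaf = nn.Conv2d(backbone_channels, 2, kernel_size=1)      # vertical affinity field (2D vectors)

    def forward(self, x):
        feats = self.backbone(x)
        return self.heatmap(feats), self.haf(feats), self.vaf(feats)

# Toy stand-in backbone that downsamples by 4, matching DLA-34's output stride;
# the dataloader would then use self.output_scale = 0.25.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, stride=2, padding=1), nn.ReLU(),
)
model = LaneHeads(backbone, backbone_channels=16)
seg, haf, vaf = model(torch.randn(1, 3, 64, 128))
print(seg.shape, haf.shape, vaf.shape)  # spatial dims are 1/4 of the input
```

If your replacement backbone downsamples by 8 instead, the same wiring works but the dataloader must generate targets at `self.output_scale = 0.125`.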
UPDATE: I have recently implemented support for two new backbones, ENet and ERFNet. ENet is a very lightweight model with 10x fewer MACs compared to DLA-34, ResNet-34, and ERFNet. You can train and test these models by following the updated instructions.
Thanks for sharing your idea, and wonderful job! The idea of using HAF and VAF for clustering lanes is impressive and enlightening. I have used your model on my own datasets but did not get good results. Meanwhile, I am thinking about how to reduce the FLOPs of the entire model. Do you have any detailed suggestions, such as a specific modification to the backbone or another method? Thanks again.