oke-aditya opened this issue 4 years ago
Hi @oke-aditya , I recently refactored ultralytics's yolov5 following the philosophy of torchvision, here. The inference part is done and now looks similar to torchvision's RetinaNet. I plan to use quickvision's API (`train_step`, `val_step`, `fit`, etc.) to implement the training part.
If we rely entirely on `torch.hub` to bring ultralytics's model into quickvision, it's difficult to obtain the intermediate features.
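For context, pulling the upstream model via hub is essentially a one-liner (a sketch; the `'ultralytics/yolov5'` entry point and the `pretrained` flag are upstream's hub API at the time of writing):

```python
import torch

# Pull ultralytics's yolov5s straight from the hub. The returned module
# wraps preprocessing and inference end-to-end, which is exactly why the
# intermediate backbone features are hard to get at.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
model.eval()
```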
True, but that significantly reduces the codebase we need to maintain. I would love a custom implementation though; it would need some testing, weights, etc.
Just saw your implementation. That's great. Really nice and so complete.
And the only difference between my model weights and ultralytics's is some key names; you can check the model-weight conversion script here.
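For anyone curious, such a conversion is typically just a state-dict key remap along these lines (an illustrative sketch; the `KEY_MAP` below is made up, the real mapping is in the script linked above):

```python
import torch

# Hypothetical prefix rename for illustration only.
KEY_MAP = {"model.": "backbone.body."}

def convert_keys(src_path: str, dst_path: str) -> None:
    """Load a checkpoint, rename matching key prefixes, save it back."""
    state_dict = torch.load(src_path, map_location="cpu")
    converted = {}
    for key, value in state_dict.items():
        for old, new in KEY_MAP.items():
            if key.startswith(old):
                key = new + key[len(old):]
                break
        converted[key] = value
    torch.save(converted, dst_path)
```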
That's absolutely fine!
Just one small doubt: will using `GeneralizedRCNNTransform` deteriorate the results?
P.S. I'm not a super expert in these models or in implementing them from scratch.
> Just one small doubt: will using `GeneralizedRCNNTransform` deteriorate the results?
True. Ultralytics uses `letterbox` for this, and it relies on opencv, here. We should refactor this part using something in torch so that it stays consistent and comparable with ultralytics. This is the only difference between ultralytics's implementation and mine.
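A minimal pure-torch letterbox might look like this (a sketch only, not ultralytics's implementation; their version uses `cv2.resize` and `cv2.copyMakeBorder`):

```python
import torch
import torch.nn.functional as F

def letterbox(image: torch.Tensor, new_size: int = 640, fill: float = 114 / 255):
    """Resize a CHW image preserving aspect ratio, then pad to a square."""
    _, h, w = image.shape
    scale = new_size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # Resize with a batch dim, since interpolate expects NCHW.
    image = F.interpolate(image[None], size=(nh, nw), mode="bilinear",
                          align_corners=False)[0]
    pad_h, pad_w = new_size - nh, new_size - nw
    # Pad symmetrically: (left, right, top, bottom), constant gray fill.
    image = F.pad(image, (pad_w // 2, pad_w - pad_w // 2,
                          pad_h // 2, pad_h - pad_h // 2), value=fill)
    return image, scale
```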
I will try once locally using `torch.hub` without the opencv dependency and try to produce a training script which we can use.
There are a few problems I foresee in going through hub, but I want to verify them first. Some are these opencv hard bindings; another might be that inference is attached to some custom `BBoX` class, which I really don't want in this repo.
Though `GeneralizedRCNNTransform` actually worked fine when torchvision adopted RetinaNet; there seemed to be no issue at the time. I'm not sure how flexible it is.
> I will try once locally using `torch.hub` without the opencv dependency and try to produce a training script which we can use.
Hoping to see your experiment results here.
Sure, I will share a Colab so that we can dig into their implementation. This will enable us to compare ours as well later!
Just a doubt. FRCNN and RetinaNet are not differentiable in `eval` mode. E.g.

```python
model = FRCNN()
model.eval()
opt = Adam(model.parameters(), lr=1e-3)
opt.step()
```

will not work.
But it will work for Detr, which is differentiable in both `train` and `eval` modes.
For YOLO, what are your thoughts?
The `frcnn` and `retinanet` in `eval` mode contain the `PostProcess` module, which shouldn't be learnt. In other words, there are no parameters to learn in `PostProcess`.
But in `detr`, they split `PostProcess` out, here; its output is the same as in `train` mode except for things like `BatchNormalization`.
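To illustrate, a stub along the lines of DETR's `PostProcess` (illustrative only, not the actual class) is pure tensor math with nothing for an optimizer to update:

```python
import torch
from torch import nn

class PostProcess(nn.Module):
    """Illustrative stub: turns raw head outputs into detections."""
    @torch.no_grad()
    def forward(self, logits, boxes):
        # Pick the highest-scoring class per box; no learnable weights involved.
        scores, labels = logits.softmax(-1).max(-1)
        return {"scores": scores, "labels": labels, "boxes": boxes}

pp = PostProcess()
print(sum(p.numel() for p in pp.parameters()))  # 0: nothing to learn
```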
For yolov5, we can choose either way. No rule of thumb, I think?
No thumb rule here.
For Detr, I separated out the `PostProcess` as I think it is optional and best left to users. That's what Detr too did in their paper and demo.
Best to leave it out; I think it is great to keep the same interface in `train` and `eval` mode, as this offers flexibility to users.
E.g. for CNNs and Detr, we have almost identical outputs and loss calculations in `val` mode.
The idea here is that `training` and `validation` are really almost the same thing with small changes, while `inference` is where we don't care about loss and do `PostProcess` etc. Inference is something one would do in real time, just to visualize the outputs.
Sadly we can't do this for FRCNN and RetinaNet since, as you mentioned, the `PostProcess` is combined.
Let me know your thoughts!
Here is a discussion of the `frcnn` output in torchvision: https://github.com/pytorch/vision/issues/1775
Both `loss_dict` and `detections` would be fine too, but currently torchvision models don't do that.
This is actually a good idea, as users can slowly track how the detections are improving with respect to the losses. Sometimes the dataset is huge and the `detections` are simply numerous, though, which can cause a memory burden to save them.
I think for detections users should simply do an `inference` and not a `train_step` or `val_step`.
Yep, the mechanism in `detr` is more flexible, but it will influence the recovery of the image shape in the `PostProcess` procedure. The fix is minor; I will test it in my yolov5-rt-stack repo.
One advantage of doing PostProcess is that it enables converting from YOLO format to Pascal VOC format, from which we can compute `IoU` and other metrics with torchvision.
I think if we too do PostProcess internally and are able to compute these metrics, it would be nice.
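For example, a small sketch of that conversion using torchvision's box utilities (`box_convert` and `box_iou` are real torchvision ops; the normalized-cxcywh input format is the usual YOLO convention):

```python
import torch
from torchvision.ops import box_convert, box_iou

def yolo_to_voc(boxes: torch.Tensor, img_w: int, img_h: int) -> torch.Tensor:
    """boxes: (N, 4) normalized (cx, cy, w, h); returns absolute (x1, y1, x2, y2)."""
    scale = boxes.new_tensor([img_w, img_h, img_w, img_h])
    return box_convert(boxes * scale, in_fmt="cxcywh", out_fmt="xyxy")

preds = yolo_to_voc(torch.tensor([[0.5, 0.5, 0.4, 0.6]]), 640, 480)
targets = torch.tensor([[200.0, 100.0, 440.0, 380.0]])
print(box_iou(preds, targets))  # pairwise IoU matrix, shape (1, 1)
```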
Detr's PostProcess returns `{'scores': s, 'labels': l, 'boxes': b}`. If we can stick with this output format for all models in both `train_step` and `val_step`, it would be standard. Plus, this allows us to compute metrics as well, which we return in both places, `train_step` and `val_step`.
Sadly we can't do this for torchvision models, as the post-processing is coupled with the model definition. Hence the `train_step` differs: we don't get the detection outputs, only the `loss_dict`.
So we can return `{"loss1": avg_loss, "loss2": avg_loss, ..., "iou": avg_iou, "giou": avg_giou}` as a standard for `train_step` and `val_step`.
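For illustration, a hedged sketch of producing that dict, assuming predictions and targets are already matched one-to-one in xyxy format and that `torchvision.ops.generalized_box_iou` is available (it is in recent torchvision releases):

```python
import torch
from torchvision.ops import box_iou, generalized_box_iou

def metrics_dict(loss_dict, pred_boxes, true_boxes):
    # Start from the averaged losses (assumed 0-dim tensors) and append box metrics.
    out = {k: v.item() for k, v in loss_dict.items()}
    # diagonal() pairs prediction i with target i; assumes matched boxes.
    out["iou"] = box_iou(pred_boxes, true_boxes).diagonal().mean().item()
    out["giou"] = generalized_box_iou(pred_boxes, true_boxes).diagonal().mean().item()
    return out
```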
What I would propose is that in `train_step` and `val_step` we do this PostProcess, but do not attach it to the model. So the user gets the best of both: if they use `train_step` or `val_step`, everything is handled by us.
Also, if the user would like very specific post-processing, they can simply instantiate the model and continue on their own. This is what Detr did, and I think it was a very good thought by them.
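A rough shape of that proposal (hypothetical names; `model`, `criterion`, and `post_process` stand in for quickvision internals, not its actual API):

```python
import torch

@torch.no_grad()
def val_step(model, batch, criterion, post_process):
    images, targets = batch
    outputs = model(images)                  # raw head outputs, same as in train
    loss_dict = criterion(outputs, targets)  # losses stay computable in eval
    detections = post_process(outputs)       # [{'scores': s, 'labels': l, 'boxes': b}, ...]
    return loss_dict, detections
```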
Thoughts?
Just a doubt here. Compared to torchvision, which uses `ImageList`, `detr` adopts `NestedTensor`. The difference between the two is relatively small: `ImageList` takes `image_sizes` in its initialization while `NestedTensor` does not. If we separate out the `PostProcess`, should we use `NestedTensor` instead?
I think the Nested Tensor utils can be re-used from our utils.
`ImageList` is really torchvision-specific, and `NestedTensor` is probably going to be better supported and is a slightly better representation.
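For reference, the two containers look roughly like this (simplified sketches; see torchvision's `ImageList` and DETR's `NestedTensor` for the real definitions):

```python
import torch
from typing import List, Tuple

class ImageList:
    # torchvision: padded batch plus the original (h, w) of each image
    def __init__(self, tensors: torch.Tensor, image_sizes: List[Tuple[int, int]]):
        self.tensors = tensors
        self.image_sizes = image_sizes

class NestedTensor:
    # DETR: padded batch plus a boolean mask marking the padded pixels
    def __init__(self, tensors: torch.Tensor, mask: torch.Tensor):
        self.tensors = tensors
        self.mask = mask
```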
Hi @oke-aditya , here is a slightly abstract model-graph visualization comparing `yolov5` to `retinanet`; their structure is almost the same now:
[yolov5 graph] vs [retinanet graph]
Check my notebooks for more details.
Was a bit away this weekend; I will definitely have a look.
Just a quick update @zhiqwang .
After release 0.1 we will continue on this (from next week). I am super eager to have your YOLO v5 here.
It will definitely be there in 0.2, no doubt.
@oke-aditya Sure, and we can follow `detr` from release `0.1`.
Hi @oke-aditya , it seems that `0.1` will be released soon. I'm currently working on making `yolov5-rt-stack` support training.
Yes, release is today :smile: We will come back to this.
Hi @oke-aditya , there are some bugs in the loss computation in my `yolov5` implementation: https://github.com/zhiqwang/yolov5-rt-stack/issues/16. Once this is fixed, it should load like `DETR` or `retinanet`.
Very nice. Till then I will build the layers API, which will make porting YOLO easier.
Hi @oke-aditya , yolov5 can now be used for training; check out https://github.com/zhiqwang/yolov5-rt-stack/pull/25 for the details. But there are some hidden bugs in the master branch; I will add more unit tests soon.
Superb. I have been really busy this month. I will get back to working on this at the end of the month.
🚀 Feature
Implement YOLO v5 from `torch.hub`. This library removes such `dataset` abstraction and aims to provide a clean, modular interface to models.
Some key points to note:
- `train_step`, `val_step`, `fit` API and a lightning trainer. Datasets, augmentations, and transforms are not needed.
- Note that none of quickvision's models can achieve SOTA, the limitations being torchvision's implementations and not using transforms/datasets. But they are faster, easier, and more flexible to train. Something which torchvision too does.

With this context, we can start adding YOLO v5.
Dependencies:
- Avoid `opencv-python` at all costs. OpenCV is not, like PIL, just a library for image reading. It is huge and has lots of subdependencies. Keeping the library light will enable us to use `PyTorch` Docker containers and directly infer using `torchserve`.

Evaluation mode:
- We only need to load the model from `torch.hub`; we don't need methods such as `.fuse()` and `.eval()`. Currently we do not have inference scripts for any models, but surely in future #2. So right now let's focus on training.