Closed HonzaCuhel closed 7 months ago
Have you compared `clip-vit-large-patch14` and `clip-vit-base-patch32`?
current status: 🟢

**Overall Coverage**

| Lines | Covered | Coverage | Threshold | Status |
| ----- | ------- | -------- | --------- | ------ |
| 1015  | 499     | 49%      | 0%        | 🟢     |

**New Files**

| File | Coverage | Status |
| ---- | -------- | ------ |
| datadreamer/dataset_annotation/clip_annotator.py | 52% | 🟢 |
| **TOTAL** | 52% | 🟢 |

**Modified Files**

| File | Coverage | Status |
| ---- | -------- | ------ |
| datadreamer/dataset_annotation/\_\_init\_\_.py | 100% | 🟢 |
| datadreamer/pipelines/generate_dataset_from_scratch.py | 41% | 🟢 |
| **TOTAL** | 71% | 🟢 |

updated for commit: 5cf88ee by action
6 files  6 suites   46m 19s :stopwatch:
85 tests  36 :white_check_mark:  49 :zzz:  0 :x:
510 runs  216 :white_check_mark:  294 :zzz:  0 :x:

Results for commit 5cf88ee8.

:recycle: This comment has been updated with latest results.
No, I haven't, but I can test it.
I've manually compared `clip-vit-large-patch14` and `clip-vit-base-patch32` annotation on 100 generated images on JP using an L4 GPU, with batch size 8 for annotation. These are the results:

- `clip-vit-large-patch14`: 3s
- `clip-vit-base-patch32`: 1s
- `clip-vit-large-patch14`'s annotation better:
- `clip-vit-base-patch32`'s annotation better:

Switched to using `clip-vit-base-patch32` for the speed, as the performance gap isn't big.
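For reference, a timing comparison like the one above could be harnessed with a small helper that times batched annotation. This is a minimal sketch, not DataDreamer's actual benchmarking code: `benchmark_annotator` and the stand-in callable are hypothetical, and a real run would wrap the CLIP annotator (e.g. `transformers`' `CLIPModel` for `openai/clip-vit-base-patch32`) in place of the dummy.

```python
import time

def benchmark_annotator(annotate_batch, images, batch_size=8):
    """Time a batched annotation callable over a list of images.

    `annotate_batch` is any callable that takes a list of images and
    returns per-image labels. Batch size 8 mirrors the comparison
    described above; the function itself is a hypothetical helper.
    """
    results = []
    start = time.perf_counter()
    # Process the images in fixed-size batches, as the annotator would.
    for i in range(0, len(images), batch_size):
        results.extend(annotate_batch(images[i : i + batch_size]))
    elapsed = time.perf_counter() - start
    return results, elapsed

# Example with a stand-in annotator; swap in a real CLIP-based
# annotator to reproduce the large-vs-base timing comparison.
dummy_annotator = lambda batch: ["label"] * len(batch)
labels, seconds = benchmark_annotator(dummy_annotator, list(range(100)), batch_size=8)
print(len(labels))  # 100
```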
This PR includes the following changes: