luxonis / datadreamer

Creation of annotated datasets from scratch using Generative AI and Foundation Computer Vision models
Apache License 2.0

Feat/clip annotation #45

Closed: HonzaCuhel closed this pull request 7 months ago

HonzaCuhel commented 8 months ago

This PR includes the following changes:

sokovninn commented 8 months ago

Have you compared clip-vit-large-patch14 and clip-vit-base-patch32?

github-actions[bot] commented 8 months ago

☂️ Python Coverage

current status: ✅

Overall Coverage

| Lines | Covered | Coverage | Threshold | Status |
|------:|--------:|---------:|----------:|:------:|
| 1015  | 499     | 49%      | 0%        | 🟒     |

New Files

| File | Coverage | Status |
|------|---------:|:------:|
| datadreamer/dataset_annotation/clip_annotator.py | 52% | 🟒 |
| TOTAL | 52% | 🟒 |

Modified Files

| File | Coverage | Status |
|------|---------:|:------:|
| datadreamer/dataset_annotation/__init__.py | 100% | 🟒 |
| datadreamer/pipelines/generate_dataset_from_scratch.py | 41% | 🟒 |
| TOTAL | 71% | 🟒 |

updated for commit: 5cf88ee by action 🐍

github-actions[bot] commented 8 months ago

Test Results

6 files, 6 suites, 46m 19s ⏱️
85 tests: 36 passed ✅, 49 skipped πŸ’€, 0 failed ❌
510 runs: 216 passed ✅, 294 skipped πŸ’€, 0 failed ❌

Results for commit 5cf88ee8.

♻️ This comment has been updated with the latest results.

HonzaCuhel commented 8 months ago

No, I haven't, but I can test it.

HonzaCuhel commented 7 months ago

I've manually compared clip-vit-large-patch14 and clip-vit-base-patch32 annotation on 100 generated images on JP, using an L4 GPU and a batch size of 8 for annotation. These are the results:

Latency:

Comparison:
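A timing comparison like the one above could be scripted roughly as follows. This is a hedged sketch, not datadreamer's actual annotator code: the checkpoint names are the public Hugging Face ones mentioned in this thread, while the `benchmark` harness and its parameters are illustrative.

```python
# Sketch: compare annotation latency of two CLIP checkpoints.
# Assumes `transformers` and `torch` are installed; the harness
# itself is hypothetical, not part of datadreamer.
from itertools import islice


def batched(items, batch_size):
    """Yield successive lists of up to batch_size items."""
    it = iter(items)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch


def benchmark(model_name, images, labels, batch_size=8):
    """Return total seconds to annotate all images with one checkpoint."""
    import time

    import torch
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained(model_name).eval()
    processor = CLIPProcessor.from_pretrained(model_name)

    start = time.perf_counter()
    with torch.no_grad():
        for batch in batched(images, batch_size):
            inputs = processor(
                text=labels, images=batch, return_tensors="pt", padding=True
            )
            # logits_per_image: one similarity score per (image, label) pair
            logits = model(**inputs).logits_per_image
            _ = logits.softmax(dim=-1)
    return time.perf_counter() - start


# Usage (downloads the checkpoints, so not run here):
# t_large = benchmark("openai/clip-vit-large-patch14", images, labels)
# t_base = benchmark("openai/clip-vit-base-patch32", images, labels)
```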

Conclusion

Switched to clip-vit-base-patch32 for speed, since the performance gap isn't big.
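For context on what either checkpoint produces: CLIP annotation is zero-shot classification, turning per-label image-text similarity scores into labels. The helper below is a minimal, library-free sketch of that selection step; the function names and the threshold are illustrative, not datadreamer's actual API.

```python
import math


def softmax(scores):
    """Numerically stable softmax over a list of similarity scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


def pick_labels(scores, labels, threshold=0.5):
    """Return the labels whose softmax probability meets the threshold.

    scores: one image's row of CLIP similarity logits, one per label.
    """
    probs = softmax(scores)
    return [lab for lab, p in zip(labels, probs) if p >= threshold]
```

With a real model, `scores` would be one row of `logits_per_image`; which checkpoint produced it only changes the scores, not this selection logic.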