Closed: tschaffter closed this issue 9 months ago
LinkedIn organization API model includes information about how to crop the (original) image.
@rrchai Can you please save the result of your exploration here before closing this ticket?
This ticket has been time-boxed to the end of Sprint 23.09.
Thumbor's Smart Cropping uses OpenCV to detect objects (including faces) in an image and to define the focal point for cropping based on the center of the detected object.
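For illustration, enabling smart cropping is just an extra `smart` segment in the unsafe URL path. A minimal sketch in Python (the base URL and image keys mirror the local test URLs in this thread; the helper function itself is hypothetical):

```python
# Sketch: build Thumbor "unsafe" URLs with and without smart cropping.
BASE = "http://localhost:8000/img/unsafe"

def thumbor_url(image_key: str, width: int, height: int, smart: bool = False) -> str:
    """Return an unsafe Thumbor URL, optionally enabling smart (focal-point) cropping."""
    segments = [BASE, f"{width}x{height}"]
    if smart:
        # OpenCV-based detection picks the focal point instead of a plain center crop
        segments.append("smart")
    segments.append(image_key)
    return "/".join(segments)

print(thumbor_url("team/rong.png", 200, 50))              # plain crop
print(thumbor_url("team/rong.png", 200, 50, smart=True))  # focal-point crop
```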
I encountered an error in the Thumbor container (`Cannot find opencv module`) while applying `smart` to the image URL during my initial testing, so I updated the Thumbor image to install OpenCV (see #2229). However, I am now able to load the "smart" image even when the Thumbor container doesn't have OpenCV installed.
Anyway, I tested both feature and facial detection/cropping using Thumbor's `smart` option; here are some results:
Original size is 66 x 79.
http://localhost:8000/img/unsafe/200x200/logo/fda.svg
http://localhost:8000/img/unsafe/200x200/smart/logo/fda.svg
http://localhost:8000/img/unsafe/200x50/smart/logo/fda.svg
http://localhost:8000/img/unsafe/debug/200x200/logo/fda.png
Original size is 206 x 205.
http://localhost:8000/img/unsafe/200x200/team/rong.png
http://localhost:8000/img/unsafe/200x50/smart/team/rong.png
http://localhost:8000/img/unsafe/debug/200x200/team/rong.png
`smart` is not always able to successfully detect faces. @rrchai Thumbor applies operations from left to right, so in the case of your photo, it will crop (or fit?) the image to 200x50 and then apply smart detection. Instead, you should probably start with smart detection, and only then apply operations that change the size of the image. The easiest way to test that is to apply smart detection without including a resize operation, e.g. http://localhost:8000/img/unsafe/smart/team/rong.png
Manual cropping is performed by defining the crop window in this format: {left}x{top}:{right}x{bottom}.
For example, if we'd like to manually move the logo to the center of the 200x200 frame, we can define the crop window like:
http://localhost:8000/img/unsafe/0x30:200x100/fit-in/200x200/logo/fda.svg
Or, moving my face to the center with a 50px-tall crop window:
http://localhost:8000/img/unsafe/0x30:200x80/fit-in/200x200/team/rong.png
The `fit-in` here stops Thumbor from auto-cropping; otherwise it will crop based on the focal point.
Based on the focal points in the debugging result, my face is not fully recognized.
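The crop-window arithmetic above can be sketched as a small helper that centers a window of a target aspect ratio inside the original image and emits the {left}x{top}:{right}x{bottom} string (a hypothetical helper, not part of Thumbor):

```python
def centered_crop(orig_w: int, orig_h: int, aspect_w: int, aspect_h: int) -> str:
    """Return a Thumbor {left}x{top}:{right}x{bottom} crop window centered
    in an orig_w x orig_h image, matching the aspect_w:aspect_h ratio."""
    # Shrink whichever dimension is too large for the target ratio.
    if orig_w * aspect_h > orig_h * aspect_w:          # image too wide
        crop_w, crop_h = orig_h * aspect_w // aspect_h, orig_h
    else:                                              # image too tall (or exact)
        crop_w, crop_h = orig_w, orig_w * aspect_h // aspect_w
    left = (orig_w - crop_w) // 2
    top = (orig_h - crop_h) // 2
    return f"{left}x{top}:{left + crop_w}x{top + crop_h}"

# e.g. crop the 206x205 photo to a 200:50 strip around the vertical center
print(centered_crop(206, 205, 200, 50))  # → 0x77:206x128
```

The resulting string would then be placed before `fit-in/{size}` in the URL path, as in the examples above.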
We should stick to example images when exploring tech, e.g. the images used on https://thumbor.readthedocs.io/en/latest/detection_algorithms.html. :-)
`fill()`, e.g. https://dev.openchallenges.io/img/unsafe/200x200/filters:fill(red,1)/logo/fda.svg
All in all, the features that would be useful for our use cases, in this format:
.../meta/debug/trim/{manual-cropping}/(adaptive-)(full-)fit-in/{size}/{filters}/...
- `meta`: Check the metadata of the Thumbor image, e.g. https://dev.openchallenges.io/img/unsafe/meta/200x200/logo/fda.svg
- `debug`: Check the focal points that `smart` has detected, e.g. http://localhost:8000/img/unsafe/debug/200x200/team/rong.png
- `trim`: "Removes surrounding space in images using top-left pixel color unless specified otherwise"
- `{manual-cropping}`: Manually crop the image using the two corner points of the crop window, in the format {left}x{top}:{right}x{bottom}, e.g. http://localhost:8000/img/unsafe/0x30:200x80/fit-in/200x200/team/rong.png
- `fit-in`: Disable auto-cropping and fit the image into an imaginary box specified by `{size}`. "If a full fit-in is specified, then the largest size is used for cropping (width instead of height, or the other way around). If adaptive fit-in is specified, it inverts requested width and height if it would get a better image definition."
- `{size}`: the size at which to display the image, {width}x{height}
- `{filters}`: Apply filters sequentially, filters:filter1:filter2:..., e.g. /200x200/filters:blur(7):fill(red,1):upscale()/example.jpg
More details about Thumbor usage: https://thumbor.readthedocs.io/en/latest/usage.html
> We should stick to example images when exploring tech, e.g. the images used on https://thumbor.readthedocs.io/en/latest/detection_algorithms.html. :-)
Right~ I'd love to see if the example image works. @tschaffter How can we view an external image via URL? https://dev.openchallenges.io/img/unsafe/https://thumbor.readthedocs.io/en/latest/_images/face_detection_original.jpg is not working; or do we have to upload it to the S3 bucket?
The image needs to be uploaded to the S3 bucket (e.g. in a tmp/ folder). I explored the possibility of using two data sources, but the Thumbor container we use only supports one image source (S3).
@tschaffter Thanks for pointing that out. Considering this task has been time-boxed, we can save the curiosity for when we return to improve avatars.
My thought is that we could wait until we develop the feature that lets users adjust the image position in the avatar when they upload or edit avatars. Then we could go back and fix the logos by centering them in the avatar in the app :).
Please feel free to close the task or move it to backlog~
Thanks for exploring and documenting so well your findings!
What projects is this feature for?
OpenChallenges
Description
Some org logos are cropped by the circular mask of the avatar component, while others look fine.
Example:
Logos should ideally not be cropped as this amounts to modifying them.
For the logos that are cropped, we could:
Options 2 and 3 would require us to capture additional information in the OC DB (e.g. a padding value). However, this should be easier than updating the original image file (Option 1).
Anything else?
No response