O-J1 opened this issue 1 month ago
Hey there! @O-J1,
Thank you for your feedback and feature suggestion. I'd like to address the points you've raised:
Q1: Panning while holding the middle mouse button down is standard UI in many programs; not being able to pan makes the application borderline unusable for larger numbers of images
A1: In edit mode, you can move objects by selecting the target and dragging it while holding the left mouse button. If objects are obscured, try using the 's' and 'w' hotkeys to hide or show layers. For more details, please refer to our user manual.
Q2: Zooming in and out should be centered on the cursor (and rechecked each scroll step)
A2: Currently, the zooming strategy of X-AnyLabeling is already centered on the cursor position.
For situations involving multiple small objects that need inspection: We strongly recommend updating to the latest version to experience our quick inspection feature. This could significantly improve your workflow efficiency. For more information, please check this discussion thread.
If you have any further questions or suggestions, please don't hesitate to let us know. We're continuously working to improve our product to meet user needs.
Hi @CVHub520
> A1: In edit mode, you can move objects by selecting the target and dragging it while holding the left mouse button. If objects are obscured, try using the 's' and 'w' hotkeys to hide or show layers. For more details, please refer to our user manual.
You have misunderstood me / I wasn't clear enough: to annotate my dataset for an eventual FOSS release (using this tool), I need to be able to pan the canvas. I annotate manually, since neither YOLO nor SAM2 (nor anything else) is capable of annotating it correctly yet. I'm able to pan the canvas in other annotation tools, e.g. Roboflow and CVAT, as well as in graphics software like Photoshop, Figma, and Blender. I've used Figma to demonstrate: Panning-canvas.webm
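For reference, middle-button panning usually comes down to a small state machine: record the cursor position on press, and on each move event subtract the delta from the scroll offsets. A minimal framework-agnostic sketch (the class and method names here are hypothetical, not X-AnyLabeling's actual API; in a PyQt canvas the offsets would map onto the scroll bar values):

```python
class PanController:
    """Turns a middle-button drag into scroll-offset updates."""

    def __init__(self, offset_x=0, offset_y=0):
        self.offset_x = offset_x   # viewport's current scroll offset
        self.offset_y = offset_y
        self._last = None          # last cursor position while panning

    def press(self, x, y):
        # Middle button pressed: start panning from here.
        self._last = (x, y)

    def move(self, x, y):
        # Dragging: shift the view opposite to the cursor movement,
        # so the content appears to follow the cursor.
        if self._last is None:
            return
        lx, ly = self._last
        self.offset_x -= x - lx
        self.offset_y -= y - ly
        self._last = (x, y)

    def release(self):
        # Middle button released: stop panning.
        self._last = None
```

Dragging the cursor 40 px to the right moves the offset 40 px left, so the image tracks the cursor, which is the behaviour Roboflow, CVAT, and Figma all share.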
> Q2: Zooming in and out should be centered on the cursor (and rechecked each scroll step)
> A2: Currently, the zooming strategy of X-AnyLabeling is already centered on the cursor position.
Please see this next video: it's not, and I genuinely wish it were 😞 zooming-centered.webm
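For what it's worth, cursor-centered zoom reduces to one invariant: the content point under the cursor must map to the same viewport pixel before and after the scale change. A small sketch of the per-axis arithmetic (pure Python with hypothetical names; in the real canvas `offset` would be the scroll bar position and the scales the zoom factors):

```python
def zoom_about_cursor(offset, cursor, old_scale, new_scale):
    """Return the new scroll offset that keeps the content point under
    `cursor` fixed when zoom changes from old_scale to new_scale.

    offset: viewport's top-left, in scaled-content pixels (one axis)
    cursor: cursor position within the viewport (same axis)
    """
    # Content-space coordinate currently under the cursor.
    point = (offset + cursor) / old_scale
    # Keep it under the cursor: point * new_scale == new_offset + cursor.
    return point * new_scale - cursor
```

Re-deriving `point` from the live cursor position on every wheel event, rather than caching it once, is exactly the "rechecked each scroll step" behaviour requested above, and it works symmetrically for zooming out.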
> For situations involving multiple small objects that need inspection: We strongly recommend updating to the latest version ...
I built from source today, so I am on the very latest version. I annotate manually because nothing on the market (YOLO, SAM2, or otherwise) works for my dataset. While that feature is great for automated workflows, it isn't for me. I've checked both the source code and the user guide; there is no panning functionality.
After selecting an object, you can use the arrow keys on the keyboard to pan up, down, left, and right.
> After selecting an object, you can use the arrow keys on the keyboard to pan up, down, left, and right.
Sir, please understand that I am not referring to selecting an object. To annotate an image smoothly by hand, the entire relevant section must be within frame. Having to repeatedly scroll and zoom significantly slows the process down. Additionally, as you saw in the video, there are bugs in the current zoom behaviour: it is not actually centered on the cursor, and it loses focus when zooming out.
https://stackoverflow.com/questions/12750901/panning-exact-definition
Oh, I see. Thank you for your detailed explanation. Your insights have truly helped me understand the issue more clearly. You've pinpointed a significant area where X-AnyLabeling's user experience could indeed be improved.
I want to be transparent about my current situation and plans:
If you're interested and have the time, we would greatly appreciate it if you could help refine this feature and submit a PR. It would be an immense contribution to the community.
Your feedback is extremely valuable to me. Let's work together to make it even better. If you need any assistance or guidance during the development process, please don't hesitate to reach out.
Once again, thank you for your valuable input and potential contribution!
Description
Use case
Users with objects that are somewhat small, who need to perform segmentation work across a moderate number of images (>100)
Additional
I attempted to add this quickly myself in about 30 minutes (and got it working, though poorly). Unfortunately, this codebase is too much for me.
Are you willing to submit a PR?