talmolab / sleap

A deep learning framework for multi-animal pose tracking.
https://sleap.ai

Add `normalized_instance_similarity` method #1939

Closed gitttt-1234 closed 1 month ago

gitttt-1234 commented 2 months ago

Description

Currently, we are facing ID switches in tracking because the instance-matching similarity function produces very low similarity scores (close to 0), since it doesn't apply any normalization. To address this, we add a new `normalized_instance_similarity` function that normalizes the keypoints by the image size.
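For illustration, here is a minimal sketch of the idea (a simplified stand-in, not the actual SLEAP implementation; the array-based signatures are assumptions): raw squared pixel distances make `exp(-d^2)` underflow to ~0 for any realistic offset, so the coordinates are divided by the image dimensions first.

```python
import numpy as np

def instance_similarity(ref_pts, query_pts):
    """Mean per-keypoint Gaussian similarity exp(-d^2), skipping NaN points."""
    ref_visible = ~np.isnan(ref_pts).any(axis=1)
    dists = np.sum((query_pts - ref_pts) ** 2, axis=1)
    return float(np.nansum(np.exp(-dists)) / np.sum(ref_visible))

def normalized_instance_similarity(ref_pts, query_pts, img_hw):
    """Normalize (x, y) coordinates by image (width, height) before comparing."""
    wh = np.array([img_hw[1], img_hw[0]], dtype=float)  # (w, h) for (x, y)
    return instance_similarity(ref_pts / wh, query_pts / wh)

ref = np.array([[100.0, 100.0], [200.0, 150.0]])
query = ref + 10.0  # every keypoint shifted by 10 px in x and y

print(instance_similarity(ref, query))                          # ~0: exp(-200)
print(normalized_instance_similarity(ref, query, (480, 640)))   # ~0.999
```

A 10-pixel offset yields `exp(-200) ≈ 0` without normalization, but on a 640x480 frame the normalized distance is tiny, so the score stays near 1 and instances can still be matched across frames.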

Types of changes

Does this address any currently open issues?

#1815

Outside contributors checklist

Thank you for contributing to SLEAP!

:heart:

Summary by CodeRabbit

coderabbitai[bot] commented 2 months ago

Walkthrough

The pull request introduces enhancements to the tracking feature by adding new similarity metrics, "normalized_instance" and "object_keypoint," across various components. These changes are reflected in the command-line interface documentation, configuration files, and implementation of new functions. The updates improve the methods available for measuring similarity, thereby refining the tracking and analysis processes.

Changes

| File(s) | Change summary |
| --- | --- |
| `docs/guides/cli.md`, `docs/guides/proofreading.md` | Added "normalized_instance" and "object_keypoint" to similarity options and described their functionality. |
| `sleap/config/pipeline_form.yaml` | Updated similarity method options to include "normalized_instance". |
| `sleap/nn/inference.py`, `sleap/nn/tracking.py` | Introduced `img_hw` parameter to the `track` method for improved tracking accuracy. |
| `sleap/nn/tracker/components.py` | Introduced `normalized_instance_similarity` function for normalized keypoint comparison. |
| `tests/nn/test_inference.py`, `tests/nn/test_tracker_components.py`, `tests/nn/test_tracking_integration.py` | Expanded test coverage by adding "normalized_instance" and "object_keypoint" to various test functions. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI
    participant Config
    participant Tracker
    participant Test

    User->>CLI: Request tracking with normalized_instance
    CLI->>Config: Fetch similarity options
    Config->>Tracker: Set normalized_instance for tracking
    Tracker->>Tracker: Compute similarity using normalized_instance
    Tracker->>Test: Validate tracking with new similarity metric
```

🐰 In the meadow, hopping with glee,
A new metric joins, oh what a spree!
"Normalized_instance," a friend so bright,
Enhancing our tracking, making it right.
With options aplenty, we leap and bound,
In the world of data, joy can be found! 🌼✨

Possibly related PRs

Suggested labels

MultiView Stack


codecov[bot] commented 2 months ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 75.37%. Comparing base (7ed1229) to head (e1ee966). Report is 51 commits behind head on develop.

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##           develop    #1939     +/-   ##
===========================================
+ Coverage    73.30%   75.37%    +2.06%
===========================================
  Files          134      133        -1
  Lines        24087    24480      +393
===========================================
+ Hits         17658    18452      +794
+ Misses        6429     6028      -401
```

:umbrella: View full report in Codecov by Sentry.

getzze commented 2 months ago

I'm just seeing this PR and it seems similar in spirit to the "object keypoint" similarity (https://github.com/talmolab/sleap/pull/1003) that was recently merged, but not correctly documented I'm afraid (I am the author so I am to blame!).

talmo commented 2 months ago

> I'm just seeing this PR and it seems similar in spirit to the "object keypoint" similarity (#1003) that was recently merged, but not correctly documented I'm afraid (I am the author so I am to blame!).

Hi @getzze,

I think part of the problem is that we thought your method was not being hooked up to the GUI at all. Here is where we map the string names of the similarity methods to the actual functions:

https://github.com/talmolab/sleap/blob/e4bb4449ee4907f8315ef9f64511a7aaa0c79155/sleap/nn/tracking.py#L494-L499

The `object_keypoint` key maps to the same `instance_similarity` function, though I see now upon closer inspection that it's getting special-cased here:

https://github.com/talmolab/sleap/blob/e4bb4449ee4907f8315ef9f64511a7aaa0c79155/sleap/nn/tracking.py#L886-L893

Docs would definitely help, but either way, I think what we're doing here is a bit different as it's basically just normalizing the keypoints by the image size rather than changing how we account for number of keypoints or confidence.

I suppose the best thing would be to have something more unified that can (optionally?) use all three factors.

For the moment, the easiest might just be to have separate methods as we'll be refactoring a lot of the tracking backend in the coming months anyway as part of our transition to PyTorch.

If you're curious or would like to weigh in on the design of the new backend, check out: https://github.com/talmolab/sleap-nn/issues/53 and the current state of the implementation in this source tree.

getzze commented 2 months ago

> Docs would definitely help, but either way, I think what we're doing here is a bit different as it's basically just normalizing the keypoints by the image size rather than changing how we account for number of keypoints or confidence.

There is that, but the key point of `object_keypoint_similarity` is actually to normalize the distances before taking the exponential.

https://github.com/talmolab/sleap/blob/e4bb4449ee4907f8315ef9f64511a7aaa0c79155/sleap/nn/tracker/components.py#L123-L128

You can specify the normalization factor for each node, but what makes more sense is to use the standard error of inferring the nodes (say, 5 pixels). I didn't find a simple way to get this error, though, hence the extra parameter (but it's also more flexible that way).

Quoting what I wrote in the original PR #1003 :

  1. Adding a scale to the distance between the reference and query keypoints. Otherwise, if the ref and query keypoints are 3 pixels apart, they contribute 0.0001 to the similarity score, versus 0.36 if they are 1 pixel apart. This is very sensitive to single-pixel fluctuations. Instead, the distance is divided by a user-defined pixel scale before applying the Gaussian function. The scale can be chosen to be the error for each keypoint found during training of the model with the validation set. Ideally this could be retrieved automatically; it is currently hidden in the metrics.val.npz file of the model. This is what they use in this paper.

I should have put some explanation in the docs, really sorry about that...
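The effect of the scale is easy to see numerically (a simplified sketch, not the merged implementation, which can also take per-node scales and prediction scores into account):

```python
import numpy as np

def scaled_similarity(ref_pts, query_pts, scale=5.0):
    """Gaussian similarity with distances divided by a pixel scale first,
    so a few-pixel jitter does not collapse the score to ~0."""
    d2 = np.sum((query_pts - ref_pts) ** 2, axis=1) / float(scale) ** 2
    return float(np.mean(np.exp(-d2)))

ref = np.array([[0.0, 0.0]])
off_1px = np.array([[1.0, 0.0]])
off_3px = np.array([[3.0, 0.0]])

# Unscaled (scale=1): 1 px apart -> exp(-1) ~ 0.37, 3 px -> exp(-9) ~ 0.0001
print(scaled_similarity(ref, off_1px, 1.0), scaled_similarity(ref, off_3px, 1.0))
# With a 5 px scale both stay usable: exp(-0.04) ~ 0.96, exp(-0.36) ~ 0.70
print(scaled_similarity(ref, off_1px, 5.0), scaled_similarity(ref, off_3px, 5.0))
```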

So I think `instance_similarity`, `object_keypoint_similarity`, and `normalized_instance_similarity` could be unified, with options to select one or the other.
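One way such a unification could look (purely a sketch under assumed names and defaults, not a proposed design): one function where image-size normalization and the per-node distance scale are both optional, and leaving both off reproduces the plain instance similarity.

```python
import numpy as np

def unified_similarity(ref_pts, query_pts, img_hw=None, node_scale=1.0):
    """Gaussian keypoint similarity, unified:
    - img_hw given  -> normalize coords by image size ("normalized_instance")
    - node_scale    -> divide distances by a per-node pixel scale ("object_keypoint")
    - neither       -> plain instance similarity
    """
    ref = np.asarray(ref_pts, dtype=float)
    query = np.asarray(query_pts, dtype=float)
    if img_hw is not None:
        wh = np.array([img_hw[1], img_hw[0]], dtype=float)  # (w, h) for (x, y)
        ref, query = ref / wh, query / wh
    d2 = np.sum((query - ref) ** 2, axis=1) / np.asarray(node_scale, dtype=float) ** 2
    visible = ~np.isnan(ref).any(axis=1)
    return float(np.nansum(np.exp(-d2)) / np.sum(visible))

pts = np.array([[10.0, 10.0], [20.0, 20.0]])
shifted = pts + 2.0
print(unified_similarity(pts, shifted))                     # plain: exp(-8) ~ 0.0003
print(unified_similarity(pts, shifted, node_scale=5.0))     # scaled: ~0.73
print(unified_similarity(pts, shifted, img_hw=(480, 640)))  # normalized: ~0.9999
```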

Thanks for linking to the refactoring of tracking, the roadmap looks very exciting!