soft-matter / trackpy

Python particle tracking toolkit
http://soft-matter.github.io/trackpy

No effect of prediction method #690

Open roxhannes opened 2 years ago

roxhannes commented 2 years ago

Hello, thanks for the wonderful trackpy. I am tracking rising bubbles using trackpy and I would like to optimize the tracking accuracy. As input I have a pd.DataFrame (called bubble_data) with the columns x, y, radius, mean gray bubble, and frame. The parameters were calculated using my own image processing algorithms (so mean gray bubble is the mean value of the cropped grayscale image of the single bubble). Currently I am using the following code to link the bubbles:

import trackpy as tp

pred = tp.predict.NearestVelocityPredict()
tracked_bubbles = pred.link_df(bubble_data, search_range=(20, 200, 10, 100), pos_columns=['x', 'y', 'radius', 'mean gray bubble'], memory=5, adaptive_stop=10)

I have chosen different search ranges, especially for x and y (since the velocities are directed upwards in the y-direction), but there are still sometimes bubbles that are wrongly linked, especially if a bubble disappears for one or two frames (this is the reason for memory=5). Therefore I thought of using NearestVelocityPredict, but I do not get any different results when I change the prediction method (e.g. from NullPredict to NearestVelocityPredict or DriftPredict). Do I have to add more information before the prediction works? As I understood from the docstrings, it is optional, not mandatory, to add e.g. an initial velocity field. Is this correct?

Maybe NullPredict is also just working fine and there is nothing further to improve, but I am curious whether I should see differences. And if you have any ideas on how to optimize the tracking, I would be glad to hear about them. Thanks, Hannes

nkeim commented 2 years ago

Hi! It's great to see another use case for prediction, n-dimensional tracking, and adaptive search!

It looks like you are using prediction correctly, so it's likely that something else is not working as you expect. One thing to keep in mind when you specify search_range as a vector, with non-spatial coordinates, is that it merely rescales the coordinates; linking candidates are still identified using a Euclidean distance metric (i.e. adding the coordinate differences in quadrature). So for example, if a candidate is a very close match for radius and mean gray bubble, it can be relatively farther away in x and y. This is different from the way we imagine search_range working when the coordinates are only spatial, i.e. as a well-defined circle or ellipse that reliably excludes bad matches. If you want complete control over the way non-spatial coordinates are treated, you can introduce a custom metric via the dist_func argument.
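To make that concrete, here is a small, purely illustrative sketch (not trackpy's internal code) of how a vector search_range acts as a rescaling before a single Euclidean criterion. The coordinate values of the two features are made up, and the ordering (x, y, radius, mean gray bubble) follows the pos_columns in your snippet:

import numpy as np

# Two features in the order (x, y, radius, mean gray bubble); values are made up.
a = np.array([100.0, 500.0, 12.0, 80.0])
b = np.array([112.0, 620.0, 13.0, 85.0])
search_range = np.array([20.0, 200.0, 10.0, 100.0])

# Each coordinate difference is divided by its search_range component and the
# results are added in quadrature; a value <= 1 means b remains a linking candidate.
rescaled_distance = np.sqrt(np.sum(((a - b) / search_range) ** 2))
print(rescaled_distance)  # about 0.86, so b is still a candidate even though it is 120 units away in y

So a feature that is a near-perfect match in radius and gray value can "spend" almost the entire budget on spatial displacement, which is exactly the effect described above.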

Finally, if you want to peek at what prediction is really doing (and whether it is doing anything), see the tp.predict.instrumented decorator.
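If it helps, here is a rough sketch of how the instrumented decorator can be used, based on the prediction tutorial; the exact interface assumed here (the class-decorator call and the dump() method) should be checked against the tp.predict.instrumented docstring:

import trackpy as tp

# Wrap the predictor class so that it records the predictions it makes
# (assumed interface; see the tp.predict.instrumented docstring).
InstrumentedPredict = tp.predict.instrumented()(tp.predict.NearestVelocityPredict)
pred = InstrumentedPredict()
tracked_bubbles = pred.link_df(bubble_data, search_range=(20, 200, 10, 100), pos_columns=['x', 'y', 'radius', 'mean gray bubble'], memory=5, adaptive_stop=10)
diagnostics = pred.dump()  # per-frame records of what the predictor predicted

Comparing these records between NullPredict, DriftPredict, and NearestVelocityPredict should show whether the predictions actually differ for your data.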

roxhannes commented 2 years ago

Thank you very much for your answer, and sorry for my late reply. Is there an example of how to use dist_func?

nkeim commented 2 years ago

Sorry about the long wait. Searching the source code for dist_func shows that we use it to find the distance between two points in arbitrary coordinates. There's an instructive example in the tests, where it is used to allow particle tracking in radial coordinates. More specialized uses are currently up to the imagination (contributed examples are welcome!). For instance, if your size distribution is strongly bimodal, you could use size as an extra coordinate and add a large distance penalty between a particle a of one size and a candidate b of a clearly different size (and zero penalty if a and b are similar sizes).
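A rough sketch of that idea might look like the following; bubble_dist, the size cutoff, the penalty value, and the features DataFrame are all hypothetical, and note that the docs say dist_func must be combined with the 'BTree' neighbor strategy:

import numpy as np
import trackpy as tp

def bubble_dist(a, b):
    # a and b are 1D arrays of coordinates in the order given by pos_columns,
    # here assumed to be ['x', 'y', 'size']. Plain Euclidean distance in x and y,
    # plus a large penalty if the two features fall on opposite sides of an
    # assumed size cutoff (both numbers are made up for illustration).
    spatial = np.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)
    size_penalty = 1000.0 if (a[2] > 5.0) != (b[2] > 5.0) else 0.0
    return spatial + size_penalty

# Hypothetical usage; features is a DataFrame with columns x, y, size, frame.
linked = tp.link_df(features, search_range=20, pos_columns=['x', 'y', 'size'],
                    neighbor_strategy='BTree', dist_func=bubble_dist)

With search_range=20, the added penalty pushes any mismatched-size pair far outside the search range, so such links are effectively forbidden.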

There's a nice pull request #692 that has the potential to greatly improve performance for a non-Cartesian metric.