Open eunjoy21 opened 2 months ago
Hi,
That is correct. The direction you draw is the "anterior" direction, just like north on a map. The other directions are then assigned automatically.
You didn't miss anything; the current version does not output velocity or distance features. Propagation can be a difficult feature to describe. Among our output features, we only provide the propagation features shown in your figure, which represent the degree of variation in four directions. To meet your needs, we could calculate a score to represent velocity, similar to measuring the distance between two frames. However, since propagation may change direction, a propagation distance may be meaningless.
Hi Xuelong,
Thank you so much for your prompt response!
I expected the output to display all four directions, similar to the interpretation files (exported features). However, the output file only shows one direction ("right"), so I thought there might be an issue. How can I obtain the degree of variation in all four directions? Also, how do I calculate the velocity score you mentioned above?
I noticed that the area increase/decrease per frame and the average xy position are mentioned in the AQuA paper (Nature Neuroscience, 2019). Does obtaining them require an additional script?
Thank you!
Thanks for reminding me. That was a real bug: the degrees in all directions were calculated, but they were not output correctly. I have now updated the code on GitHub so that you can obtain the degree in all four directions.
Regarding the speed, the area increase/decrease per frame, and the average xy position: you need to load the output ".mat" file of AQuA2 in MATLAB. In the workspace there is a structure named "res" (an abbreviation of "result").
Velocity is in "res.fts1.propagation.maxPropSpeed" or "res.fts1.propagation.avgPropSpeed" ("fts" is an abbreviation of "feature"). The area increase/decrease per frame is in "res.fts1.propagation.areaChange". The average xy position is in "res.fts1.basic.center", saved as [average X, average Y, average Z] (Z is not time; it is used for 3D data).
Each feature is stored as an array or cell array, and each element corresponds to one event. For example, to check the 5th event, look at the 5th element of the array.
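If you'd rather read the file from Python, the same fields can be accessed with SciPy's .mat loader. A minimal sketch (the file name and the numbers are made up for illustration, and this assumes the file was saved in a pre-v7.3 .mat format that scipy.io can read):

```python
import numpy as np
from scipy.io import savemat, loadmat

# Build a tiny stand-in for an AQuA2 result file so the sketch is
# self-contained; a real analysis produces this file itself.
savemat("res_demo.mat", {"res": {"fts1": {"propagation": {
    "avgPropSpeed": np.array([0.8, 1.2, 0.5])}}}})

# struct_as_record=False + squeeze_me=True turn MATLAB structs into
# attribute-style objects, mirroring res.fts1.propagation.avgPropSpeed.
res = loadmat("res_demo.mat", struct_as_record=False, squeeze_me=True)["res"]
speeds = res.fts1.propagation.avgPropSpeed
print(speeds[1])  # speed of the 2nd event (Python indexing starts at 0)
```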
Best Wishes
Hi Xuelong,
Thank you so much for your explanation. I will look into the mat file! Thanks!
I have a quick question regarding the interpretation of events moving toward or away from a landmark. From the output parameter file, it seems to represent a score for propagation toward or away from a landmark, measured in µm³. I obtained these results, but I'm unsure why some values are identical, others are slightly different, and some are zero. Could you help me understand this? I'll include a part of the results.
Additionally, I found the res.fts1.propagation.popGrowOverall result. Could you explain what the four columns represent?
Thank you!
This part of the calculation is inherited from AQuA1. I'm not quite familiar with this part.
When the score is 0, it indicates that the events overlap with the landmark, resulting in no "toward" or "away" score.
Regarding the identical scores, I checked the code and found that this is due to a simplified calculation used to speed up the computation. For each event, a bounding box is drawn that encompasses only the event itself. If a landmark falls outside this box, the calculation uses several boundary pixels of the box to represent the landmark. Therefore, if you see many scores that are nearly the same, those landmarks are likely in the same direction relative to the event.
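To illustrate that simplification (my own sketch of the idea, not the actual AQuA code): a landmark outside the event's bounding box is represented by its nearest point on the box boundary, so two landmarks on the same side of the box collapse to nearly the same position:

```python
import numpy as np

def clamp_to_box(point, box_min, box_max):
    """Project a landmark onto the event's bounding box by clipping
    each coordinate to the box range (nearest boundary point)."""
    return np.clip(point, box_min, box_max)

box_min, box_max = np.array([10, 10]), np.array([20, 20])
# Two distinct landmarks that both lie to the right of the box...
lm_a = np.array([50, 12])
lm_b = np.array([80, 18])
# ...map to nearby points on the same box edge, which is why their
# toward/away scores can come out almost identical.
print(clamp_to_box(lm_a, box_min, box_max))  # [20 12]
print(clamp_to_box(lm_b, box_min, box_max))  # [20 18]
```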
Thank you for your prompt response!
1) It is clear that when the score is 0, the events overlap with the landmark. However, the many identical scores in the toward/away-from-landmark results are still hard to understand. If I understand correctly, does this only reflect the direction of propagation (whether it's the same or different) without indicating how far the propagation events move toward or away from the soma?
2) I found the res.fts1.propagation.popGrowOverall result. Could you explain what the four columns represent?
3) Regarding the propagation onset and offset values, I expected the offset values to be higher than the onset values because propagation shrinks from the peak. By comparing onset and offset values, I thought I could determine the area covered by calcium event propagation. However, this is not the case in my output. Could you explain this as well?
Sorry to keep bothering you with so many questions. I just want to figure out how much information I can extract from your software and use it in my analysis. Thank you!
No. The score represents how much the event moves toward a specific landmark. A smaller score indicates a smaller movement toward the landmark. For the same event, identical scores suggest that the event moves equally toward the landmarks, possibly due to the landmarks being in the same direction relative to the event. For a specific landmark, the scores of different events reflect the extent of their movement toward it.
The four columns represent the propagation score in four directions: 'Anterior', 'Posterior', 'Left', and 'Right', indicating how much the event grows or moves in each direction.
I don't fully understand your question, but here is what I know: the propagation scores are metrics that describe how an event propagates in four directions. They are calculated by comparing how the pixels of the event expand away from the center/source (growth/onset score) or contract toward the center/source (shrink/offset score) relative to the previous frame. If the center/source of the event darkens first, the shrink score will not be high, since that does not represent a shrinking process. However, if the outer pixels darken first, making the event appear to contract toward the center, the offset values can be high. Additionally, the scores are not related to the area.
Hi Xuelong,
I would like some feedback on threshold application for event detection. When using the same threshold across different images, I’m encountering issues depending on the background brightness:
If the background is too bright, the analysis detects 65,535 active regions and then stops detecting events after a certain frame. If the background is too dim, events are not detected accurately. Do you have any recommendations for adjusting the threshold to account for varying background brightness in images?
Thanks!
Hi,
For any data, the tool first normalizes it into the range [0, 1], then does the analysis. So for the same data, if you add a constant to the image, the results should be the same (unless some values exceed the maximum and saturate). I don't know what your data looks like, but I can guess.
After the normalization, the tool estimates the background (the background is not 0 after normalization, because different pixels may have different intensities) and the noise level. To select active regions, it picks all regions meeting the requirement (intensity > background + threshold * noise level). Shifting the image intensity has the same effect on the estimated background as on the pixel intensities, so the only possible reason I can imagine is that the noise level changes.
If the background is too bright, I guess the noise estimated in your image is somehow too small (maybe the brightness is too close to saturation), so the analysis detects too many active regions. The number 65,535 appears because we use a 16-bit integer type to save the event ID; we don't expect the event number to exceed 65,535. In this case, you should probably increase the threshold.
When the background is too dim, the estimated noise may be different, or it may be normal. In this case, I recommend adjusting the threshold accordingly.
In general, I would treat these two images as different types of data and set different parameters for each.
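The selection rule can be sketched like this (a simplified illustration only; the median/std estimators here are stand-ins, not AQuA2's actual background and noise estimation):

```python
import numpy as np

def active_region_mask(img, threshold):
    """Pick pixels satisfying intensity > background + threshold * noise,
    after normalizing the image into [0, 1]."""
    img = img.astype(float)
    img = (img - img.min()) / (img.max() - img.min())  # normalize to [0, 1]
    background = np.median(img)          # crude per-image background estimate
    noise = np.std(img - background)     # crude noise-level estimate
    return img > background + threshold * noise

rng = np.random.default_rng(0)
frame = rng.normal(100, 5, size=(64, 64))
frame[20:30, 20:30] += 40                # a bright "event" patch
mask = active_region_mask(frame, threshold=3)
print(mask.sum())  # roughly the 100 pixels of the bright patch
```

Note that adding a constant to `frame` leaves `mask` unchanged, which is the shift-invariance described above: only a change in the estimated noise level moves the cutoff.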
Hi Xuelong,
Quick question! You said that the four columns in the propagation output represent 'Anterior', 'Posterior', 'Left', and 'Right', indicating how much the event grows or moves in each direction. My understanding is that anterior is up (north), posterior is down (south), left is west, and right is east. Is that right? After reading your AQuA paper, I was a little confused. If you don't mind, could you confirm it? Thank you!
Hi,
I can confirm. If you don't specify the direction, 'Anterior' is north, 'Posterior' is south, 'Left' is west, 'Right' is east.
And in the GUI, in the top-left corner, you can specify the 'Anterior' direction yourself; the other directions will change accordingly.
Hi Xuelong,
If I drag it from left to right, then is anterior up (north), posterior down (south), left west, and right east?
If you drag it from left to right, the arrow points east. So 'Anterior' is east, 'Posterior' is west, 'Left' is north, and 'Right' is south.
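The relabeling amounts to rotating the compass labels along with the arrow. A hypothetical helper (not part of AQuA2; the 0° = north, clockwise angle convention is my assumption) makes the mapping explicit:

```python
def direction_labels(anterior_deg):
    """Map the drawn 'Anterior' arrow angle to the four direction labels.
    Convention assumed here: 0 degrees = north, angles increase clockwise."""
    compass = ["north", "east", "south", "west"]
    k = round(anterior_deg / 90) % 4   # snap the arrow to a compass point
    return {
        "Anterior":  compass[k],
        "Posterior": compass[(k + 2) % 4],  # opposite of anterior
        "Left":      compass[(k + 3) % 4],  # 90 degrees counter-clockwise
        "Right":     compass[(k + 1) % 4],  # 90 degrees clockwise
    }

print(direction_labels(0))   # default: anterior = north, left = west
print(direction_labels(90))  # arrow dragged left-to-right: anterior = east
```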
Thank you for your confirmation! :)
Hi Xuelong,
I would like to clarify my previous question. Earlier, I asked about identical values in the "away from" or "toward the soma" results. In the field of view, there are multiple cells, and a calcium event might be associated with one or a few specific cells. How can I identify the correlated events and landmarks in the data? Is it only possible to analyze this when one or two cells are present in the image?
Sorry, that was a bug in the landmark-feature code that produced identical values for multiple landmarks. Previously I didn't test the case with multiple landmarks. It has now been fixed, and you can download the newest code from GitHub. With the new code, you can load your previous results and rerun only the feature extraction, without running all the steps.
Also, a digit label is shown in the GUI indicating the index of each event and landmark; you can match these labels against the output feature table to see the correspondence. Analyzing multiple landmarks is possible.
Besides, that bug also affects the case of a single landmark, so all landmark features should be rerun.
Sorry again for the mistake. In the initial version of AQuA2, the landmark features were simplified to be compatible with 3D data. Some users requested the features of the original AQuA, and the bug was introduced because of incompatible input variables for the original code.
Hi Xuelong,
Thank you for fixing the bug and your prompt response! I will reinstall the new version and check the data again. Thank you.
Hi Xuelong,
I have a quick question. I want to analyze the signal in the soma and the territories separately. I found that I could confine the region using the cell boundary function, but I could not find how to subtract the soma to analyze the territories. I am wondering whether AQuA2 has this function.
I have updated some mask-related functions and uploaded them to GitHub. You can replace the AQuA2/src/+mask folder with the new folder from the GitHub repository.
About how to subtract soma from the territory:
Create the mask with only the soma: Click the "Region-SelfCH1" button again and create a mask containing only the soma.
Subtract the soma and apply the mask: In the saving panel, select the "Segment region" option and choose "XOR." Click "Apply & back" to perform the XOR operation between the two masks (territory and soma); since the soma lies inside the territory, XOR acts like subtraction. The modified mask is then applied to your data.
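For reference, the XOR step corresponds to this logical operation (a minimal NumPy sketch with made-up masks):

```python
import numpy as np

# Toy binary masks: a territory region and a soma inside it.
territory = np.zeros((8, 8), dtype=bool)
territory[1:7, 1:7] = True
soma = np.zeros((8, 8), dtype=bool)
soma[3:5, 3:5] = True

# XOR keeps pixels that are in exactly one mask; because the soma lies
# entirely inside the territory, this equals territory minus soma.
result = np.logical_xor(territory, soma)
print(result.sum())  # 36 territory pixels - 4 soma pixels = 32
```

The soma positions show up as holes in the combined mask, which is also the quick visual check described below.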
Hi Xuelong,
Thank you! I will follow your instruction!
Hi Xuelong,
First, I replaced the folder that you recently updated. Then I created the mask with the territories. In the same way, I created another mask with the several somas. I also completed the subtraction step.
However, this is what I got. The soma was not subtracted as expected.
Could you briefly check if I missed something from your instruction?
I always appreciate your prompt response and support!
It looks like your operations are correct, but I'm not sure what went wrong. Did you update the entire folder "src/+ui/+msk"? (Apologies for the confusion in my previous message; I typed the wrong path.) I modified two files in that folder: one is "addCon_wkflMsk.m", which relates to the GUI and seems to have been updated. The other is "saveMsk.m", which implements the function. Could you double-check that "saveMsk.m" has been updated as well?
It loads the first mask, then performs XOR operations with the other masks. On my end, the function works properly. If it still doesn't work for you, try completely updating the entire AQuA2 repository to see if that resolves the issue.
Additionally, you don't need to run all the steps to verify whether the somas are subtracted. After performing the operations I mentioned, you should be able to see the somas' positions as holes in the mask.
Hi Xuelong,
I only replaced one file that I marked below.
I will replace the entire msk folder and try it again. Thank you!
Hi Xuelong,
I completely updated the entire AQuA2 repository and the issue was resolved. Thank you!
2. In our output features, we only show the propagation features like the figure you show, they represent the degree of variation in four directions. To meet your needs, we can calculate a score to represent velocity, similar to measuring the distance between two frames. However, since propagation may change direction, propagation distance may be meaningless.
Hi Xuelong,
I just want to know how I can obtain the calculated velocity score mentioned in your earlier comments. Could you explain how I can get this information?
Thank you.
After loading the saved '.mat' file, the speed values are stored as variables in 'res.fts1.propagation.avgPropSpeed' or 'res.fts1.propagation.maxPropSpeed'.
To determine the speed at each time point, we calculate the maximum distance between each pixel on the border at the current time point and its corresponding pixel on the border from the previous time point. The corresponding pixel is defined as the one closest to the given pixel on the previous time point's border. Mathematically, this is $\max_i \min_j \operatorname{dist}(p_i, q_j)$, where $i$ indexes the pixels on the border at the current time point ($p_i$) and $j$ indexes the pixels on the border at the previous time point ($q_j$).
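This max-min construction is the directed Hausdorff distance between the two borders. A minimal sketch with made-up border coordinates:

```python
import numpy as np
from scipy.spatial.distance import cdist

def frame_speed(border_now, border_prev):
    """Max over current-border pixels of the distance to the closest
    pixel on the previous border: max_i min_j dist(p_i, q_j)."""
    d = cdist(border_now, border_prev)  # pairwise distances, shape (n, m)
    return d.min(axis=1).max()          # min over j, then max over i

# Toy borders: one edge pixel of the event moved 3 pixels between frames.
prev_border = np.array([[0, 0], [0, 1], [0, 2]], dtype=float)
curr_border = np.array([[0, 0], [0, 1], [3, 2]], dtype=float)
print(frame_speed(curr_border, prev_border))  # 3.0 (pixels per frame)
```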
Hi,
I have a quick question regarding circularity (in the basic features). The documentation says: "Closeness to a circle / sphere, 1 is the closest, the larger, the less circular". What I understood is that circularity values range from 0 to 1, and values close to 1 mean the shape is more circular. However, some values are greater than 1. Could you verify whether my understanding is right?
Thank you.
Best, Eunjoo
You are right. When circularity equals 1, it should indicate a perfect circle (or sphere in 3D).
Circularity is calculated from the area and perimeter measurements. There are two possible reasons the values can exceed 1: (1) the perimeters are obtained through the MATLAB function "regionprops", which involves some approximation; (2) there could be holes inside the region, which can make the perimeter measurement problematic.
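For reference, one common circularity definition built from area and perimeter (an assumption on my part; AQuA2's exact formula may differ) is the isoperimetric ratio $P^2 / (4\pi A)$, which equals 1 for a perfect circle and grows as the shape becomes less circular:

```python
import math

def circularity(perimeter, area):
    """Isoperimetric ratio: 1 for a perfect circle, > 1 otherwise."""
    return perimeter**2 / (4 * math.pi * area)

# Perfect circle of radius r: P = 2*pi*r, A = pi*r^2  ->  exactly 1.
r = 5.0
print(circularity(2 * math.pi * r, math.pi * r**2))  # 1.0

# Square of side s: P = 4*s, A = s^2  ->  4/pi, about 1.27 (less circular).
s = 5.0
print(circularity(4 * s, s**2))
```

An approximated perimeter from `regionprops`, or extra perimeter contributed by interior holes, shifts $P$ and therefore the ratio, which is consistent with the two failure modes above.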
Hi,
I've been using AQuA2, and it's much faster than the previous version. I really like it! While using the latest version to analyze my data, I have a few questions:
1) For drawing directions, it seems I can only draw one direction; is that correct? If so, how do I assign the other directions, such as posterior, anterior, lateral, and medial?
2) Regarding the propagation functions, I can't find the velocity and propagation distance outputs after running the analysis. Is there an additional step that I might have missed?
This is the output that I got.
I look forward to hearing your feedback!
Thank you.