need to update the figures to the new version in the repository above...
Hi @gabyx,
I'm away for the weekend at a conference but I'd be happy to take a look at your project once I get back on Tuesday.
In the meantime, let me see if I can answer your questions. I'm going to cover each topic in a separate response as I have time.
First, outliers: I am sorry that this hasn't made its way into the documentation. I will add it when I have time.
Does ACCIV have a similar feature for outlier detection as PIVLab? I would like to have some stddev outlier detection or something similar
Sort of, see below.
The easiest way to do what you want may be just to run the data through your own python or Matlab code (or whatever you like to use to work with your data) that removes outliers based on your favorite criteria. It wouldn't be hard to modify ACCIV to do this internally if that is what you prefer.
Currently, ACCIV removes outliers, but only as part of its internal, iterative process for finding a smooth fit to the velocity data on a regular grid. By default, it removes points that deviate from the previous iteration of the smooth fit by more than 8 times the median deviation. Here is the comment and parameter value from one of the defaultParams.ascii files:
# the number of outer iterations of the overall function fitting and outlier removal
# process
smoothFitOutlierRemovalIterationCount = 1
# the threshold constant for defining outliers: a point x_i is defined to be an outlier
# if R_i > outlierThresholdConstant*median(R_i), where R_i = ||v_m(x_i) - v_s(x_i)||^2
# is the root-mean-squared difference between the measured velocity and the smooth
# velocity interpolated to the same location, and where the median is taken over all
# measurements in the data set
outlierThresholdConstant = 8
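If you do want to filter the scattered velocities yourself, a minimal external sketch (Python/NumPy) of the same median-based criterion looks roughly like this; the velocity arrays are assumed to have been read from the HDF5 output already, since the exact dataset names depend on your files:

    import numpy as np

    def remove_outliers(vx, vy, vx_smooth, vy_smooth, threshold=8.0):
        # Squared deviation between each measured vector and the smooth fit
        # interpolated to the same location (the quantity R_i above).
        residual = (vx - vx_smooth)**2 + (vy - vy_smooth)**2
        # Keep points within `threshold` times the median deviation,
        # mirroring outlierThresholdConstant = 8.
        keep = residual <= threshold * np.median(residual)
        return vx[keep], vy[keep], keep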
Outliers are not removed in the final "scattered" velocities produced by each pass because it was decided that the user should decide what is "real" and what is not using criteria of their choice. If your goal is to try to create a better smooth velocity for the next pass, that should not be an issue (see below).
and also it would be good if I could classify all vectors which are not within a certain range [min, max] as outliers?
The easiest way to limit the velocities that ACCIV can produce is to limit the search range. The search range is in pixels between the images separated by the longest time (assuming you're using more than 2 images), so the search range is also the min/max velocity calculated in units of pixels/maxTimeBetweenImages. You can have a different search range for the two directions in your image, which can be useful if motion is much larger along one axis than the other. Since the search range is composed of integers, it doesn't give you a lot of precision in setting the min and max.
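As a rough worked example of that relationship (the numbers are purely illustrative):

    # Illustrative numbers only -- adjust to your own setup.
    maxTimeBetweenImages = 0.002   # seconds, e.g. 3 images spaced 0.001 s apart
    searchRangePixels = 10         # search range along one axis, in pixels
    maxVelocity = searchRangePixels / maxTimeBetweenImages
    print(maxVelocity)             # 5000.0 px/s: the largest speed ACCIV can return along that axis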
If you need more precision, you would need to add an additional parameter and modify the code to remove outliers accordingly, which shouldn't be too difficult if you have C++ experience. I can help out if you decide to go down this road.
Can I do this after each pass? Which files and how should I modify the hdf5 data, can I set additional outliers to zero/NaN or something?
It's not very useful to remove outliers after a pass. Each pass only makes use of the smoothed, gridded velocity field from the previous pass, so there is no benefit in removing outliers from the scattered data before applying the next pass. Instead, you would probably want to remove outliers as part of the process of constructing the gridded velocity, as I described above. Try modifying the search range first, and then the outlier threshold second. If that doesn't give you good results, we can talk about making some tweaks to the code to allow you to set a min/max velocity or use some other criterion for removing outliers.
Hi again @gabyx,
I forgot to say last time that I'm excited that you're trying out ACCIV and I'm very happy to help you get it up and running for your project.
Now to your second topic, the masks:
The mask feature, how does that work? I have generated a binary mask, but the outGridVelocity always interpolates the data even where the input was masked?
The way the mask works in ACCIV is that scattered velocity vectors are only computed where the entire correlation box is unmasked in both the source and destination images at every possible shift over the search range. The velocity vectors can move somewhat during the iterative process for finding curved feature paths and constructing a smooth, gridded velocity, but they should only very rarely end up in masked regions.
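To illustrate that constraint (this is just an illustration in Python, not ACCIV's actual code): given a binary mask and a correlation box size, the region where scattered vectors can appear is approximately the unmasked region eroded by the box half-width plus the search range:

    import numpy as np
    from scipy.ndimage import binary_erosion

    def vector_coverage(mask, box_size, search_range):
        # mask: 2D boolean array, True where the image is valid (unmasked).
        # A vector can only be produced where a box of size `box_size`,
        # shifted by up to `search_range` pixels, stays entirely inside the mask.
        margin = box_size // 2 + search_range
        structure = np.ones((2 * margin + 1, 2 * margin + 1), dtype=bool)
        return binary_erosion(mask, structure=structure)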
The same is not true of the smoothed, gridded velocity, which is what I suspect you're looking at. The gridded velocity is used to advect the images around and to determine the curved paths of individual features. Our idea with masking images was always that there was some object occluding our view of the flow. (For us this is usually either haze in the upper atmosphere or the shadow of one of Jupiter's moons.) Even if this is the case, we wanted ACCIV to make its best guess at what the flow would have been in the masked region if it hadn't been blocked out. ACCIV does this by interpolating or extrapolating based on the neighboring data. If necessary, it makes use of data that is rather far away and the result is a very smooth field.
It sounds like your needs are different, so we should discuss them a bit more.
Is it possible to clip the "outGridVelocity.h5" data to the mask? or do I have to do this manually?
It should be possible to simply multiply the fields in outGridVelocity.h5 by the mask if this is what you need. Yes, you would need to write your own (external) code to do this. If you were to mask the outGridVelocity.h5 from one pass and attempt to use it in the next pass of ACCIV, I suspect that the results would not be very good because the advection algorithm would have a hard time with the abrupt changes in velocity between masked and unmasked regions.
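A minimal sketch of doing that masking externally with h5py; the dataset names 'vx', 'vy', and 'mask' below are placeholders (inspect your files first, e.g. with list(f.keys()) or h5dump), not the actual ACCIV names:

    import h5py
    import numpy as np

    with h5py.File('mask.h5', 'r') as f:
        mask = np.asarray(f['mask'], dtype=bool)   # placeholder dataset name

    with h5py.File('outGridVelocity.h5', 'r+') as f:
        for name in ('vx', 'vy'):                  # placeholder dataset names
            data = np.asarray(f[name])
            data[~mask] = 0.0                      # zero the velocity outside the mask
            f[name][...] = data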
Thanks for the nice answer, I will try to reply in the next few days =)!
Hi xylar :-)
Thanks for the answer again! :-)
Maybe I should first quickly describe my data setting: I have a granular medium (glass spheres) running down a slope (contourVideo.mp4 in the test folder) and the vector field should represent this movement as closely as possible, meaning that the velocity is non-zero only where granules are and zero everywhere else... (that's what I have in mind)
deltaT between images = 0.001 seconds
I am trying out different CIV software packages. So far I've been using OpenPIV (decent results, but no multipass stuff, that needs to be done by myself, which is too much work...), then PIVLab, which gives quite good results and uses multipass FFT, and then there is your software, which also does multipass FFT but with advection, which might be even better I think. What were your experiences with the already existing software? At the end of the day I just need a reliable, good velocity field (hehe, that's probably what everybody wants), the best possible fit ;)
I was thinking about the following stuff in ACCIV:
Maybe we could also introduce an outlier filter which acts in series with the already implemented one:
R_i > mean(R_i) + outlierStdDevMult * stdDev(R_i)
Would that make sense? This obviously assumes that the root-mean-squared error R_i is somewhat Gaussian distributed over the whole data. I could implement this at the right place if you could show me where (I am quite experienced with C++, but not really experienced with the CIV field...)?
I am trying out some more with your suggested idea!
Is it correct that your algorithm produces a smoothed, gridded velocity field for each direction, v_x and v_y? Or is that done together? You use a 2D b-spline patch for that? Maybe I should first read your paper instead of asking stupid questions :)!
Instead, you would probably want to remove outliers as part of the process of constructing the gridded velocity, as I described above.
That's exactly what I am trying to do, I think, so the workflow would maybe be as follows: I probably just extract the gridded velocity field at the last pass of ACCIV, mask it with an extracted mask in Python, and set every grid point not in the mask to zero. Or I try to use the zeroed gridded velocity field in the next pass; do you think this is bad for the advection, because the gradient gets really bad at these hard drops to zero?
What is the tie point fraction? A tie point is the time point to which the images are advected, right? Is this correct: ACCIV uses a set of input images (say long term or short term), advects the image pairs to a common time point, and searches for correlation features (CIV with FFT?), and the result is a bunch of extracted, matched image correlations...?
Thanks a lot :-)!
Hi @gabyx,
What were your experiences with the already existing software?
I haven't worked much with PIV/CIV software in the last 6 years since I switched to working on climate science. I'm not sure how they've improved. For our particular application, the problem was that the images were sparse in time so we needed a really sophisticated way of finding features separated by long periods of time in a flow with strong rotation and deformation. That's why ACCIV focuses on advecting the images in a multi-pass process. For good image quality and significant motion between images, I think ACCIV does about as good a job as one can.
For your application, it sounds like you have a lot of images separated by pretty short times. I'll have to take a look at your movie later on today or tomorrow when I have some time to see, though. It may be that advection isn't really going to give you any major improvement in your results and you may be better off with a standard PIV or CIV software.
Maybe we could also introduce an outlier filter which acts in series with the already implemented one:
R_i > mean(R_i) + outlierStdDevMult * stdDev(R_i)
Would that make sense? This obviously assumes that the root-mean-squared error R_i is somewhat Gaussian distributed over the whole data,
No, I'm afraid I don't think that would make sense. It would work, but the results wouldn't be different from adjusting the outlierThresholdConstant parameter.
The residual R_i is a root-mean-squared quantity already, so it won't have a Gaussian distribution. The signed components of the residual might be Gaussian (in pseudo-code, R_x[i] = v_x_scattered[i] - v_x_smooth[i]), but the mean of the Gaussian should be close to zero unless there's some kind of systematic problem in how the smooth fit is performed. What ACCIV is doing to remove outliers is looking for points with residuals (which are always positive) that are much larger than a typical residual, as defined by the median (which is less sensitive to outliers). Your method above would do the same: it would look for a residual that is larger than average, and then make sure it is also some number of standard deviations larger than the mean. I'm sure you can achieve the same effect by just adjusting outlierThresholdConstant. If that isn't working, it's because this method for identifying outliers isn't appropriate for your problem.
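For what it's worth, a tiny NumPy sketch comparing the two criteria on synthetic residuals (purely illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    R = rng.chisquare(2, size=1000)     # synthetic positive residuals
    R[:10] *= 50.0                      # plant a few obvious outliers

    median_flag = R > 8.0 * np.median(R)           # ACCIV's criterion (outlierThresholdConstant = 8)
    stddev_flag = R > R.mean() + 3.0 * R.std()     # proposed mean + k*stddev criterion
    print(median_flag.sum(), stddev_flag.sum())    # both flag essentially the planted outliers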
Is it correct that your algorithm produces a smoothed, gridded velocity field for each direction, v_x and v_y? Or is that done together? You use a 2D b-spline patch for that? Maybe I should first read your paper instead of asking stupid questions :)!
That's correct, v_x and v_y are computed separately by the b-spline algorithm. That algorithm isn't covered in the original ACCIV paper (we weren't using it back then) but a similar method is covered in this paper (mentioned in the documentation):
Lee, S., Wolberg, G., & Shin, S. (1997). Scattered data interpolation with multilevel B-splines. IEEE Transactions on Visualization and Computer Graphics, 3(3), 228–244.
So I probably just extract the gridded velocity field at the last pass of ACCIV, mask it with an extracted mask in Python, and set every grid point not in the mask to zero. Or I try to use the zeroed gridded velocity field in the next pass; do you think this is bad for the advection, because the gradient gets really bad at these hard drops to zero?
You can definitely do this at the end, once you've finished all your ACCIV passes. I don't think you are going to get improved results, though, by zeroing out the velocity in the masked region after each ACCIV pass. As long as your images are all masked, it shouldn't matter what velocity is present in the masked regions (and how these regions are advected). However, ACCIV is not prepared to handle a velocity field with discontinuities (e.g. non-zero where there are particles but zero where there are not), so it will perform much better if you allow it to smoothly extrapolate the velocity field through masked regions during the intermediate steps of the multi-pass algorithm.
What is the tie point fraction? Tie point is the time point to where the images are advected right?
A tie point is a vector pointing from a feature in one (earlier) image to the same feature in a later image. Tie points are related to velocity vectors but they aren't quite the same because a feature may follow a curved path to get from the tail to the head of the tie-point vector.
However, I'm afraid I have no idea what you're referring to with the "tie point fraction". Could you give me a bit of context? Is this in the paper or the documentation or the code or a parameter file?
Is this correct: ACCIV uses a set of input images (say long term or short term), advects the image pairs to a common time point, and searches for correlation features (CIV with FFT?), and the result is a bunch of extracted, matched image correlations...?
Yep, that's correct. The extra step that you're missing is that the correlations in the advected images aren't all that useful, so ACCIV has to figure out where those features were in the original, unadvected images by "unadvecting" the tie points. We also do some sort of "fancy" stuff to follow the curved paths that the features followed, and we construct a smoothed velocity field on a grid along the way.
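A very rough pseudo-code sketch of one pass as described above (function names are illustrative, not ACCIV's actual API):

    def accivPass(images, times, previousGridVelocity):
        # 1. Advect all images to a common time using the previous smooth velocity.
        advected = [advect(image, t, previousGridVelocity) for image, t in zip(images, times)]
        # 2. Find correlation tie points between pairs of advected images.
        tiePoints = correlateImagePairs(advected)
        # 3. "Unadvect" the tie points back to the original image times and follow
        #    the curved feature paths to get scattered velocity vectors.
        scatteredVelocity = unadvectTiePoints(tiePoints, previousGridVelocity)
        # 4. Fit a smooth, gridded velocity to the scattered vectors for the next pass.
        gridVelocity = smoothFit(scatteredVelocity)
        return scatteredVelocity, gridVelocity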
Hi @gabyx,
I realized when I took a look at the contourVideo.mp4 that I left one question unanswered from before:
The settings seem to look good; do you have any advice (see contourVideo.mp4 and maybe the figures in the pass folders)? I used some CLAHE (histogram equalization) for the images (maybe this is not so good?)
You don't need to do any equalization (though it probably doesn't do any harm either). As part of its feature comparison, ACCIV removes the mean intensity from the feature window in each image and divides by the standard deviation in intensity to equalize the contrast.
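In other words, the per-window normalization is roughly the following (illustrative sketch, not ACCIV's actual code):

    import numpy as np

    def normalizeWindow(window):
        # Remove the mean intensity and divide by the standard deviation so the
        # correlation is insensitive to local brightness and contrast differences.
        w = window - np.mean(window)
        std = np.std(w)
        return w / std if std > 0 else w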
Hi @gabyx,
Cool! I finally had a chance to look at your setup. The movie you're working with is really cool, and I don't see any reason that ACCIV won't be able to work with it (though other CIV/PIV software might work fine, too, since the flow seems to be pretty translational between frames).
The main thing I noticed right away is that the masks in both image000.h5 and image001.h5 are all ones. Can you point me to the code you used to generate your HDF5 files? Maybe something went weird with the masks. I looked at the code in the testAcciv.ipynb notebook and it looks like code for actually replacing the mask had been commented out. Is that on purpose?
I can see from both the movie and your example output why it's so important to get the mask right in your case. You essentially have two sets of features---the grains and the stationary pattern below them---and any CIV-based software is going to pick up on the stationary background just as easily as the grains. From your contour video, it looks like you've figured out a good way to find a mask for where you actually think there are grains and want the flow. So I think the only problem is getting that contour to actually apply to the images you're wanting to apply ACCIV to. Once you do, you should find that the vectors in the masked regions disappear, which is what you wanted. The smoothed field will make a guess at the velocity in the masked regions, but that guess won't be zero like it currently is.
If you figure out what the problem is with the masks, but you're still having trouble, let me know. I'm always happy to help.
Thanks for the answer :-) Sorry about the mask, I did not use the mask so far; I could have used it, yeah. I will provide an updated commit with the relevant stuff in some minutes :-). What I experienced is that when I use the mask, ACCIV only fills part of that mask with vectors (because I think each window needs to be fully contained in the positive part of the mask). I will upload some stuff :-)
And give you some info on where to find what :-). The code in Python works like a charm to generate the HDF5 :-)
Thanks a lot!
Hi @gabyx,
Two things you could try:
If you give me an updated version of your script for generating the masked images, I'd be happy to play around both with the details of the mask and the ACCIV parameters to see what I can do.
You are so fast in replying (which is great)! I first have to tidy up the folders, then show you the cleaned, proper stuff, so that you get an idea (Python script and so on) :-) I am on the way doing that :-)
Fast Marching Method sounds great :-)
I wanted to say, I am really glad somebody with more knowledge in this field can give me some hints and tips! :-) Really thankful :-). If I am able to finish this research, you will surely be acknowledged, if not already cited for ACCIV :-)
Hi xylar,
I have done two tests so far: https://github.com/gabyx/ChuteExperiments/commits/master (the last two commits):
Without mask https://github.com/gabyx/ChuteExperiments/tree/3a2655e76e735c877f446979c6ebe3efb0b08755/scripts/testACCIV
Let's speak primarily about the test without the mask. However, the mask test, where ACCIV uses the mask, is not yet good; I think it needs some marching cube mask enhancing before doing more. What do you think when looking at the output pictures?
The test without the mask (where I cropped the velocities of the gridded output to zero in figure 10) seems to be a bit noisy, but maybe that's just how it is, because the images are not that sharp. And it seems that going smaller than 32 pixels in a pass increases the noise quite a lot...?
Maybe you just see some "stupid" settings or so which could improve the results :-)
Do you think using a third frame would help? Meaning I set up ACCIV to use three sequential frames A, B, C for each pass. I would then iterate over the whole frame set with this "frame window size 3" with 3 passes (48, 32, 24 corr. box sizes, step size = 4 for each pass).
Thanks for the input!
Great, I've had a look at both branches (with and without the mask). It seems like the version with the mask should work better, but you will have to decide based on your own experience.
I would agree that there's a lot of noise. As I said in the other thread, the parameter smoothFitMinControlPointScatteredNeighbors can be used to control smoothness of the velocity field. There is only so much information you can get out of a pair of images that's very close together in time. Making the correlation box too small will tend to produce noise (as you saw). It helps sometimes to decrease the stride so you just get more velocity vectors (at the expense of more computing time and bigger files). But you also typically have to increase how many neighbors are used in the smooth fit when you do that. Don't hesitate to increase smoothFitMinControlPointScatteredNeighbors to smooth out noise if that's what you need to do.
I definitely think a third frame would be good. I would try a 4th, too (and even a 5th if you're ambitious).
Keep me posted. I think the results could be really cool if this works out.
Hey xylar :-)
I am about to write the cluster parallel batch version for my acciv task:
I am a bit stuck with the plot from your plotVelocity.py output https://github.com/gabyx/ChuteExperiments/blob/master/scripts/testACCIV/test/pass3/fig011.jpg https://github.com/gabyx/ChuteExperiments/blob/master/scripts/testACCIV/test/pass3/fig012.jpg
Which plot should I primarily use in my setup to increase the accuracy, so that I know this test (with several passes) is better than another one? The tutorial says one should check that the uncertainties (which are basically offsets from the advected image) have a sharp peak in the histogram... is there anything else to take into account?
I am struggling a bit with finding a result which is pleasing (I mean I could take any test, but they always result in the same velocity range but more or less noisy, which makes things hard :-))
I should not go too low with the correlation box size, right? Rather increase the step size to get more vectors?
Hi Gabriel,
I'm afraid what you're finding is consistent with my experience with image velocimetry methods: it's not clear what the best choice of parameter values is, and there's an aspect of intuition involved that isn't as precise as we would like.
The most important thing with the correlation uncertainties that you're looking at is that they may actually tell you more about the uncertainty in velocity from the previous pass than for the current pass. That's because what you're getting is the amount of mismatch between the images advected using the velocity field from the last pass.
If the correlation uncertainties that you are plotting (particularly the peak value) aren't changing much with different choices of parameters, that probably means that the noise you're getting is basically within the uncertainty in the velocity. If reducing the correlation box size doesn't improve the uncertainties (or makes them worse) that probably means you've hit the limit beyond which features can't be reliably identified. Reducing the stride (step size) will help by increasing the number of vectors, but at some point there just isn't any more information in your images, so that isn't going to help. You will see that because you have to increase the smoothing (by using more and more "neighbors" in the smooth fit algorithm) to kill off noise as you increase the number of vectors.
The best way to reduce the uncertainty is to make use of more images or images that are farther apart in time. (For the latter, an error of about the same number of pixels corresponds to a much smaller error in velocity.) The downside is that you get smoothing in time, but often that can still give preferable results to having too much noise or smoothing in space. Have you tried either of those yet?
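As a rough illustration of the second point (the numbers are made up): the same tie-point mismatch in pixels corresponds to a smaller velocity error the farther apart the images are in time.

    pixelError = 0.5                       # typical tie-point mismatch, in pixels
    for dt in (0.001, 0.002, 0.004):       # time separation between images, in seconds
        print(dt, pixelError / dt)         # velocity error in px/s: 500, 250, 125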
Hi Xylar :-)
Thanks for the profound answer, it really helped. That's what I thought; I probably hit the limit of what I can get out of the images already after pass 2 :-)
I am currently using image indices 1,2,3 or 1,3,5 or 1,2,3,4,5 with configurations:
Currently, I think 3 and more images are pretty good. Which time differences are best, 1,2,3 or larger 1,3,5,7,9 (skipping one frame)? I have not found out yet what is better. I realized somehow that the further apart the images are, the more outliers I get in the scattered velocity field (holes, which get filled up by the grid fit).
1,2,3 and 1,3,5 give almost the same information…
I think I will decrease the correlation box size as much as possible until I get reasonably little noise, then I stop (increasing smoothing from step to step; the search range is symmetric during advection as you suggested). I think the noise introduced during later passes when decreasing the corr. box size is mainly due to blur and small particle flickering. I realized that a step size lower than (6,6) is not going to help improve accuracy.
I have only tested the passes with the same frames, I did not do any video analysis yet, so maybe I should try this and see how the sliced velocity profiles change over time
The iterationCount variable for the outlier removal (smoothFitOutlierRemovalIterationCount) in the parameters file means that after every fit (the output after the lines "level 1, level 2, ..., level N" in the smoothing step) an outlier removal is done, and this variable tells up to which iteration this removal takes place, right?
What does the streamlineFollowingSmoothFitIterationCount variable do? I did not yet understand it. When do you do streamline following? What are the outer and inner iteration loops? Could you give some simple pseudo code of what that would look like?
Thanks for your effort!
BR from Zürich Gabriel
The iterationCount variable for the outlier removal (smoothFitOutlierRemovalIterationCount) in the parameters file means that after every fit (the output after the lines "level 1, level 2, ..., level N" in the smoothing step) an outlier removal is done, and this variable tells up to which iteration this removal takes place, right?
That's not quite the meaning of smoothFitOutlierRemovalIterationCount. If outlier removal is turned on (smoothFitOutlierRemovalIterationCount > 0), there are 3 outer iterations of smooth fitting, and outlier removal is used only on the second of the 3 iterations. Within this second outer iteration, there can be multiple iterations of outlier removal. I put in that option to test it out but I have found that more than 1 iteration just removes vectors at the edges of "bald" patches rather than removing "true" outliers, so I don't recommend a value other than 0 or 1.
What does the streamlineFollowingSmoothFitIterationCount variable do? I did not yet understand it. When do you do streamline following? What are the outer and inner iteration loops? Could you give some simple pseudo code of what that would look like?
Sure, I'll give a pseudo code (python-style) for the whole smooth fitting process:
def doOuterLoop():
    if smoothFitOutlierRemovalIterationCount == 0:
        doInnerLoop(doOutlierRemoval=False)
    else:
        doInnerLoop(doOutlierRemoval=False)
        doInnerLoop(doOutlierRemoval=True)
        doInnerLoop(doOutlierRemoval=False)  # this should probably be removed, see below

def doInnerLoop(doOutlierRemoval):
    if doOutlierRemoval:
        loopCount = smoothFitOutlierRemovalIterationCount
    else:
        loopCount = streamlineFollowingSmoothFitIterationCount
    for innerIndex in range(loopCount):
        if doOutlierRemoval:
            removeOutliers()
        constructVelocityAlongCurvedPaths()
        constructGridVelocityFromScatteredVelocity()
        if residualChange < 0.01:
            return
So each inner loop runs either until the residual doesn't change by very much (< 1%) or until the loopCount is reached. For the first and last outer iterations, this loop count is streamlineFollowingSmoothFitIterationCount, and for the middle iteration it is smoothFitOutlierRemovalIterationCount. I realize this is confusing and the looping could be rewritten to make more sense. It is kind of a legacy of how the process was developed.
I should also explain why it only does outlier removal for the middle outer iteration. For the first iteration, there is not a smooth velocity yet for determining which vectors might be outliers. For the final iteration, I decided I didn't like the idea of outliers being removed from the final set of scattered data.
However, looking at the code again, I think there is a bit of a glitch in my thinking. The final gridded velocity also does not use outlier removal, which was not the plan. I think this would be fixed by simply taking out the third iteration without outlier removal, as the final scattered velocity is constructed outside the outer loop in any case. Look for a push in the next hour or so.
Hey, Xylar :-)
I made good progress with your tool for my ChuteFlow. I am still analysing data; what I can say so far is that even with my bad, blurry data, the velocities are quite in range with the simulation, and the velocity range stays consistent across other, noisier settings for ACCIV.
I will report back in some weeks. I will be citing your work in my thesis, and maybe you are interested in reading the part about ACCIV (I won't go into details and it won't be a lot, but maybe you are interested in adding some important things I forgot :-)). And I will probably put my general Cluster Job Generator online (which also has a CIV plugin which, in my case, configures an ACCIV job for parallel execution with MPI :-)). It works great: I am executing 552 processes with 3 passes over 4 consecutive images on the cluster right now :-). It's smooth and organized :-). :-)
BR :-)
Hi @gabyx,
Again, glad to hear you've found ACCIV to be useful.
Great, I'd definitely be interested in seeing how you've got ACCIV jobs running in parallel. This is something I've wanted to do for some time, but haven't had the time to implement.
I'll be glad to take a look at the relevant section of your thesis (as long as it's not too long). Keep me posted.
-Xylar
Hi, again :-)
I've written only 1 A4 page with some explanations of how ACCIV approximately works; I did my best to make it as accurate as possible :-). Any feedback is very welcome :-). Which email address should I use? :-)
I also explained on this page shortly how the stuff is parallelized; it's extremely simple. But the job configurator stuff, which I will probably put online later (then you can quickly configure your CIV example with pictures), will show how everything works.
I have done some README file already which is all work in progress: https://github.com/gabyx/HPClusterJobConfigurator/blob/master/README.md
BR
Hello, :-)
I am using ACCIV for my research : http://www.zfm.ethz.ch/~nuetzig/?page=research
Does ACCIV have a similar feature for outlier detection as PIVLab?
I would like to have some stddev outlier detection or something similar, and also it would be good if I could classify all vectors which are not within a certain range [min, max] (either norm or components) as outliers?
Can I do this after each pass? Which files and how should I modify the hdf5 data, can I set additional outliers to zero/NaN or something?
The mask feature, how does that work? I have generated a binary mask, but the outGridVelocity always interpolates the data even where the input was masked? Is it possible to clip the "outGridVelocity.h5" data to the mask? Or do I have to do this manually?
Maybe you could have a look at:
https://github.com/gabyx/ChuteExperiments/tree/master/scripts/testACCIV
which is an ACCIV test folder, with tests/pass1, tests/pass2 and tests/pass3. The settings seem to look good; do you have any advice (see contourVideo.mp4 and maybe the figures in the pass folders)? I used some CLAHE (histogram equalization) for the images (maybe this is not so good?)
This software seems to be pretty elaborate :-)! :+1:
Thanks a lot for your effort and help :-)