py4dstem / py4DSTEM


Questions about strain mapping #648

Open sbachu6812 opened 6 months ago

sbachu6812 commented 6 months ago

Hello,

Thanks a lot for creating the py4DSTEM package. I have been using it for strain mapping for a while and find it user-friendly. I have a couple of questions, and I apologize if this is the wrong channel for them:

  1. For calibrating the center positions of the Bragg peaks, I used the "measure_origin" function followed by "fit_origin", but I see strong residuals after the fit. I tried all three fitting options ('plane', 'parabola', and 'bezier_two') and get strong residuals with all of them. Is it acceptable to have large residuals and just continue with the workflow, or should I write a custom fitting function for each dataset depending on how it looks? Could you please advise me on this?

  2. I collected 4D-STEM data with two different C2 aperture sizes (50 and 20). C2 = 50 gives diffraction disks, while C2 = 20 gives diffraction spots. Should I change the negative trench size when creating the probe kernel for C2 = 50 vs. C2 = 20? Is there a guideline for choosing the size of the negative trench for the probe kernel? Also, in general, is the Bragg peak finding routine affected by whether the pattern contains spots or disks? I ask because our strain values look vastly different for C2 = 20 vs. C2 = 50, even though all other experimental parameters were kept the same.

Thanks in advance.

sezelt commented 6 months ago

Can you attach an image showing what the large residuals look like?

The size of the diffraction disks impacts strain mapping in a number of ways. There is some discussion of these effects in this paper.

bsavitzky commented 6 months ago

Hi @sbachu6812 - thanks for the question. I can try to expand a bit - it's true as @sezelt said that it's a bit hard to comment without knowing what your data/results look like, but a few additional general comments might help to start :)

  1. Calibrating the center position is important, and if it isn't done correctly you may get incorrect results. This is where seeing your results would help most - large residuals can mean a few things, and don't necessarily mean the measurement/fit is wrong... but they very well may. Large residuals which correspond to beam deflection from the sample potential (which might look like, e.g., high intensity in your residuals near edge features) are expected and are not a problem. Other large residuals probably mean something is wrong. There are a few ways that origin measurement/fitting can go wrong, so again it's hard to say without data - that said, my preferred way to get a good fitted origin for tricky data is to pass the mask input to the .fit_origin method to specify which pixels to use (see the sketch after this list). You may need to update to the latest version, as this functionality has changed once or twice. You can also try robust fitting by setting the robust input to True.

  2. You should definitely use a different probe for the two different apertures. This isn't only a matter of the trench size, however - you'll want to create a new Probe instance and make a kernel from it for each dataset with a distinct convergence angle. If your aperture is small enough that you really have spots and not disks at all, you may be able to skip cross correlation entirely and just find maxima, which you can try by passing template = None. As for guidelines for probe generation, including trench size and so on - this is a really good question, and we should provide better guidelines here, sorry! The most important consideration is how close together your disks are: you don't want the trench to be so large that, when the kernel is aligned with one disk, the trench touches a neighboring disk. In a sample with lots of overlapping disks you may not want to do it this way at all - we have several different methods for dealing with these sorts of cases, but unfortunately we're still working on making them accessible, so please do stay tuned, and thanks for your patience.
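For concreteness, here is a minimal sketch of the masked, robust fit described in point 1. The scan-shape names (R_Nx, R_Ny) and the object I'm calling braggpeaks are placeholders, and the exact home and signature of .fit_origin vary between py4DSTEM versions, so check the docs for your install:

```python
import numpy as np

# Boolean mask over the scan: True = use this position in the fit.
# R_Nx, R_Ny are the real-space scan dimensions (placeholder names).
mask = np.ones((R_Nx, R_Ny), dtype=bool)
mask[:10, :] = False  # e.g. exclude a region where the measured origin is bad

# Fit the measured origins using only the masked pixels; the fitted
# surface is still evaluated everywhere, so the whole dataset gets
# calibrated (see the discussion further down this thread).
braggpeaks.fit_origin(
    mask=mask,            # pixels to include in the fit
    fitfunction="plane",  # or "parabola" / "bezier_two"
    robust=True,          # iteratively down-weight outlier pixels
)
```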

Hope this is helpful, and please feel free to share some images if you'd like / can.

sbachu6812 commented 6 months ago

Sorry for my late response, and thanks a lot for your answers @sezelt and @bsavitzky. I found the paper you linked very helpful. I use normal apertures without any patterns. So, if I keep the counts per pixel constant, the disk position error should be smaller for larger disks, i.e., a larger C2 size, right? Just asking to double-check that I understood correctly.

  1. I am checking with my supervisors about sharing the map of residuals I see, and will upload it here once I clear it with them (sorry). I haven't tried the mask or robust arguments yet; I will try them and see how my residuals change. One question, though: if I use the mask argument, will the function use only those specific pixels to fit the function (say, a parabola) and still calibrate the entire dataset, or will it only calibrate the pixels specified by the mask?

  2. Yes, I have been doing that so far. I created a new probe kernel for every combination of C2 size and camera length (CL) I used. However, I kept the negative trench size the same for all of those kernels, irrespective of whether my disks are 8-10 pixels in size or 2-3 pixels (essentially just spots). The trench size I used is "radii = (alpha_pr, 2*alpha_pr)", where alpha_pr is my probe radius (see the sketch just below this list). Is that OK, or should I change it depending on the disk size? My experimental conditions are such that the disks do not overlap (they are about 20-25 pixels apart). I haven't tried setting "template = None" for disk detection yet; I will try that next. But this reminds me of a different question: I always use "corrPower = 1.0", which means pure phase correlation, right? When I tried changing it to 0, it wouldn't detect any disks at all, and fractional values between 0 and 1 detect fewer disks than 1.0, so I ended up sticking with 1.0. Is it OK to use the 1.0 value?
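For reference, the kernel-generation call in my script looks roughly like this (the get_kernel name, its signature, and the sigmoid mode are my best reading of the API and may differ between py4DSTEM versions):

```python
# probe is a py4DSTEM Probe object built from the vacuum probe for this
# C2/CL combination; alpha_pr is the disk radius in pixels, measured
# from the data.
alpha_pr = 5.0  # e.g. the C2 = 50 dataset; roughly 1.0-1.5 for C2 = 20

kernel = probe.get_kernel(
    mode="sigmoid",                  # smooth transition from disk to trench
    radii=(alpha_pr, 2 * alpha_pr),  # negative trench spans r to 2r
)
```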

sezelt commented 6 months ago

A larger convergence angle helps with finding the center when the disk is uniformly illuminated (and when you keep constant counts per pixel, since increasing the convergence angle then also increases the total signal), but it also leads to more dynamical contrast inside the diffraction disks, which can make things worse when the sample is thick or bending a lot.

If you specify a mask, that is used for the fit only, and the resulting centers will be applied to the whole dataset.

1.0 is the recommended value for corrPower, and gives a pure cross correlation. Sometimes it can help to reduce it to around 0.8, though I don't have a precise rule to offer. When you change this, the magnitude of the correlation values can change drastically, so you may have to adjust minAbsoluteIntensity to still detect disks.
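To make the knobs concrete, here's a hedged sketch of the disk-finding call with the parameters discussed in this thread; the method location, defaults, and the threshold value below are assumptions that may vary between py4DSTEM versions:

```python
# Disk detection with an explicit correlation power and intensity floor.
# `datacube` is the 4D-STEM DataCube; `kernel` is the probe kernel.
braggpeaks = datacube.find_Bragg_disks(
    template=kernel,            # pass template=None to skip cross correlation
                                # and just find maxima (useful for sharp spots)
    corrPower=1.0,              # 1.0 = pure cross correlation,
                                # 0.0 = pure phase correlation
    minAbsoluteIntensity=1e-3,  # illustrative value - re-tune whenever you
                                # change corrPower, since correlation
                                # magnitudes shift drastically
)
```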