pcdshub / Bug-Reports-and-Requests

Issue Tracking for PCDS Software
BSD 3-Clause "New" or "Revised" License

Fixed target scan in XPP #59

Open cristinasewell opened 3 years ago

cristinasewell commented 3 years ago

Feature Request

The task is to combine 2 motors to 2D-scan a target, using the sequencer to 'shoot' each target. We have a grid pattern of samples mounted on a sample holder; the samples have to be moved between shots because they get damaged when the pulse picker fires.

The idea is to have a plan that will do 3 main steps repeatedly, depending on how many samples we want to hit:

- We want to be able to hit the samples at about 1-2 Hz.
- The sample is aligned manually offline once; after that, an algorithm should compensate for sample-alignment corrections.
- There should be code to build the scan motors out of the physical motors using images obtained from an area-detector camera (it is OK if this requires user input; fancy image processing is down the line).
- The camera will be used to see where the x-ray is and to make sure the beam lands exactly on the right location on each 'dot'.
- We also need the camera to find the starting position, dot (0, 0), as well as the other corner/boundary dots.
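As a starting point for building scan motors out of the physical motors with the camera, one common technique is to calibrate a pixel-to-motor scale: move a physical motor by a known amount, measure how far the spot moves on the image, and use the ratio to convert on-camera corrections into motor moves. The sketch below is a minimal, hypothetical version of that idea; all function names are illustrative, not part of any existing PCDS API.

```python
# Hypothetical sketch: calibrate how many mm of motor travel correspond
# to one pixel of image motion along one axis, then use that scale to
# turn a desired on-camera correction into a motor move.

def pixel_to_motor_scale(motor_move_mm, centroid_before_px, centroid_after_px):
    """mm of motor travel per pixel of observed image motion."""
    dpx = centroid_after_px - centroid_before_px
    if dpx == 0:
        raise ValueError("no observed image motion; increase the test move")
    return motor_move_mm / dpx

def pixels_to_motor(offset_px, scale_mm_per_px):
    """Convert a desired on-camera correction (px) to a motor move (mm)."""
    return offset_px * scale_mm_per_px

# Example: a 0.5 mm test move shifts the centroid by 100 px -> 5 um/px
scale = pixel_to_motor_scale(0.5, 120.0, 220.0)
```

A full version would do this per motor and per image axis, since a single motor may move the spot along both image axes.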

It would be great if this were written in a way that fits later into a bigger toolbox.

Setup

Additional info xy_grid

Possible solutions/ideas

Target grid specifications These specifications may not be accurate (they need double-checking): specs

Things to consider

cristinasewell commented 3 years ago

@silkenelson and @dlzhu please feel free to correct or add to the description of this Issue request - I think it will help with understanding exactly what needs to be done.

Here is also a question that Ken suggested: Is the complexity coming in because we want it to be pixel-perfect centered each frame, and slight misalignments are unacceptable? So, per-sample: realign to camera, take shots, go to next sample?

dlzhu commented 3 years ago

Hi Cristina, your notes look pretty complete. As mentioned, various versions of this have been written/kludged in the past, and I understand the long-term desire to build something more general and elegant.

For the time being, our interest is mainly to figure out all the latencies in the chain and see how fast we can run, since we know the basic grid calibration with some 'user input' should work. A good interface for keeping multiple grid calibrations for different samples on the sample holder would be useful, since we will likely be hopping back and forth between samples.
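The "keep multiple grid calibrations" interface could be as simple as a small JSON-backed store keyed by sample name. The sketch below assumes a calibration is just the motor positions of the two corner targets plus the grid shape; the class name, file format, and fields are all illustrative, not an existing interface.

```python
# Hypothetical sketch: persist one grid calibration per sample on the
# holder so we can hop back and forth between samples without re-aligning.
import json
from pathlib import Path

class CalibrationStore:
    def __init__(self, path="grid_calibrations.json"):
        self.path = Path(path)
        if self.path.exists():
            self.data = json.loads(self.path.read_text())
        else:
            self.data = {}

    def save(self, sample, origin_xy, far_corner_xy, n_cols, n_rows):
        self.data[sample] = {
            "origin": list(origin_xy),          # motor (x, y) at target (0, 0)
            "far_corner": list(far_corner_xy),  # motor (x, y) at target (N, M)
            "shape": [n_cols, n_rows],
        }
        self.path.write_text(json.dumps(self.data, indent=2))

    def load(self, sample):
        return self.data[sample]
```

Because the store is keyed by sample name, switching samples is just a `load` followed by whatever transform the calibration feeds.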

The hutch will be open starting tomorrow for the rest of the week. If you'd like to come in for some tests, let me know.

silkenelson commented 3 years ago

@dlzhu should comment: I do not think that we are after misalignments of each individual target: we expect the target to be mostly properly fabricated, but we assume that we cannot move motor X and motor Y independently, i.e. that for each step we need to move both motors. We have to determine the positions of targets (0,0) and (N,M) with the camera. We'd rather end up with something that has manual steps at the beginning (to get to (0,0) and (N,M)) but then shoots the targets in between with as little overhead as possible, than end up with something that costs time for each shot. I would have assumed that a human moves the motors so that (0,0) is at the desired pixel position on the camera, and we then note that pixel position and the motor positions. Then the human moves the motors until (N,M) is at the same pixel position; with that we should be able to move to any point. Centroiding/machine vision is something I would only like to think about once the rest of the scan works in real life. And yes, this is not very much coding, as we should have most everything; it is putting it together in a way where it can be used with a few simple steps.
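The two-corner scheme described here amounts to per-axis linear interpolation between the recorded motor positions of targets (0,0) and (N,M). A minimal sketch, with illustrative names, assuming the grid is not rotated or sheared relative to the motor axes (handling rotation would need at least a third reference point):

```python
# Hypothetical sketch of the two-corner scheme: record motor positions
# with target (0, 0) and target (N, M) each centered on the same camera
# pixel, then interpolate per axis to reach any target (i, j).

def target_position(i, j, p00, pNM, N, M):
    """Motor (x, y) for target (i, j), with 0 <= i <= N and 0 <= j <= M."""
    x = p00[0] + (pNM[0] - p00[0]) * i / N
    y = p00[1] + (pNM[1] - p00[1]) * j / M
    return (x, y)

# Example: an 11 x 5 grid (N=10, M=4) spanning motor x in [2.0, 7.0]
# and motor y in [1.0, 3.0]; target (5, 2) lands at the center.
center = target_position(5, 2, (2.0, 1.0), (7.0, 3.0), 10, 4)
```

Note that because both corners are referenced to the same camera pixel, any constant camera offset cancels out of the interpolation.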

ZLLentz commented 3 years ago

for each step we need to move both motors

This is an important thing to highlight because it affects the implementation quite a bit

dlzhu commented 3 years ago

Yup, each sample 'wafer' is made by lithography, so the grid on each sample is almost perfect. However, the mapping may not be exactly 'rectilinear' but rather an affine or projective transformation, because the sample surface may not be perfectly parallel to the motion direction, and the camera can have parallax.
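An affine map from grid indices (u, v) to motor positions (x, y) has six unknowns, so three calibration targets determine it exactly. A dependency-free sketch of that fit, solved by Cramer's rule; all names are illustrative:

```python
# Hypothetical sketch: fit (x, y) = A @ (u, v) + b from three calibration
# targets with known grid indices and measured motor positions. This
# captures rotation, shear, and non-uniform pitch, but not the projective
# (parallax) part, which would need a fourth point.

def fit_affine(grid_pts, motor_pts):
    """grid_pts: three (u, v); motor_pts: three (x, y). Returns (A, b)."""
    (u0, v0), (u1, v1), (u2, v2) = grid_pts
    det = (u1 - u0) * (v2 - v0) - (u2 - u0) * (v1 - v0)
    A, b = [[0.0, 0.0], [0.0, 0.0]], [0.0, 0.0]
    for axis in (0, 1):  # solve the x row, then the y row
        w0, w1, w2 = (p[axis] for p in motor_pts)
        a = ((w1 - w0) * (v2 - v0) - (w2 - w0) * (v1 - v0)) / det
        c = ((u1 - u0) * (w2 - w0) - (u2 - u0) * (w1 - w0)) / det
        A[axis] = [a, c]
        b[axis] = w0 - a * u0 - c * v0
    return A, b

def apply_affine(A, b, u, v):
    return (A[0][0] * u + A[0][1] * v + b[0],
            A[1][0] * u + A[1][1] * v + b[1])
```

With more than three points, a least-squares fit over the same model would average out per-point measurement noise.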

dlzhu commented 3 years ago

btw, future sample grids can be honeycomb or triangular patterns, if not stranger, but the assumption that for each sample there is a predetermined, accurate 'map' that needs to be projected to motor positions is pretty safe.
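That "predetermined map" framing keeps the scan logic grid-agnostic: the map lives in sample-frame coordinates and whatever calibration transform is in use projects it to motor positions. As one example, a triangular lattice (odd rows offset by half a pitch) can be generated like this; the function name and parameters are illustrative:

```python
# Hypothetical sketch: a predetermined target map for a non-rectilinear
# grid. Points are in sample-frame units; a calibration transform would
# then project them to motor positions.

def triangular_map(n_cols, n_rows, pitch=1.0):
    """List of (x, y) sample-frame positions on a triangular lattice."""
    row_height = pitch * 3 ** 0.5 / 2  # vertical spacing between rows
    points = []
    for j in range(n_rows):
        x_offset = pitch / 2 if j % 2 else 0.0  # stagger odd rows
        for i in range(n_cols):
            points.append((i * pitch + x_offset, j * row_height))
    return points
```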

silkenelson commented 3 years ago

A. Alignment functions:

- A1: Function that asks for movement to a list of 'alignment points'.
  - A11: For now, require the user to go to three points and execute a 'save' step; model this after the Be-lens focus motor creation.
- A2: The map of sample positions will be filled; those should be in 2D coordinates.

B. Scan function:

- B1: This should be a grid scan with snake ordering, or a list scan where the list is filled in a step after the alignment. For each shot, all sample motors move to wherever that sample is.
- B2: This should optionally use the DAQ and the sequencer (optionally including the pulse picker).
- B3: Understand the speed and what limits the shooting rate.
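The snake ordering in B1 can be generated independently of any scan framework: visit each row left-to-right, then right-to-left, so the slow motor never rewinds between rows. A minimal, framework-agnostic sketch (the resulting index list would be fed through the calibration transform into a list scan):

```python
# Hypothetical sketch: yield (i, j) target indices in snake order,
# reversing the fast-axis direction on every other row.

def snake_indices(n_cols, n_rows):
    for j in range(n_rows):
        cols = range(n_cols) if j % 2 == 0 else range(n_cols - 1, -1, -1)
        for i in cols:
            yield (i, j)

# Example: 3 x 2 grid -> (0,0) (1,0) (2,0) (2,1) (1,1) (0,1)
order = list(snake_indices(3, 2))
```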

Extensions:

- After A2, create a map of positions and remember which positions have been shot; display this in a little window.
- If a run gets terminated, remember where to continue.
- Make the A1 alignment less manual by getting fancy: centroiding, machine vision, ...
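The "remember which positions have been shot" and "remember where to continue" extensions both reduce to a persisted ledger of shot targets that the scan consults on resume. A minimal sketch, with illustrative names and file format:

```python
# Hypothetical sketch: persist the set of already-shot (i, j) targets so
# a terminated run can resume and skip what was already done.
import json
from pathlib import Path

class ShotLedger:
    def __init__(self, path="shots.json"):
        self.path = Path(path)
        if self.path.exists():
            self.shot = set(map(tuple, json.loads(self.path.read_text())))
        else:
            self.shot = set()

    def mark(self, i, j):
        """Record that target (i, j) has been shot and persist the ledger."""
        self.shot.add((i, j))
        self.path.write_text(json.dumps(sorted(self.shot)))

    def remaining(self, all_targets):
        """Targets from the plan that have not been shot yet."""
        return [t for t in all_targets if tuple(t) not in self.shot]
```

The same data would drive the "little window" display: shot targets in one color, remaining targets in another.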

cristinasewell commented 3 years ago

Ok, thanks a lot for all the details