Closed · rwb27 closed this 4 years ago
Hi Rich, I've tabulated the bugs I've found so far. I've fixed most of them, but not the last one. I'll have another look at everything when I get back (next week).
Cool, sounds like you've done some very nice work :) Sorry the progress bar got confusing; I think it's a useful feature but it's not very well tested so it does need tweaking. Probably it should be a separate pull request, as it's useful separately...
As regards the one in 344, the `camera_with_location` doesn't hold a 4x4 matrix, because it constructs it (each time the property is read) from a 3x3 matrix (which only handles scaling/rotation) and the stage position (which gives the extra row/column by reading the current position). `CameraWithLocation.pixel_to_sample_matrix` is a property, and should be 4x4 when it's read. `CameraWithLocation.pixel_to_sample_displacement` is a (poorly named, sorry) 3x3 matrix that doesn't include position information, and is used by the above property.

When you take an image with the `CameraWithLocation`, you can initialise the resulting `ImageWithLocation`'s `pixel_to_sample_matrix` property with the current value of the property of the same name on the CWL. You shouldn't use the 3x3 matrix directly anywhere; I should probably have prefixed its name with an underscore. Maybe let's do this... Does that make more sense?
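For reference, here's a rough sketch of how that 4x4 property could be assembled from the 3x3 matrix plus the stage position (row-vector convention, matching the example matrix further down this thread; the standalone function and argument names are just for illustration, the real code lives on the class):

```python
import numpy as np

def pixel_to_sample_matrix(pixel_to_sample_displacement, stage_position):
    """Illustrative only: combine a 3x3 scaling/rotation matrix and the
    current 3D stage position into a 4x4 affine transform, recomputed
    each time the property is read."""
    M = np.identity(4)
    M[:3, :3] = pixel_to_sample_displacement  # scaling/rotation, no position
    M[3, :3] = stage_position                 # translation row from the stage
    return M                                  # M[3, 3] stays 1 so it's invertible
```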
That makes much more sense, I will sort that and test it asap.
Hi Rich, I modified the `location_to_pixel` function in `ImageWithLocation` to fix the problem with the singular matrix (this is just a brief bug description). The original code looked as follows:

```python
def location_to_pixel(self, location, check_bounds=False, z_tolerance=np.infty):
    l = np.concatenate([ensure_3d(location), [1]])
    p = np.dot(l, np.linalg.inv(self.pixel_to_sample_matrix))
```
However, `np.linalg.inv(self.pixel_to_sample_matrix)` raised a singular matrix error; an example of the offending matrix is printed below.

```
[[  4.12711821e-03   8.88822299e-02   0.00000000e+00   0.00000000e+00]
 [ -8.86398765e-02   4.27176379e-03   0.00000000e+00   0.00000000e+00]
 [  0.00000000e+00   0.00000000e+00   1.00000000e+00   0.00000000e+00]
 [ -4.36379027e+04   5.94839809e+03   1.81648600e+04   0.00000000e+00]]
```
My solution was to deal with the offset first and then do the linear algebra on the 2x2 sub-matrix. However, this isn't the prettiest solution (code shown below).

```python
def location_to_pixel(self, location, check_bounds=False, z_tolerance=np.infty):
    """..."""
    l = ensure_2d(location)
    l = l[:2] - self.pixel_to_sample_matrix[3, :2]
    p = np.dot(l, np.linalg.inv(self.pixel_to_sample_matrix[:2, :2]))
```
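As a quick sanity check, here's a small standalone snippet (not part of the PR) showing that the offset-then-2x2 route gives the same answer as inverting the full 4x4, using the example matrix above with the `[3, 3]` element set to 1 so that the full inverse exists for comparison:

```python
import numpy as np

M = np.array([[ 4.12711821e-03,  8.88822299e-02,  0.0,            0.0],
              [-8.86398765e-02,  4.27176379e-03,  0.0,            0.0],
              [ 0.0,             0.0,             1.0,            0.0],
              [-4.36379027e+04,  5.94839809e+03,  1.81648600e+04, 1.0]])

pixel = np.array([123.0, 456.0, 0.0, 1.0])    # homogeneous pixel, row-vector convention
sample = pixel @ M                            # pixel -> sample location

full = (sample @ np.linalg.inv(M))[:2]                        # invert the full 4x4
reduced = (sample[:2] - M[3, :2]) @ np.linalg.inv(M[:2, :2])  # offset first, then 2x2

assert np.allclose(full, reduced)             # both recover the original pixel
```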
Is there any reason this could cause problems in the future?
Separate issue: when the `reconstruct_tiled_image` function (within `CameraWithLocation`) calculates displacements, it occasionally gets NaN values, which causes problems down the line. What do you think the best solution to this is?
Hmm, assuming it's 2x2 + 3D translation may cause problems if anyone uses a fully 3D matrix in the future. A workaround would be to add an assertion in front of your code so it throws an exception if the relevant matrix elements ([2,:2] and [:2,2]) are nonzero or [3,3] is not one.
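Something like this sketch, perhaps (the function name is just illustrative, not an existing method):

```python
import numpy as np

def check_2d_plus_translation(M):
    """Guard for the 2x2 shortcut: fail loudly if the 4x4 matrix mixes
    z into x/y (or x/y into z), or if the bottom-right element isn't 1."""
    assert np.allclose(M[2, :2], 0), "z contributes to x/y - 2x2 shortcut invalid"
    assert np.allclose(M[:2, 2], 0), "x/y contribute to z - 2x2 shortcut invalid"
    assert np.isclose(M[3, 3], 1), "matrix isn't a normalised affine transform"
```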
The CameraWithLocation code is pretty much ready for testing with some hardware :) Someone in Cambridge probably ought to do this...