nion-software / nionswift-instrumentation-kit

Base classes for Nion Swift STEM microscope instrumentation
GNU General Public License v3.0

Possibility of hyperspectral [3D] scan channel with calibrations #86

Open yvesauad opened 3 years ago

yvesauad commented 3 years ago

In def update_scan_data_element(data_element, scan_frame_parameters, data_shape, channel_name, channel_id, scan_properties) (line 161 of scan_base)

Currently, we can only have calibrations for 2D channels. A simple modification:

    else:
        data_element["spatial_calibrations"] = [
            {"offset": -center_y_nm - pixel_size_nm * data_shape[0] * 0.5, "scale": pixel_size_nm, "units": "nm"},
            {"offset": -center_x_nm - pixel_size_nm * data_shape[1] * 0.5, "scale": pixel_size_nm, "units": "nm"}
        ]
        if len(data_shape) == 3:  # 3D acquisition: typically a hyperspectral image
            eels_dispersion = float(scan_properties.get("eels_dispersion", 1.0))
            eels_offset = float(scan_properties.get("eels_offset", 0.0))
            data_element["spatial_calibrations"].append(
                {"offset": eels_offset, "scale": eels_dispersion, "units": "eV"}
            )

This would allow one to acquire 3D images. In this case, one must pass "eels_dispersion" and "eels_offset" in the properties to calibrate the energy channel. The image below shows three channels (ADF and BF are 2D; TPX3 is 3D, with the associated EELS spectrum). If no EELS settings are put into the properties, nothing changes.
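For illustration, here is a minimal, self-contained sketch of how the proposed "eels_dispersion" / "eels_offset" keys would be consumed (the calibration values and data shape are dummies; the spatial offsets are simplified compared with the real snippet above):

```python
# Hypothetical scan_properties dict carrying the two proposed keys.
scan_properties = {
    "eels_dispersion": 0.25,  # eV per channel (dummy value)
    "eels_offset": 100.0,     # energy of channel 0 in eV (dummy value)
}

data_shape = (64, 64, 1024)  # y, x, energy
data_element = {}

# Two spatial calibrations, as for any 2D scan channel (values simplified here).
calibrations = [
    {"offset": 0.0, "scale": 1.0, "units": "nm"},
    {"offset": 0.0, "scale": 1.0, "units": "nm"},
]

# For a 3D acquisition, append a third calibration for the energy axis,
# mirroring the logic of the modification proposed above.
if len(data_shape) == 3:
    eels_dispersion = float(scan_properties.get("eels_dispersion", 1.0))
    eels_offset = float(scan_properties.get("eels_offset", 0.0))
    calibrations.append({"offset": eels_offset, "scale": eels_dispersion, "units": "eV"})

data_element["spatial_calibrations"] = calibrations
```

With these dummy settings, the resulting data element carries three calibrations: two spatial axes in nm and one energy axis in eV.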

ps: images are dummy.

[image attachment]

If you agree, I can open a pull request so you can review the modification.

Thanks!!

cmeyer commented 3 years ago

On the Nion devices, the 3rd dimension of data always comes from the camera as a sequence of single images/spectra and gets assembled into a higher dimensional data set within the instrumentation kit. The camera typically triggers the next scan position with an electronic signal. Importantly, we do the assembly of the data as it arrives as individual spectra, rather than waiting for the whole data set to be complete.
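The assembly style described above can be sketched as follows. This is a hedged illustration, not the instrumentation kit's actual code: a stream of per-position spectra is written into a (height, width, energy) cube as each spectrum arrives, so partial data is available before the frame completes.

```python
import numpy as np

# Dummy dimensions for a small scan and a short spectrum.
height, width, n_energy = 4, 4, 8
cube = np.zeros((height, width, n_energy), dtype=np.float32)

def spectrum_stream():
    # Stand-in for the camera: yields one spectrum per scan position,
    # in raster order, as if triggered by the scan electronics.
    for i in range(height * width):
        yield np.full(n_energy, float(i), dtype=np.float32)

for index, spectrum in enumerate(spectrum_stream()):
    # Place each spectrum as it arrives rather than waiting for the
    # whole frame; the cube is usable at any intermediate point.
    y, x = divmod(index, width)
    cube[y, x, :] = spectrum
```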

On your devices, does it work in a similar way? What is the nature of the data produced by your devices? Is it the EELS spectra + other scalar signals such as HAADF and MAADF? Can it produce 2D data at each scan position? Is your API capable of sending data for a subset of scan positions until it finishes the entire frame (partial acquisition)?

The reason I'd like to understand your devices better is that I think it may already be possible to have the instrumentation kit do this automatically. But doing that may require a slightly different way of looking at how the data comes into the instrumentation kit - probably a combination of a "scan device" and a "camera device" - but maybe there is a new kind of "scan and camera device" that we can define which produces both simultaneously. Ideally, I'd like to have a simulator that produces data in the same way as your devices (in nionswift-usim) and then integrate that style of acquisition into the code directly.

Also note that I've recently added support for doing acquisition in a flexible way that makes it easier to configure complex processing - for example acquiring 2D data at each scan position, summing different regions of the 2D data, and assembling the sums into multiple images. Traditional spectrum imaging (acquiring a spectrum for each scan position) is only one specific use case. But supporting this flexibility imposes very specific requirements on how the scan and camera devices produce data.
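The processing style described above can be sketched roughly as follows. This is a hedged illustration only, not the instrumentation-kit API: a 2D camera frame is acquired at each scan position, two hypothetical detector regions are summed, and the sums are assembled into two separate images.

```python
import numpy as np

# Dummy scan and camera dimensions.
scan_h, scan_w = 3, 3
cam_h, cam_w = 16, 16
rng = np.random.default_rng(0)

image_inner = np.zeros((scan_h, scan_w))  # sum over an inner camera region
image_outer = np.zeros((scan_h, scan_w))  # sum over the remaining pixels

for y in range(scan_h):
    for x in range(scan_w):
        frame = rng.random((cam_h, cam_w))  # stand-in for one camera frame
        inner = frame[4:12, 4:12].sum()     # hypothetical inner region
        outer = frame.sum() - inner         # everything outside that region
        image_inner[y, x] = inner
        image_outer[y, x] = outer
```

Spectrum imaging falls out as the special case where the "frame" is 1D and the whole spectrum is kept rather than summed.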