beam-tracing / Scotty

Beam tracing code for diagnostics
https://scotty.readthedocs.io/en/latest/
GNU General Public License v3.0

Relativistic corrections #103

Closed fongkenrui closed 1 year ago

fongkenrui commented 1 year ago

Ran into some conflicts with pytest in test_simple_golden; the culprit seems to be test_integrated line 449. The issue likely propagated from renaming DensityFit to ProfileFit, along with the renaming of the poloidal_flux_zero_density attribute, so some of the pytest modules importing DensityFit have to be updated. I'm not sure how this led to the npz file being unreadable without pickling enabled, though.


Error message dump:

        with np.load(tmp_path / "data_output_Bpa0.10.npz") as f:
>           output = dict(f)

tests\test_simple_golden.py:450:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _  
C:\ProgramData\Anaconda3\lib\site-packages\numpy\lib\npyio.py:253: in __getitem__
    return format.read_array(bytes,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _  

fp = <zipfile.ZipExtFile name='temperature_output.npy' mode='r'>, allow_pickle = False, pickle_kwargs = {'encoding': 'ASCII', 'fix_imports': True}

    def read_array(fp, allow_pickle=False, pickle_kwargs=None, *,
                   max_header_size=_MAX_HEADER_SIZE):
        """
        Read an array from an NPY file.

        Parameters
        ----------
        fp : file_like object
            If this is not a real file object, then this may take extra memory
            and time.
        allow_pickle : bool, optional
            Whether to allow writing pickled data. Default: False

            .. versionchanged:: 1.16.3
                Made default False in response to CVE-2019-6446.

        pickle_kwargs : dict
            Additional keyword arguments to pass to pickle.load. These are only
            useful when loading object arrays saved on Python 2 when using
            Python 3.
        max_header_size : int, optional
            Maximum allowed size of the header.  Large headers may not be safe
            to load securely and thus require explicitly passing a larger value.
            See :py:meth:`ast.literal_eval()` for details.
            This option is ignored when `allow_pickle` is passed.  In that case
            the file is by definition trusted and the limit is unnecessary.

        Returns
        -------
        array : ndarray
            The array from the data on disk.

        Raises
        ------
        ValueError
            If the data is invalid, or allow_pickle=False and the file contains
            an object array.

        """
        if allow_pickle:
            # Effectively ignore max_header_size, since `allow_pickle` indicates
            # that the input is fully trusted.
            max_header_size = 2**64

        version = read_magic(fp)
        _check_version(version)
        shape, fortran_order, dtype = _read_array_header(
                fp, version, max_header_size=max_header_size)
        if len(shape) == 0:
            count = 1
        else:
            count = numpy.multiply.reduce(shape, dtype=numpy.int64)

        # Now read the actual data.
        if dtype.hasobject:
            # The array contained Python objects. We need to unpickle the data.
            if not allow_pickle:
>               raise ValueError("Object arrays cannot be loaded when "
                                 "allow_pickle=False")
E               ValueError: Object arrays cannot be loaded when allow_pickle=False
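The ValueError above can be reproduced with a minimal sketch: if any entry passed to np.savez has dtype=object (for example, a field such as temperature_output that was left as None), NumPy pickles it on save, and np.load then refuses to read it with the default allow_pickle=False. The variable name here mirrors the traceback but the reproduction is illustrative, not the actual Scotty output.

```python
import numpy as np

# Minimal reproduction (assumption: an output such as temperature_output
# was saved as None, which NumPy stores as a dtype=object array).
np.savez("demo_output.npz", temperature_output=np.array(None))

try:
    with np.load("demo_output.npz") as f:
        data = dict(f)  # same pattern as the failing test
    error_message = ""
except ValueError as err:
    error_message = str(err)

print(error_message)
```

This suggests checking whatever gets written to data_output_Bpa0.10.npz for entries that are None or otherwise end up with object dtype.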
valerian-chen commented 1 year ago

The changes look good. @ZedThree, do you think it's better to pass electron_mass or electron_temperature to functions like find_epsilon_para, as an optional argument that defaults to None? That way we wouldn't have to define an explicit flag for relativistic corrections. I'm not sure which approach is best from a software-engineering perspective.
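A minimal sketch of the optional-argument pattern being proposed: the function name and parameter follow the thread, but the signature and the correction factor here are purely illustrative placeholders, not Scotty's actual API or physics.

```python
from typing import Optional

def find_epsilon_para(density: float,
                      electron_temperature: Optional[float] = None) -> float:
    """Illustrative only: placeholder expressions, not real plasma physics."""
    epsilon = 1.0 - density  # placeholder cold-plasma value
    if electron_temperature is not None:
        # The relativistic correction is applied only when a temperature
        # is supplied, so no separate boolean flag is needed.
        epsilon *= 1.0 / (1.0 + electron_temperature)
    return epsilon

print(find_epsilon_para(0.5))       # cold case, no correction
print(find_epsilon_para(0.5, 1.0))  # temperature given, correction applied
```

The upside of this design is that the presence of the argument itself acts as the switch; the downside is that callers cannot distinguish "no correction requested" from "temperature unknown".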

Also, I've been thinking about ProfileFit. Do you think 1DProfileFit would be clearer? I'm on the fence about this too.

I wonder if your problems with pickle are a result of allclose complaining about types. [Edit] I've looked at my old errors and it seems not.

ZedThree commented 1 year ago

I haven't had a chance to look at this properly, but I'll try to do so tomorrow.

I suspect the error is due to something like trying to save an object to the npz file. I didn't see any likely candidates at first glance, but I'll dig into it tomorrow.
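One way to hunt for that candidate is to scan the dictionary of outputs for object-dtype entries before calling np.savez. The variable names below are hypothetical stand-ins for whatever Scotty actually saves.

```python
import numpy as np

# Debugging sketch: flag entries that would be pickled into the npz file.
outputs = {
    "density_output": np.linspace(0.0, 1.0, 5),
    "temperature_output": np.array(None),  # e.g. a field left as None
}

suspects = []
for name, value in outputs.items():
    arr = np.asarray(value)
    if arr.dtype == object:
        # This entry forces allow_pickle=True on load.
        suspects.append(name)

print(suspects)
```

Running something like this just before the np.savez call in the PR branch should point straight at the offending array.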

@valerian-chen Passing in the temperature will probably be a bit cleaner.

valerian-chen commented 1 year ago

This looks great to me. @ZedThree, any final comments before we merge?