p3trus opened this issue 9 years ago
Eventually, I hope so!
Unfortunately, doing this in a feasible and maintainable way will probably require upstream fixes in NumPy. In particular, better support for duck-array types (https://github.com/numpy/numpy/issues/4164) and/or the ability to write units as a custom NumPy dtype. Both of these are on the NumPy roadmap, though they don't have a timeframe for when that will happen.
Astropy has pretty good units support: http://astropy.readthedocs.org/en/latest/units/ Would it be possible to copy what they do?
Unfortunately, the astropy approach uses a numpy.ndarray subclass, which means it's mutually exclusive with dask.array. Otherwise, it does look very nice, though.
@shoyer - as one who thinks unit support is probably the single best thing astropy has (and as co-maintainer of `astropy.units`), I thought I'd pipe in: why would it be a problem that astropy's `Quantity` is an `ndarray` subclass? I must admit I haven't used dask arrays, but since they use `ndarray` internally for the pieces, shouldn't the fact that `Quantity` has the same interface/methods make it relatively easy to swap `ndarray` for `Quantity` internally? I'd be quite happy to help think about this (surely it cannot be as bad as it is for `MaskedArray` ;-).
Alternatively, maybe it is easier to tag on the outside rather than the inside. This would also not seem to be that hard, given that astropy's `Quantity` is really just a wrapper around `ndarray` that carries a `Unit` instance. I think the parts that truly wrap could be separated from those that override `ndarray` methods, and I would be willing to implement that if there is a good reason (like making dask quantities possible...). It may be that in this case one would not use `Quantity` proper, but rather just the parts of `units` where the real magic happens: the `Unit` class (which does the unit conversion) and `quantity_helpers.py` (which tells what unit conversion is necessary for a given operation/function).
@mhvk It would certainly be possible to extend dask.array to handle units, in either of the ways you suggest.
Although you could allow `Quantity` objects inside dask.arrays, I don't like that approach, because static checks like units really should be done only once, when arrays are constructed (akin to dtype checks), rather than at evaluation time, for every chunk. This suggests that tagging on the outside is the better approach.
So far, so good -- but with the current state of duck array typing in NumPy, it's really hard to be happy with this. Until `__numpy_ufunc__` lands, we can't override operations like `np.sqrt` in a way that is remotely feasible for dask.arrays (we can't afford to load big arrays into memory). Likewise, we need overrides for standard NumPy array utility functions like `concatenate`. But the worst part is that the lack of standard interfaces means we lose the possibility of composing different array backends with your `Quantity` type -- it will only be able to wrap dask or numpy arrays, not sparse matrices or bolt arrays or some other type yet to be invented.
Once we have all that duck-array stuff, then yes, you certainly could write a duck-array `Quantity` type that can wrap generic duck arrays. But something like `Quantity` really only needs to override compute operations so that they can propagate dtypes -- there shouldn't be a need to override methods like `concatenate`. If you had an actual (parametric) dtype for units (e.g., `Quantity[float64, 'meters']`), then you would get all those dtype-agnostic methods for free, which would make your life as an implementer much easier. Hence why I think custom dtypes would really be the ideal solution.
@shoyer - fair enough, and sad we don't have `__numpy_ufunc__` yet... I agree that with `Quantity` inside, one would end up duplicating work for every chunk, which makes it less than ideal even though it would probably be the easier approach to implement.

For the outside method, from the dask perspective it would indeed be easiest if units were done as a dtype, since then you can punt all the decisions to helper routines. My guess, though, is that it will be a while before numpy will include what is required to tell, e.g., that if I add something in `m` to something in `cm`, the second argument has to be multiplied by `0.01`. But astropy does provide something just like that: `quantity_helpers` exposes a dict keyed by operation, which holds functions that return the required converters given the units. E.g., in the above example, internally what happens is:
```python
converters, result_unit = UFUNC_HELPERS[np.add](np.add, *units)
result_unit
# Unit("m")
converters[0]
# None
converters[1]
# <function astropy.units.quantity_helper.get_converter.<locals>.<lambda>>
converters[1](1.)
# 0.01
```
In `dask`, you could run the converters on your individual chunks, though obviously I don't know how easy it is to add an extra step like this without slowing down other aspects too much.
p.s. For `concatenate`, you need unit conversion as well, so sadly `Quantity` does need to override that too (and currently cannot, which is rather annoying).
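The helper-dict idea above could in principle be applied per chunk without redoing the unit logic each time. Here is a minimal sketch in plain Python/NumPy; the `UNIT_HELPERS` registry, `SCALE` table, and `apply_chunked` function are made up for illustration and only stand in for astropy's real `quantity_helpers` machinery:

```python
import numpy as np

# Toy conversion factors to a base unit (meters); illustration only.
SCALE = {"m": 1.0, "cm": 0.01}

def add_helper(unit_a, unit_b):
    # Given the operand units, return per-operand converters and the
    # unit of the result, in the spirit of astropy's UFUNC_HELPERS.
    factor = SCALE[unit_b] / SCALE[unit_a]
    converters = [None, None if factor == 1.0 else (lambda x: x * factor)]
    return converters, unit_a

UNIT_HELPERS = {np.add: add_helper}

def apply_chunked(op, chunks_a, unit_a, chunks_b, unit_b):
    # Resolve the converters ONCE, then apply them to every chunk --
    # the unit logic does not need to rerun per chunk.
    converters, result_unit = UNIT_HELPERS[op](unit_a, unit_b)
    out = []
    for a, b in zip(chunks_a, chunks_b):
        a2 = a if converters[0] is None else converters[0](a)
        b2 = b if converters[1] is None else converters[1](b)
        out.append(op(a2, b2))
    return out, result_unit

chunks, unit = apply_chunked(
    np.add,
    [np.array([1.0, 2.0])], "m",
    [np.array([100.0, 200.0])], "cm",
)
print(chunks[0], unit)  # [2. 4.] m
```

The key point of the sketch is where the helper is consulted: at "graph construction" time, not once per chunk at evaluation time.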
I am one of the authors of Pint and I was just pointed here by @arsenovic.

Pint does not subclass ndarray; rather, it wraps any numerical type, dispatching to the wrapped value any attribute access that it does not understand. By defining `__array_prepare__` and `__array_wrap__`, most numpy functions and array attributes work as expected without monkey patching or having a specialized math module. For example:
```python
>>> import numpy as np
>>> import pint
>>> ureg = pint.UnitRegistry()
>>> [1., 4., 9.] * ureg.meter  # a list is interpreted as an ndarray
<Quantity([1. 4. 9.], 'meter')>
>>> np.sqrt(_)
<Quantity([ 1.  2.  3.], 'meter ** 0.5')>
>>> _.sum()
<Quantity(6.0, 'meter ** 0.5')>
```
I think something similar can be done for xarray.
@hgrecco - for astropy's `Quantity`, we currently also rely on `__array_prepare__` and `__array_wrap__`. The main annoyances are (1) one cannot change the input before a numpy ufunc is called, and therefore often has no choice but to let a wrong calculation proceed; (2) proper recognition in non-ufunc functions is sparse (e.g., `np.dot`, etc.; see http://docs.astropy.org/en/latest/known_issues.html#quantity-issues).
Aside: at some point I'd hope to get the various implementations of units to talk together: it would be good to have an API that works such that units are inter-operable.
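Annoyance (1) is easy to see with a toy `ndarray` subclass. This is an illustration only, not astropy's actual implementation: the `__array_finalize__` hook lets a unit attribute tag along with ufunc results, but gives no chance to adjust the inputs, or the unit, per operation:

```python
import numpy as np

class UnitArray(np.ndarray):
    """Toy ndarray subclass carrying a unit string (illustrative only)."""

    def __new__(cls, data, unit=""):
        obj = np.asarray(data).view(cls)
        obj.unit = unit
        return obj

    def __array_finalize__(self, obj):
        # Called whenever a new UnitArray is created (views, ufunc
        # results), so the unit attribute tags along unchanged --
        # nothing here gets to *convert* it.
        self.unit = getattr(obj, "unit", "")

a = UnitArray([1.0, 4.0, 9.0], unit="meter")
b = np.sqrt(a)
print(b, b.unit)
```

Note that `np.sqrt` happily carries `'meter'` through unchanged, where the physically correct unit would be `meter ** 0.5` -- exactly the kind of silent wrongness this thread worries about when the hooks are too weak.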
@hgrecco Are you suggesting that pint could wrap xarray objects, or that xarray could wrap pint? Either is certainly possible, though I'm a bit pessimistic that we can come up with a complete solution without the numpy fixes we've been discussing.
Also, just to note, xarray contains a `Dataset` type for representing collections of variables that often have different units (e.g., temperature and pressure). That suggests to me that it could make more sense to put pint and/or astropy.Quantity objects inside xarray arrays rather than the other way around.
I'd just like to chime in and say that this feature would really be sweet. I always find myself doing a lot of work to handle/convert different units. It seems that adding units to labeled axes does a lot to describe a set of data.
@shoyer When we prototyped Pint we tried putting Quantity objects inside a numpy array. It was working fine, but the performance and memory hit was too large. We were convinced that our current design was right when we wrote the first code using it. The case might be different with xarray. It would be nice to see some code using xarray and units (as if this were an already implemented feature).
@mhvk I do agree with your views. We also mention these limitations in the Pint documentation. Wrapping (instead of subclassing) adds another issue: some NumPy functions do not recognize a Quantity object as an array. Therefore any function that calls `numpy.asanyarray` will erase the information that this is a quantity (see my issue here: numpy/numpy#4072).
In any case, as was mentioned before in the thread, custom dtypes and duck typing will be great for this.
In spite of these limitations, we chose wrapping because we want to support quantities even if NumPy is not installed. It has worked really well for us, covering most of the common cases even for numpy arrays.
Regarding interoperating: it would be great. It would be even better if we could move to one, blessed solution under the pydata umbrella (or similar).
Not to be pedantic, but just one more :+1: on ultimately implementing units support within xarray -- that would be huge.
If anyone is excited about working on the NumPy improvement we need to make this more feasible (namely, custom dtypes and duck typing) at BIDS, you should talk to @njsmith.
I agree that custom dtypes are the right solution (and I'll go dig some more there). In the meantime, I'm not sure why you couldn't wrap an xarray `DataArray` in one of pint's `Quantity` instances. With the exception of also wanting units on coordinates, this seems like a straightforward way to get at least some unit functionality.
+1 for units support. I agree, parametrised dtypes would be the preferred solution, but I don't want to wait that long (I would be willing to contribute to that end, but I'm afraid that would exceed my knowledge of numpy).
I have never used dask. I understand that the support for dask arrays is a central feature for xarray. However, the way I see it, if one would put a (unit-aware) ndarray subclass into an xarray, then units should work out of the box. As you discussed, this seems not so easy to make work together with dask (particularly in a generic way). However, shouldn't that be an issue that the dask community anyway has to solve (i.e.: currently there is no way to use any units package together with dask, right)? In that sense, allowing such arrays inside xarrays would force users to choose between dask and units, which is something they have to do anyway. But for a big part of users, that would be a very quick way to units!
Or am I missing something here? I'll just try to monkeypatch xarray to that end, and see how far I get...
@burnpanck Take a look at the approach described in #988 and let me know if you think that sounds viable.
NumPy subclasses inside xarray objects would probably mostly work, if we changed some internal uses of `np.asarray` to `np.asanyarray`. But it's also a pretty big rabbit hole. I'm still not sure there are any good ways to do operations like `concatenate`.
What I don't like about that approach is that I essentially end up with a separate, distinct implementation of units. I am afraid that I will have to re-implement many of the helpers that I wrote for physical quantities to be xarray-aware. Furthermore, one important aspect of units packages is that they prevent you from making conversion mistakes. But that only works as long as you don't forget to carry the units with you. Having units just as attributes on xarray makes losing them as simple as forgetting to read the attributes when accessing the data.
The units inside xarray approach would have the advantage that whenever you end up accessing the data inside xarray, you automatically have the units with you.
From a conceptual point of view, the units are really an integral part of the data, so they should sit right there with the data. Whenever you do something with the data, you have to deal with the units. That is true whether it is implemented as an attribute handler or directly on the data array. My fear is that attributes leave the impression of "optional" metadata, which is too easily lost: e.g., xarray doesn't call its ufunc hook for some operation where it should, and you silently lose units. My hope is that with nested arrays that carry units, you would instead fail verbosely. Of course, `np.concatenate` is precisely one of those cases where unit packages struggle to get their hook in (and where units on dtypes would help). So they fight the same problem. Nonetheless, these problems are known and solved as well as possible in the units packages, but in xarray one would have to deal with them all over again.
Or another way to put it: while typical metadata/attributes are only relevant if you eventually read them (which is where you will notice if they were lost on the way), units are different: they work silently behind the scenes at all times, even if you do not explicitly look for them. You want an addition to fail if units don't match, without having to explicitly test first whether the operands have units. So what should the ufunc hook do if it finds two Variables that don't seem to carry units: raise an exception? Most probably not, as that would prevent using xarray without units at the same time. So if the units are lost on the way, you might never notice, but end up with wrong data. To me, that is just not unlikely enough to happen, given the damage it can do (e.g. the time it takes to find out what's going on once you realize you are getting wrong data).
So for now, I'm hunting for `np.asarray`.
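For anyone following along, the stripping that `np.asarray` causes is easy to reproduce with a minimal `ndarray` subclass (the `Tagged` class here is invented for illustration):

```python
import numpy as np

class Tagged(np.ndarray):
    """Minimal ndarray subclass carrying extra state (illustrative only)."""

    def __new__(cls, data, tag=None):
        obj = np.asarray(data).view(cls)
        obj.tag = tag
        return obj

    def __array_finalize__(self, obj):
        self.tag = getattr(obj, "tag", None)

t = Tagged([1, 2, 3], tag="meters")

# np.asarray drops the subclass (and with it the tag) ...
print(type(np.asarray(t)))   # <class 'numpy.ndarray'>

# ... while np.asanyarray preserves it.
print(type(np.asanyarray(t)), np.asanyarray(t).tag)
```

This is exactly why the internal `np.asarray` calls in xarray are the thing to hunt for: any one of them silently discards a unit-carrying subclass.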
@burnpanck - thanks for the very well-posed description of why units are so useful not as mere metadata, but as an integral property. Of course, this is also why making them part of a new dtype is a great idea! But failing that, I'd agree that it has to be part of something like an `ndarray` subclass; this is indeed what we do in `astropy.units.Quantity` (and `concatenate` does not work for us either...).
Now, off-topic but still: what is a little less wonderful is that there seem to be quite a few independent units implementations around (even just in astronomy, there is that of `amuse`; ours is based on things initially developed by `pynbody`). It may well be hard to merge them at this stage, but it would be good to think about how we could at least interoperate...
In order to maintain a list of currently relevant issues, we mark issues as stale after a period of inactivity. If this issue remains relevant, please comment here; otherwise it will be marked as closed automatically.
This is still relevant. Hopefully the advent of `__array_function__` in NumPy will make this easier/possible.
@rabernat's recent post inspired me to check out this issue. What would this issue entail now that `__array_function__` is in numpy? Is there some reason this is more complicated than adding an appropriate `__array_function__` to `pint`'s quantity class?
Three things will need to change internally in xarray:

1. `.data` is currently required to return a NumPy or dask array. This will need to be relaxed to include "any duck array type". (For now, we can store an explicit list of these types.)
2. Update `xarray/core/duck_array_ops.py` to use NumPy's API when `__array_function__` is enabled, instead of our ad-hoc checks. Eventually (once our minimum required numpy version is 1.17), we should be able to delete most of `duck_array_ops` entirely!
3. `DataArray.units` should be redirected to pull out `DataArray.data.units`.

Probably worth pinging @dopplershift again. He has wrestled with this a lot.
> 2. once our minimum required numpy version is 1.17

@shoyer - what would be an approximate time frame for this?
(I just added a third bullet to my list above)
> once our minimum required numpy version is 1.17
>
> @shoyer - what would be an approximate time frame for this?

First, we'll need to wait for NumPy 1.17 to be released :). But more seriously, if we do a breaking release of xarray we can probably get away with bumping the required NumPy version significantly.
It's definitely a smoother experience for users if we allow at least slightly older versions of NumPy (e.g., so they can use newer xarray with a version of NumPy pre-packaged with their system), but if keeping existing things working with the current version of NumPy is a pain, then it may be worth upgrading the minimum required version.
One additional issue: it seems like `pint` has some odd behavior with dask. Multiplication (and I assume addition) is not commutative:

```python
In [42]: da.ones((10,)) * ureg.m
Out[42]: dask.array<mul, shape=(10,), dtype=float64, chunksize=(10,)>

In [43]: ureg.m * da.ones((10,))
Out[43]: dask.array<mul, shape=(10,), dtype=float64, chunksize=(10,)> <Unit('meter')>
```
I would really like to see units support in xarray, and I'm just wondering what the barrier to contribution to this issue is? Like is this a "leave @shoyer to it" kind of task? Or is it something which less experienced developers (such as myself) can help with?
> One additional issue. It seems like pint has some odd behavior with dask. Multiplication (and I assume addition) is not commutative:
This would be a good issue to fix upstream, by resolving whether dask should wrap pint or pint should wrap dask :).
> I would really like to see units support in xarray, and I'm just wondering what the barrier to contribution to this issue is? Like is this a "leave @shoyer to it" kind of task? Or is it something which less experienced developers (such as myself) can help with?
I don't think there's a huge barrier to entry here and I would encourage someone else to dive into this. We could start by adding an experimental flag to xarray to disable coercing everything to numpy arrays, and do some experiments to see what works.
> This would be a good issue to fix upstream, by resolving whether dask should wrap pint or pint should wrap dask :).
For what it's worth, I suspect the optimal wrapping order is: xarrays > pint > dask > numpy. This is because it's useful to get unit compatibility errors at "graph construction time" rather than "runtime".
@shoyer I agree with that wrapping order. I think I'd also be in favor of starting with an experiment to disable coercing to arrays.
@nbren12 The non-commutative multiplication is a consequence of operator dispatch in Python, and the reason why we want `__array_function__` from numpy. Your first example dispatches to `dask.array.__mul__`, which doesn't know anything about pint and doesn't know how to compose its operations because there are no hooks -- the pint array just gets coerced to a numpy array. The second goes to `pint.Quantity.__mul__`, which assumes it can wrap the `dask.array` (because of duck typing) and seems to succeed in doing so.
> Your first example dispatches to `dask.array.__mul__`, which doesn't know anything about pint and doesn't know how to compose its operations because there are no hooks -- the pint array just gets coerced to a numpy array. The second goes to `pint.Quantity.__mul__`, which assumes it can wrap the `dask.array` (because of duck typing) and seems to succeed in doing so.
Unfortunately I don't think either dask or pint handle this properly right now.
There is a protocol for Python's `*` operator, which involves calling the `__mul__` and `__rmul__` methods. But if both dask and pint always return a result instead of `NotImplemented`, it is impossible to ensure a consistent result for `a * b` and `b * a` if `a` and `b` are different types. (This exact same issue exists for `__array_function__`, too, because the dispatching protocol is copied from Python.)
Dask and pint need some system -- either opt-in or opt-out -- for recognizing that they cannot handle operations with some argument types.
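A toy sketch of such an opt-out (these classes are invented for illustration; neither pint nor dask actually works this way today): the chunked container returns `NotImplemented` for the unit-carrying type, so both operand orders land in the unit type's methods, and the wrapping order comes out the same either way:

```python
class UnitValue:
    """Toy unit-carrying wrapper (stand-in for a pint-like Quantity)."""

    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def __mul__(self, other):
        if isinstance(other, UnitValue):
            return UnitValue(self.value * other.value, f"{self.unit}*{other.unit}")
        # Wrap anything else (including Chunked) as the magnitude.
        return UnitValue(self.value * other, self.unit)

    # Only valid because this toy product commutes.
    __rmul__ = __mul__

    def __repr__(self):
        return f"UnitValue({self.value!r}, {self.unit!r})"

class Chunked:
    """Toy chunked container (stand-in for a dask-like array)."""

    def __init__(self, value):
        self.value = value

    def __mul__(self, other):
        if isinstance(other, UnitValue):
            # Opt out: let UnitValue.__rmul__ wrap us instead.
            return NotImplemented
        return Chunked(self.value * other)

    __rmul__ = __mul__

    def __repr__(self):
        return f"Chunked({self.value!r})"

q = UnitValue(2.0, "m")
c = Chunked(3.0)
print(c * q)  # Python falls back to q.__rmul__(c), so units stay on the outside
print(q * c)
```

Both orders now produce a `UnitValue` wrapping a `Chunked`, matching the "units outside, chunks inside" ordering argued for elsewhere in this thread.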
Another place to get started with this would be implementing `__array_function__` in pint: https://github.com/hgrecco/pint/issues/790
Would `__array_function__` solve the problem with operator precedence? I thought they were separate issues, because `__mul__` and `__rmul__` need not call any `numpy` functions, and will therefore not necessarily dispatch to `__array_function__`.
`__array_function__` and `__array_ufunc__` have the exact same operator precedence issues as `__mul__`/`__rmul__`. In all cases, properly written methods should return `NotImplemented` when they don't recognize the other operand's type.
There's a whole section in NEP 13 about this: http://www.numpy.org/neps/nep-0013-ufunc-overrides.html#type-casting-hierarchy
> Would `__array_function__` solve the problem with operator precedence? I thought they are separate issues because `__mul__` and `__rmul__` need not call any `numpy` functions, and will therefore not necessarily dispatch to `__array_function__`.
Let me try to answer this more clearly: these are independent examples of the same problem.
Indeed, all of us over-riders have to start returning `NotImplemented` if we don't know what the other class is - and I write this having recently realized that in astropy's `Quantity` I have caused similar problems by trying to make a guess (despite being admonished not to do that). Of course, numpy's `ndarray` is the worst culprit here, just coercing everything to array.
> By defining `__array_prepare__` and `__array_wrap__`, most numpy functions and array attributes work as expected without monkey patching or having a specialized math module.
@hgrecco "Most numpy functions" is a bit of an overstatement. Really important functions like `np.dot` do not work with `Pint`.
In light of the recent activity with `__array_function__` in #3117, I took a quick look to see if it worked with Pint as modified in https://github.com/hgrecco/pint/pull/764. The basics of sticking a Pint `Quantity` in a `DataArray` seem to work well, and perhaps the greatest issues are on Pint's end... right now https://github.com/hgrecco/pint/pull/764 is limited in the functions it handles through `__array_function__`, and there are some quirks with operator precedence.
However, the other main problem was that coordinates did not work with `Quantity`s. Looking again at https://github.com/pydata/xarray/issues/1938#issuecomment-510953379 and #2956, this is not surprising. I'm curious, though, what it would take to let indexing work with Pint (or other unit arrays)? For most of my use cases (meteorological analysis as in MetPy), having units with coordinates is just as important as having units with the data itself. I'd be interested in helping implement it, but I would greatly appreciate some initial direction, since I'm new to that part of the xarray codebase.
Also, cc @keewis, since I saw in #2956 you have a `unit-support` branch that looks like it attempts to extend `NumpyIndexingAdapter` to work with unit arrays, but still has the coordinates-with-units tests marked as xfail.
In that branch I left it as xfail because I came to the conclusion that there was nothing I could do (directly at least): when creating a `DataArray`, the coords go through `as_variable()`, which puts the coord array in a `Variable` and, for dimensions at least, calls `Variable.to_index_variable()`. In there the variable is converted to an `IndexVariable`, and the array goes into a `PandasIndexAdapter`, where it is passed to `np.asarray` (this would probably have to be removed/changed) and then to `pandas.Index`, which is where the units get stripped -- which can be verified by directly passing a unit array to it.

The units of coordinates that are not dimensions are not stripped:
```python
>>> ureg = pint.UnitRegistry()
>>> v = np.arange(10 * 20).reshape(10, 20) * ureg.m / ureg.s
>>> d = np.arange(10) * ureg.m
>>> d2 = d.to(ureg.cm)
>>> t = np.arange(20) * ureg.s
>>> array = xr.DataArray(data=v, dims=('d', 't'), coords={'d': d, 'd2': ('d', d2), 't': t})
>>> array.d.data
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> array.d2.data
<Quantity([  0. 100. 200. 300. 400. 500. 600. 700. 800. 900.], 'centimeter')>
```
However, that branch is a quick hack, and I would suspect that supporting duck arrays has a similar effect.
Thank you for the insight!
So if I'm understanding things correctly as they stand now, dimension coordinates store their values internally as a `pandas.Index`, which would mean that implementing this directly becomes an upstream issue in pandas: allowing an ndarray-like unit array inside a `pandas.Index`. Based on what I've seen on the pandas side, this looks far from straightforward.
With that in mind, would "dimension coordinates with units" (or more generally "dimension coordinates with `__array_function__` implementers") be another use case that best falls under flexible indices (#1603)?
(In the meantime, I would guess that the best workaround is using an accessor interface to handle unit-related operations on coordinates, since the `attrs` are preserved.)
I think we could do basic indexes with units after steps (1) and (2) in the big index refactor plan: https://github.com/pydata/xarray/issues/1603#issuecomment-511126208
At that point, indexes will be something that is propagated entirely separately from arrays. So even if the index gets cast into a `pandas.Index`, the corresponding coordinate array will stick around.
The next level of support would be "unit array indexing", e.g., `ds.sel(x=1000*u.meters)`. This will require an API for letting you define your own index classes -- something we definitely want in the long term, but which will take more work to realize.
With the progress being made with https://github.com/pydata/xarray/pull/2956, https://github.com/pydata/xarray/pull/3238, and https://github.com/hgrecco/pint/pull/764, I was thinking that now might be a good time to work out the details of the "minimal units layer" mentioned by @shoyer in https://github.com/pydata/xarray/issues/525#issuecomment-482641808 and https://github.com/pydata/xarray/issues/988#issuecomment-413732471?
I'd be glad to try putting together a PR that could follow up on https://github.com/pydata/xarray/pull/3238 for it, but I would want to ask for some guidance.

(For reference, below is the action list from https://github.com/pydata/xarray/issues/988#issuecomment-413732471.)

- The `DataArray.units` property could forward to `DataArray.data.units`.
- A `DataArray.to` or `DataArray.convert` method could call the relevant method on data and re-wrap it in a DataArray.
- A minimal layer on top of xarray's netCDF IO could handle unit attributes by wrapping/unwrapping arrays with pint.
#### `DataArray.units`

Having `DataArray.units` forward to `DataArray.data.units` should work for `pint`, `unyt`, and `quantities`, but should a fallback to `DataArray.data.unit` be added for `astropy.units`? Also, how should `DataArray.units` behave if `DataArray.data` does not have a "units" or "unit" attribute, but `DataArray.attrs['units']` exists?
#### `DataArray.to()` / `DataArray.convert()`

`DataArray.to()` would be consistent with the methods for `pint`, `unyt`, and `astropy.units` (the relevant method for `quantities` looks to be `.rescale()`); however, it is very similar to the numerous output-related `DataArray.to_*()` methods. Is this okay, or would `DataArray.convert()` or some other method name be better, to avoid confusion?
#### Units and IO

While wrapping and unwrapping arrays with `pint` itself should be straightforward, I really don't know what the best API for it should be, especially for input. Some possibilities that came to mind (by no means an exhaustive list):

- an option to `open_dataset()` that accepts a quantity constructor (like `ureg.Quantity` in pint) that is applied within each variable

With any of these, tests for lazy loading would be crucial (I don't know yet how pint will handle that).

Output may be easier: I was thinking that unwrapping could be done implicitly, by automatically putting `str(DataArray.units)` as the "units" attribute and replacing the unit array with its magnitude/value?
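A possible shape for that output step, sketched with a stand-in quantity class (`FakeQuantity` and `unwrap_for_io` are hypothetical names for this sketch, not pint or xarray API):

```python
import numpy as np

class FakeQuantity:
    """Stand-in for a pint-like quantity: a magnitude plus a units object."""

    def __init__(self, magnitude, units):
        self.magnitude = np.asarray(magnitude)
        self.units = units

def unwrap_for_io(data, attrs):
    """Return (plain array, attrs) with the units moved into attrs['units'].

    Duck-typed: anything exposing .units and .magnitude gets unwrapped;
    everything else passes through untouched.
    """
    if hasattr(data, "units") and hasattr(data, "magnitude"):
        attrs = dict(attrs, units=str(data.units))
        data = data.magnitude
    return data, attrs

arr, attrs = unwrap_for_io(FakeQuantity([1.0, 2.0], "meter"), {"long_name": "x"})
print(arr, attrs)  # [1. 2.] {'long_name': 'x', 'units': 'meter'}
```

The inverse (input) direction would re-wrap on load by popping `attrs["units"]` and applying the quantity constructor, which is where the lazy-loading question above becomes important.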
#### Extra questions based on the sparse implementation

Will a set of repr functions for each unit array type need to be added, like they were for sparse in https://github.com/pydata/xarray/pull/3211? Or should some more general system be implemented, because of all the possible combinations that would arise with other duck array types?

Also, what is the expected behavior of unit arrays with regard to the soon-to-be-implemented `.to_dense()`/`.to_numpy_data()`/`.to_numpy()` conversion methods?
For the general xarray method, I think we would probably want something like `DataArray.units_convert` or `DataArray.units_to`. Or potentially this could use an accessor, e.g., `DataArray.pint.to` (which will always be a fallback option).

For the Dataset repr, it would probably be nice to print the units along with some of the array values. So yes, this could probably use some custom logic for recognizing quantity types, among other duck array types. If the number of distinct array types starts to get burdensomely large, we could expose an interface for registering new ones, e.g., `xarray.register_inline_array_repr(array_type, inline_repr)`.
For rolling out a new units attribute and/or IO integration, we will need to be careful to preserve backwards compatibility for now (at least with a warning). I'm sure there is lots of code that expects `array.attrs['units']` to be a string attribute today, so we should consider our options carefully before breaking all that code. The conservative choice would be to keep existing uses working unchanged for now (as a fallback), and save breaking changes for later, once we are confident we know the right solution.
@shoyer Thank you for the reply!
That sounds good about the repr custom logic.
With the units attribute, I was presuming based on the past comments that `DataArray.units` would be a new property; I forgot that `DataArray.<attrname>` passes along to `DataArray.attrs.<attrname>`, so that implementing something new for `DataArray.units` would be a breaking change! In trying to avoid such a change, though, I think it would be confusing to have a DataArray-level `DataArray.units_convert` method and not a corresponding DataArray-level way of getting at the units. So, would it be okay to just implement this unit interface (unit access, unit conversion, and IO) through an accessor, and start out with just a pint accessor? If so, where should it be implemented?
Possible ideas I had:
> With the units attribute, I was presuming based on the past comments that `DataArray.units` would be a new property; I forgot that `DataArray.<attrname>` passes along to `DataArray.attrs.<attrname>`, so that implementing something new for `DataArray.units` would be a breaking change!

I think the new property is still an option, even if we want to preserve accessing `"units"` from `attrs`, e.g.,
```python
@property
def units(self):
    if hasattr(self.data, 'units'):
        # data is an array with units
        return self.data.units
    elif 'units' in self.attrs:
        # consider issuing a FutureWarning here?
        return self.attrs['units']
    else:
        raise AttributeError('units')
```
One thing that bothers me a lot is that pandas lacks good unit support (e.g. quantities, pint, ...). Is there a chance that xray will support it?