Hello Florian, what use case do you have that depends on the difference between NaN and None in pandas? They seem to be aligned in recent versions of pandas to express the same notion of missingness. According to the warning in this section: https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html#values-considered-missing None will behave like NaN inside a pandas DataFrame (as opposed to other places in Python).
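A quick check of that behaviour, as a minimal sketch with plain pandas (exact output may vary slightly by version):

```python
import numpy as np
import pandas as pd

# Inside an object-dtype Series, both None and np.nan count as missing.
s = pd.Series(['a', None, np.nan])
print(s.isna())
# 0    False
# 1     True
# 2     True
# dtype: bool
```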
Tim
On Wed, May 8, 2019 at 10:29 AM Florian Schäfer notifications@github.com wrote:
When converting a DataFrame/Series of type object (i.e. Strings) with np.nan values to Koalas DataFrames and back, the former np.nan values are replaced with None as can be seen below:
>>> ks.Series(['a', np.nan]).to_pandas()
0       a
1    None
Name: 0, dtype: object
However, the following output would be expected instead:
0      a
1    NaN
Name: 0, dtype: object
I assume this behavior is caused by the fact that Spark does not support NaN values for String columns but uses None instead. Consequently, there is probably no definite way to decide whether a None value in Spark should be converted to a Python NaN or None at the time when the conversion from Spark to pandas happens. However, I would argue that in doubt converting to NaN makes more sense in most cases than None and should thus be the default.
What is your opinion on this?
Hi Timothy,
I agree that in most cases the difference between NaN and None is negligible. One example where it matters, however, is #264, where I have to expect None values from Koalas' merge() operation for the unit tests to pass. While this technically works, it feels awkward since the respective pandas method would return NaN instead of None. For the sake of consistency, I would prefer if Koalas also returned NaN in that case.
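To illustrate the expectation (this is just a plain-pandas sketch, not the actual test from #264): in a left join with no matching right-hand row, pandas fills the gap with NaN, which is what the unit tests would ideally be able to assert against.

```python
import pandas as pd

# A left join with an unmatched key: pandas fills the missing cell with NaN,
# whereas the Koalas merge discussed here comes back with None for object columns.
left = pd.DataFrame({'key': [1, 2], 'lval': ['a', 'b']})
right = pd.DataFrame({'key': [1], 'rval': ['x']})
print(left.merge(right, on='key', how='left'))
#    key lval rval
# 0    1    a    x
# 1    2    b  NaN
```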
There's a pretty good discussion of this in this post.
In my opinion the main reason to use NaN (over None) is that it can be stored with numpy's float64 dtype, rather than the less efficient object dtype, see NA type promotions.
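For illustration, a minimal sketch of that dtype difference (dtypes shown are for a default NumPy-backed pandas build):

```python
import numpy as np
import pandas as pd

# NaN is a float, so it fits the efficient float64 dtype.
print(pd.Series([1.0, np.nan]).dtype)                        # float64

# None arriving via an object array keeps the less efficient object dtype.
print(pd.Series(np.array([1.0, None], dtype=object)).dtype)  # object
```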
Thanks for the input @tdliu! Wouldn't this be a good reason for Koalas to also use NaN @thunterdb?
Note that performance is not going to be an issue here in any case: the internal representation of None in Spark is a bit mask, and the scalar values are always allocated.
Here is what I suggest doing when we convert the data between pandas and Spark:
@floscha does that cover all the use cases that you are thinking of? Can you write a couple of test cases to validate that this is the expected behaviour?
Also, be aware that the nullability support in Spark can be brittle; some corner cases, especially after UDFs or joins, will forget the nullability info.
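A rough sketch of what such a test case could look like (assuming the usual `databricks.koalas` import; the second assertion encodes the stricter NaN-over-None expectation discussed in this thread):

```python
import numpy as np
import pandas as pd
import databricks.koalas as ks

def test_object_column_roundtrip_keeps_nan():
    pdf = pd.DataFrame({'str': ['a', np.nan]})
    result = ks.from_pandas(pdf).to_pandas()

    # The value must come back as missing either way...
    assert result['str'].isna().iloc[1]
    # ...and, per this discussion, ideally as NaN rather than None.
    assert result['str'].iloc[1] is not None
```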
Currently the conversion works as follows:
>>> ks.from_pandas(pd.DataFrame({'float': [0.2, np.nan], 'int': [1, np.nan], 'str': ['a', np.nan]})).to_pandas()
   float  int   str
0    0.2  1.0     a
1    NaN  NaN  None
Should the result not be
   float  int  str
0    0.2  1.0    a
1    NaN  NaN  NaN
instead?
Note: we should now also take the NA marker introduced in pandas 1.0.0 into account.
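For reference, a quick look at that marker (a minimal sketch; requires pandas >= 1.0 and the opt-in nullable string dtype):

```python
import pandas as pd

# The nullable "string" dtype represents missing values as pd.NA rather than NaN or None.
s = pd.Series(['a', None], dtype='string')
print(s)
# 0       a
# 1    <NA>
# dtype: string
```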
Let me close this for now, since it can't be supported: Spark can't tell whether the null value of an object-typed column should be None or NaN when converting to pandas anyway.