An NPE can occur when converting a CH Block to a Spark UnsafeRow if the block contains a map field of size 0.
The root cause is that when the map size is 0, the converted map field's memory layout does not conform to the layout that UnsafeMapData expects.
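To illustrate the mismatch, here is a hedged sketch (not code from this repo) of the binary layout Spark's UnsafeMapData expects for an empty map, built with plain ByteBuffer so it runs without Spark on the classpath. UnsafeMapData is `[8-byte key-array size in bytes][key UnsafeArrayData][value UnsafeArrayData]`, and an UnsafeArrayData starts with an 8-byte numElements header, so even a size-0 map occupies 24 bytes rather than 0; a converter that emits zero bytes for an empty map produces exactly the kind of malformed data that can surface later as an NPE.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EmptyMapLayout {
    // Sketch of the layout UnsafeMapData expects for an empty map
    // (based on Spark's UnsafeMapData/UnsafeArrayData format):
    //   word 0: size in bytes of the key UnsafeArrayData  (= 8)
    //   word 1: key array numElements header              (= 0)
    //   word 2: value array numElements header            (= 0)
    // An empty UnsafeArrayData has no null-bit words and no element
    // data, so it is exactly its 8-byte numElements header.
    static byte[] emptyMapBytes() {
        ByteBuffer buf = ByteBuffer.allocate(24).order(ByteOrder.LITTLE_ENDIAN);
        buf.putLong(8L); // key array occupies 8 bytes (header only)
        buf.putLong(0L); // key array: numElements = 0
        buf.putLong(0L); // value array: numElements = 0
        return buf.array();
    }

    public static void main(String[] args) {
        // An empty map still needs 24 bytes of well-formed layout.
        System.out.println(emptyMapBytes().length); // 24
    }
}
```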
Caused by: java.lang.NullPointerException
at scala.math.LowPriorityOrderingImplicits$$anon$2.compare(Ordering.scala:150)
at scala.math.Ordering.equiv(Ordering.scala:103)
at scala.math.Ordering.equiv$(Ordering.scala:103)
at scala.math.LowPriorityOrderingImplicits$$anon$2.equiv(Ordering.scala:149)
at org.apache.spark.sql.catalyst.expressions.GetMapValueUtil.getValueEval(complexTypeExtractors.scala:362)
at org.apache.spark.sql.catalyst.expressions.GetMapValueUtil.getValueEval$(complexTypeExtractors.scala:342)
at org.apache.spark.sql.catalyst.expressions.GetMapValue.getValueEval(complexTypeExtractors.scala:442)
at org.apache.spark.sql.catalyst.expressions.GetMapValue.nullSafeEval(complexTypeExtractors.scala:484)
at org.apache.spark.sql.catalyst.expressions.BinaryExpression.eval(Expression.scala:574)
at org.apache.spark.sql.catalyst.expressions.Alias.eval(namedExpressions.scala:168)
at org.apache.spark.sql.catalyst.expressions.InterpretedUnsafeProjection.apply(InterpretedUnsafeProjection.scala:90)
at org.apache.spark.sql.catalyst.expressions.InterpretedUnsafeProjection.apply(InterpretedUnsafeProjection.scala:34)