In Spark, any object can be used as a column type. While the built-in column types have ordered domains and thus can be bounded through ranges, this is not the case for arbitrary UDTs. For such data types we should fall back to boolean attribute-level annotations. This requires changes to `CaveatRangeExpression` to handle both annotation types, as well as to any code that assumes a fixed annotation structure, such as `CaveatRangePlan` and `CaveatRangeEncoding`.
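The type-based dispatch described above could be sketched roughly as follows. This is a minimal illustration using a toy ADT standing in for Spark's `DataType` hierarchy; the names `supportsRanges`, `RangeAnnotation`, and `BooleanAnnotation` are hypothetical and not the actual `CaveatRangeEncoding` API:

```scala
// Toy stand-ins for Spark's DataType hierarchy (illustrative only).
sealed trait DataType
case object IntegerType extends DataType
case object StringType extends DataType
case class UserDefinedType(name: String) extends DataType

sealed trait AttrAnnotation
// Range annotation: lower/upper bounds on the attribute's possible values.
case class RangeAnnotation[T](lower: T, upper: T) extends AttrAnnotation
// Fallback for types without an ordered domain: a single certainty flag.
case class BooleanAnnotation(certain: Boolean) extends AttrAnnotation

// Built-in types have ordered domains; arbitrary UDTs may not.
def supportsRanges(dt: DataType): Boolean = dt match {
  case IntegerType | StringType => true
  case _: UserDefinedType       => false
}

// Pick the annotation scheme per attribute based on its data type.
def defaultAnnotation(dt: DataType, value: Any): AttrAnnotation =
  if (supportsRanges(dt)) RangeAnnotation(value, value)
  else BooleanAnnotation(certain = true)
```

Code like `CaveatRangePlan` would then need to pattern-match on the annotation type rather than assume every attribute carries a range.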