Since Checkita operates on the Spark RDD API, it processes data as Spark Row instances. A Row stores values of various types, so we have to guess (pattern match) the actual runtime type of a value prior to converting it. We follow the Spark SQL to JVM type mapping as per the Spark documentation.
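The idea can be sketched roughly as follows; `castToDouble` is a hypothetical helper for illustration, not Checkita's actual API, and the set of matched types is an assumption based on the Spark SQL to JVM mapping:

```scala
import scala.util.Try

// Sketch: a Row cell arrives as Any, so we pattern match on its
// runtime JVM type before converting it to the type a metric needs.
def castToDouble(value: Any): Option[Double] = value match {
  case d: Double               => Some(d)
  case f: Float                => Some(f.toDouble)
  case l: Long                 => Some(l.toDouble)
  case i: Int                  => Some(i.toDouble)
  case bd: java.math.BigDecimal => Some(bd.doubleValue)
  case s: String               => Try(s.toDouble).toOption
  case _                       => None // unsupported type: no conversion
}
```

Returning an `Option` lets the caller treat an unconvertible value as a metric error rather than throwing mid-computation.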
For set-based metrics (distinctValues and duplicateValues) it is crucial to have a unique string representation for each unique tuple of column values. A recursive sequence-to-string casting method is implemented for that.
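A minimal sketch of such a recursive casting, assuming a hypothetical `seqToString` helper (not necessarily Checkita's actual implementation): nested sequences are rendered with explicit delimiters so that distinct tuples cannot collapse into the same string.

```scala
// Sketch: recursively cast a (possibly nested) sequence of column
// values to a single deterministic string. Delimiters keep the
// representation unambiguous, e.g. Seq("a", "b,c") and
// Seq("a,b", "c") map to different strings.
def seqToString(value: Any): String = value match {
  case seq: Seq[_] => seq.map(seqToString).mkString("[", ",", "]")
  case other       => String.valueOf(other) // null-safe for plain values
}
```

With this, a tuple of columns such as `Seq("a", Seq(1, 2))` is rendered as `[a,[1,2]]`, which can then be used as a set element or hash key for distinct/duplicate counting.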
In addition, some debug logs were added for metric error collection.