Closed
nevillelyh closed this issue 4 years ago
I'm thinking maybe an extra featureStatistics method on FeatureExtractor, so this is done post-transformation. We can build one Algebird Moments per column easily.
Optionally we can also let users opt in to a subset of transformers, but that's extra complexity.
OTOH I'm not sure we can compute stats pre-transformation, since it doesn't make sense for all inputs, e.g. strings and vectors.
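To make the post-transform idea concrete, here is a minimal sketch of per-column moment aggregation over transformed rows. `Stats` and `featureStatistics` are hypothetical stand-ins for illustration; in practice Algebird's `Moments` monoid would play the role of `Stats`:

```scala
// Sketch only: per-column count/mean/variance over post-transform output,
// assuming each row is a dense Array[Double]. `Stats` mimics what Algebird's
// Moments monoid provides, using a Welford-style update for stability.
final case class Stats(count: Long, mean: Double, m2: Double) {
  def +(x: Double): Stats = {
    val n = count + 1
    val delta = x - mean
    val newMean = mean + delta / n
    Stats(n, newMean, m2 + delta * (x - newMean))
  }
  // Sample variance; 0.0 when there are fewer than two observations.
  def variance: Double = if (count > 1) m2 / (count - 1) else 0.0
}

object Stats {
  val zero: Stats = Stats(0L, 0.0, 0.0)
}

// Hypothetical featureStatistics: fold all rows into one Stats per column.
def featureStatistics(rows: Seq[Array[Double]]): Array[Stats] =
  rows.foldLeft(Array.fill(rows.head.length)(Stats.zero)) { (acc, row) =>
    acc.zip(row).map { case (s, x) => s + x }
  }
```

Because the update is associative when expressed as a merge of two `Stats`, the same accumulation works in a distributed reduce pass, which is what makes the monoid framing attractive here.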
FWIW here are a few things I've commonly checked in the past:
@yonromai questions:
So the pre-transform stats are potentially doable in the same reduce pass as feature settings, but the post-transform stats definitely require another reduce pass. Both should be opt-in, obviously. We could also warn in cases of high-dimensional features like *-hot encoders.
@richwhitjr do you think it's worth doing the pre-transform stats in the same reduce pass as feature settings? IMO it's complex and probably not worth it, since the user is most likely doing this ad hoc to explore data.
Seems complex, and it would be hard to mix monoids as needed. For example, the transformation may want a QTree, but for stats you would need a Moments monoid. An ad hoc "analysis" phase sounds promising though. I wonder whether this would require another type of Spec, or if users would expect the same type of stats for the same transformers.
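That said, two different monoids can still be accumulated in one pass by pairing them, since a tuple of monoids is itself a monoid (this is how Algebird derives tuple monoids). A minimal plain-Scala sketch, with `MinMax` and `MeanAcc` as hypothetical stand-ins for the transformer's QTree and the stats' Moments monoid:

```scala
// Sketch only: two unrelated aggregations merged in a single reduce pass.
// MinMax stands in for the transformation-side aggregator (e.g. a QTree),
// MeanAcc for the stats-side aggregator (e.g. a Moments monoid).
final case class MinMax(min: Double, max: Double) {
  def merge(that: MinMax): MinMax =
    MinMax(math.min(min, that.min), math.max(max, that.max))
}

final case class MeanAcc(count: Long, sum: Double) {
  def merge(that: MeanAcc): MeanAcc =
    MeanAcc(count + that.count, sum + that.sum)
  def mean: Double = if (count == 0) 0.0 else sum / count
}

// The pair (MinMax, MeanAcc) merges component-wise, so one reduce
// pass over the data accumulates both aggregations at once.
def reduceBoth(xs: Seq[Double]): (MinMax, MeanAcc) =
  xs.map(x => (MinMax(x, x), MeanAcc(1L, x)))
    .reduce { case ((a1, b1), (a2, b2)) => (a1.merge(a2), b1.merge(b2)) }
```

The tradeoff is that every record pays the cost of both aggregations, which is another argument for making the stats opt-in.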
Could be nice to have it output the protobuf format that is required by Facets so that we get the feature visualizations for free.
See: https://github.com/PAIR-code/facets/blob/master/facets_overview/proto/feature_statistics.proto
@marcromeyn Facets seems to support a lot more than what we discussed here. Just checking whether we can drop some of it to narrow the scope.
Array[Double]) fit in here? Seems it could be a lot of work to replicate all the logic in Facets. I'm wondering if it's easier and better to just sample in featran and do the statistics summarization in Facets?
@marcromeyn @yonromai @richwhitjr thoughts?
I like the idea of keeping the statistics summarization internal but making it easy for someone to take the stats and dump them to something like Facets. In the future we could have a sub-project to help do this in one step.
I just worry about the serialization and dependency issues we may run into when introducing a new library, since featran has to support a lot of different distributed systems.
It turns out that Facets has the ability to import TFRecord files and compute stats on them. I tried it on my data and it seems to work fine, although it's quite slow on big files. So it should be really easy to sample TFRecord files from the featran output and import them into Facets.
It's easier to do this in TFDV, closing.
Could be useful for debugging. A couple of thoughts