lucamattiazzi opened 7 months ago
@lucamattiazzi yep seems like a reasonable feature -- there might be a bit of surgery required to pass that information through (haven't scoped it). @elijahbenizzy might also have thoughts.
Some questions to get your input on:
Reason I ask is there are other implementation options:

- the lifecycle API;
- `@check_output_custom` -- but we'd need to expose passing in function metadata information;
- `@tag` on features requiring validation.

So I'm just figuring out which of the three approaches would be better to invest in / look at.
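For illustration, the `@tag` route might look something like this -- a rough, hypothetical sketch (not Hamilton's actual API), where tags attached to a feature function tell a runner whether and how to validate its output:

```python
# Hypothetical sketch of a tag-driven validation route (not Hamilton's actual
# API): a decorator records metadata on the function, and the runner consults
# it to pick the right validation suite.

def tag(**tags):
    """Attach arbitrary metadata to a feature function."""
    def wrapper(fn):
        fn._tags = {**getattr(fn, "_tags", {}), **tags}
        return fn
    return wrapper

@tag(validate="true", suite="age_suite")
def age():
    return [30, 40, 200]

def run_with_validation(fn, suites):
    """Execute a feature function; validate its result if tagged to do so."""
    result = fn()
    tags = getattr(fn, "_tags", {})
    if tags.get("validate") == "true":
        ok = suites[tags["suite"]](result)
        return result, ok
    return result, None
```

The appeal of this route is that the validation wiring lives next to the feature definition, at the cost of the framework needing to interpret the tags.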
Thank you for the quick answer!
1. It does not matter; it's nice, but it would not change a lot for us.
2. Yes, we capture all of the failures in the validations and use the results as warnings for the end user.
3. We use GreatExpectations suites and a single validator class that uses the specific suite for each feature (that's why it would be useful for us to have the name of the feature at init or execute time).
The lifecycle API seems great, and I think we might make it work with our current implementation, but the results would be stored somewhere other than the driver, which looks a little bit worse in my opinion. Moreover, in the future we would like to be able to skip invalid rows in computations further down the DAG, and since this method only allows for side effects, I'm afraid it wouldn't work (that's not an issue for us now).
The current structure of `DataValidator` does not seem to allow a `current_node` value to be passed easily without breaking existing child classes.
Thanks for the responses, makes sense. Some follow ups:
> The lifecycle API seems great, and I think we might make it work with our current implementation, but the results would be stored somewhere other than the driver, which looks a little bit worse in my opinion.
Yes, that lifecycle adapter could house the results that you would then inspect. How are you getting the results now? (Or how would you like to get them?)
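A simplified stand-in for what such a results-housing hook could look like (this is a sketch, not Hamilton's exact lifecycle-hook signature): the adapter validates each node's result after execution and keeps the outcomes keyed by node name for later inspection.

```python
# Simplified stand-in for a lifecycle hook (not Hamilton's exact API): the
# adapter runs validation after each node executes and stores the results,
# keyed by node name, so they can be inspected after the run.

class ValidationResultsHook:
    def __init__(self, validators):
        # validators: node_name -> callable(result) -> bool
        self.validators = validators
        self.results = {}

    def run_after_node_execution(self, node_name, result):
        validator = self.validators.get(node_name)
        if validator is not None:
            self.results[node_name] = validator(result)

# After the driver finishes, hook.results would hold pass/fail per feature.
```

The trade-off raised above is visible here: the results live on the hook object rather than coming back from the driver.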
> Moreover, in the future we would like to be able to skip invalid rows in computations further down the DAG, and since this method only allows for side effects, I'm afraid it wouldn't work (that's not an issue for us now).
I don't think any of the approaches would directly enable this. My Hamilton philosophy here is that this should then be explicitly part of the DAG -- or, if it's part of a decorator, maybe a different one or a flag to enable it.
> The current structure of `DataValidator` does not seem to allow a `current_node` value to be passed easily without breaking existing child classes.
Yep. But nothing that can't be changed, since I think we can do it in a backwards-compatible way. :)
Chiming in -- so yes, this does make inherent sense. We could expose `HamiltonNode` to be the node it decorates; we'd just have to make it backwards compatible. It could be as simple as checking if the class implements `validate_with_metadata`, which would have a default implementation that calls `validate`, or something like that.
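That backwards-compatible check could be sketched roughly like this (method names as suggested above; everything else here is a stand-in, not Hamilton's actual class): the base class gains `validate_with_metadata` with a default that delegates to `validate`, so existing subclasses keep working while new ones can opt in to receiving the node name.

```python
# Sketch of the backwards-compatible approach: the base class gains
# validate_with_metadata with a default that delegates to the old validate
# hook, so legacy subclasses are untouched and new ones can use the metadata.

class DataValidator:
    def validate(self, data):
        raise NotImplementedError

    def validate_with_metadata(self, data, node_name):
        # Default implementation: ignore the metadata, call the old hook.
        return self.validate(data)

class LegacyValidator(DataValidator):
    """Old-style subclass; only implements validate, keeps working as-is."""
    def validate(self, data):
        return len(data) > 0

class NameAwareValidator(DataValidator):
    """New-style subclass; opts in to receiving the node name."""
    def validate_with_metadata(self, data, node_name):
        return node_name.startswith("feature_") and len(data) > 0

# The framework would always call validate_with_metadata; legacy subclasses
# transparently fall back to validate.
```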
It's interesting -- usually we recommend that users put the configuration closer to their code (i.e. in the parameters themselves), but if it's a lot of configuration this can get tricky. So I think this is a reasonable approach. Your workaround is pretty good for now, but it's reasonable to have the name of the node as an input.
@lucamattiazzi did you take a look at the lifecycle API? Are you still blocked here? Or?
Hello, we're using the decorator `check_output_custom` to add custom validation to our features. While the validator class is defined only once, each feature has its own rules for validation. The problem is that the validator class does not receive any information about the feature it will be validating, only the resulting `pd.Series`, and we would need at least the name of the feature to be able to validate it correctly without explicitly hardcoding the feature's name in the validator.
The `DataValidator` child class could receive, either in the `__init__` or in the `validate` method, the function whose results it should validate. I'll gladly open a PR with the change!
I'd like it to be possible to do this:
instead of needing to do this:
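Roughly, the desired contrast might look like the following -- a minimal sketch with made-up rules and names, not Hamilton's actual API: one validator that is told which feature it is checking, versus the current workaround of one subclass per feature with the name baked in.

```python
# Hypothetical sketch (not Hamilton's actual API) of the contrast: a single
# validator that receives the feature name vs. one subclass per feature.

RULES = {"age": (0, 120), "height_cm": (30, 250)}  # made-up per-feature bounds

class SuiteValidator:
    """Desired: one validator class, told at validate time which feature it checks."""
    def validate(self, values, node_name):
        low, high = RULES[node_name]
        return all(low <= v <= high for v in values)

class AgeValidator:
    """Current workaround: one subclass per feature, with the name baked in."""
    def validate(self, values):
        low, high = RULES["age"]
        return all(low <= v <= high for v in values)
```

With the name passed in, a single class (e.g. one dispatching to GreatExpectations suites by feature name) replaces a family of near-identical subclasses.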