Open · JulienVig opened 8 months ago
A recent refactoring enforced tasks' preprocessing in a lazy and streaming fashion. The preprocessing first defines stateless preprocessing functions (e.g. `resize` and `normalize` for images) and then applies them successively on the dataset, one row at a time.

Limitations to address:

- [ ] This functional preprocessing doesn't allow for "stateful" preprocessing. Because of its streaming nature, it is currently not possible to normalize a tabular column since we can't compute global aggregations (e.g. the mean and standard deviation of a feature). In other words, new preprocessing functions can only take a single dataset row as their sole argument, which is very constraining.
- [ ] The preprocessing state learned during training should be saved to be re-used for test and inference. For example, standardizing the test set should be done with the training set's mean and standard deviation, not the test set's statistics. The preprocessing state should therefore be saved, which is currently not supported.
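For illustration, here is a minimal sketch of what such lazy, row-by-row preprocessing can look like, assuming a dataset is exposed as an async iterable of rows; the names are illustrative, not the actual discojs API:

```ts
// Sketch of lazy, streaming preprocessing: stateless functions are
// composed and applied one row at a time. Illustrative names only,
// not the actual discojs API.
type Transform<T> = (row: T) => T;

async function* preprocess<T>(
  dataset: AsyncIterable<T>,
  transforms: Transform<T>[],
): AsyncGenerator<T> {
  for await (const row of dataset) {
    // each row flows through every stateless transform before the next
    // row is even read, so the whole dataset never sits in memory
    yield transforms.reduce((r, t) => t(r), row);
  }
}
```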
> #781 may allow more flexibility for preprocessing
it should, it should. answering based on a comment you made there and some other comments:
> choosing how to handle missing values [in tabular] should be the responsibility of the data owner so we could throw an error or drop the whole row as default behavior.
totally! for now, how I see the usage of discojs is as follows:

- use `Disco` and let it transform data via the `preprocess` function of the `Task`
- use the `Trainer` and process the data yourself
in the Disco case, I'm thinking of throwing for now, letting the user know that the dataset is missing important values rather than choosing ourselves how to handle it (currently, the Titanic dataset is missing some values).
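a minimal sketch of that "throw early" behavior, as a per-row preprocessing function; names are illustrative, not discojs API:

```ts
// Sketch: reject rows with missing values instead of silently imputing.
// Hypothetical helper, not existing discojs API.
type Row = Record<string, unknown>;

function assertNoMissingValues(row: Row): Row {
  for (const [column, value] of Object.entries(row)) {
    if (value === undefined || value === null || value === "") {
      throw new Error(
        `dataset is missing a value for column "${column}"; ` +
          "please clean or impute it before training",
      );
    }
  }
  return row;
}
```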
> - [ ] This functional preprocessing doesn't allow for "stateful" preprocessing. Because of its streaming nature, it is currently not possible to normalize a tabular column since we can't compute global aggregations (e.g. the mean and standard deviation of a feature). In other words, new preprocessing functions can only take a single dataset row as their sole argument, which is very constraining.
I got scared a while back by non-streaming algorithms, as datasets can get quite huge. but with #781, it's not an issue anymore! one can compute whatever they want on a dataset:
```ts
// fill missing entries using the column mean, computed in a first pass
fillEmptyString(dataset, await computeMean(dataset, 'column'))
```
note that it requires one full pass over the dataset for `computeMean`, which I find costly.
> - [ ] The preprocessing state learned during training should be saved to be re-used for test and inference. For example, standardizing the test set should be done with the training set's mean and standard deviation, not the test set's statistics. The preprocessing state should therefore be saved, which is currently not supported.
that kinda ties to the one before: we can do whatever now with processing, it's simply functions applied on top of datasets.
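concretely, a minimal sketch of what saving that state could look like, assuming async-iterable rows and plain JSON serialization; none of these names are existing discojs API:

```ts
// Sketch: learn normalization state on the training set in one streaming
// pass, persist it, and re-apply it at test/inference time.
// Hypothetical names, not existing discojs API.
type Row = Record<string, string>;

interface NormalizationState {
  mean: number;
  std: number;
}

async function fitColumn(
  dataset: AsyncIterable<Row>,
  column: string,
): Promise<NormalizationState> {
  let n = 0;
  let sum = 0;
  let sumSq = 0;
  for await (const row of dataset) {
    const raw = row[column];
    if (raw === undefined || raw === "") continue; // skip missing entries
    const v = Number(raw);
    if (Number.isNaN(v)) continue;
    n += 1;
    sum += v;
    sumSq += v * v;
  }
  const mean = sum / n;
  return { mean, std: Math.sqrt(sumSq / n - mean * mean) };
}

declare const trainingSet: AsyncIterable<Row>; // hypothetical dataset handle

// the learned state is plain data: it can be saved next to the model...
const state = await fitColumn(trainingSet, "age");
const saved = JSON.stringify(state);

// ...and reloaded later so the *test* set is standardized with the
// *training* statistics, as the checklist item above requires
const { mean, std } = JSON.parse(saved) as NormalizationState;
const standardize = (x: number) => (x - mean) / std;
```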
also, tell me if I missed something but IMO finding the distribution (mean/stddev) of a dataset is inherently skewed, one would need to know the parameters over the whole population rather than on a specific slice, no? (maybe new parameters to `Task` in this case?)
> also, tell me if I missed something but IMO finding the distribution (mean/stddev) of a dataset is inherently skewed, one would need to know the parameters over the whole population rather than on a specific slice, no?
Yes, ideally features are normalized according to the overall population statistics, but in practice they are almost never known, so it is common to use empirical estimates.
One important thing though: in collaborative learning, each data owner has a different subset of the data, so normalizing according to the local subset may yield very different results than normalizing using all the data.
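As a toy illustration of that discrepancy (not Disco code), consider two data owners whose local scales differ by two orders of magnitude: local standardization maps both onto the same range, while pooled statistics keep them apart:

```ts
// Toy illustration: two data owners with very different local scales.
const ownerA = [1, 2, 3];
const ownerB = [100, 200, 300];

const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
const std = (xs: number[]) => {
  const m = mean(xs);
  return Math.sqrt(xs.reduce((a, b) => a + (b - m) ** 2, 0) / xs.length);
};
const zScores = (xs: number[], m: number, s: number) =>
  xs.map((x) => (x - m) / s);

// local normalization: each owner uses their own statistics, so 3 (owner A)
// and 300 (owner B) map to the exact same z-score
console.log(zScores(ownerA, mean(ownerA), std(ownerA))); // ~[-1.22, 0, 1.22]
console.log(zScores(ownerB, mean(ownerB), std(ownerB))); // ~[-1.22, 0, 1.22]

// global normalization: pooled statistics keep the owners distinguishable
const pooled = [...ownerA, ...ownerB];
console.log(zScores(ownerA, mean(pooled), std(pooled))); // all strongly negative
console.log(zScores(ownerB, mean(pooled), std(pooled))); // spread from ~0 upward
```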
@rabbanitw do you know if there are some common mechanisms for feature preprocessing in federated/decentralized learning? Or is there simply no preprocessing?
FedProx (#802) alleviates the problem of data heterogeneity at the network level, but with tabular data, for example, having features with very different magnitudes can make local training diverge (e.g. #615).
Edit: this has been an issue in Disco for a long time, see #32