fchollet opened this issue 1 year ago
We need to reimplement all of these, right? I can start working on some of these, probably beginning with the image preprocessing layers, if that's the case.
All the layers that are currently wrapping tf.keras layers, yes. We can just copy over the code from tf.keras (which uses TF), but I'm not entirely sure what to do with custom ops (which are used for RandomZoom and RandomRotation, maybe RandomTranslation as well). For this reason it might be easier to start with the structured data preprocessing layers. Thank you :)
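For readers following along, this is roughly the wrapping pattern being described; a minimal illustrative sketch only (the class name is hypothetical, and this is not the actual keras_core source). The keras_core layer simply delegates to the tf.keras implementation, and porting means replacing that delegation with a backend-agnostic implementation.

```python
# Illustrative sketch only -- hypothetical class name, not the actual
# keras_core source. The keras_core layer delegates to tf.keras, which is
# the dependency the port needs to remove.
import tensorflow as tf
from keras_core.layers import Layer


class WrappedRandomFlip(Layer):
    def __init__(self, mode="horizontal_and_vertical", seed=None, **kwargs):
        super().__init__(**kwargs)
        # Holding a tf.keras layer ties this implementation to the TF backend.
        self._tf_layer = tf.keras.layers.RandomFlip(mode=mode, seed=seed)

    def call(self, inputs, training=True):
        return self._tf_layer(inputs, training=training)
```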
Okay. I will start working on some of these. Will update the issue accordingly. Thank you
@fchollet I have two questions regarding this:
In tf.keras the base_preprocessing_layer abstract class is defined in the engine module. For keras_core, are we going to put it in the layers module or somewhere else?
I'm not 100% sure we'll need a base class, but if we do, then it should be in layers/preprocessing/.
Should the abstract class for preprocessing layers implement all the abstract methods defined?
My take is, let's try to do it without a base class. And we only need to implement the methods currently exposed by the Keras Core wrapped implementations. If it turns out there's a lot of redundancy in the code, we can introduce a shared base class to address it.
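If redundancy does show up once a few layers are ported, the shared base class could look something like the sketch below. This is a hypothetical outline, not an agreed-upon API; the class name and method set are assumptions loosely modeled on the adapt()-style state that the wrapped layers expose today.

```python
# Hypothetical sketch of a shared base class under layers/preprocessing/ --
# only worth introducing if the ported layers end up duplicating this logic.
from keras_core.layers import Layer


class PreprocessingLayer(Layer):
    """Base for preprocessing layers that compute state from data."""

    def adapt(self, data):
        # Compute layer state (e.g. a vocabulary or normalization statistics)
        # from a dataset or array before training.
        raise NotImplementedError

    def reset_state(self):
        # Clear any state previously computed by adapt().
        raise NotImplementedError
```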
Sure. Thanks for the clarification
@fchollet so I almost finished porting the CategoryEncoding layer, but here are a few issues that I ran into:
1. Apparently we don't support the sparse=True kwarg in the Input layer in keras_core. So, training with sparse inputs and CategoryEncoding as the first layer isn't possible as of now. Any suggestions?
2. I am surprised that I didn't notice this earlier, but when we call a layer, it doesn't work with a list/tuple because we use nest to flatten out the positional arguments for any layer. I am okay with this design choice, though, because expecting a tensor/ndarray can save us from rewriting a lot of redundant code; we should just make that clear in the layer docs.

Apparently we don't support the sparse=True kwarg in the Input layer in keras_core. So, training with sparse inputs and CategoryEncoding as the first layer isn't possible as of now. Any suggestions?
We're going to support it (for backends where that is possible, like TF) in the future. This is a TODO.
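For reference, this is what the feature in question looks like on the tf.keras side today (keras_core's Input does not take this kwarg yet); a small sketch assuming standard tf.keras behavior:

```python
import tensorflow as tf

# tf.keras.Input accepts sparse=True, so the first layer receives a
# tf.SparseTensor at runtime; keras_core's Input has no such kwarg yet.
x = tf.keras.Input(shape=(10,), dtype="float32", sparse=True)
# Dense in tf.keras handles sparse inputs, so a model can be built directly.
y = tf.keras.layers.Dense(4)(x)
model = tf.keras.Model(x, y)
```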
I am surprised that I didn't notice this earlier but when we call a layer, it doesn't work with a list/tuple because we use nest to flatten out the positional arguments for any layer.
It should work -- we have a number of layers that accept tuples, e.g. HashedCrossing.
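A quick sketch of what that looks like, assuming the keras_core wrapper mirrors the tf.keras HashedCrossing call signature (a tuple of two feature arrays/tensors):

```python
import numpy as np
from keras_core import layers

# HashedCrossing is called with a tuple of two features and hashes their
# crossed values into `num_bins` integer buckets.
layer = layers.HashedCrossing(num_bins=5)
feat1 = np.array([1, 5, 2, 1, 4])
feat2 = np.array([2, 9, 42, 37, 8])
crossed = layer((feat1, feat2))  # shape (5,), integer bin ids in [0, 5)
```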
I will send a PR by tonight. I will mark it as WIP, but it will give me a fair idea of how to port all the other preprocessing layers. Thank you for all the help and support 🙏
In the future Keras Core will become tf.keras -- as such it should not depend on tf.keras in any way. Currently we depend on tf.keras in two ways: