Open skythomp16 opened 2 years ago
Hello @skythomp16! We currently don't support that functionality in the API. Right now, the implementation is that `SparkModel` wraps an existing, compiled Keras model, i.e.:

```python
model = ...  # a compiled Keras model
spark_model = SparkModel(model)
```
and we don't copy the `input` and `output` fields over to the `SparkModel`. Referencing those fields when constructing the encoder in your example would be fairly simple: we could just assign the `input` and `output` fields in the initializer of `SparkModel`, i.e. add

```python
self.input = model.input
self.output = model.output
```

somewhere around here, then in the example:

```python
encoder = Model(spark_model.input, spark_model.output)
```
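The proposed change can be sketched with plain-Python stand-ins (`KerasModelStub` and `SparkModelStub` are hypothetical names used purely for illustration, not the actual Keras or elephas classes), so the delegation pattern is visible without a TensorFlow install:

```python
# Minimal sketch of the proposed change, using stand-ins for Keras/elephas.
class KerasModelStub:
    def __init__(self, input, output):
        self.input = input    # stand-in for a Keras input tensor
        self.output = output  # stand-in for a Keras output tensor

class SparkModelStub:
    def __init__(self, model):
        self._wrapped = model
        # The proposed addition: expose the wrapped model's input/output
        # fields directly on the Spark wrapper.
        self.input = model.input
        self.output = model.output

model = KerasModelStub(input="in_tensor", output="out_tensor")
spark_model = SparkModelStub(model)

# With the fields copied over, the encoder pattern from the example works:
assert spark_model.input == "in_tensor"
assert spark_model.output == "out_tensor"
```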
But if we wanted to use `model.input` and `model.output` in the initializer of the `SparkModel`, that would require a bit more work, as we would need to add some logic in `__init__` to determine what was supplied and how to construct a model from those inputs. It's certainly doable, though, if we think there's a good use case.
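The dispatch logic described above could look roughly like this. Again a hedged sketch with hypothetical names (`ModelStub`, `make_spark_model_args`), not the actual elephas API: the idea is simply to accept either a ready model or an `(input, output)` pair.

```python
# Sketch of __init__-style dispatch: accept either a compiled model or an
# (input, output) pair and build the model internally.
class ModelStub:
    def __init__(self, input, output):
        self.input, self.output = input, output

def make_spark_model_args(*args):
    """Return (model, input, output) from either calling convention."""
    if len(args) == 1 and isinstance(args[0], ModelStub):
        model = args[0]                  # SparkModel(model) style
    elif len(args) == 2:
        model = ModelStub(*args)         # SparkModel(input, output) style
    else:
        raise TypeError("expected a model or an (input, output) pair")
    return model, model.input, model.output

# Both calling conventions resolve to the same triple:
m, i, o = make_spark_model_args("x_in", "y_out")
assert (i, o) == ("x_in", "y_out")
m2, i2, o2 = make_spark_model_args(m)
assert m2 is m and (i2, o2) == ("x_in", "y_out")
```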
Thanks, Daniel. Basically, I have an autoencoder, and I am pulling the encoded data out of the bottleneck layer and using it to make predictions. It sounds like this isn't currently supported without a few changes, unfortunately. I may try to make them, compile locally, and see how that goes.
Yes, unfortunately that behavior is not currently supported. If you make some revisions and think they could be beneficial to the broader community, feel free to submit a PR! Always glad to have more help and contributions. 😄 Thank you!
Hey Daniel, I have been working on this over the last few days. Is there any developer/architecture documentation for this project? I think I was able to get `model.input` and `model.output` to work by simply doing what you said, but I would also love to be able to get outputs from specific layers.
Hello @skythomp16, currently the documentation is at https://danielenricocahall.github.io/elephas/, but outside of that plus the pydocs in the code, we don't have any specific architecture docs. That's something I can add to the queue, though.
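On the "outputs from specific layers" question above: the usual Keras pattern is `Model(model.input, model.get_layer(name).output)`. Sketched here with plain-Python stand-ins (`LayerStub`/`ModelStub` are hypothetical illustration names, not Keras classes) so the shape of the API is visible without TensorFlow:

```python
# Sketch of the "output from a specific layer" pattern.
class LayerStub:
    def __init__(self, name, output):
        self.name, self.output = name, output

class ModelStub:
    def __init__(self, layers):
        self.input = "model_input"
        self.layers = layers
        self.output = layers[-1].output  # by default, the last layer's output
    def get_layer(self, name):
        # mirrors Keras's lookup of a layer by its name
        return next(l for l in self.layers if l.name == name)

model = ModelStub([LayerStub("dense_1", "h1"),
                   LayerStub("bottleneck", "code"),
                   LayerStub("dense_2", "reconstruction")])

# Equivalent of: encoder = Model(model.input, model.get_layer("bottleneck").output)
encoder_output = model.get_layer("bottleneck").output
assert encoder_output == "code"
```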
Hello, in my sequential Keras code, I am doing something like this:

```python
model.compile(optimizer=Adam(), loss='mean_squared_error')
trained_model = model.fit(x_train, y_train, batch_size=512, shuffle=True, epochs=50, verbose=1)

encoder = Model(model.input, model.output)
encoded_data = encoder.predict(x_train)  # bottleneck representation
```
I am then using the encoded_data array later on in the program.
How can I do something like this with `SparkModel`? It seems like I cannot build a `SparkModel` from the input and output of a Keras model?
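One workaround sketched below, under the assumption (stated in the thread) that `SparkModel` only wraps the Keras model: train through the Spark wrapper, then build the encoder from the original `model` object, whose `input`/`output` remain accessible even though the wrapper does not expose them. The stubs and the `_wrapped` attribute are hypothetical stand-ins, not the real elephas internals.

```python
# Hedged sketch: train via the wrapper, build the encoder from the wrapped model.
class ModelStub:
    def __init__(self):
        self.input, self.output = "in", "bottleneck_out"
        self.trained = False

class SparkModelStub:
    def __init__(self, model):
        self._wrapped = model  # hypothetical attribute name
    def fit(self):
        # distributed training would happen here; the wrapped model's
        # weights are updated in place when training finishes
        self._wrapped.trained = True

model = ModelStub()
spark_model = SparkModelStub(model)
spark_model.fit()

# Build the encoder from the original model object, not the wrapper:
assert model.trained
encoder_spec = (model.input, model.output)
assert encoder_spec == ("in", "bottleneck_out")
```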