Closed: isaacdevlugt closed this issue 9 months ago
Thanks @isaacdevlugt! This is interesting, because we do test for model weight saving here: https://github.com/PennyLaneAI/pennylane/blob/master/tests/qnn/test_keras.py#L578
However, your example above places a classical layer after the quantum one, while our test only has a classical layer before it.
The user was able to solve the problem by updating how AngleEmbedding was initialized in their code, so we are closing this issue. Note that model.save uses autograph, which implicitly adds a batch dimension.
Expected behavior

When `save_weights` and `load_weights` are called, I expect the model's weights to be the same before and after they're saved and loaded. Likewise, if I use `model.save` to save the whole model and `tf.keras.models.load_model` to reload it, I expect the weights to be the same.

Actual behavior
In both cases (saving just the weights or saving the whole model), the models created after loading give the same outputs/predictions as before, but when their weights are inspected, they do not match.
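The round-trip check described above can be sketched generically. Below is a minimal, framework-agnostic sketch of comparing weights before and after a save/load cycle; the `ToyModel` class, its methods, and the file name are illustrative assumptions standing in for a Keras model, not code from the original report.

```python
import os
import tempfile

import numpy as np


class ToyModel:
    """Stand-in for a Keras model: holds a list of weight arrays."""

    def __init__(self, weights):
        self._weights = [np.array(w) for w in weights]

    def get_weights(self):
        return [w.copy() for w in self._weights]

    def save_weights(self, path):
        # Mimic Keras save_weights with a NumPy archive (arr_0, arr_1, ...)
        np.savez(path, *self._weights)

    def load_weights(self, path):
        with np.load(path) as data:
            self._weights = [data[key] for key in sorted(data.files)]


rng = np.random.default_rng(0)
model = ToyModel([rng.normal(size=(2, 3)), rng.normal(size=(3,))])

before = model.get_weights()
path = os.path.join(tempfile.mkdtemp(), "weights.npz")
model.save_weights(path)

# A fresh model with different (zero) weights, then load the saved ones
fresh = ToyModel([np.zeros((2, 3)), np.zeros(3)])
fresh.load_weights(path)
after = fresh.get_weights()

# The expectation from the issue: weights match after the round trip
ok = all(np.allclose(b, a) for b, a in zip(before, after))
print(ok)  # → True
```

The reported bug was that the equivalent check against a real `qnn.KerasLayer` model showed mismatched weights on inspection, even though predictions agreed.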
Additional information
https://discuss.pennylane.ai/t/error-reloading-circuit-from-qasm-string/3679