Closed · chang-change closed this issue 2 years ago
Maybe you can obtain the weights here: https://deepxde.readthedocs.io/en/latest/_modules/deepxde/model.html#LossHistory.set_loss_weights
Then simply do a forward pass manually. There may be a better approach, but this one should work.
@praksharma Thanks for your response. I want to get the derivative of an intermediate variable, then construct the residual equation, and finally add that residual to the loss.
Sorry, this is a vague description. Do you want to compute the residual from the output of the intermediate activation layer instead of the output layer? What do you want to achieve with this? Here is what I think.
I don't think you can do this in DeepXDE. You need to create your own PINN and then modify it.
@praksharma I'm sorry I didn't describe it clearly. What I want to do is similar to this article: "Physics-guided Design and Learning of Neural Networks for Predicting Drag Force on Particle Suspensions in Moving Fluids". However, there are some differences.
I think this is not supported by DeepXDE. You have to create your own architecture.
I don't think this structure is robust for 2D and 3D problems. Did they publish this paper? By the way, this architecture is simple, and you could write the code in an hour.
@tsarikahin @praksharma Thanks for your replies. I know I can create the network myself; I just want to know whether DeepXDE can do it. DeepXDE is a magical tool and saves me a lot of time: with it I can define the geometry, boundary conditions, training data, etc., which is very convenient. But I don't know how to combine DeepXDE with my network. Can you give me some suggestions? Thank you very much.
@praksharma @tsarikahin Good comments.
Actually it is doable in DeepXDE, although not in a straightforward way. See the FAQ entry "Q: A standard network doesn't work for me, and I want to implement a network with a special structure/property."
@lululxvi Thanks for your reply.
I use `apply_feature_transform` to transform the network inputs into the features I need. The code is as follows:

```python
import tensorflow as tf

def input_feature_transform(x):
    xx = x[:, 0:1]
    yy = x[:, 1:2]
    zeta = xx * (xx**2 + yy**2 + 1) / (xx**2 + yy**2)
    eta = yy * (xx**2 + yy**2 - 1) / (xx**2 + yy**2)
    return tf.concat([zeta, eta], axis=1)

net.apply_feature_transform(input_feature_transform)
```
How can I get the derivatives of the network output with respect to the features, instead of the derivatives with respect to the space-time coordinates? Can you give me some suggestions? Thanks!
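As an aside, the transform above appears to be the classical Joukowski map w = z + 1/z written out in real coordinates (this is my reading of the formulas, not something stated in the thread). A quick standalone NumPy check, without DeepXDE, shows that the unit circle collapses onto the flat slit [-2, 2], which is the signature of that map:

```python
import numpy as np

# Points on the unit circle x^2 + y^2 = 1
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
xx, yy = np.cos(theta), np.sin(theta)

# Same formulas as in input_feature_transform above
r2 = xx**2 + yy**2
zeta = xx * (r2 + 1) / r2
eta = yy * (r2 - 1) / r2

# For the Joukowski map w = z + 1/z, the unit circle maps onto the
# segment [-2, 2] on the real (zeta) axis: eta vanishes identically.
print(np.allclose(eta, 0.0), zeta.min(), zeta.max())
```

This is why the cylinder-to-airfoil conformal-mapping idea fits naturally with such a feature transform.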
It is not straightforward to do. Then why don't you directly use `zeta` and `eta` as the network input?
@lululxvi Thanks for your reply. I want to use a conformal mapping to transform the problem of flow around a cylinder into the problem of flow around a wing, because I don't know how to define the wing geometry in DeepXDE. Can you give me some advice? Thank you.
A few ideas:
@lululxvi Thanks for your reply. How can I use a polygon to approximate the irregular shape?
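One way this could look (a sketch, not from the thread): sample the airfoil contour densely and hand the vertex list to DeepXDE's polygon geometry. Below, a symmetric NACA 4-digit thickness distribution is used purely as an example shape; the function name and parameters are my own, and the `dde.geometry.Polygon` call is left as a comment since it needs DeepXDE installed:

```python
import numpy as np

def naca_symmetric_vertices(t=0.12, n=50):
    """Vertex list approximating a symmetric NACA 4-digit airfoil
    of thickness ratio t (e.g. t=0.12 for NACA 0012)."""
    x = np.linspace(0.0, 1.0, n)
    # Standard NACA half-thickness distribution
    yt = 5 * t * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                  + 0.2843 * x**3 - 0.1015 * x**4)
    upper = np.column_stack([x, yt])            # leading edge -> trailing edge
    lower = np.column_stack([x[::-1], -yt[::-1]])  # trailing edge -> leading edge
    # Drop the duplicated leading-edge point (0, 0) at the end of `lower`
    return np.vstack([upper, lower[:-1]])

vertices = naca_symmetric_vertices()
# The vertex array can then be passed to DeepXDE's polygon geometry, e.g.
#   geom = dde.geometry.Polygon(vertices)
# (check the DeepXDE docs for the required vertex ordering).
```

More vertices give a better approximation of the curved surface, at the cost of slower point-in-polygon tests during sampling.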
Dear Dr. Lu Lu, for two physical quantities A and B, the result of A has a direct impact on B. So I want to build a neural network in which A is the output of an intermediate layer and B is the output of the last layer. However, I don't know how to get the value of A. Is there a better way to handle this clearly causal variable in DeepXDE? Looking forward to your reply, thank you!
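For what it's worth, the idea can be sketched without any framework (this is not DeepXDE API; the layer sizes and names below are made up for illustration): write the forward pass so it returns both the intermediate quantity A and the final output B, so that A is available for a residual term in the loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes: input(2) -> hidden(16) -> A(1) -> hidden(16) -> B(1)
W1 = rng.normal(size=(2, 16)); b1 = np.zeros(16)
WA = rng.normal(size=(16, 1)); bA = np.zeros(1)   # head producing A
W2 = rng.normal(size=(1, 16)); b2 = np.zeros(16)
WB = rng.normal(size=(16, 1)); bB = np.zeros(1)   # head producing B

def forward(x):
    """Return both the intermediate quantity A and the final output B,
    so A can enter a physics residual in the loss directly."""
    h = np.tanh(x @ W1 + b1)
    A = h @ WA + bA            # intermediate physical quantity
    g = np.tanh(A @ W2 + b2)   # B depends causally on A
    B = g @ WB + bB
    return A, B

A, B = forward(rng.normal(size=(5, 2)))
print(A.shape, B.shape)  # (5, 1) (5, 1)
```

In a DeepXDE/TensorFlow setting the same structure would be built as a custom network with two outputs, with the loss combining the data loss on B and a residual loss involving A.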