ehsanhaghighat / sciann

Deep learning for Engineers - Physics Informed Deep Learning
http://sciann.com

Complex-valued function with more than one coordinate yields only real-valued solution #19

Closed JakobEliasWagner closed 3 years ago

JakobEliasWagner commented 3 years ago

Hey there, I am pretty new to SciANN and love working with it. I recently tried to get a complex-valued 2D Helmholtz problem to work. The 1D case worked just fine, but when working with 2D data I got the following warning:

<path-to-sciann>/sciann/lib/python3.6/site-packages/numpy/core/_asarray.py:83: ComplexWarning: Casting complex values to real discards the imaginary part
  return array(a, dtype, copy=False, order=order)

As a result, this model only yields real-valued solutions, which is sadly not sufficient for the problem at hand. Is this intentional? Is there a way for me to circumvent this issue? Code to recreate the issue:

import numpy as np
import sciann as sn
from sciann.utils.math import diff

x_data, y_data = np.meshgrid(
    np.linspace(0, 1, 20),
    np.linspace(0, 1, 20)
)
x_data, y_data = x_data.flatten(), y_data.flatten()

p_data = np.random.random(x_data.shape) + 1j * np.random.random(x_data.shape)
k_absorb_data = np.random.random(x_data.shape) + 1j * np.random.random(x_data.shape)

x = sn.Variable("x")
y = sn.Variable("y")
k = 20  # wave number
k_absorb = sn.Functional("k_absorb", [x, y], 3 * [20], "tanh", dtype='complex64')  # wave number modifier
p = sn.Functional("p", [x, y, k_absorb], 8 * [20], "tanh", dtype='complex64')

c1 = sn.Data(p)
c2 = sn.Data(k_absorb)
L1 = -(diff(p, x, order=2) + diff(p, y, order=2) + (k - k_absorb) ** 2 * p)

model = sn.SciModel([x, y], [c1, c2, sn.PDE(L1)])

model.train(
    [x_data, y_data],
    [p_data, k_absorb_data, 'zeros'],
    epochs=1,
    adaptive_weights=True
)

print(f"Output-type: {p.eval(model, [x_data, y_data]).dtype}")

stdout:

<path-to-script>/issue.py
2021-04-01 15:56:09.670221: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-04-01 15:56:09.670241: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
---------------------- SCIANN 0.6.0.4 ---------------------- 
For details, check out our review paper and the documentation at: 
 +  "https://arxiv.org/abs/2005.08803", 
 +  "https://www.sciann.com". 

2021-04-01 15:56:11.870829: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-04-01 15:56:11.870986: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2021-04-01 15:56:11.870993: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-04-01 15:56:11.871008: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (JakobsComputer): /proc/driver/nvidia/version does not exist
2021-04-01 15:56:11.871184: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-04-01 15:56:11.871724: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-04-01 15:56:11.897528: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
2021-04-01 15:56:11.930168: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 3692545000 Hz
Train on 400 samples

+ adaptive_weights at epoch 1: [191.29670595816734, 630.4341240815332, 1.0068604349762287]
<path-to-sciann>/sciann/lib/python3.6/site-packages/numpy/core/_asarray.py:83: ComplexWarning: Casting complex values to real discards the imaginary part
  return array(a, dtype, copy=False, order=order)
400/400 [==============================] - 1s 4ms/sample - loss: 52328.3709 - p_loss: 0.2347 - k_absorb_loss: 0.2266 - mul_2_loss: 51517.6172
Output-type: float32

Process finished with exit code 0

Thank you, Jakob

ehsanhaghighat commented 3 years ago

Hi - thanks for your interest in SciANN. Please do join the Slack group for faster communication: https://join.slack.com/t/sciann/shared_invite/zt-ne1f5jlx-k_dY8RGo3ZreDXwz0f~CeA

A disclaimer: I have not used SciANN with complex variables so far, but I have the following comment. You can split the complex-valued functional into its real and imaginary parts and train two real-valued networks:

p_re = sn.Functional("p_re", [x, y, k_absorb], 8 * [20], "tanh")
p_im = sn.Functional("p_im", [x, y, k_absorb], 8 * [20], "tanh")
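To train against the complex target data, you would then supply the real and imaginary parts as separate targets. A minimal, untested sketch, reusing the arrays from your script:

c1 = sn.Data(p_re)
c2 = sn.Data(p_im)
model = sn.SciModel([x, y], [c1, c2])
model.train([x_data, y_data], [p_data.real, p_data.imag], epochs=1)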

JakobEliasWagner commented 3 years ago

Hey @ehsanhaghighat, thank you very much for your answer. I honestly believe that this data type is not yet implemented in SciANN (at least for Variables). The dtype for Variables is set via Keras's backend_config.py, which actively raises an error if the dtype is not a float type:

@keras_export('keras.backend.set_floatx')
def set_floatx(value):
  global _FLOATX
  if value not in {'float16', 'float32', 'float64'}:
    raise ValueError('Unknown floatx type: ' + str(value))
  _FLOATX = str(value)
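You can confirm this directly against the TensorFlow/Keras backend, independent of SciANN:

import tensorflow as tf

# Keras only accepts float default dtypes, so this raises immediately:
tf.keras.backend.set_floatx('complex64')
# ValueError: Unknown floatx type: complex64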

Therefore, I do not think it is possible to define Variables in the complex domain, so I had only made the Functionals complex-valued. I already have an implementation using the technique you suggested above. It runs a lot slower than an implementation on a complex TensorFlow backend would, as the network is twice as big. But this is what it looks like :)

x_data, y_data, p_data_real, p_data_imag, k_data_real, k_data_imag = load_test_data(h5_file_path, h5_k_file_path)

x = sn.Variable("x")
y = sn.Variable("y")
k = 20
k_absorb_real = sn.Functional("k_absorb_real", [x, y], 3 * [20], "relu", dtype='float64')
k_absorb_imag = sn.Functional("k_absorb_imag", [x, y], 3 * [20], "relu", dtype='float64')
p_real = sn.Functional("p_real", [x, y, k_absorb_real, k_absorb_imag], 12 * [20], "tanh", dtype='float64')
p_imag = sn.Functional("p_imag", [x, y, k_absorb_real, k_absorb_imag], 12 * [20], "tanh", dtype='float64')

c0 = sn.Data(p_real)
c1 = sn.Data(p_imag)
c2 = sn.Data(k_absorb_real)
c3 = sn.Data(k_absorb_imag)
L1 = -(diff(p_real, x, order=2) + diff(p_real, y, order=2) + p_real * (k - k_absorb_real) ** 2 + 2 * (
        k - k_absorb_real) * k_absorb_imag * p_imag - k_absorb_imag ** 2 * p_real)
L2 = -(diff(p_imag, x, order=2) + diff(p_imag, y, order=2) + p_imag * (k - k_absorb_real) ** 2 - 2 * (
        k - k_absorb_real) * k_absorb_imag * p_real - k_absorb_imag ** 2 * p_imag)

model = sn.SciModel([x, y], [c0, c1, c2, c3, sn.PDE(L1), sn.PDE(L2)])
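For reference, these two residuals follow from splitting the complex Helmholtz equation ∇²p + (k - k_absorb)² p = 0 into real and imaginary parts. Writing k_absorb = k_r + i k_i and p = p_r + i p_i gives

∇²p_r + ((k - k_r)² - k_i²) p_r + 2 (k - k_r) k_i p_i = 0   (real part)
∇²p_i + ((k - k_r)² - k_i²) p_i - 2 (k - k_r) k_i p_r = 0   (imaginary part)

which is what L1 and L2 above encode; note that the squared term (k - k_r)² appears in both parts.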

Okay, so if I am not mistaken, there is currently no easy way to use the full TensorFlow backend for complex-valued problems. But since there is a workaround, this doesn't look like a letdown to me. It would be a nice feature, though. Thank you, Jakob

ehsanhaghighat commented 3 years ago

Yeah - I remember now that Keras does not support 'complex' inputs.

The computational time certainly goes up when you double the network. But you can make each network smaller so that the total parameter count is similar to that of a single network; then the performance should be comparable.
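As a rough back-of-the-envelope check (assuming plain fully-connected layers with biases and a single linear output, and the hypothetical helper dense_params defined below), two width-14 networks have about the same parameter count as one width-20 network:

def dense_params(n_in, widths, n_out=1):
    # weights + biases for each Dense layer, including the output layer
    total, prev = 0, n_in
    for w in widths + [n_out]:
        total += prev * w + w
        prev = w
    return total

print(dense_params(3, 8 * [20]))      # one network of width 20: 3041 parameters
print(2 * dense_params(3, 8 * [14]))  # two networks of width 14: 3082 parameters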