Serj-R opened this issue 2 years ago

Hello!
Recently, I came across your NTMpy open-source package for solving coupled parabolic differential equations. As an experimentalist, I would find such a package very useful for analyzing my experimental data. Right now, I am trying to use NTMpy to simulate the excitation and relaxation dynamics of the electron and lattice subsystems in an Au film heated by a single femtosecond laser pulse. To become familiar with the package, I started from your examples. In our experiment, we use Au films of different thicknesses, and as long as the Au thickness is below roughly 400-500 nm, the simulation yields reasonable Te and Ti. However, increasing the thickness to 1000-3000 nm results in extremely low Te and Ti values at the same pump fluences: the greater the thickness, the lower the temperatures (from several thousand K down to only a few K in electron temperature). What could be the problem here? I have tried increasing the thickness of the metal layers in your examples from a few nm to hundreds of nm, and in every case the resulting temperatures were extremely low, far from realistic.
Hi, this happens because you are working with two materials on very different length scales. Check out this example for some insights: https://github.com/udcm-su/NTMpy/blob/master/Examples/Substrate.ipynb
The thing is: the number of collocation points (as described in the example) is constant for all layers. That is, if one layer is thin, the resolution will be high; if the next layer is thick, the resolution will be low (and the computational error high). However, you can avoid this by splitting your material into multiple layers of the same length, e.g. [1000 nm Au] = [500 nm Au | 500 nm Au].
In general, I would try to define materials of roughly equal length. If this does not match your experiment, add a thin, high-resolution layer at the edge of the material and decrease the resolution with multiple subsequent, thicker layers of the same material, as in the sketch below.
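For concreteness, here is a minimal sketch of the splitting trick. The addLayer argument order and the Au parameter values are assumptions taken from the repository examples, so check them against your installed version:

```python
from NTMpy import NTMpy as ntm

s   = ntm.source()           # configure the pulse as in the repository examples
sim = ntm.simulation(2, s)   # two coupled temperatures: electrons and lattice

# Placeholder Au parameters (illustrative values, not reference data):
n_au   = 0.2 + 3.4j                    # complex refractive index
k_au   = [318, 0]                      # [k_e, k_l] thermal conductivities
C_au   = [lambda Te: 71 * Te, 2.5e6]   # [C_e(T_e), C_l] heat capacities
rho_au = 19300                         # density
G_au   = [2.5e16]                      # electron-phonon coupling

# Instead of one 1000 nm layer (12 collocation points in total) ...
# sim.addLayer(1000e-9, n_au, k_au, C_au, rho_au, G_au)

# ... split it into five identical 200 nm sub-layers (5 x 12 points):
for _ in range(5):
    sim.addLayer(200e-9, n_au, k_au, C_au, rho_au, G_au)
```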
So, even if I want to simulate the dynamics of the electron and lattice temperatures in a single thick metal layer, say 1000 nm (without any other layers or substrates), I should divide it into several thinner layers, for example [200 nm | 200 nm | 200 nm | 200 nm | 200 nm]? I chose 200 nm here because the stack [500 nm | 500 nm] still gives underestimated temperatures (about a two-fold decrease).
Hi Serj. Let me briefly explain why this occurs and how to solve it optimally.
The code basically solves the diffusion equation on a finite set of points (12 per layer) using a finite set of spline functions. When large layers are considered, these points and functions may be insufficient, causing poor spatial resolution, as the quick arithmetic below shows.
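With a fixed budget of 12 collocation points per layer, the average point spacing grows linearly with the layer thickness (plain arithmetic, not an NTMpy call):

```python
points_per_layer = 12
for L_nm in (100, 500, 1000):
    print(f"{L_nm:4d} nm layer -> ~{L_nm / points_per_layer:.0f} nm per point")
# 100 nm layer  ->  ~8 nm per point
# 500 nm layer  -> ~42 nm per point
# 1000 nm layer -> ~83 nm per point
```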
With the current version of the code, you can increase the number of points by adding more layers of the same material. The first layers (where you expect the fastest spatial variation) should be thinner; for example, you might try 200 nm + 400 nm + 800 nm (see the sketch below). On the other hand, each extra layer slows the code down a little, so some trial and error may be required to find the optimal configuration.
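In code, the graded stack is just a loop over increasing thicknesses, with the same assumed addLayer signature and placeholder Au parameters as in the sketch above:

```python
# Thin sub-layers near the surface, where the gradients are steepest,
# thicker ones deeper in (thicknesses in meters):
for thickness in (200e-9, 400e-9, 800e-9):
    sim.addLayer(thickness, n_au, k_au, C_au, rho_au, G_au)
```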
Right now I am working on an update that will let the user manually set the number of points and splines per layer.
Please let us know if you manage to reproduce your experimental data with our code. Any feedback is appreciated.
Hi Valentino!
Yes, I see what you both mean. For Au, I simulated several configurations that should ideally give the same result. Since I am currently interested in a bulk Au sample, I am trying to simulate a 1000-nm-thick Au film without any substrate (which should have no effect on heat diffusion at such a great thickness under my experimental conditions). I modelled the following stacks: (100 nm x 10), (200 nm x 5) and (250 nm x 4). The first stack (100 nm x 10) gives the highest rise in both Te and Ti; the values are lower for (200 nm x 5) and lower still for (250 nm x 4). Evidently, I should keep reducing the layer thickness until both Te and Ti saturate. Am I right? Unfortunately, a simulation with 50 nm x 20 layers takes so much computing resources that it is impossible to run at present. So the question is: what is the optimal layer thickness for each material (Au or Pt, as opposites in their properties) to be simulated correctly? As you mentioned, there is a finite set of points (12 per layer), so one should find the optimal layer thickness for every material that is to be simulated correctly, right? I understood that one should reduce the thickness of the first several layers, while the deeper layers can be thicker. The saturation test could be scripted along the lines of the sketch below.
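(For reference, a sketch of that convergence loop. The source still has to be configured as in the repository examples, the addLayer/run signatures and the Au parameter values are assumptions taken from those examples, and the indexing of the returned temperature array may differ in your version:)

```python
import numpy as np
from NTMpy import NTMpy as ntm

s = ntm.source()             # configure fluence, FWHM, etc. before running

# Placeholder Au parameters (illustrative values, not reference data):
n_au   = 0.2 + 3.4j
k_au   = [318, 0]
C_au   = [lambda Te: 71 * Te, 2.5e6]
rho_au = 19300
G_au   = [2.5e16]

# Refine the sub-layer thickness until the peak electron temperature
# stops changing: 4 x 250 nm, 5 x 200 nm, 10 x 100 nm.
for n_layers in (4, 5, 10):
    sim = ntm.simulation(2, s)
    for _ in range(n_layers):
        sim.addLayer(1000e-9 / n_layers, n_au, k_au, C_au, rho_au, G_au)
    x, t, T = sim.run()                      # assumed to return [x, t, T]
    print(n_layers, "sub-layers -> peak T_e =", np.max(T[0]))
```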
An update where one could manually set the number of points and splines per layer would be very useful.
Also, the possibility to specify the dependence of the electronic conductivity on other parameters, such as Te and Ti, would be extremely useful. The electronic conductivity depends strongly on, for example, the difference between Te and Ti, which is why the conductivity of a material in the 2T state can be two orders of magnitude higher than in the 1T state. So this is a very important parameter; a common form of this dependence is sketched below.
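(For illustration, the dependence I have in mind is often approximated in the two-temperature literature as k_e = k0 * Te / Ti. A plain-Python sketch of that model follows, not a claim that NTMpy currently accepts it:)

```python
# A common two-temperature-model approximation: the electron thermal
# conductivity grows when the electrons run hotter than the lattice.
def k_electron(Te, Ti, k0=318.0):    # k0: room-temperature value (placeholder)
    return k0 * Te / Ti

print(k_electron(3000.0, 300.0))     # 3180.0 -> ten-fold enhancement over k0
```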