dipc-cc / hubbard

Python tools for mean-field Hubbard models
https://dipc-cc.github.io/hubbard/
GNU Lesser General Public License v3.0

BTD inversion to NEGF.py #106

Open AleksBL opened 2 years ago

AleksBL commented 2 years ago

Added the BTD inversion to NEGF.py.

Edit: I forgot to add the tbt flag to the _G calls in the NEGF class; I'm a GitHub noob and it won't allow me to add this for some reason...

tfrederiksen commented 2 years ago

Added the BTD inversion to NEGF.py.

Thanks for the PR, it looks great!

Edit: I forgot to add the tbt flag to the _G calls in the NEGF class; I'm a GitHub noob and it won't allow me to add this for some reason...

You can keep pushing commits to your branch and they will appear here.

AleksBL commented 2 years ago

I think I have cloned the wrong version of the hubbard code; the rest of the code still has the energy loops, so this code doesn't make sense right now.

sofiasanz commented 2 years ago

Hi @AleksBL, thanks for the PR! The changes I made to the code last week are in another branch (#105). I haven't merged that branch yet since some adjustments still needed to be considered.

AleksBL commented 2 years ago

Added some more stuff

AleksBL commented 2 years ago

I don't understand: is this related to the "batching" of the energy points? :)

zerothi commented 2 years ago

I don't understand: is this related to the "batching" of the energy points? :)

Yes, but perhaps I misunderstood its purpose. I just saw you did array_split, but to me it seems better to define the number of energy points. It is just a comment, you decide :)
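
For context, a minimal numpy sketch of the two conventions under discussion (the names are illustrative, not actual NEGF.py parameters):

```python
import numpy as np

E = np.linspace(-2.0, 2.0, 1000)  # illustrative energy grid

# convention 1: split the grid into a fixed number of batches
batches = np.array_split(E, 8)

# convention 2: cap the number of energy points per batch, so peak
# memory follows directly from the user-chosen batch size
n_per_batch = 128
batches = np.array_split(E, np.arange(n_per_batch, len(E), n_per_batch))
```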

AleksBL commented 2 years ago

Ahh, that must be Sofia's doing, I didn't know this was in there already! :)

sofiasanz commented 2 years ago

Yes, I added the array_split method to avoid building overly large matrices and to do it in "batches" instead. This is only done for the NEQ integrals, since it is the only place where we need, in principle, more than the diagonal of the Green's function. But actually we would only need the matrix blocks involving the indices that map the electrodes in the device, so maybe the array_split is not even necessary there...
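
On that last point, for context: if only the electrode columns of the Green's function are needed, one can solve for those columns directly instead of inverting the full matrix. A rough numpy sketch with made-up names (M standing for zS - H - sum(Sigma), elec_idx for the device indices of the electrodes):

```python
import numpy as np

def green_columns(M, elec_idx):
    # Solve M @ X = I[:, elec_idx] for the electrode columns of G,
    # avoiding the full inverse (illustrative sketch only).
    rhs = np.zeros((M.shape[0], len(elec_idx)), dtype=M.dtype)
    rhs[elec_idx, np.arange(len(elec_idx))] = 1.0
    return np.linalg.solve(M, rhs)  # shape (N, n_elec)
```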

zerothi commented 2 years ago

Yes, I added the array_split method to avoid building overly large matrices and to do it in "batches" instead. This is only done for the NEQ integrals, since it is the only place where we need, in principle, more than the diagonal of the Green's function. But actually we would only need the matrix blocks involving the indices that map the electrodes in the device, so maybe the array_split is not even necessary there...

The way I think of this is that you also build up the entire zS - H - sum(Sigma) for each energy point, i.e. the number of stored matrix elements becomes quite high. Is this wrong?
My thought is that at some point later you'll try this out on a larger system, and then it may be much easier to calculate the memory requirement from the number of energy points rather than the number of blocks.
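
For scale, a back-of-the-envelope estimate assuming dense complex128 storage (16 bytes per element), reusing the 5105-orbital size from the test further down as an example:

```python
N = 5105                                      # total orbitals (example)
per_E_gib = N**2 * 16 / 2**30                 # ~0.39 GiB per energy point
budget_gib = 4.0                              # hypothetical RAM budget
n_E_per_batch = int(budget_gib // per_E_gib)  # ~10 energies per batch
```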

AleksBL commented 2 years ago

Yes, zS - H - sum(Sigma) is built for all the energy points given, but I guess it only stores the A_i, B_i and C_i blocks, so this only scales linearly with n_orbitals and linearly with the number of E-points... I guess having to call the matrix operations on these 4-dim arrays is the tradeoff of having to do this in Python if we still want some speed? :)
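
For context, the "4-dim arrays" point in practice: numpy broadcasts matrix products and solves over leading axes, so stacking one block per energy point replaces a Python loop over energies with a single compiled call. A toy sketch (shapes are made up):

```python
import numpy as np

n_E, n = 100, 300  # energy points, block size (toy numbers)
rng = np.random.default_rng(0)
A = rng.standard_normal((n_E, n, n)) + 1j * rng.standard_normal((n_E, n, n))
B = rng.standard_normal((n_E, n, n)) + 1j * rng.standard_normal((n_E, n, n))

C = A @ B                  # n_E block products in one broadcasted call
X = np.linalg.solve(A, B)  # likewise for the batched solves in the BTD sweep
```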

zerothi commented 2 years ago

Yes, zS - H - sum(Sigma) is built for all the energy points given, but I guess it only stores the A_i, B_i and C_i blocks, so this only scales linearly with n_orbitals and linearly with the number of E-points... I guess having to call the matrix operations on these 4-dim arrays is the tradeoff of having to do this in Python if we still want some speed? :)

I think speed only becomes important when the sizes of the blocks are small, in which case the overhead becomes high. For larger matrices/blocks (say 400) I wouldn't suspect much of an overhead. This is about how a user would use the routines, not so much about efficiency (which users can control with the parameter).

AleksBL commented 2 years ago

I just tried to see how far I could push my laptop: a 20x20 block matrix with blocks of random shapes between (200, 200) and (300, 300), yielding a (1, 100, 5105, 5105)-shaped array in the end, takes up about 7 GB of RAM once everything used to build it has been cleared from memory. The equivalent dense numpy array would take up 38 GB of space.
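
Those figures are consistent with complex128 storage (16 bytes per element), taking roughly uniform 255-orbital blocks as a stand-in for the random sizes:

```python
n_E, N = 100, 5105
dense_gib = n_E * N**2 * 16 / 2**30  # ~38.8 GiB: the quoted ~38 GB

sizes = [255] * 20                   # ~average of the random block sizes
diag = sum(n * n for n in sizes)     # A_i blocks
offdiag = 2 * sum(a * b for a, b in zip(sizes[:-1], sizes[1:]))  # B_i, C_i
btd_gib = n_E * (diag + offdiag) * 16 / 2**30
# ~5.6 GiB; padding and bookkeeping plausibly account for the rest of the ~7 GB
```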

sofiasanz commented 2 years ago

OK, that seems undoable if we want to run calculations locally on our laptops...

AleksBL commented 2 years ago

Hi, I have something partially working right now on the graphene strip, but I'm encountering an error where the self-energy passed to the _G function has shape (4,) instead of the usual (4, 4). It's not really a problem to tell the code what to do when it receives this, but is it on purpose that a self-energy array with this shape is given?

sofiasanz commented 2 years ago

Hi @AleksBL, which script are you using to test it? I, for instance, checked test-hubbard-open.py and the shape of the self-energy matrix is (4, 4), not (4,), when passed into the _G function... I'm not sure why you are getting that, but you are right that in any case it should be in matrix form, not a vector.
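
A minimal defensive check along those lines (a hypothetical helper, not part of the code) would turn the silent (4,) into a loud error:

```python
import numpy as np

def check_self_energy(SE):
    # Hypothetical guard: self-energies entering _G should be square matrices.
    SE = np.asarray(SE)
    if SE.ndim != 2 or SE.shape[0] != SE.shape[1]:
        raise ValueError(f"expected a square self-energy matrix, got shape {SE.shape}")
    return SE
```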

sofiasanz commented 2 years ago

Hi @AleksBL, I think you were right: there was an error in the shape of the nested lists that contain the self-energies. I fixed it in commit 8c0113afdc3a0edc25f39cdbd592c25c9a2a8731 in the branch green-matrix. Thank you for bringing this issue up :-).

AleksBL commented 2 years ago

Okay cool, I'll get back to it soon :)