Closed. JBorrow closed this 6 days ago.
@JBorrow asked for a quick code review so here it is.
My main question, without having read the rest of the code, is the consistency between the h assumed by the kernel function and the h read in from Gadget/SWIFT/others.
@MatthieuSchaller, I think the consistency between the h assumed by sphviewer and that used in SWIFT/GADGET should be checked and ensured by the user. More kernels can be added if needed. One way around this would be to create dedicated tools that make the corrections.
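As an illustration, such a dedicated correction tool could be very small. A minimal sketch, assuming the snapshot stores a smoothing scale that differs from the cut-off radius sphviewer expects by a kernel-dependent factor; the gamma value shown is only an assumed example and must be checked against the conventions of the simulation code that wrote the file:

import numpy as np

# Hypothetical helper: convert the smoothing length stored by a simulation
# code into the kernel cut-off radius that sphviewer expects.
# The gamma values below are assumed examples only; the correct factor
# depends on the kernel and on the convention of the code that wrote h.
KERNEL_GAMMA = {
    "cubic_spline_3d": 1.825742,  # assumed example value (SWIFT-style convention)
    "cutoff_radius": 1.0,         # h already is the cut-off radius (Gadget-style)
}

def to_cutoff_radius(h, convention="cubic_spline_3d"):
    """Return the kernel cut-off radius for the stored smoothing length h."""
    return np.asarray(h) * KERNEL_GAMMA[convention]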
I rebased this on my new kernel improvements branch, so it should be a clean merge. Unfortunately, the diff is still huge as I had to re-write the whole render function as it was doing some quite incorrect things:
At the moment, the smoothing lengths are expected to be the kernel cut-off radius. This is not ideal, no.
I don't know how we would go about changing this without introducing huge breaking changes.
I suppose what we could do is use a macro to compile multiple versions of the c_render function with different kernels, and then choose that at run-time with python code.
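A sketch of the run-time selection half of that idea; the function and variant names below are hypothetical placeholders, not the existing py-sphviewer API:

# Hypothetical sketch: if the C extension were compiled (via a macro) into
# one render function per kernel, the Python layer could pick the variant
# by name at run time. The two stand-ins below are placeholders for those
# compiled functions.
def _c_render_cubic(*args):
    raise NotImplementedError("placeholder for a compiled kernel variant")

def _c_render_wendland2(*args):
    raise NotImplementedError("placeholder for a compiled kernel variant")

_RENDER_VARIANTS = {
    "cubic_spline": _c_render_cubic,
    "wendland_C2": _c_render_wendland2,
}

def render(*args, kernel="cubic_spline"):
    # Dispatch to the compiled variant chosen at run time.
    return _RENDER_VARIANTS[kernel](*args)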
I am now revising this. Below are some comments. I tried your code with very low resolution and it gave me a segmentation fault. Could you try it too, e.g. with 5x5 pixels? As I explain below, the fix for low-resolution images is not related to what you say, but to a very small fix that you introduced in your code (read below). This is now in master and it should make the image largely independent of the resolution.
Smoothing lengths were all integers, meaning that they could only ever span integer numbers of pixels. This led to a number of particles having hsml = 0 for cases where the resolution was low (hence the incorrect results).
All visualisations assumed that the kernel was centred on the bottom left corner of each pixel, and computed the distance from that point to the bottom left corner of the adjacent pixel. This is now changed to use the floating-point distance between the particle's actual position and the centre of the pixel we're smoothing onto (this and the pixel-adding fallback below are illustrated in the sketch after this list).
There was no criterion for falling back into pixel-adding mode. I have added that here, consistent with what I do in swiftsimio, and I think consistent with what Daniel Price's code does.
I don't know what you mean by pixel-adding mode. Could you please expand on this?
There were a number of very oddly named variables that made no sense, and no comments, so it was quite hard to follow.
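To make the two geometric changes above concrete, here is a minimal sketch in the spirit of the rewrite; the kernel argument is a placeholder for any normalised 2D SPH kernel, and none of the names here are the actual c_render implementation:

import numpy as np

def smooth_particle(image, px, py, h, mass, kernel):
    # Deposit one particle onto image; px, py and h are in pixel units.
    # Simplified illustration of the geometry only, not the real c_render code.
    ny, nx = image.shape

    # Pixel-adding fallback: the kernel support is smaller than a pixel, so
    # put the full mass into the single pixel containing the particle.
    if h <= 0.5:
        i, j = int(py), int(px)
        if 0 <= i < ny and 0 <= j < nx:
            image[i, j] += mass
        return

    # Otherwise, use the floating-point distance between the particle's
    # actual position and the centre of every pixel the kernel overlaps.
    for i in range(max(int(py - h), 0), min(int(py + h) + 1, ny)):
        for j in range(max(int(px - h), 0), min(int(px + h) + 1, nx)):
            r = np.hypot(px - (j + 0.5), py - (i + 0.5))
            if r < h:
                image[i, j] += mass * kernel(r, h)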
Although the missing contribution of particles below the kernel size fixes the problems, if you make a plot of total mass recovered as a function of resolution you will see that there is still a difference of about 3% in mass at intermediate resolutions. This is due to some more fundamental problem. The solution will require to stop using a continuous kernel to do the calculation but introduce the relevant correction due to the discreteness of the image.
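One possible form of such a discreteness correction (an assumption here, not necessarily the one intended above) is to renormalise each particle's kernel weights over the pixels it actually hits, so that the deposited mass is exact at any resolution. A sketch, following on from the smooth_particle example further up:

import numpy as np

def smooth_particle_conservative(image, px, py, h, mass, kernel):
    # Like smooth_particle above, but renormalises the discrete kernel sum
    # so that exactly `mass` lands in the image, whatever the pixel size.
    # Only a sketch of one possible discreteness correction.
    ny, nx = image.shape
    ii = np.arange(max(int(py - h), 0), min(int(py + h) + 1, ny))
    jj = np.arange(max(int(px - h), 0), min(int(px + h) + 1, nx))
    if len(ii) == 0 or len(jj) == 0:
        return

    # Kernel evaluated at every overlapped pixel centre (kernel must accept arrays).
    r = np.hypot(px - (jj[None, :] + 0.5), py - (ii[:, None] + 0.5))
    w = np.where(r < h, kernel(r, h), 0.0)

    total = w.sum()
    if total == 0.0:
        # The kernel misses every pixel centre: fall back to pixel adding.
        i = min(max(int(py), 0), ny - 1)
        j = min(max(int(px), 0), nx - 1)
        image[i, j] += mass
        return

    # Renormalise so the sum of the deposited contributions equals `mass`.
    image[np.ix_(ii, jj)] += mass * w / total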
Once this is done and understood, I am happy to start considering code style.
I've fixed the segfault bug. It was due to the variable remainder being defined but uninitialised somewhere in global scope. In response to your other comments:
Please can you revert your minor change - it causes merge conflicts with this branch!
OK, your branch works now. It doesn't seem to return the right mass for low-res images though.
Can you provide example code? All of my code reproduces exactly the mass that I put in.
Having the distances computed pixel-to-pixel is extremely problematic. Consider the case where we have two pixels, with a particle just on the border, and a smoothing length of 0.8. This should overlap equally with the cells on the left and right, giving a contribution to them both. In your code, it would give the full contribution of one particle to the cell that it is just within.
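A small numerical sketch of that boundary case, using a placeholder triangular kernel purely for illustration (the exact split depends on the kernel actually used):

import numpy as np

px, h = 1.0, 0.8                # particle exactly on the border between pixels 0 and 1
centres = np.array([0.5, 1.5])  # centres of the two neighbouring pixels

# Placeholder triangular kernel weight at each pixel centre, normalised.
w = np.clip(1.0 - np.abs(px - centres) / h, 0.0, None)
w /= w.sum()
print(w)  # [0.5 0.5]: the contribution should be shared equally between both pixels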
This is not extremely problematic. The particle is below the resolution, so that's fine to me. This is harmless.
If we are going to only have one kernel then the macro expansion is unneeded.
We could have many more. That's not a problem.
The particle is fundamentally not below the resolution limit though, its kernel overlaps with strictly two or more cells.
This doesn't pass the test below; the summed image (the recovered mass) should be the same at every resolution:
import numpy as np
from sphviewer.tools import QuickView
import matplotlib.pyplot as plt

pos = np.random.rand(10000, 3)
mass = np.ones(10000)

total = []
for res in [5, 10, 20, 40, 80, 160, 320, 640, 1280]:
    qv = QuickView(pos, mass=mass, xsize=res, ysize=res,
                   x=0.5, y=0.5, z=0.5, extent=[-2, 2, -2, 2],
                   r='infinity', logscale=False, plot=False)
    total.append(np.sum(qv.get_image()))

print("SUMAS = ", total)
The particle is fundamentally not below the resolution limit though, its kernel overlaps with strictly two or more cells.
Putting the particle at the edge just moves it around at the pixel level, i.e., at the resolution limit of the image. It doesn't matter if the particle moves by one pixel as long as the mass is conserved, so I see this issue as occurring below the resolution of the image.
Fixes incorrect smoothing on small scales as reported in #16.
This also includes the changes to the kernels in another MR (#18); please merge that one first, and then we'll review this.
It is also about 5% faster (with gcc 9.1.0 on my MacBook Pro), as well as giving correct answers:
Old vs. new output: (not reproduced here)
I have also removed c_code.c, which was completely unused.