Open dmcarmen opened 2 years ago
Can you generate 3-dimensional interpolators for your data, for example, using scipy.interpolate.RegularGridInterpolator? That might speed up your interpolation.
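For anyone finding this later, here is a minimal sketch of the idea, assuming the data live on a regular grid; the axes and the `density` field below are placeholders, not the actual dataset from this issue:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical regular grid axes and a scalar field (e.g. density) sampled on them.
x = np.linspace(0.0, 1.0, 20)
y = np.linspace(0.0, 1.0, 20)
z = np.linspace(0.0, 1.0, 20)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
density = np.sin(X) * np.cos(Y) + Z  # placeholder data

# Build the interpolator once at startup, then evaluate it cheaply per model point.
interp = RegularGridInterpolator((x, y, z), density)
points = np.array([[0.5, 0.5, 0.5], [0.1, 0.9, 0.3]])
values = interp(points)
```

Building the interpolator once and querying it per point avoids re-scanning the raw table on every call.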
Hi, that guided me in the right direction, thank you so much! I ended up using scipy.spatial.KDTree to find the nearest points; I think scipy.interpolate.LinearNDInterpolator could also work for this case. (I didn't use scipy.interpolate.RegularGridInterpolator because the dataset was large and it gave me problems.) Just in case this helps someone.
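The KDTree approach can be sketched roughly as follows; the random point cloud and the field values are stand-ins for the real data read from the text file:

```python
import numpy as np
from scipy.spatial import KDTree

# Hypothetical scattered sample positions and the values measured at them.
rng = np.random.default_rng(0)
samples = rng.random((1000, 3))          # (N, 3) point cloud
values = samples[:, 0] + samples[:, 1]   # placeholder field values

# Build the tree once; each query is then O(log N) rather than a linear scan
# over the whole list for every model grid point.
tree = KDTree(samples)
query = np.array([[0.2, 0.4, 0.6]])
dist, idx = tree.query(query)            # nearest sample per query point
nearest_value = values[idx[0]]
```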
Hi! I am trying to run the program in parallel using pylime. In the model I am preparing, I read the data (velocity, density, ...) from a text file once at the beginning, and then I interpolate by looking for the nearest points. Searching the whole list for every grid point takes a lot of time compared to a normal run where the values come from analytic equations.
One possible solution I had in mind was running the program in parallel to make it faster. As far as I understand, only the iterations part can be run in parallel, not the part where the velocity, density, etc. values are calculated for each point. Am I right? If so, is there any easy way to make this run in parallel? Also, I'd rather do it with pylime, but I can try to do it in C. Does the same limitation apply there?
Also, when running pylime with the -p nThreads option I get this warning: You cannot call a python velocity function when multi-threaded. I think this is the same problem as above, but I would like to know for sure what exactly this warning refers to.