ankujawa opened this issue 1 year ago
I'm not sure I understand the issue. Do you think that Accelerad should use more memory on the GPU? Accelerad's GPU memory usage is quite minimal, but it is not limited by anything other than the amount of memory available. In most cases, Accelerad uses global memory, which is also quickly accessible to the GPU.
Regarding speed, it's hard to say what speed-up you should expect without knowing more about the model you are running. Accelerad's parallelism is at the primary ray level, so if you have few sensor points, there will be no speed-up.
As a side note, it appears that you are trying to run multiple Accelerad instances simultaneously. This is not recommended because it forces the GPU to do context switching, which is slower than running the Accelerad instances one after the other.
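If it helps, here is a minimal sketch of what I mean by running them one after the other: a single rtrace process at a time, each finishing before the next starts. The file names are just placeholders, not anything from your setup.

import subprocess

octree = "scene.oct"                                    # placeholder octree
sensor_files = ["sensors_t01.pts", "sensors_t02.pts"]   # one file per timestamp (placeholders)

for pts in sensor_files:
    with open(pts) as fin, open(pts + ".out", "w") as fout:
        # Each call returns before the next one starts, so only one GPU context is active.
        subprocess.run(["rtrace", "-i", "-h", octree],
                       stdin=fin, stdout=fout, check=True)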
Hi Nathaniel, thanks for your quick and detailed reply. What you're saying confirms and explains what I observed: running multiple processes at once is not good, and increasing the number of sensors in my simulations resulted in a significant speed-up. It's good to hear that you don't find it strange that so little memory is used. It was just my impression that it should use more of the available memory, since the simulations were slow (when too few sensors was the real issue).
So now the question for me is why I get completely different results when doing the ray tracing with Accelerad. I will do more testing and try to figure it out. I will come back to you if I need more help.
I'd have to know more about your model and settings to understand why the results you get differ. However, common reasons for different results according to posts on the user group include bad sizing of the irradiance cache and geometry placed far from the origin. These and other factors are discussed on the documentation page.
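For the second point, the usual fix is simply to translate the geometry back toward the origin with xform before building the octree. A rough sketch (the offsets and file names here are made up for illustration):

import subprocess

# Translate geometry that sits at large world coordinates back near the origin,
# then rebuild the octree from the translated description.
with open("scene_centered.rad", "w") as out:
    subprocess.run(["xform", "-t", "-500000", "-4000000", "0", "scene.rad"],
                   stdout=out, check=True)
# then: oconv sky.rad scene_centered.rad > scene.oct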
In general, the model describes a greenhouse with photovoltaic panels attached to the roof. I would like to find out how irradiance is decreased by the PV structure (and its effect on plant development).
The greenhouse geometry is built up with genbox, genprism, etc. (metal material); the PV panels are represented by genboxes (black material). The dimensions are approx. 12 m x 30 m. The sky is created from the Perez diffuse sky model using meteorological data for the specified location. The scene then looks like this:
In the real world, greenhouses in the Mediterranean region are covered with simple white plastic. I implemented a custom trans material to represent the plastic cover. I defined it as:
#greenhouse plastic white
void trans greenhouse_plastic
0
0
7 1 1 1
0.05 0.02
0.7 0.3
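(My reading of the standard trans primitive, so please correct me if I misapplied it: the seven real arguments are red, green and blue reflectance = 1 1 1, specularity = 0.05, roughness = 0.02, transmissivity = 0.7 and transmitted specularity = 0.3.)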
The model evaluates irradiance 10 cm above the ground on a 250x250 grid of sensors for every timestamp (hourly values). After some testing, the rtrace command I'm calling now is: rtrace -i -ab 2 -aa .5 -ar 256 -ad 2048 -as 256 -ac 4096 -an 1000 -at 0.01 -h -oovs octfile.oct
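For concreteness, feeding the whole grid to a single rtrace call per timestamp looks roughly like the sketch below. The grid extents, spacing and file names are placeholders rather than the actual model values, and I have left out the Accelerad-specific flags.

import subprocess

# Sketch only: generate a 250x250 grid of upward-facing sensor points 10 cm
# above the ground and trace them in one rtrace call, so Accelerad receives
# all 62,500 primary rays at once. One process per timestamp, run sequentially.
NX = NY = 250
X0, Y0 = 0.0, 0.0                          # placeholder origin of the grid
DX, DY = 12.0 / NX, 30.0 / NY              # assumed ~12 m x 30 m footprint
Z = 0.1                                    # 10 cm above the ground

points = "\n".join(
    f"{X0 + (i + 0.5) * DX} {Y0 + (j + 0.5) * DY} {Z} 0 0 1"
    for j in range(NY) for i in range(NX)
) + "\n"

result = subprocess.run(
    ["rtrace", "-i", "-ab", "2", "-aa", ".5", "-ar", "256", "-ad", "2048",
     "-as", "256", "-h", "octfile.oct"],
    input=points, capture_output=True, text=True, check=True,
)
with open("irradiance_grid.txt", "w") as f:
    f.write(result.stdout)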
The results I get with Accelerad now look like this. One plot is the geometry with the trans material and the other is without it.
For comparison, this is the result that I got when I used the Radiance installation on Windows:
Do you have any idea what could cause this behavior?
Hi Nathaniel, I am referring to an issue that I already opened a few days ago on the GitHub page of bifacial_radiance by NREL: https://github.com/NREL/bifacial_radiance/issues/458. I am using the Python package bifacial_radiance to access the Radiance software; the irradiance analysis is performed by calling rtrace from within bifacial_radiance. I recently switched from running the simulations locally on Windows (11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz with 4 cores, no GPU) to a Linux computer with an NVIDIA Tesla M10 with 5 multiprocessors. I successfully installed Radiance and then Accelerad. The software finds the GPU, however memory usage is limited to approx. 700-800 MiB per rtrace process.
My question is: why is memory usage limited to these 700 MiB per process?
Running multiple simulations at once (one for each timestamp) did not change anything:
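(In case it matters how I am reading those numbers, this is roughly how I check per-process GPU memory while the rtrace processes run: I just poll nvidia-smi and look at the per-process memory column, which is where the ~700-800 MiB figure comes from.)

import subprocess, time

# Take a few samples of the nvidia-smi report while the simulations run.
for _ in range(3):
    print(subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout)
    time.sleep(60)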
Unfortunately, the results from these simulations also do not agree with comparable runs on Windows...
Is there something else that I can try? Thanks in advance!