Closed Jean-Roc closed 1 year ago
I've stopped the process and updated to the latest conda build, since the 2.2 changelog lists improvements to Entwine build time, and the new statistics output (similar to untwine --stats, I guess) will avoid running an additional long step (pdal info / entwine scan) to get them.
So far, with the same parameters, the CPU load does not drop below 35%, and 7 cores are consistently around 90%. It seems the processing still doesn't fully use all the available resources. I played it safe by setting fewer threads than CPU cores, but maybe I should have set a higher count.
Looking at the console output, the main change is that Entwine 2.2 keeps adding new laz files as soon as one is done, whereas the previous attempt added and processed only one file at a time.
The processing isn't exactly linear in time, but the ETA should be in days instead of weeks!
edit: I might have been too enthusiastic; the progress rate is gradually dropping and may soon reach the same level as 2.1.
It started at 1,600 M/h and is now at 322 M/h after 85 h; the CPU load is similar to the previous try (9-10%). New files keep getting added, sometimes 2 in the same minute.
This sounds like poor performance, but this thread is quite old, so I'll close it; it may be reopened with a reproducible test case if needed. In general, all specified threads are expected to be nearly fully utilized when building.
Hi,
We are trying to build an EPT dataset using Entwine 2.1 on Windows 10 in a Conda environment. The input source is an aerial survey of 2,226 laz files (1.14 TB, 118 billion points). The temp directory is on an SSD, the input and output are on other disks, and 256 GB of RAM are available.
The entwine build command specifies the use of 20 threads and laszip as the data type; however, after two weeks of processing, we observe that the average CPU load for this process is ~9%. After a short spike when starting on a new laz file (one at a time), the load can be seen distributed across multiple cores in the task manager, but not up to 20, and each with a low CPU rate.
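For reference, a setup like the one described above can be expressed as an Entwine JSON configuration passed via `entwine build -c config.json`. This is only a sketch of our intent; the paths are placeholders, and the exact option names should be checked against the Entwine documentation for the version in use:

```json
{
    "input": "D:/survey/laz/",
    "output": "E:/ept/survey/",
    "tmp": "C:/entwine-tmp/",
    "threads": 20,
    "dataType": "laszip"
}
```

The temp directory points at the SSD, while input and output live on separate disks, as in our actual run.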
Could you tell me if this is normal threading behaviour, or whether it may be due to a misconfiguration on our side? Would starting over with the recently released 2.2 be of any help?
Regards, Jean-Roc Morreale