anjaroesel / netcdf4-python

Automatically exported from code.google.com/p/netcdf4-python

netCDF file writing uses large computing resources #161

Closed GoogleCodeExporter closed 8 years ago

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?

1. Create a netCDF file with two variables of dimension (731, 140, 180) and 
close it.
2. Reopen the file in another script and append the result of a correlation 
calculation at each (x, y, z) location.
3. The problem: top shows Python using a huge amount of memory and 100% CPU, 
which makes the system hang.

What is the expected output? What do you see instead?
A netCDF file with the correlation values and lags in it.
I expected the code not to use much memory, since the netCDF file is only 
opened and just two variables of length 731 are written on each loop 
iteration, but instead I see a large amount of memory being used.

What version of the product are you using? On what operating system?
I use Python 2.7.3 with the latest netCDF4,
on an Ubuntu 12.04 machine with 4 GB of RAM.

Please provide any additional information below.

Original issue reported on code.google.com by sjo.India@gmail.com on 19 Feb 2013 at 9:08
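
A minimal sketch of the file layout described in step 1, assuming netCDF4-style 
calls; the dimension names and the second variable name are illustrative 
(only 'r3d' appears later in the thread), and the reporter's actual 
create_ncf.py was attached below.

from netCDF4 import Dataset

# Sketch only: create a template file holding two (731, 140, 180) variables.
nc = Dataset('template.nc', 'w')
nc.createDimension('lag', 731)   # dimension names are assumptions
nc.createDimension('lat', 140)
nc.createDimension('lon', 180)
nc.createVariable('r3d', 'f4', ('lag', 'lat', 'lon'))     # 'r3d' is used later in the thread
nc.createVariable('lag3d', 'f4', ('lag', 'lat', 'lon'))   # hypothetical second variable
nc.close()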

GoogleCodeExporter commented 8 years ago
Impossible to know what's going on without seeing the script.  Can you include 
your script (with data) so I can test it? You could attach the script here and 
put the data on an ftp site.

Original comment by whitaker.jeffrey@gmail.com on 19 Feb 2013 at 1:48

GoogleCodeExporter commented 8 years ago
Thank you,
                I have attached the two scripts I was referring to. The first, create_ncf.py, creates the netCDF template; the second, gen_xcorr_wnd.py, opens a netCDF file, calculates the lagged correlation at each location, and saves it to the netCDF template file. If we run top we can see the memory build up with each iteration, and it takes the full memory before completion. I am not able to share the data (it is 844 MB) right now because my home network is very slow; I can send the data file by tomorrow. But the issue appears to be there regardless of which data is used.

Original comment by sjo.India@gmail.com on 19 Feb 2013 at 4:47

GoogleCodeExporter commented 8 years ago
Dear Whitaker,
              Please find below the ftp details for the data. It is compressed and needs to be unzipped. Thanks a lot for extending a helping hand. While running the code, if we monitor top we can see the memory usage increase to a large extent, until it occupies almost the full memory. That should not happen, as the file to be written is only about 150 MB.

with best regards,
Sudheer
ftp ftpser.incois.gov.in
user temp
password incoistemp
cd /home0/temp/comp
bin
mget qu_test.nc.gz

gunzip qu_test.nc.gz

Original comment by sjo.India@gmail.com on 20 Feb 2013 at 12:36

GoogleCodeExporter commented 8 years ago
Dear Whitaker,

I have made a workaround, but it is not an efficient one.
I turned the second script into a function and call it from the Python script 
below, which in turn is called by a bash script, i.e. bash calls do_xcorr.py 
below. But the speed is highly compromised. I also tried it without bash (just 
calling the Python function from the second Python script), and it still 
builds up memory usage. Only when we exit Python is the memory released... 
which is rather worrying...

$ cat do_xcorr.bash

#!/bin/bash
# loop over every (lat, lon) grid index and run the correlation for that point
for i in `seq 0 139`; do
  for j in `seq 0 179`; do
    echo $i $j
    ./do_xcorr.py $i $j 200
  done
done
$ cat do_xcorr.py
#!/usr/bin/python
import sys

# grid indices and depth passed in (as strings) by the bash driver above
slat = sys.argv[1]
slon = sys.argv[2]
sdep = sys.argv[3]

from gen_xcorr import calc_xcorr
calc_xcorr(slat, slon, sdep)

Original comment by sjo.India@gmail.com on 20 Feb 2013 at 6:52

GoogleCodeExporter commented 8 years ago
I'm unable to download the file on the slow internet connection I have right 
now, but here are a couple of comments from looking at your script.  

1) You are opening and closing files at each iteration of the loop.  There is 
no need to do this; just open the files before entering the loop and read/write 
data from them within the loop.  This will speed things up quite a bit.

2) Try commenting out the call to plt.xcorr within the loop to see if the 
memory leak is coming from matplotlib.  I suspect a new matplotlib figure 
instance is being created at each iteration.

Original comment by whitaker.jeffrey@gmail.com on 20 Feb 2013 at 1:45
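
A minimal sketch of suggestion (1) above, with assumed file and variable names 
('qu_test.nc' is the data file from the ftp listing, 'r3d' appears later in 
the thread, and compute_lagged_corr stands in for the reporter's calculation):

from netCDF4 import Dataset

# Open both files once, outside the loops, instead of once per grid point.
src = Dataset('qu_test.nc', 'r')      # input data
out = Dataset('template.nc', 'a')     # template created by create_ncf.py (assumed name)
r3d = out.variables['r3d']

for j in range(140):
    for i in range(180):
        series = src.variables['u'][:, j, i]   # 'u' is an assumed input variable name
        p = compute_lagged_corr(series)        # placeholder for the per-point calculation
        r3d[:, j, i] = p[0:731]

src.close()
out.close()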

GoogleCodeExporter commented 8 years ago
Thank you,
               I shifted the opening and closing into the loop to test whether it would improve the situation; initially I had opened the nc file only once (for both reading and writing), and the issue was the same. I will check the figure-instance issue; I think the figure instance can be closed explicitly. I am not able to find an alternative for cross-correlation without writing additional code for it. Ideally numpy should have a cross-correlation function, which it does not have now.
with best regards,
Sudheer

Original comment by sjo.India@gmail.com on 20 Feb 2013 at 2:13
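
For reference, a lagged cross-correlation can be computed with numpy alone 
(np.correlate), avoiding matplotlib entirely. A rough sketch; the 
normalization is an assumption and may differ from what plt.xcorr does:

import numpy as np

def lagged_corr(x, y, maxlags):
    # normalized cross-correlation of two equal-length 1-D series
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    c = np.correlate(x, y, mode='full')
    c = c / np.sqrt(np.dot(x, x) * np.dot(y, y))
    n = len(x)
    lags = np.arange(-maxlags, maxlags + 1)
    return lags, c[n - 1 - maxlags : n + maxlags]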

GoogleCodeExporter commented 8 years ago
Hi,
         I have added a plt.close('all') inside the loop. Now the memory is not building up like earlier, but the code has become extremely slow.

Original comment by sjo.India@gmail.com on 20 Feb 2013 at 3:05

GoogleCodeExporter commented 8 years ago
The bottleneck is not the netCDF I/O then; it's the matplotlib xcorr call.  
Closing this issue.

Original comment by whitaker.jeffrey@gmail.com on 20 Feb 2013 at 3:25

GoogleCodeExporter commented 8 years ago
However, 
            I tested with plt.xcorr avoided entirely, modifying the line as below:
ncf.variables['r3d'][:,j,i]=p[0:731]
The code has become very slow (it stops in between and continues after a while), 
even compared to the plt.xcorr call (until it occupies memory close to the RAM 
size).
top -p 4819 shows the following:
 PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
 4819 sjo       20   0  885m 531m  15m D    0 13.6   0:23.32 python

Original comment by sjo.India@gmail.com on 20 Feb 2013 at 3:40
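
A closing note on the write pattern quoted above: the assignment 
ncf.variables['r3d'][:,j,i] = p[0:731] writes one strided pencil spanning the 
whole first dimension for every grid point, which can be slow because each 
write touches many non-contiguous parts of the file. A hedged sketch of 
buffering one latitude row of results and writing a contiguous block per row 
instead (compute_lagged_corr_at is a placeholder for the per-point calculation, 
and the file and variable names are assumptions):

import numpy as np
from netCDF4 import Dataset

out = Dataset('template.nc', 'a')         # assumed template file name
r3d = out.variables['r3d']

row = np.empty((731, 180), dtype='f4')    # buffer one latitude row of results
for j in range(140):
    for i in range(180):
        row[:, i] = compute_lagged_corr_at(j, i)   # placeholder
    r3d[:, j, :] = row                    # one larger write per row instead of 180 small ones
out.close()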