qjhart opened 4 years ago

There is a lag on the production server that makes the web service fail. Preliminary indications are that the sqlite queries are much slower on the new production server. We need to figure out why that is. Below is an example of how you can test some basic sql statements.
Investigation of the application shows that the slowdown is from the sqlite command. We can replicate this on the command line. Further, we can show that the slowdown is because the database file sits on an NFS mount instead of a native filesystem, as it did on the old machine.
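One quick way to confirm that the file really does sit on NFS is df -T, which prints the filesystem type (e.g., nfs4 versus ext4) of the filesystem containing the file:

df -T ~cimis/cimis_zip.db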
We can show this by copying the ~cimis/cimis_zip.db file to a local location (e.g., ~qhart). Then we can time the queries over many zipcodes:
for db in ~cimis/cimis_zip.db ~qhart/cimis_zip.db; do
  for y in 2016 2019; do
    for z in 94501 94502 94506 94507 94523 94525; do
      echo "$db $y $z"
      time sqlite3 "$db" "select * from zipcode_daily where d_date in ('${y}-10-01','${y}-10-02','${y}-10-03','${y}-10-04','${y}-10-05','${y}-10-06','${y}-10-07','${y}-10-08','${y}-10-09','${y}-10-10','${y}-10-11','${y}-10-12','${y}-10-13','${y}-10-14','${y}-10-15','${y}-10-16','${y}-10-17','${y}-10-18','${y}-10-19','${y}-10-20') and zipcode in ('${z}') order by d_date,zipcode"
    done
  done
# `time` writes to stderr, so merge stderr into the pipe for tee to capture the timings
done 2>&1 | tee ~qhart/test.out
The results show that the SQLite queries each take about 8 seconds on the NFS mount versus about 0.05 seconds on the local filesystem. (SQLite takes a file lock for every query, and those locking round trips are likely what is so costly over NFS.) This slowdown makes the wms.cgi fail under any load at all.
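If the database has to stay on NFS, one mitigation worth testing is SQLite's immutable=1 URI flag, which tells SQLite the file will never change so it can skip locking and change detection. This is only safe for a copy that is never written to, and the absolute path here assumes ~cimis resolves to /home/cimis:

# read-only open that skips locking; unsafe if the file is ever updated in place
time sqlite3 'file:/home/cimis/cimis_zip.db?immutable=1' "select count(*) from zipcode_daily"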
The script below runs the queries via the webserver.
for z in 94501 94502 94506 94507 94523 94525; do
  time curl "http://localhost/wms/wms.cgi?TIME=2019-12-01:2019-12-10&VERSION=1.1&REQUEST=GetFeatureInfo&ZIPCODE=$z"
done
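To make the output easier to scan, curl can report just the total request time via its -w format option instead of wrapping each call in time; a sketch:

for z in 94501 94502 94506 94507 94523 94525; do
  # discard the response body and print one timing line per zipcode
  curl -s -o /dev/null -w "$z took %{time_total}s\n" "http://localhost/wms/wms.cgi?TIME=2019-12-01:2019-12-10&VERSION=1.1&REQUEST=GetFeatureInfo&ZIPCODE=$z"
done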
It was suggested that a symbolic link to a local copy would fix this. Tested with:
cp cimis_zip.db /var/tmp
ln -s /var/tmp/cimis_zip.db foo.db
Then:
for db in ~cimis/foo.db; do
  for y in 2016 2019; do
    for z in 94501 94502 94506 94507 94523 94525; do
      echo "$db $y $z"
      time sqlite3 "$db" "select * from zipcode_daily where d_date in ('${y}-10-01','${y}-10-02','${y}-10-03','${y}-10-04','${y}-10-05','${y}-10-06','${y}-10-07','${y}-10-08','${y}-10-09','${y}-10-10','${y}-10-11','${y}-10-12','${y}-10-13','${y}-10-14','${y}-10-15','${y}-10-16','${y}-10-17','${y}-10-18','${y}-10-19','${y}-10-20') and zipcode in ('${z}') order by d_date,zipcode"
    done
  done
done
This seems to do the trick!
The symbolic link improves performance for sqlite, but the wms.cgi call won't follow the symlink. Could wms.cgi be configured to look at a different location, like /var/tmp, @qjhart?
There are two issues at stake, I guess: the location of the sqlite file, and the location of the Grassdb. The wms.cgi file accesses the cimis.db database via a grass call, cg.cgi. This call reads the CG_ZIPCODE_DB parameter in the ~/.grass7/rc file and uses it as the location of the zipcode database. So, yes, you can easily move the cimis.db file to someplace like /var/cache/cimis.db or /var/lib/cimis.db, and that access will go much quicker.
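A minimal sketch of that move, assuming CG_ZIPCODE_DB already appears in the rc file in the usual GRASS "KEY: value" line format (the source path here is illustrative):

# copy the database to a fast local filesystem
cp ~cimis/cimis.db /var/lib/cimis.db

# point CG_ZIPCODE_DB at the new location in the GRASS rc file
sed -i 's|^CG_ZIPCODE_DB:.*|CG_ZIPCODE_DB: /var/lib/cimis.db|' ~/.grass7/rc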
There is a second issue, however: my recollection is that the entire /home/cimis directory is linked to the NFS mount. This means the NFS mount will still be used to access the ~/.grass7/rc file. If your mount is unreliable, then a better solution might be to move /home/cimis back to the local drive and then NFS-mount only /home/cimis/gdb and the temp image directories. In that case, the cimis.db file can stay put.
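A rough sketch of that layout as /etc/fstab entries; the server name and export paths are hypothetical, and the image directory stands in for wherever the temp images actually live:

# /home/cimis itself stays on the local disk; only the bulky subtrees come from NFS
nfs-server:/export/cimis/gdb     /home/cimis/gdb     nfs  defaults  0  0
nfs-server:/export/cimis/images  /home/cimis/images  nfs  defaults  0  0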
None of these will fix point queries, however, so, as has already been discussed multiple times, the best solution is probably not to use an NFS mount at all, as in the previous setup.