@guygriffiths, tagging you here just in case you are curious or have some comment...
@rsignell-usgs Thanks for doing this testing. We can definitely get better results than that. Some thoughts, based on quick tests with a similar dataset (same grid, just one timestep):
As you suspected, there was no domain caching for unstructured grids. I've added that and the other changes to the develop branch. If you're able to test that out, you should see some considerable speed-ups. If there's anything which could benefit from more optimisation let me know and I'll look into it.
@guygriffiths, this sounds awesome! To test these improvements, it should be sufficient for me to do:

```bash
git checkout develop
git pull origin develop
mvn clean install
```

and then move the resulting `edal-wms-1.2.4-SNAPSHOT.jar` into `./ncWMS2/WEB-INF/lib/` and restart Tomcat, right?
@rsignell-usgs - No, you'll have to do a full rebuild for this one, I'm afraid. The conversion to use Apache SIS changes things throughout EDAL and ncWMS.
Or you can download the fresh build I just put up here: http://www.personal.rdg.ac.uk/~qx901922/ncWMS2.war
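In other words, something like the following rather than swapping a single jar. This is a rough sketch, assuming both EDAL and ncWMS are rebuilt from their develop branches; the repository directories, WAR output path, and Tomcat locations are all placeholders:

```bash
# Rebuild EDAL first, then ncWMS against it (all paths are assumptions):
cd edal-java && git checkout develop && git pull && mvn clean install && cd ..
cd ncWMS && git checkout develop && git pull && mvn clean package && cd ..
# Deploy the whole freshly built WAR rather than a single jar.
cp ncWMS/target/ncWMS2.war /var/lib/tomcat/webapps/
sudo service tomcat restart
```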
Just tested. OMG. Amazing performance enhancement. And all I had to do was ask! :smiley_cat: Thank you!
The projection improvement is amazing. Loading scalars now takes only a few seconds, and loading vector fields takes tens of seconds instead of minutes.
@guygriffiths, when do you think the next release of ncWMS incorporating these massive improvements will come out?
I can probably do a release tomorrow. I'm just in the process of fixing a bug which will have a big impact on someone else's stuff, so it makes sense to release it once that's done.
In the context of evaluating ncWMS2 performance in TerriaJS vs. Godiva3, I've done some testing on ncWMS2 with the NECOFS GOM3 data.
Title: NECOFS GOM3 Grid
Dataset: A single NetCDF3 file at SMAST, modified with NcML in the THREDDS Catalog to be UGRID compliant, and accessed via OPeNDAP from the SMAST THREDDS Catalog here: http://www.smast.umassd.edu:8080/thredds/forecasts.html?dataset=gom3_nocache
Grid: 53087 nodes, 99137 elements (all triangles)
WMS Endpoint: http://www.smast.umassd.edu:8080/ncWMS2/
GetCapabilities request: http://www.smast.umassd.edu:8080/ncWMS2/wms?SERVICE=WMS&REQUEST=GetCapabilities&VERSION=1.3.0
Test 1: Performance in Godiva3
Surface velocity, full extent, defaults. Took 19 seconds, dominated by the 16 tile requests to ncWMS2, each of which took 1-16 seconds to execute. It's clear that the tile requests are being processed in parallel on the server to some extent, since the times for the individual tiles add up to at least 70 seconds. So perhaps four processes are being used on the server? The server has 16 Xeon CPUs with 4 cores each, so perhaps this means the process runs on a single CPU but can use all 4 cores?
Here's an example of one of the tile requests: http://www.smast.umassd.edu:8080/ncWMS2/wms?FORMAT=image%2Fpng&TRANSPARENT=TRUE&STYLES=default-vector%2Fdefault&LAYERS=FVCOM-NECOFS-GOM3%2Fu%3Av-group&TIME=2016-10-20T10%3A01%3A52.500Z&ELEVATION=0&COLORSCALERANGE=-0.1529%2C3.257&NUMCOLORBANDS=250&ABOVEMAXCOLOR=0x000000&BELOWMINCOLOR=0x000000&BGCOLOR=transparent&LOGSCALE=false&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&SRS=EPSG%3A4326&BBOX=-71.652868489587,37.467213541663,-65.279507812504,43.840574218746&WIDTH=256&HEIGHT=256
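To put the parallelism guess on firmer footing, a rough probe like the following could compare total wall-clock time at different client-side concurrency levels. This is only a sketch: `TILE_URL` stands for a GetMap tile request like the one above, and in practice each of the 16 requests should use a different BBOX so cached responses don't skew the result:

```bash
# Rough server-concurrency probe: if total wall time keeps shrinking as -P
# grows, the server is processing roughly that many tiles in parallel.
TILE_URL="http://www.smast.umassd.edu:8080/ncWMS2/wms?...REQUEST=GetMap..."  # placeholder
for P in 1 2 4 8 16; do
  echo "client concurrency = $P"
  time (seq 1 16 | xargs -P "$P" -I{} curl -s -o /dev/null "$TILE_URL")
done
```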
Testing a second time step also took 19 seconds of wall-clock time, so there was no speedup on an additional time step. Perhaps there's no caching of the tree structure or topology, then?
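One way to test that hypothesis (a sketch; `TILE_URL_NO_TIME` is assumed to be the tile request above with its TIME parameter stripped): if the domain or topology were cached after the first request, the second timestep should come back much faster.

```bash
# Time the same tile at two consecutive timesteps. Similar times would
# suggest the spatial index is rebuilt on every request rather than cached.
TILE_URL_NO_TIME="http://www.smast.umassd.edu:8080/ncWMS2/wms?...REQUEST=GetMap..."  # placeholder
for T in 2016-10-20T03:00:00.000Z 2016-10-20T04:00:00.000Z; do
  curl -s -o /dev/null -w "TIME=$T -> %{time_total}s\n" "${TILE_URL_NO_TIME}&TIME=$T"
done
```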
This GetFeatureInfo request at a specific location took 3 minutes! http://www.smast.umassd.edu:8080/ncWMS2/wms?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetFeatureInfo&LAYERS=FVCOM-NECOFS-GOM3%2Fu%3Av-group&QUERY_LAYERS=FVCOM-NECOFS-GOM3%2Fu%3Av-group&STYLES=default-vector%2Fdefault&BBOX=-71.553965%2C40.731524%2C-68.566452%2C43.121534&FEATURE_COUNT=5&HEIGHT=600&WIDTH=750&FORMAT=image%2Fpng&INFO_FORMAT=text%2Fxml&SRS=EPSG%3A4326&X=179&Y=25&TIME=2016-10-20T03%3A00%3A00.000Z&ELEVATION=0
This testing was done over my home internet connection in Falmouth: 20 Mbps download, 6 Mbps upload.
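Given the modest connection, it's worth confirming that the 3 minutes is server-side computation rather than transfer time. curl's timing breakdown can separate the two (a sketch; `GFI_URL` stands for the GetFeatureInfo request above). A time-to-first-byte close to the total means the server spent the time computing the response:

```bash
GFI_URL="http://www.smast.umassd.edu:8080/ncWMS2/wms?...REQUEST=GetFeatureInfo..."  # placeholder
curl -s -o /dev/null \
  -w "connect=%{time_connect}s  ttfb=%{time_starttransfer}s  total=%{time_total}s\n" \
  "$GFI_URL"
```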
Test 2: Performance in TerriaJS
Testing from the TerriaJS instance at http://gamone.whoi.edu/terriajs
Same full-extent surface velocity for one time step, using the 2D view. Now there are 384 tile requests, and each one takes minutes. This one, for example, took 6.4 minutes:
http://www.smast.umassd.edu:8080/ncWMS2/wms?time=2016-10-20T11%3A11%3A15Z&transparent=true&format=image%2Fpng&exceptions=application%2Fvnd.ogc.se_xml&styles=default-vector%2Fdiv-RdBu-inv&tiled=true&feature_count=101&colorscalerange=0%2C2&service=WMS&version=1.1.1&request=GetMap&layers=FVCOM-NECOFS-GOM3%2Fu%3Av-group&srs=EPSG%3A3857&bbox=-8766409.899970295%2C3757032.814272983%2C-7514065.628545966%2C5009377.085697312&width=256&height=256
One issue is that TerriaJS requests tiles in Web Mercator (EPSG:3857), while Godiva3 requests tiles in lon/lat (EPSG:4326), the native coordinates used in CF.
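For reference, the corners of the Web Mercator BBOX in the TerriaJS request above map back to lon/lat as follows (a sketch using GDAL's gdaltransform; output rounded):

```bash
printf '%s\n' "-8766409.899970295 3757032.814272983" \
              "-7514065.628545966 5009377.085697312" |
  gdaltransform -s_srs EPSG:3857 -t_srs EPSG:4326
# ~ -78.75 31.952  and  -67.5 40.980 : the same Gulf of Maine extent, but the
# server has to reproject the native lon/lat grid to Mercator for every tile.
```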