jtarricone closed this 8 years ago
Here's an image from MMW's NLCD overlay of Ithaca NY which uses Tiler Perry tiles: https://c.tiles.azavea.com/nlcd/12/1177/1513.png
Here's the same tile from this service: http://localhost:8090/nlcd-agg-tiles/12/1177/1513.png
Some notes:
Querying for a tile that is outside the zoom limits results in 500. I think it should return a 404 instead.
```
> http :8090/nlcd-agg-tiles/13/2355/3026.png
HTTP/1.1 500 Internal Server Error
Content-Length: 35
Content-Type: text/plain; charset=UTF-8
Date: Wed, 07 Sep 2016 21:36:08 GMT
Server: spray-can/1.3.3

There was an internal server error.
```
Here's what the service output looks like:
```
[ERROR] [09/07/2016 21:36:08.039] [ForkJoinPool-4-worker-3] [akka://usace-programanalysis-geop/user/usace-programanalysis] Error during processing of request HttpRequest(GET,http://localhost:8090/nlcd-agg-tiles/13/2355/3026.png,List(Host: localhost:8090, User-Agent: HTTPie/0.9.4, Connection: keep-alive, Accept-Encoding: gzip, deflate),Empty,HTTP/1.1)
geotrellis.spark.io.package$AttributeNotFoundError: Attribute metadata not found for layer Layer(name = "nlcd-zoomed", zoom = 13)
	at geotrellis.spark.io.s3.S3AttributeStore.read(S3AttributeStore.scala:55)
	at geotrellis.spark.io.AttributeCaching$$anonfun$cacheRead$1.apply(AttributeCaching.scala:11)
	at scala.collection.mutable.MapLike$class.getOrElseUpdate(MapLike.scala:189)
	at scala.collection.mutable.AbstractMap.getOrElseUpdate(Map.scala:91)
	at geotrellis.spark.io.AttributeCaching$class.cacheRead(AttributeCaching.scala:11)
	at geotrellis.spark.io.s3.S3AttributeStore.cacheRead(S3AttributeStore.scala:20)
	at geotrellis.spark.io.BlobLayerAttributeStore$class.readHeader(AttributeStore.scala:60)
	at geotrellis.spark.io.s3.S3AttributeStore.readHeader(S3AttributeStore.scala:20)
	at geotrellis.spark.io.s3.S3ValueReader$$anon$1.<init>(S3ValueReader.scala:24)
	at geotrellis.spark.io.s3.S3ValueReader.reader(S3ValueReader.scala:23)
	at com.azavea.usace.programanalysis.geop.LayerReader$.catalog(LayerReader.scala:51)
	at com.azavea.usace.programanalysis.geop.LayerReader$.catalog(LayerReader.scala:41)
	at com.azavea.usace.programanalysis.geop.LayerReader$.apply(LayerReader.scala:30)
	at com.azavea.usace.programanalysis.geop.GeopServiceActor$$anonfun$tilesHandler$1$$anonfun$apply$5$$anonfun$apply$6.apply(GeopServiceActor.scala:91)
	at com.azavea.usace.programanalysis.geop.GeopServiceActor$$anonfun$tilesHandler$1$$anonfun$apply$5$$anonfun$apply$6.apply(GeopServiceActor.scala:90)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$3.exec(ExecutionContextImpl.scala:107)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: 70316D50CDBB614B), S3 Extended Request ID: c5t0sUD5cFQ43KoxogUsisvJRlp8eGwtGooYv20bmiAcRYl+E28jmzv0Qxd+UI4X81jKqygo7e8=
	at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1127)
	at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:743)
	at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:462)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:297)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3672)
	at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1160)
	at geotrellis.spark.io.s3.AmazonS3Client.getObject(S3Client.scala:149)
	at geotrellis.spark.io.s3.S3Client$class.getObject(S3Client.scala:48)
	at geotrellis.spark.io.s3.AmazonS3Client.getObject(S3Client.scala:121)
	at geotrellis.spark.io.s3.S3AttributeStore.geotrellis$spark$io$s3$S3AttributeStore$$readKey(S3AttributeStore.scala:39)
	at geotrellis.spark.io.s3.S3AttributeStore.read(S3AttributeStore.scala:52)
	... 21 more
```
This is the interesting part:
```
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: 70316D50CDBB614B), S3 Extended Request ID: c5t0sUD5cFQ43KoxogUsisvJRlp8eGwtGooYv20bmiAcRYl+E28jmzv0Qxd+UI4X81jKqygo7e8=
```
So S3 is returning a 404. We should pass that on, instead of returning a 500.
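One way to pass it on (a sketch only: the route plumbing is illustrative, `renderPngBytes` is a hypothetical stand-in for however the service encodes tiles, and `LayerReader` is the project's reader from the stack trace) is to catch the GeoTrellis/S3 errors in the spray route and complete with a 404 explicitly:

```scala
import spray.http.StatusCodes
import spray.routing.Directives._
import com.amazonaws.services.s3.model.AmazonS3Exception
import geotrellis.spark.io.AttributeNotFoundError

// Sketch: surface the underlying S3 404 as an HTTP 404 instead of letting
// the exception bubble up into spray's generic 500 handler.
val tileRoute =
  path("nlcd-agg-tiles" / IntNumber / IntNumber / IntNumber) { (zoom, x, y) =>
    get {
      try {
        complete(renderPngBytes(LayerReader(zoom, x, y, sc)))
      } catch {
        case _: AttributeNotFoundError => complete(StatusCodes.NotFound)
        case _: AmazonS3Exception      => complete(StatusCodes.NotFound)
      }
    }
  }
```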
This looks really good! Will take another look once some of the comments have been addressed.
> Querying for a tile that is outside the zoom limits results in 500. I think it should return a 404 instead.
I think that's a consequence of this:

```scala
val result: Option[Tile] =
  try {
    Some(LayerReader(zoom, x, y, sc))
  } catch {
    case _: TileNotFoundError => None
  }
```
I think spray responds with a 404 when it encounters a `None` return value. In this case, that happens only on a `TileNotFoundError` exception. It's worth considering adding something to the spray handler (?) to accept a message along with the result, so that we can be more specific when returning a 404: the tile wasn't found, or the S3 reader wigged out, or whatever.
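One sketch of that idea (names and plumbing here are illustrative, not the service's actual handler): replace `Option` with `Either` so the failure reason travels with the result and can be written into the 404 body.

```scala
import spray.http.StatusCodes
import spray.routing.Directives._

// Sketch: Either carries a reason for the miss, so the 404 can be specific.
val result: Either[String, Tile] =
  try {
    Right(LayerReader(zoom, x, y, sc))
  } catch {
    case _: TileNotFoundError      => Left("Tile not found")
    case _: AttributeNotFoundError => Left("Layer metadata not found")
  }

result match {
  case Right(tile)  => complete(tile.renderPng().bytes) // encode as the service does
  case Left(reason) => complete(StatusCodes.NotFound, reason)
}
```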
Not sure why our tile is 512×512, when the other one is 256×256. Would it be faster to do a smaller version?
I think that's the size as stored in S3; they're being fetched directly and not explicitly transformed, so that would be my guess. If that's the case, transforming them to 256 might actually hurt performance?
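If 256×256 ever did matter on the client side, GeoTrellis can resample on the way out; the exact `resample` signature varies by version, so this is only a sketch of what that per-request cost would look like:

```scala
import geotrellis.raster._

// Sketch: downsample a stored 512x512 tile to 256x256 before encoding.
// This is extra CPU work on every request, which is why serving tiles
// exactly as stored is likely faster.
def downsample(tile: Tile, extent: Extent): Tile =
  tile.resample(extent, 256, 256)
```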
So the exception raised above wasn't of the `TileNotFoundError` type? Because if it had been, it would return `None` and thus 404, but we are getting 500. Is there a different error we should also be catching and returning `None` for in that case?
Taking another look now.
> So the exception raised above wasn't of the `TileNotFoundError` type? Because if it had been, it would return `None` and thus 404, but we are getting 500. Is there a different error we should also be catching and returning `None` for in that case?
No, it was (originally) `AmazonS3Exception`, then it looks like it was caught as an `AttributeNotFoundError`, which is within the GeoTrellis domain. I tested getting bogus tiles at correct zoom levels, e.g. `12/1141/123456`, and it returned a 404 as expected; the 500 only appeared when trying to get something that would throw what was essentially an IO error of some kind.
I updated it to return a 404 for any failure, but it's still worth revisiting at some point.
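For reference, the broadened catch described above could look something like this (a sketch, not the exact diff: `NonFatal` is the standard way to say "any non-fatal failure"):

```scala
import scala.util.control.NonFatal

// Any failure while reading the tile now yields None, which spray's
// Option marshalling turns into a 404.
val result: Option[Tile] =
  try {
    Some(LayerReader(zoom, x, y, sc))
  } catch {
    case NonFatal(_) => None
  }
```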
Okay great! Now I'm getting a proper 404 for the same URL which was earlier giving me 500s:
```
> http :8090/nlcd-agg-tiles/13/2355/3026.png
HTTP/1.1 404 Not Found
Content-Length: 0
Date: Thu, 08 Sep 2016 14:33:59 GMT
Server: spray-can/1.3.3
```
+1. Except for minor formatting issues, this looks and works great! Nice job.
I think you're right: we shouldn't be transforming the tiles at all, and if they're coming through as 512×512 then so be it. We'll revisit this if it becomes an issue on the Leaflet side.
Connects https://github.com/azavea/usace-program-analysis/issues/166
This adds an endpoint to the geoprocessing service at `/nlcd-agg-tiles` that takes requests in the typical `{zoom}/{x}/{y}` form and returns a PNG rendered from the NLCD land cover raster data.

To test:

1. `cd` into the project directory.
2. `sbt "project geop" assembly` to compile the service.
3. `docker build -t usace-geop-test .`
4. `docker run --rm -v ~/.aws:/root/.aws -p 4040:4040 -p 8090:8090 usace-geop-test --driver-memory 2g`
5. `GET` request to `http://localhost:8090/ping` to ensure the service is available.
6. `GET` request to `http://localhost:8090/nlcd-agg-tiles/{zoom}/{x}/{y}`, where zoom is a zoom level between 1 and 12, and x and y are row/column locations that are valid for that zoom level.

The result should be a PNG that looks like it could be rendered from land cover data. For example,
`http://localhost:8090/nlcd-agg-tiles/12/1194/1549` should return an image like this:
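To pick valid `{zoom}/{x}/{y}` values for spot-checking, the standard Web Mercator slippy-map formula works; the helper below is illustrative and not part of the service:

```scala
import scala.math._

// Convert a lat/lon to slippy-map tile coordinates at a given zoom.
def latLonToTile(lat: Double, lon: Double, zoom: Int): (Int, Int) = {
  val n = pow(2, zoom)
  val x = floor((lon + 180.0) / 360.0 * n).toInt
  val y = floor((1.0 - log(tan(toRadians(lat)) + 1.0 / cos(toRadians(lat))) / Pi) / 2.0 * n).toInt
  (x, y)
}

// Ithaca, NY (~42.44 N, -76.50 W) at zoom 12 falls in tile (1177, 1513),
// matching the tile URLs earlier in this thread.
println(latLonToTile(42.44, -76.50, 12))
```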