
Read single-band TIFF files larger than 2 GB #3065

Open · Silence-Soul opened this issue 5 years ago

Silence-Soul commented 5 years ago

At first I used GT 2.1.0, and the file could not be ingested into an RDD. I then switched to GT 2.3.1, but it fails there as well; the error message is the same in both versions. The error log and pom.xml follow.

(screenshot: the TIFF file)

import geotrellis.raster._
import geotrellis.spark.io.hadoop._
import geotrellis.vector._
import org.apache.spark._
import org.apache.spark.rdd._

object IngestImage {

  def main(args: Array[String]): Unit = {
    // Setup Spark to use Kryo serializer.
    val conf =
      new SparkConf()
        .setMaster("local[*]")
        .setAppName("Tiler")
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        .set("spark.kryo.registrator", "geotrellis.spark.io.kryo.KryoRegistrator")

    val sc = new SparkContext(conf)
    try {
      run(sc)
    } finally {
      sc.stop()
    }
  }

  def run(implicit sc: SparkContext) = {

    val inputRdd: RDD[(ProjectedExtent, Tile)] = sc.hadoopGeoTiffRDD("d:\\TIFF文件\\058185565030_01\\058185565030_01_P001_PAN\\18FEB28040147-P2AS-058185565030_01_P001.TIF")
  }
}

ERROR:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
19/08/29 10:06:10 INFO SparkContext: Running Spark version 2.3.3
19/08/29 10:06:11 INFO SparkContext: Submitted application: Tiler
19/08/29 10:06:11 INFO SecurityManager: Changing view acls to: lzy
19/08/29 10:06:11 INFO SecurityManager: Changing modify acls to: lzy
19/08/29 10:06:11 INFO SecurityManager: Changing view acls groups to: 
19/08/29 10:06:11 INFO SecurityManager: Changing modify acls groups to: 
19/08/29 10:06:11 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(lzy); groups with view permissions: Set(); users  with modify permissions: Set(lzy); groups with modify permissions: Set()
19/08/29 10:06:13 INFO Utils: Successfully started service 'sparkDriver' on port 50547.
19/08/29 10:06:13 INFO SparkEnv: Registering MapOutputTracker
19/08/29 10:06:13 INFO SparkEnv: Registering BlockManagerMaster
19/08/29 10:06:13 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/08/29 10:06:13 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/08/29 10:06:13 INFO DiskBlockManager: Created local directory at C:\Users\lzy\AppData\Local\Temp\blockmgr-cb9b8a22-734a-45b2-97bd-2bfefa7b3434
19/08/29 10:06:13 INFO MemoryStore: MemoryStore started with capacity 3.0 GB
19/08/29 10:06:13 INFO SparkEnv: Registering OutputCommitCoordinator
19/08/29 10:06:14 INFO Utils: Successfully started service 'SparkUI' on port 4040.
19/08/29 10:06:14 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://LAPTOP-L5V7L77L:4040
19/08/29 10:06:14 INFO Executor: Starting executor ID driver on host localhost
19/08/29 10:06:14 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 50617.
19/08/29 10:06:14 INFO NettyBlockTransferService: Server created on LAPTOP-L5V7L77L:50617
19/08/29 10:06:14 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
19/08/29 10:06:14 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, LAPTOP-L5V7L77L, 50617, None)
19/08/29 10:06:14 INFO BlockManagerMasterEndpoint: Registering block manager LAPTOP-L5V7L77L:50617 with 3.0 GB RAM, BlockManagerId(driver, LAPTOP-L5V7L77L, 50617, None)
19/08/29 10:06:14 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, LAPTOP-L5V7L77L, 50617, None)
19/08/29 10:06:14 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, LAPTOP-L5V7L77L, 50617, None)
19/08/29 10:06:15 INFO SparkContext: Starting job: count at GeoTiffInfoReader.scala:85
19/08/29 10:06:15 INFO DAGScheduler: Got job 0 (count at GeoTiffInfoReader.scala:85) with 4 output partitions
19/08/29 10:06:15 INFO DAGScheduler: Final stage: ResultStage 0 (count at GeoTiffInfoReader.scala:85)
19/08/29 10:06:15 INFO DAGScheduler: Parents of final stage: List()
19/08/29 10:06:15 INFO DAGScheduler: Missing parents: List()
19/08/29 10:06:15 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at flatMap at GeoTiffInfoReader.scala:73), which has no missing parents
19/08/29 10:06:16 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 63.0 KB, free 3.0 GB)
19/08/29 10:06:17 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 22.0 KB, free 3.0 GB)
19/08/29 10:06:17 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on LAPTOP-L5V7L77L:50617 (size: 22.0 KB, free: 3.0 GB)
19/08/29 10:06:17 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1039
19/08/29 10:06:17 INFO DAGScheduler: Submitting 4 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at flatMap at GeoTiffInfoReader.scala:73) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
19/08/29 10:06:17 INFO TaskSchedulerImpl: Adding task set 0.0 with 4 tasks
19/08/29 10:06:17 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 7707 bytes)
19/08/29 10:06:17 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, executor driver, partition 1, PROCESS_LOCAL, 7707 bytes)
19/08/29 10:06:17 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, localhost, executor driver, partition 2, PROCESS_LOCAL, 7707 bytes)
19/08/29 10:06:17 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, localhost, executor driver, partition 3, PROCESS_LOCAL, 7814 bytes)
19/08/29 10:06:17 INFO Executor: Running task 2.0 in stage 0.0 (TID 2)
19/08/29 10:06:17 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
19/08/29 10:06:17 INFO Executor: Running task 3.0 in stage 0.0 (TID 3)
19/08/29 10:06:17 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
19/08/29 10:06:17 INFO MemoryStore: Block rdd_2_1 stored as values in memory (estimated size 16.0 B, free 3.0 GB)
19/08/29 10:06:17 INFO BlockManagerInfo: Added rdd_2_1 in memory on LAPTOP-L5V7L77L:50617 (size: 16.0 B, free: 3.0 GB)
19/08/29 10:06:17 INFO MemoryStore: Block rdd_2_2 stored as values in memory (estimated size 16.0 B, free 3.0 GB)
19/08/29 10:06:17 INFO BlockManagerInfo: Added rdd_2_2 in memory on LAPTOP-L5V7L77L:50617 (size: 16.0 B, free: 3.0 GB)
19/08/29 10:06:17 INFO MemoryStore: Block rdd_2_0 stored as values in memory (estimated size 16.0 B, free 3.0 GB)
19/08/29 10:06:17 INFO BlockManagerInfo: Added rdd_2_0 in memory on LAPTOP-L5V7L77L:50617 (size: 16.0 B, free: 3.0 GB)
19/08/29 10:06:17 WARN BlockManager: Putting block rdd_2_3 failed due to exception java.lang.IllegalArgumentException: Parameter position can not to be negative.
19/08/29 10:06:17 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 623 bytes result sent to driver
19/08/29 10:06:17 INFO Executor: Finished task 2.0 in stage 0.0 (TID 2). 709 bytes result sent to driver
19/08/29 10:06:17 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 666 bytes result sent to driver
19/08/29 10:06:17 WARN BlockManager: Block rdd_2_3 could not be removed as it was not found on disk or in memory
19/08/29 10:06:17 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 777 ms on localhost (executor driver) (1/4)
19/08/29 10:06:17 ERROR Executor: Exception in task 3.0 in stage 0.0 (TID 3)
java.lang.IllegalArgumentException: Parameter position can not to be negative
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.read(ChecksumFileSystem.java:192)
    at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:78)
    at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:107)
    at geotrellis.spark.io.hadoop.HdfsUtils$.readRange(HdfsUtils.scala:190)
    at geotrellis.spark.io.hadoop.HdfsRangeReader.readClippedRange(HdfsRangeReader.scala:39)
    at geotrellis.util.RangeReader$class.readRange(RangeReader.scala:36)
    at geotrellis.spark.io.hadoop.HdfsRangeReader.readRange(HdfsRangeReader.scala:31)
    at geotrellis.util.StreamingByteReader.readChunk(StreamingByteReader.scala:99)
    at geotrellis.util.StreamingByteReader.ensureChunk(StreamingByteReader.scala:110)
    at geotrellis.util.StreamingByteReader.getShort(StreamingByteReader.scala:138)
    at geotrellis.raster.io.geotiff.reader.TiffTagsReader$.read(TiffTagsReader.scala:67)
    at geotrellis.raster.io.geotiff.reader.GeoTiffReader$.readGeoTiffInfo(GeoTiffReader.scala:359)
    at geotrellis.spark.io.hadoop.HadoopGeoTiffInfoReader.getGeoTiffInfo(HadoopGeoTiffInfoReader.scala:53)
    at geotrellis.spark.io.GeoTiffInfoReader$$anonfun$1.apply(GeoTiffInfoReader.scala:75)
    at geotrellis.spark.io.GeoTiffInfoReader$$anonfun$1.apply(GeoTiffInfoReader.scala:73)
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
    at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:216)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1094)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1085)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1020)
    at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1085)
    at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:811)
    at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:335)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:286)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
19/08/29 10:06:17 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 702 ms on localhost (executor driver) (2/4)
19/08/29 10:06:18 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 831 ms on localhost (executor driver) (3/4)
19/08/29 10:06:18 WARN TaskSetManager: Lost task 3.0 in stage 0.0 (TID 3, localhost, executor driver): java.lang.IllegalArgumentException: Parameter position can not to be negative
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.read(ChecksumFileSystem.java:192)
    at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:78)
    at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:107)
    at geotrellis.spark.io.hadoop.HdfsUtils$.readRange(HdfsUtils.scala:190)
    at geotrellis.spark.io.hadoop.HdfsRangeReader.readClippedRange(HdfsRangeReader.scala:39)
    at geotrellis.util.RangeReader$class.readRange(RangeReader.scala:36)
    at geotrellis.spark.io.hadoop.HdfsRangeReader.readRange(HdfsRangeReader.scala:31)
    at geotrellis.util.StreamingByteReader.readChunk(StreamingByteReader.scala:99)
    at geotrellis.util.StreamingByteReader.ensureChunk(StreamingByteReader.scala:110)
    at geotrellis.util.StreamingByteReader.getShort(StreamingByteReader.scala:138)
    at geotrellis.raster.io.geotiff.reader.TiffTagsReader$.read(TiffTagsReader.scala:67)
    at geotrellis.raster.io.geotiff.reader.GeoTiffReader$.readGeoTiffInfo(GeoTiffReader.scala:359)
    at geotrellis.spark.io.hadoop.HadoopGeoTiffInfoReader.getGeoTiffInfo(HadoopGeoTiffInfoReader.scala:53)
    at geotrellis.spark.io.GeoTiffInfoReader$$anonfun$1.apply(GeoTiffInfoReader.scala:75)
    at geotrellis.spark.io.GeoTiffInfoReader$$anonfun$1.apply(GeoTiffInfoReader.scala:73)
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
    at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:216)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1094)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1085)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1020)
    at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1085)
    at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:811)
    at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:335)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:286)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

19/08/29 10:06:18 ERROR TaskSetManager: Task 3 in stage 0.0 failed 1 times; aborting job
19/08/29 10:06:18 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
19/08/29 10:06:18 INFO TaskSchedulerImpl: Cancelling stage 0
19/08/29 10:06:18 INFO DAGScheduler: ResultStage 0 (count at GeoTiffInfoReader.scala:85) failed in 2.608 s due to Job aborted due to stage failure: Task 3 in stage 0.0 failed 1 times, most recent failure: Lost task 3.0 in stage 0.0 (TID 3, localhost, executor driver): java.lang.IllegalArgumentException: Parameter position can not to be negative
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.read(ChecksumFileSystem.java:192)
    at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:78)
    at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:107)
    at geotrellis.spark.io.hadoop.HdfsUtils$.readRange(HdfsUtils.scala:190)
    at geotrellis.spark.io.hadoop.HdfsRangeReader.readClippedRange(HdfsRangeReader.scala:39)
    at geotrellis.util.RangeReader$class.readRange(RangeReader.scala:36)
    at geotrellis.spark.io.hadoop.HdfsRangeReader.readRange(HdfsRangeReader.scala:31)
    at geotrellis.util.StreamingByteReader.readChunk(StreamingByteReader.scala:99)
    at geotrellis.util.StreamingByteReader.ensureChunk(StreamingByteReader.scala:110)
    at geotrellis.util.StreamingByteReader.getShort(StreamingByteReader.scala:138)
    at geotrellis.raster.io.geotiff.reader.TiffTagsReader$.read(TiffTagsReader.scala:67)
    at geotrellis.raster.io.geotiff.reader.GeoTiffReader$.readGeoTiffInfo(GeoTiffReader.scala:359)
    at geotrellis.spark.io.hadoop.HadoopGeoTiffInfoReader.getGeoTiffInfo(HadoopGeoTiffInfoReader.scala:53)
    at geotrellis.spark.io.GeoTiffInfoReader$$anonfun$1.apply(GeoTiffInfoReader.scala:75)
    at geotrellis.spark.io.GeoTiffInfoReader$$anonfun$1.apply(GeoTiffInfoReader.scala:73)
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
    at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:216)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1094)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1085)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1020)
    at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1085)
    at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:811)
    at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:335)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:286)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
19/08/29 10:06:18 INFO DAGScheduler: Job 0 failed: count at GeoTiffInfoReader.scala:85, took 2.743716 s
19/08/29 10:06:18 INFO SparkUI: Stopped Spark web UI at http://LAPTOP-L5V7L77L:4040
19/08/29 10:06:18 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
19/08/29 10:06:18 INFO MemoryStore: MemoryStore cleared
19/08/29 10:06:18 INFO BlockManager: BlockManager stopped
19/08/29 10:06:18 INFO BlockManagerMaster: BlockManagerMaster stopped
19/08/29 10:06:18 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
19/08/29 10:06:18 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 0.0 failed 1 times, most recent failure: Lost task 3.0 in stage 0.0 (TID 3, localhost, executor driver): java.lang.IllegalArgumentException: Parameter position can not to be negative
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.read(ChecksumFileSystem.java:192)
    at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:78)
    at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:107)
    at geotrellis.spark.io.hadoop.HdfsUtils$.readRange(HdfsUtils.scala:190)
    at geotrellis.spark.io.hadoop.HdfsRangeReader.readClippedRange(HdfsRangeReader.scala:39)
    at geotrellis.util.RangeReader$class.readRange(RangeReader.scala:36)
    at geotrellis.spark.io.hadoop.HdfsRangeReader.readRange(HdfsRangeReader.scala:31)
    at geotrellis.util.StreamingByteReader.readChunk(StreamingByteReader.scala:99)
    at geotrellis.util.StreamingByteReader.ensureChunk(StreamingByteReader.scala:110)
    at geotrellis.util.StreamingByteReader.getShort(StreamingByteReader.scala:138)
    at geotrellis.raster.io.geotiff.reader.TiffTagsReader$.read(TiffTagsReader.scala:67)
    at geotrellis.raster.io.geotiff.reader.GeoTiffReader$.readGeoTiffInfo(GeoTiffReader.scala:359)
    at geotrellis.spark.io.hadoop.HadoopGeoTiffInfoReader.getGeoTiffInfo(HadoopGeoTiffInfoReader.scala:53)
    at geotrellis.spark.io.GeoTiffInfoReader$$anonfun$1.apply(GeoTiffInfoReader.scala:75)
    at geotrellis.spark.io.GeoTiffInfoReader$$anonfun$1.apply(GeoTiffInfoReader.scala:73)
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
    at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:216)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1094)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1085)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1020)
    at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1085)
    at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:811)
    at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:335)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:286)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1661)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1649)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1648)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1648)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1882)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1831)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1820)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2055)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2074)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
    at org.apache.spark.rdd.RDD.count(RDD.scala:1168)
    at geotrellis.spark.io.GeoTiffInfoReader$class.readWindows(GeoTiffInfoReader.scala:85)
    at geotrellis.spark.io.hadoop.HadoopGeoTiffInfoReader.readWindows(HadoopGeoTiffInfoReader.scala:30)
    at geotrellis.spark.io.hadoop.HadoopGeoTiffRDD$.apply(HadoopGeoTiffRDD.scala:126)
    at geotrellis.spark.io.hadoop.HadoopGeoTiffRDD$.apply(HadoopGeoTiffRDD.scala:157)
    at geotrellis.spark.io.hadoop.HadoopGeoTiffRDD$.singleband(HadoopGeoTiffRDD.scala:178)
    at geotrellis.spark.io.hadoop.HadoopGeoTiffRDD$.spatial(HadoopGeoTiffRDD.scala:219)
    at geotrellis.spark.io.hadoop.HadoopSparkContextMethods$class.hadoopGeoTiffRDD(HadoopSparkContextMethods.scala:50)
    at geotrellis.spark.io.hadoop.Implicits$HadoopSparkContextMethodsWrapper.hadoopGeoTiffRDD(Implicits.scala:32)
    at geotrellis.spark.io.hadoop.HadoopSparkContextMethods$class.hadoopGeoTiffRDD(HadoopSparkContextMethods.scala:35)
    at geotrellis.spark.io.hadoop.Implicits$HadoopSparkContextMethodsWrapper.hadoopGeoTiffRDD(Implicits.scala:32)
    at src.main.scala.com.siweidg.www.Transformation.FileSystem.IngestImage$.run(IngestImageTest.scala:83)
    at src.main.scala.com.siweidg.www.Transformation.FileSystem.IngestImage$.main(IngestImageTest.scala:44)
    at src.main.scala.com.siweidg.www.Transformation.FileSystem.IngestImage.main(IngestImageTest.scala)
Caused by: java.lang.IllegalArgumentException: Parameter position can not to be negative
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.read(ChecksumFileSystem.java:192)
    at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:78)
    at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:107)
    at geotrellis.spark.io.hadoop.HdfsUtils$.readRange(HdfsUtils.scala:190)
    at geotrellis.spark.io.hadoop.HdfsRangeReader.readClippedRange(HdfsRangeReader.scala:39)
    at geotrellis.util.RangeReader$class.readRange(RangeReader.scala:36)
    at geotrellis.spark.io.hadoop.HdfsRangeReader.readRange(HdfsRangeReader.scala:31)
    at geotrellis.util.StreamingByteReader.readChunk(StreamingByteReader.scala:99)
    at geotrellis.util.StreamingByteReader.ensureChunk(StreamingByteReader.scala:110)
    at geotrellis.util.StreamingByteReader.getShort(StreamingByteReader.scala:138)
    at geotrellis.raster.io.geotiff.reader.TiffTagsReader$.read(TiffTagsReader.scala:67)
    at geotrellis.raster.io.geotiff.reader.GeoTiffReader$.readGeoTiffInfo(GeoTiffReader.scala:359)
    at geotrellis.spark.io.hadoop.HadoopGeoTiffInfoReader.getGeoTiffInfo(HadoopGeoTiffInfoReader.scala:53)
    at geotrellis.spark.io.GeoTiffInfoReader$$anonfun$1.apply(GeoTiffInfoReader.scala:75)
    at geotrellis.spark.io.GeoTiffInfoReader$$anonfun$1.apply(GeoTiffInfoReader.scala:73)
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
    at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:216)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1094)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1085)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1020)
    at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1085)
    at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:811)
    at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:335)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:286)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
19/08/29 10:06:18 INFO ShutdownHookManager: Shutdown hook called
19/08/29 10:06:18 INFO ShutdownHookManager: Deleting directory C:\Users\lzy\AppData\Local\Temp\spark-a7fdcd23-82fa-4801-8d31-23ede422a6c5

Process finished with exit code 1

pom.xml:

  <dependencies>
        <!-- https://mvnrepository.com/artifact/org.locationtech.geotrellis/geotrellis-util -->
        <dependency>
            <groupId>org.locationtech.geotrellis</groupId>
            <artifactId>geotrellis-util_2.11</artifactId>
            <version>2.3.1</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.locationtech.geotrellis/geotrellis-spark -->
        <dependency>
            <groupId>org.locationtech.geotrellis</groupId>
            <artifactId>geotrellis-spark_2.11</artifactId>
            <version>2.3.1</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.locationtech.geotrellis/geotrellis-accumulo -->
        <dependency>
            <groupId>org.locationtech.geotrellis</groupId>
            <artifactId>geotrellis-accumulo_2.11</artifactId>
            <version>2.3.1</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.locationtech.geotrellis/geotrellis-geomesa -->
        <dependency>
            <groupId>org.locationtech.geotrellis</groupId>
            <artifactId>geotrellis-geomesa_2.11</artifactId>
            <version>2.3.1</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.locationtech.geotrellis/geotrellis-proj4 -->
        <dependency>
            <groupId>org.locationtech.geotrellis</groupId>
            <artifactId>geotrellis-proj4_2.11</artifactId>
            <version>2.3.1</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>2.3.3</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-sql -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>2.3.3</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.typesafe/config -->
        <dependency>
            <groupId>com.typesafe</groupId>
            <artifactId>config</artifactId>
            <version>1.3.1</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.typesafe.akka/akka-http -->
        <dependency>
            <groupId>com.typesafe.akka</groupId>
            <artifactId>akka-http_2.11</artifactId>
            <version>10.0.7</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/com.typesafe.akka/akka-http-spray-json -->
        <dependency>
            <groupId>com.typesafe.akka</groupId>
            <artifactId>akka-http-spray-json_2.11</artifactId>
            <version>10.0.7</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.typelevel/squants -->
        <dependency>
            <groupId>org.typelevel</groupId>
            <artifactId>squants_2.12</artifactId>
            <version>1.3.0</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/joda-time/joda-time -->
        <dependency>
            <groupId>joda-time</groupId>
            <artifactId>joda-time</artifactId>
            <version>2.9.9</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.locationtech.geotrellis/geotrellis-s3 -->
        <dependency>
            <groupId>org.locationtech.geotrellis</groupId>
            <artifactId>geotrellis-s3_2.11</artifactId>
            <version>2.3.1</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.scalaz/scalaz-core -->
        <dependency>
            <groupId>org.scalaz</groupId>
            <artifactId>scalaz-core_2.11</artifactId>
            <version>7.3.0-M29</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.locationtech.jts/jts-core -->
        <dependency>
            <groupId>org.locationtech.jts</groupId>
            <artifactId>jts-core</artifactId>
            <version>1.16.1</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.hsqldb/hsqldb -->
        <dependency>
            <groupId>org.hsqldb</groupId>
            <artifactId>hsqldb</artifactId>
            <version>2.4.1</version>
            <scope>test</scope>
        </dependency>

    </dependencies>
pomadchin commented 5 years ago

Extra information about how to reproduce the bug:

val tiff = 
  GeoTiffReader
    .readSingleband("/.../18FEB28040147-P2AS-058185565030_01_P001.TIF", streaming = true)

//> java.lang.IllegalArgumentException: Negative position
//  at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:863)
//  at geotrellis.util.FileRangeReader.readClippedRange(FileRangeReader.scala:38)
//  at geotrellis.util.RangeReader$class.readRange(RangeReader.scala:42)
//  at geotrellis.util.FileRangeReader.readRange(FileRangeReader.scala:31)
//  at geotrellis.util.StreamingByteReader.readChunk(StreamingByteReader.scala:99)
//  at geotrellis.util.StreamingByteReader.ensureChunk(StreamingByteReader.scala:110)
//  at geotrellis.util.StreamingByteReader.getShort(StreamingByteReader.scala:138)
//  at geotrellis.raster.io.geotiff.reader.TiffTagsReader$.read(TiffTagsReader.scala:67)
//  at geotrellis.raster.io.geotiff.reader.GeoTiffReader$.readGeoTiffInfo(GeoTiffReader.scala:359)
//  at geotrellis.raster.io.geotiff.reader.GeoTiffReader$.readSingleband(GeoTiffReader.scala:118)
//  at geotrellis.raster.io.geotiff.reader.GeoTiffReader$.readSingleband(GeoTiffReader.scala:70)
//  ... 40 elided

There is something wrong with how the metadata is constructed, and we don't support this kind of structure. The TIFF can easily be fixed with the following GDAL command:

gdal_translate 18FEB28040147-P2AS-058185565030_01_P001.TIF 18FEB28040147-P2AS-058185565030_01_P001_TST.TIF
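
A minimal sketch to check whether the translated file opens where the original threw (the output path is the one produced by the gdal_translate command above):

import geotrellis.raster.io.geotiff.reader.GeoTiffReader

// The same streaming read that failed on the original file
val fixed =
  GeoTiffReader
    .readSingleband("18FEB28040147-P2AS-058185565030_01_P001_TST.TIF", streaming = true)
println((fixed.tile.cols, fixed.tile.rows))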
Silence-Soul commented 5 years ago

@pomadchin Hey, I fixed the TIFF file with the GDAL command gdal_translate 18FEB28040147-P2AS-058185565030_01_P001.TIF 18FEB28040147-P2AS-058185565030_01_P001-1.TIF, then read the fixed file with GeoTiffReader.readSingleband("d:\\TIFF文件\\058185565030_01\\058185565030_01_P001_PAN\\18FEB28040147-P2AS-058185565030_01_P001-1.TIF").

The file size is 2.07 GB (2,226,131,844 bytes). (screenshot)

In val size = f.length.toInt, size comes out as a negative number.
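
That is consistent with 32-bit overflow: 2,226,131,844 exceeds Int.MaxValue (2,147,483,647), so narrowing the Long file length to an Int wraps around to a negative value. A minimal Scala sketch of just the overflow (the names are illustrative, not GeoTrellis internals):

// The reported file size in bytes: too large for a signed 32-bit Int
val length: Long = 2226131844L

val size: Int = length.toInt  // narrowing wraps past Int.MaxValue
println(size)                 // -2068835452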


pomadchin commented 5 years ago

Further investigation posted by @Silence-Soul in #3066:

Hey @pomadchin, regarding #3065: the cause may be in how the offset is read from the IFH (Image File Header) of the TIFF. Bytes 4-7 hold the address of the first IFD. Below is the hexadecimal content of the TIFF file header; bytes 4-7 are b0, 23, af, 84. (screenshot)

In decimal these should be 176, 35, 175, 132.

But the decimal values obtained while debugging are negative: -80, 35, -81, -124. (screenshot)

These negative numbers correspond to the hexadecimal values ffffffb0, 23, ffffffaf, ffffff84: the negative bytes have been sign-extended, while the positive byte 0x23 is unaffected.

A byte is 8 bits and an int is 32 bits. When a byte is converted to an int, or used in an operation with an int, it is sign-extended to 32 bits: if the sign bit is 1, the high 24 bits are filled with 1s. Hence the byte & 0xff idiom.

Masking with 0xff zeroes the high 24 bits and leaves the low 8 bits unchanged, preserving the original unsigned byte value.
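
The Java snippet below prints the first eight header bytes both ways, sign-extended and masked with 0xff (the path is a local copy of the TIFF):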

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class TiffHeaderBytes {
    public static void main(String[] args) throws IOException {
        String path = "d:\\tmp\\18FEB28040147-P2AS-058185565030_01_P001.TIF";
        try (DataInputStream din = new DataInputStream(new FileInputStream(path))) {
            // First 8 bytes of a TIFF: byte order (2), magic number (2), first IFD offset (4)
            byte[] bytes = new byte[8];
            din.readFully(bytes);
            for (byte b : bytes) {
                // Unmasked: a negative byte is sign-extended, e.g. 0xb0 prints as ffffffb0 / -80
                String signed = Integer.toHexString(b);
                System.out.print(signed + "--------DEC:" + Integer.parseUnsignedInt(signed, 16));
                System.out.print("              ");
                // Masked with 0xff: the true unsigned byte value, zero-padded, e.g. b0 / 176
                String unsigned = Integer.toHexString((b & 0xff) + 256).substring(1);
                System.out.println(unsigned + "--------DEC:" + Integer.parseInt(unsigned, 16));
            }
        }
    }
}
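
For reference, a minimal Scala sketch, not GeoTrellis' actual reader code, of assembling the little-endian IFD offset from those four header bytes: each byte is masked with 0xff and widened to a Long so the value survives past Int.MaxValue.

// Bytes 4-7 from this TIFF's header, little-endian: b0 23 af 84
val b = Array[Byte](0xb0.toByte, 0x23.toByte, 0xaf.toByte, 0x84.toByte)

// Mask each byte with 0xffL (undoing sign extension) before shifting
val ifdOffset: Long =
  (b(0) & 0xffL) |
  ((b(1) & 0xffL) << 8) |
  ((b(2) & 0xffL) << 16) |
  ((b(3) & 0xffL) << 24)

println(ifdOffset)        // 2226070448 (0x84af23b0), beyond Int.MaxValue
println(ifdOffset.toInt)  // -2068896848: what a 32-bit read would yield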
