
The SnappyData Dashboard shows a negative number in the table's Total Size column #991

Open alexBaiJW opened 6 years ago

alexBaiJW commented 6 years ago
| Name | Storage Model | Distribution Type | Row Count | Memory Size | Total Size |
|------|---------------|-------------------|-----------|-------------|------------|
| APP.TEST | COLUMN | PARTITIONED | 4,017,002 | 16.0 MB | -12298615.0 B |
| APP.TEST2 | COLUMN | PARTITIONED | 196,930 | 790.6 KB | 965.8 KB |
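For context, a minimal sketch of how a table like APP.TEST might be created and queried follows. The report does not include the DDL, so the schema and table options here are assumptions for illustration only:

```scala
import org.apache.spark.sql.{SnappySession, SparkSession}

// Hypothetical reproduction sketch: the issue does not give APP.TEST's
// actual schema, so the columns and options below are assumptions.
val spark = SparkSession.builder().appName("negative-size-repro").getOrCreate()
val snappy = new SnappySession(spark.sparkContext)

// A persistent, partitioned column table like the one in the dashboard row.
snappy.sql(
  """CREATE TABLE APP.TEST (id INT, payload VARCHAR(200))
    |USING COLUMN OPTIONS (PARTITION_BY 'id', PERSISTENCE 'sync')""".stripMargin)

// Matches the reported behavior: the aggregate succeeds...
snappy.sql("SELECT COUNT(*) FROM APP.TEST").show()

// ...but a scan that faults column batches back in from disk fails with
// the InternalGemFireError shown below.
snappy.sql("SELECT * FROM APP.TEST LIMIT 10").show()
```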

All the members' status is OK, and I can run a count on the table, but when I select columns from it, the error message is as follows:

snappy-sql> select * from APP.TEST limit 10;
ERROR 38000: (SQLState=38000 Severity=20000) (Server=xx/xx[1528] Thread=ThriftProcessor-30) The following exception was thrown while evaluating an expression: Job aborted due to stage failure: Task 1 in stage 279.0 failed 4 times, most recent failure: Lost task 1.3 in stage 279.0 (TID 12349, xx, executor xx(10466):64535): com.gemstone.gemfire.InternalGemFireError: Bucket com.gemstone.gemfire.internal.cache.BucketRegion[path='/PR/_B__APP_SNAPPYSYSINTERNAL____TESTCOLUMNSTORE___37';scope=DISTRIBUTED_ACK';dataPolicy=PERSISTENT_REPLICATE; concurrencyChecksEnabled; indexUpdater=null; serial=951; primary=true] size (-2774048) negative after applying delta of 1704
    at com.gemstone.gemfire.internal.cache.BucketRegion.updateBucketMemoryStats(BucketRegion.java:3115)
    at com.gemstone.gemfire.internal.cache.BucketRegion.updateMemoryStats(BucketRegion.java:3094)
    at com.gemstone.gemfire.internal.cache.AbstractRegionEntry._setValue(AbstractRegionEntry.java:1447)
    at com.gemstone.gemfire.internal.cache.AbstractDiskRegionEntry.setValueWithContext(AbstractDiskRegionEntry.java:61)
    at com.gemstone.gemfire.internal.cache.DiskEntry$Helper.setValueOnFaultIn(DiskEntry.java:1425)
    at com.gemstone.gemfire.internal.cache.DiskEntry$Helper.readValueFromDisk(DiskEntry.java:1376)
    at com.gemstone.gemfire.internal.cache.DiskEntry$Helper.faultInValue(DiskEntry.java:1214)
    at com.gemstone.gemfire.internal.cache.AbstractOplogDiskRegionEntry.getValue(AbstractOplogDiskRegionEntry.java:99)
    at org.apache.spark.sql.execution.columnar.impl.DiskMultiColumnBatch.entryMap$lzycompute(ColumnFormatIterator.scala:330)
    at org.apache.spark.sql.execution.columnar.impl.DiskMultiColumnBatch.entryMap(ColumnFormatIterator.scala:319)
    at org.apache.spark.sql.execution.columnar.impl.ColumnFormatIterator.getColumnValue(ColumnFormatIterator.scala:173)
    at org.apache.spark.sql.execution.columnar.ColumnBatchIterator.getColumnBuffer(ColumnBatch.scala:186)
    at org.apache.spark.sql.execution.columnar.ColumnBatchIterator.moveNext(ColumnBatch.scala:294)
    at org.apache.spark.sql.execution.row.PRValuesIterator.hasNext(RowFormatScanRDD.scala:448)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.scan_nextBatch$(generated.java:144)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:480)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenRDD$$anon$2.hasNext(WholeStageCodegenExec.scala:571)
    at org.apache.spark.sql.execution.WholeStageCodegenRDD$$anon$1.hasNext(WholeStageCodegenExec.scala:508)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:389)
    at org.apache.spark.sql.CachedDataFrame$.apply(CachedDataFrame.scala:461)
    at org.apache.spark.sql.CachedDataFrame$.apply(CachedDataFrame.scala:419)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:95)
    at org.apache.spark.scheduler.Task.run(Task.scala:126)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:326)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at org.apache.spark.executor.SnappyExecutor$$anon$2$$anon$3.run(SnappyExecutor.scala:57)
    at java.lang.Thread.run(Thread.java:745)
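The key line is the InternalGemFireError from BucketRegion.updateBucketMemoryStats: the bucket's byte counter is already negative (-2774048), so even applying a positive delta (+1704) leaves it negative and trips the sanity check during disk fault-in. Below is a simplified Scala sketch of that accounting invariant; it is an illustration of the check, not the actual GemFire source:

```scala
// A simplified illustration of the per-bucket size accounting that fails
// here. NOT the real GemFire implementation, just the invariant it enforces.
class BucketMemoryStats {
  private var sizeInBytes: Long = 0L

  def applyDelta(delta: Long): Unit = {
    val newSize = sizeInBytes + delta
    // The real updateBucketMemoryStats raises InternalGemFireError on this
    // condition: once the counter has drifted negative (-2774048 in the log),
    // a positive delta (+1704) still leaves it negative, so every fault-in
    // of a column batch from disk fails.
    if (newSize < 0) {
      throw new IllegalStateException(
        s"size ($newSize) negative after applying delta of $delta")
    }
    sizeInBytes = newSize
  }
}
```

The negative "Total Size" of -12298615.0 B on the dashboard is consistent with the same size counter having drifted below zero, which would explain why the two symptoms appear together.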

piercelamb commented 6 years ago

Hi @alexBaiJW, may I suggest joining our Slack chat and asking your questions there? Many of the questions you're asking will require a discussion with one of our engineers, which is much better suited to the chat environment than GitHub Issues. You can access it here: http://snappydata-slackin.herokuapp.com/