apache/carbondata

There are some errors when running the test cases with Spark 2.3 #4316

xubo245 opened this issue 1 year ago (status: Open):

There are some errors when running the test cases with Spark 2.3.
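
For reference, this run targets the Spark 2.3 build profile of CarbonData; the invocation is along these lines (exact goals and flags may differ):

    mvn clean test -Pspark-2.3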

- Test restructured array<timestamp> as index column on SI with compaction
2023-04-10 03:13:52 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
2023-04-10 03:13:53 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
- Test restructured array<string> and string columns as index columns on SI with compaction
2023-04-10 03:13:56 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
2023-04-10 03:13:56 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
- test array<string> on secondary index with compaction
2023-04-10 03:14:00 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
2023-04-10 03:14:00 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
- test array<string> and string as index columns on secondary index with compaction
- test load data with array<string> on secondary index
- test SI global sort with si segment merge enabled for complex data types
- test SI global sort with si segment merge enabled for newly added complex column
- test SI global sort with si segment merge enabled for primitive data types
- test SI global sort with si segment merge complex data types by rebuild command
- test SI global sort with si segment merge primitive data types by rebuild command
- test si creation with struct and map type
- test si creation with array
2023-04-10 03:14:26 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
2023-04-10 03:14:26 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
- test complex with null and empty data
- test array<date> on secondary index
- test array<timestamp> on secondary index
2023-04-10 03:14:31 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
2023-04-10 03:14:31 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
- test array<varchar> and varchar as index columns on secondary index
2023-04-10 03:14:34 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
2023-04-10 03:14:34 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
- test multiple SI with array and primitive type
2023-04-10 03:14:40 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
2023-04-10 03:14:40 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
- test SI complex with multiple array contains
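
The tests above all create a secondary index on a complex (array) column and then compact; the repeated "Recursive load" errors are logged while CarbonInternalMetastore updates the table properties during that flow. For context, the DDL such a test exercises looks roughly like this (table and index names are illustrative, not the exact test fixtures):

    sql("CREATE TABLE complextable (id STRING, country ARRAY<STRING>) STORED AS carbondata")
    sql("CREATE INDEX idx_country ON TABLE complextable (country) AS 'carbondata'")
    sql("INSERT INTO complextable SELECT 'a', ARRAY('china', 'india')")
    sql("ALTER TABLE complextable COMPACT 'minor'")
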
TestCarbonInternalMetastore:
- test delete index silent
2023-04-10 03:14:43 ERROR CarbonInternalMetastore$:118 - Exception occurred while drop index table for : Some(test).unknown : Table or view 'unknown' not found in database 'test';
2023-04-10 03:14:43 ERROR CarbonInternalMetastore$:131 - Exception occurred while drop index table for : Some(test).index1 : Table or view 'index1' not found in database 'test';
- test delete index table silently when exception occur
org.apache.spark.sql.catalyst.analysis.NoSuchTableException: Table or view 'index1' not found in database 'test';
    at org.apache.spark.sql.hive.client.HiveClient$$anonfun$getTable$1.apply(HiveClient.scala:81)
    at org.apache.spark.sql.hive.client.HiveClient$$anonfun$getTable$1.apply(HiveClient.scala:81)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.hive.client.HiveClient$class.getTable(HiveClient.scala:81)
    at org.apache.spark.sql.hive.client.HiveClientImpl.getTable(HiveClientImpl.scala:83)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$getRawTable$1.apply(HiveExternalCatalog.scala:118)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$getRawTable$1.apply(HiveExternalCatalog.scala:118)
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
    at org.apache.spark.sql.hive.HiveExternalCatalog.getRawTable(HiveExternalCatalog.scala:117)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$getTable$1.apply(HiveExternalCatalog.scala:684)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$getTable$1.apply(HiveExternalCatalog.scala:684)
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
    at org.apache.spark.sql.hive.HiveExternalCatalog.getTable(HiveExternalCatalog.scala:683)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.lookupRelation(SessionCatalog.scala:674)
    at org.apache.spark.sql.hive.CarbonFileMetastore.lookupRelation(CarbonFileMetastore.scala:197)
    at org.apache.spark.sql.hive.CarbonFileMetastore.lookupRelation(CarbonFileMetastore.scala:191)
    at org.apache.spark.sql.secondaryindex.events.SIDropEventListener$$anonfun$onEvent$1.apply(SIDropEventListener.scala:69)
    at org.apache.spark.sql.secondaryindex.events.SIDropEventListener$$anonfun$onEvent$1.apply(SIDropEventListener.scala:65)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at org.apache.spark.sql.secondaryindex.events.SIDropEventListener.onEvent(SIDropEventListener.scala:65)
    at org.apache.carbondata.events.OperationListenerBus.fireEvent(OperationListenerBus.java:83)
    at org.apache.carbondata.events.package$.withEvents(package.scala:26)
    at org.apache.carbondata.events.package$.withEvents(package.scala:22)
    at org.apache.spark.sql.execution.command.table.CarbonDropTableCommand.processMetadata(CarbonDropTableCommand.scala:93)
    at org.apache.spark.sql.execution.command.AtomicRunnableCommand$$anonfun$run$3.apply(package.scala:160)
    at org.apache.spark.sql.execution.command.AtomicRunnableCommand$$anonfun$run$3.apply(package.scala:159)
    at org.apache.spark.sql.execution.command.Auditable$class.runWithAudit(package.scala:118)
    at org.apache.spark.sql.execution.command.AtomicRunnableCommand.runWithAudit(package.scala:155)
    at org.apache.spark.sql.execution.command.AtomicRunnableCommand.run(package.scala:159)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$$anonfun$51.apply(Dataset.scala:3265)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3264)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
    at org.apache.spark.sql.test.SparkTestQueryExecutor.sql(SparkTestQueryExecutor.scala:37)
    at org.apache.spark.sql.test.util.QueryTest.sql(QueryTest.scala:123)
    at org.apache.carbondata.spark.testsuite.secondaryindex.TestCarbonInternalMetastore.beforeEach(TestCarbonInternalMetastore.scala:49)
    at org.scalatest.BeforeAndAfterEach$class.runTest(BeforeAndAfterEach.scala:220)
    at org.apache.carbondata.spark.testsuite.secondaryindex.TestCarbonInternalMetastore.runTest(TestCarbonInternalMetastore.scala:33)
    at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
    at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
    at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:396)
    at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:384)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
    at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:379)
    at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
    at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:229)
    at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
    at org.scalatest.Suite$class.run(Suite.scala:1147)
    at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
    at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
    at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
    at org.scalatest.SuperEngine.runImpl(Engine.scala:521)
    at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:233)
    at org.apache.carbondata.spark.testsuite.secondaryindex.TestCarbonInternalMetastore.org$scalatest$BeforeAndAfterAll$$super$run(TestCarbonInternalMetastore.scala:33)
    at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:213)
    at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:210)
    at org.apache.carbondata.spark.testsuite.secondaryindex.TestCarbonInternalMetastore.run(TestCarbonInternalMetastore.scala:33)
    at org.scalatest.Suite$class.callExecuteOnSuite$1(Suite.scala:1210)
    at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1257)
    at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1255)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
    at org.scalatest.Suite$class.runNestedSuites(Suite.scala:1255)
    at org.scalatest.tools.DiscoverySuite.runNestedSuites(DiscoverySuite.scala:30)
    at org.scalatest.Suite$class.run(Suite.scala:1144)
    at org.scalatest.tools.DiscoverySuite.run(DiscoverySuite.scala:30)
    at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:45)
    at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$1.apply(Runner.scala:1340)
    at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$1.apply(Runner.scala:1334)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:1334)
    at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1011)
    at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1010)
    at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:1500)
    at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1010)
    at org.scalatest.tools.Runner$.main(Runner.scala:827)
    at org.scalatest.tools.Runner.main(Runner.scala)
- test show index when SI were created before the change CARBONDATA-3765
- test refresh index with different value of isIndexTableExists
- test refresh index with indexExists as false and empty index table
- test refresh index with indexExists as null
Run completed in 15 minutes, 36 seconds.
Total number of tests run: 283
Suites: completed 32, aborted 0
Tests: succeeded 282, failed 1, canceled 0, ignored 1, pending 0
*** 1 TEST FAILED ***
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache CarbonData :: Parent ........................ SUCCESS [  2.509 s]
[INFO] Apache CarbonData :: Common ........................ SUCCESS [ 15.990 s]
[INFO] Apache CarbonData :: Format ........................ SUCCESS [ 32.657 s]
[INFO] Apache CarbonData :: Core .......................... SUCCESS [01:32 min]
[INFO] Apache CarbonData :: Processing .................... SUCCESS [ 33.903 s]
[INFO] Apache CarbonData :: Hadoop ........................ SUCCESS [ 22.800 s]
[INFO] Apache CarbonData :: Materialized View Plan ........ SUCCESS [01:15 min]
[INFO] Apache CarbonData :: Hive .......................... SUCCESS [02:05 min]
[INFO] Apache CarbonData :: SDK ........................... SUCCESS [02:03 min]
[INFO] Apache CarbonData :: CLI ........................... SUCCESS [05:03 min]
[INFO] Apache CarbonData :: Lucene Index .................. SUCCESS [ 22.601 s]
[INFO] Apache CarbonData :: Bloom Index ................... SUCCESS [ 12.992 s]
[INFO] Apache CarbonData :: Geo ........................... SUCCESS [ 23.719 s]
[INFO] Apache CarbonData :: Streaming ..................... SUCCESS [ 33.608 s]
[INFO] Apache CarbonData :: Spark ......................... FAILURE [  01:27 h]
[INFO] Apache CarbonData :: Secondary Index ............... FAILURE [16:28 min]
[INFO] Apache CarbonData :: Index Examples ................ SUCCESS [ 11.280 s]
[INFO] Apache CarbonData :: Flink Proxy ................... SUCCESS [ 15.864 s]
[INFO] Apache CarbonData :: Flink ......................... SUCCESS [05:29 min]
[INFO] Apache CarbonData :: Flink Build ................... SUCCESS [  5.949 s]
[INFO] Apache CarbonData :: Presto ........................ SUCCESS [02:37 min]
[INFO] Apache CarbonData :: Examples ...................... SUCCESS [02:54 min]
[INFO] Apache CarbonData :: Flink Examples ................ SUCCESS [  8.313 s]
[INFO] Apache CarbonData :: Assembly ...................... FAILURE [ 14.763 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:54 h (Wall Clock)
[INFO] Finished at: 2023-04-10T03:14:53+08:00
[INFO] Final Memory: 245M/2221M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.scalatest:scalatest-maven-plugin:1.0:test (test) on project carbondata-spark_2.3: There are test failures -> [Help 1]
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-shade-plugin:2.4.3:shade (default) on project carbondata-assembly: Error creating shaded jar: /Users/xubo/Desktop/xubo/git/carbondata1/integration/spark/target/classes (Is a directory) -> [Help 2]
[ERROR] Failed to execute goal org.scalatest:scalatest-maven-plugin:1.0:test (test) on project carbondata-secondary-index: There are test failures -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] [Help 2] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :carbondata-spark_2.3
[INFO] Build failures were ignored.

Process finished with exit code 0
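
On the TestCarbonInternalMetastore failure above: the trace shows SIDropEventListener.onEvent calling CarbonFileMetastore.lookupRelation for an index table ('index1') that is already gone from the Hive catalog, so the NoSuchTableException escapes instead of being handled. One plausible guard, sketched against the names visible in the trace (not a reviewed patch; the variable names are hypothetical):

    import org.apache.spark.sql.catalyst.TableIdentifier

    // Hypothetical guard inside SIDropEventListener.onEvent: only look up the
    // index table if it is still present in the session catalog.
    val catalog = sparkSession.sessionState.catalog
    if (catalog.tableExists(TableIdentifier(indexTableName, Some(dbName)))) {
      metastore.lookupRelation(Some(dbName), indexTableName)(sparkSession)
    } else {
      LOGGER.warn(s"Index table $dbName.$indexTableName already dropped; skipping drop event")
    }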

Spark:

AlterTableColumnRenameTestCase:
- test only column rename operation
- CARBONDATA-4053 test rename column, column name in table properties changed correctly
- Rename more than one column at a time in one operation
- rename complex columns with invalid structure/duplicate-names/Map-type
- test alter rename struct of (primitive/struct/array) *** FAILED ***
  Results do not match for query:
  == Parsed Logical Plan ==
  'Project ['str33.a22]
  +- 'UnresolvedRelation `test_rename`

  == Analyzed Logical Plan ==
  a22: struct<b11:int>
  Project [str33#74649.a22 AS a22#74653]
  +- SubqueryAlias test_rename
     +- Relation[str1#74648,str33#74649,str3#74650,intfield#74651] CarbonDatasourceHadoopRelation

  == Optimized Logical Plan ==
  Project [str33#74649.a22 AS a22#74653]
  +- Relation[str1#74648,str33#74649,str3#74650,intfield#74651] CarbonDatasourceHadoopRelation

  == Physical Plan ==
  *(1) Project [str33#74649.a22 AS a22#74653]
  +- *(1) Scan CarbonDatasourceHadoopRelation default.test_rename[str33#74649] Batched: false, DirectScan: false, PushedFilters: [], ReadSchema: [str33.a22]
  == Results ==
  !== Correct Answer - 2 ==   == Spark Answer - 2 ==
  ![[2]]                      [[3]]
  ![[3]]                      [null] (QueryTest.scala:93)
- test alter rename array of (primitive/array/struct)
- test alter rename and change datatype for map of (primitive/array/struct)
- test alter rename and change datatype for struct integer
- test alter rename and change datatype for map integer
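
A minimal shape of the failed rename case, reconstructed from the plan above (the names str33, a22 and b11 are taken from the plan; the original column names, the data, and the exact ALTER form are guesses based on CarbonData's CHANGE COLUMN syntax):

    sql("CREATE TABLE test_rename (str1 STRUCT<a:INT>, str2 STRUCT<a2:STRUCT<b:INT>>, " +
        "str3 STRUCT<a3:INT>, intField INT) STORED AS carbondata")
    sql("ALTER TABLE test_rename CHANGE str2 str33 STRUCT<a22:STRUCT<b11:INT>>")
    // Expected [[2]], [[3]] from the two loaded rows; this run returned [[3]], [null].
    sql("SELECT str33.a22 FROM test_rename").show()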

- test LocalDictionary with True
- test LocalDictionary with custom Threshold *** FAILED ***
  scala.this.Predef.Boolean2boolean(org.apache.carbondata.core.util.CarbonTestUtil.checkForLocalDictionary(org.apache.carbondata.core.util.CarbonTestUtil.getDimRawChunk(TestNonTransactionalCarbonTable.this.writerPath, scala.this.Predef.int2Integer(0)))) was false (TestNonTransactionalCarbonTable.scala:2447)
- test Local Dictionary with FallBack
- test local dictionary with External Table data load 
- test inverted index column by API !!! IGNORED !!!
- test Local Dictionary with Default
- Test with long string columns with 1 MB pageSize
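
The failed assertion checks that a dimension page was actually encoded with a local dictionary (CarbonTestUtil.checkForLocalDictionary over the raw dimension chunk). Local dictionary is controlled through table properties; a table written for such a test looks roughly like this (threshold value illustrative):

    sql("CREATE TABLE local_dict_test (name STRING) STORED AS carbondata " +
        "TBLPROPERTIES ('local_dictionary_enable'='true', 'local_dictionary_threshold'='10000')")
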
IntegerDataTypeTestCase:
- select empno from integertypetablejoin
VarcharDataTypesBasicTestCase:
- long string columns cannot be sort_columns
- long string columns can only be string columns
- cannot alter sort_columns dataType to long_string_columns
- check compaction after altering range column dataType to longStringColumn
- long string columns cannot contain duplicate columns
- long_string_columns: column does not exist in table 
- long_string_columns: columns cannot exist in partitions columns
- long_string_columns: columns cannot exist in no_inverted_index columns
- test alter table properties for long string columns 
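
These cases all revolve around the LONG_STRING_COLUMNS table property, which lifts the usual 32,000-character limit on string columns and, as the test names say, cannot be combined with sort_columns, partition columns, or no_inverted_index. Basic usage, for context (names illustrative):

    sql("CREATE TABLE long_str_test (id INT, notes STRING) STORED AS carbondata " +
        "TBLPROPERTIES ('LONG_STRING_COLUMNS'='notes')")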

- test duplicate columns with select query
Run completed in 1 hour, 23 minutes, 47 seconds.
Total number of tests run: 3430
Suites: completed 302, aborted 0
Tests: succeeded 3428, failed 2, canceled 0, ignored 82, pending 0
*** 2 TESTS FAILED ***

Index:

CarbonIndexFileMergeTestCaseWithSI:
- Verify correctness of index merge
- Verify command of index merge !!! IGNORED !!!
- Verify command of index merge without enabling property *** FAILED ***
  org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 2001.0 failed 1 times, most recent failure: Lost task 1.0 in stage 2001.0 (TID 79695, localhost, executor driver): java.lang.RuntimeException: Failed to merge index files in path: /Users/xubo/Desktop/xubo/git/carbondata1/integration/spark/target/warehouse/nonindexmerge/Fact/Part0/Segment_1. Table status update with mergeIndex file has failed
   at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.mergeCarbonIndexFilesOfSegment(CarbonIndexFileMergeWriter.java:122)
   at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.mergeCarbonIndexFilesOfSegment(CarbonIndexFileMergeWriter.java:386)
   at org.apache.spark.rdd.CarbonMergeFilesRDD$$anon$1.<init>(CarbonMergeFilesRDD.scala:322)
   at org.apache.spark.rdd.CarbonMergeFilesRDD.internalCompute(CarbonMergeFilesRDD.scala:287)
   at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:84)
   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
   at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
   at org.apache.spark.scheduler.Task.run(Task.scala:109)
   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Table status update with mergeIndex file has failed
   at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.writeMergeIndexFileBasedOnSegmentFile(CarbonIndexFileMergeWriter.java:327)
   at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.mergeCarbonIndexFilesOfSegment(CarbonIndexFileMergeWriter.java:114)
   ... 12 more

Driver stacktrace:
  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1661)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1649)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1648)
  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1648)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
  at scala.Option.foreach(Option.scala:257)
  at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
  ...
  Cause: java.lang.RuntimeException: Failed to merge index files in path: /Users/xubo/Desktop/xubo/git/carbondata1/integration/spark/target/warehouse/nonindexmerge/Fact/Part0/Segment_1. Table status update with mergeIndex file has failed
  at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.mergeCarbonIndexFilesOfSegment(CarbonIndexFileMergeWriter.java:122)
  at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.mergeCarbonIndexFilesOfSegment(CarbonIndexFileMergeWriter.java:386)
  at org.apache.spark.rdd.CarbonMergeFilesRDD$$anon$1.<init>(CarbonMergeFilesRDD.scala:322)
  at org.apache.spark.rdd.CarbonMergeFilesRDD.internalCompute(CarbonMergeFilesRDD.scala:287)
  at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:84)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
  at org.apache.spark.scheduler.Task.run(Task.scala:109)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
  ...
  Cause: java.io.IOException: Table status update with mergeIndex file has failed
  at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.writeMergeIndexFileBasedOnSegmentFile(CarbonIndexFileMergeWriter.java:327)
  at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.mergeCarbonIndexFilesOfSegment(CarbonIndexFileMergeWriter.java:114)
  at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.mergeCarbonIndexFilesOfSegment(CarbonIndexFileMergeWriter.java:386)
  at org.apache.spark.rdd.CarbonMergeFilesRDD$$anon$1.<init>(CarbonMergeFilesRDD.scala:322)
  at org.apache.spark.rdd.CarbonMergeFilesRDD.internalCompute(CarbonMergeFilesRDD.scala:287)
  at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:84)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
  at org.apache.spark.scheduler.Task.run(Task.scala:109)
  ...
- Verify index index merge with compaction
- Verify index index merge for compacted segments
- test refresh index with indexExists as null
Run completed in 15 minutes, 36 seconds.
Total number of tests run: 283
Suites: completed 32, aborted 0
Tests: succeeded 282, failed 1, canceled 0, ignored 1, pending 0
*** 1 TEST FAILED ***
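
On the CarbonIndexFileMergeTestCaseWithSI failure: the test disables automatic per-segment index merge and then triggers the merge explicitly, and the explicit merge fails while updating the table status file. For context, the property and the explicit trigger look like this (a sketch, not the exact test body; the table name is taken from the path in the trace):

    import org.apache.carbondata.core.util.CarbonProperties

    // Disable index merge during load...
    CarbonProperties.getInstance()
      .addProperty("carbon.merge.index.in.segment", "false")
    // ...then merge the segment index files explicitly afterwards.
    sql("ALTER TABLE nonindexmerge COMPACT 'SEGMENT_INDEX'")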