filodb / FiloDB

Distributed Prometheus time series database

sbt test is failing #146

Closed · shukla2009 closed this issue 7 years ago

shukla2009 commented 7 years ago

```
[JVM-2] RUN ABORTED
[JVM-2]   java.lang.NoSuchMethodError: akka.util.Helpers$.toRootLowerCase(Ljava/lang/String;)Ljava/lang/String;
[JVM-2]   at akka.remote.RemoteSettings.<init>(RemoteSettings.scala:33)
[JVM-2]   at akka.remote.RemoteActorRefProvider.<init>(RemoteActorRefProvider.scala:136)
[JVM-2]   at akka.cluster.ClusterActorRefProvider.<init>(ClusterActorRefProvider.scala:54)
[JVM-2]   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
[JVM-2]   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
[JVM-2]   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
[JVM-2]   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
[JVM-2]   at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$2.apply(DynamicAccess.scala:78)
[JVM-2]   at scala.util.Try$.apply(Try.scala:192)
[JVM-2]   at akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:73)
```

```
[ERROR] [05/10/2017 20:13:00.670] [test-akka.actor.default-dispatcher-2] [akka://test/user/$a/ds-coord-gdelt-0] foo!
java.lang.RuntimeException: foo!
  at filodb.coordinator.TestSegmentStateCache.getSegmentState(RowSourceSpec.scala:32)
  at filodb.core.reprojector.DefaultReprojector$$anonfun$toSegments$1$$anonfun$apply$2$$anonfun$2.apply(Reprojector.scala:85)
  at filodb.core.reprojector.DefaultReprojector$$anonfun$toSegments$1$$anonfun$apply$2$$anonfun$2.apply(Reprojector.scala:85)
  at kamon.trace.TraceContext$class.withNewSegment(TraceContext.scala:53)
  at kamon.trace.MetricsOnlyContext.withNewSegment(MetricsOnlyContext.scala:28)
  at filodb.core.Perftools$.subtrace(Perftools.scala:26)
  at filodb.core.reprojector.DefaultReprojector$$anonfun$toSegments$1$$anonfun$apply$2.apply(Reprojector.scala:82)
  at filodb.core.reprojector.DefaultReprojector$$anonfun$toSegments$1$$anonfun$apply$2.apply(Reprojector.scala:79)
  at kamon.trace.Tracer$$anonfun$withNewContext$1.apply(TracerModule.scala:62)
  at kamon.trace.Tracer$.withContext(TracerModule.scala:53)
  at kamon.trace.Tracer$.withNewContext(TracerModule.scala:61)
  at kamon.trace.Tracer$.withNewContext(TracerModule.scala:77)
  at filodb.core.reprojector.DefaultReprojector$$anonfun$toSegments$1.apply(Reprojector.scala:79)
  at filodb.core.reprojector.DefaultReprojector$$anonfun$toSegments$1.apply(Reprojector.scala:78)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
  at scala.collection.immutable.List.foreach(List.scala:381)
  at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
  at scala.collection.immutable.List.map(List.scala:285)
  at filodb.core.reprojector.DefaultReprojector.toSegments(Reprojector.scala:78)
  at filodb.core.reprojector.DefaultReprojector$$anonfun$reproject$1$$anonfun$apply$3.apply(Reprojector.scala:118)
  at filodb.core.reprojector.DefaultReprojector$$anonfun$reproject$1$$anonfun$apply$3.apply(Reprojector.scala:117)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
  at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
  at scala.collection.AbstractTraversable.map(Traversable.scala:104)
  at filodb.core.reprojector.DefaultReprojector$$anonfun$reproject$1.apply(Reprojector.scala:117)
  at filodb.core.reprojector.DefaultReprojector$$anonfun$reproject$1.apply(Reprojector.scala:117)
  at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
  at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)
```
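For anyone triaging this: a `NoSuchMethodError` on `akka.util.Helpers$.toRootLowerCase` is the classic symptom of mixed Akka versions on one classpath, i.e. an `akka-remote`/`akka-cluster` compiled against a newer `akka-actor` than the one that actually gets resolved. One way to confirm is to dump the resolved dependency tree. Below is a minimal sketch for the sbt 0.13 setup the log shows, using the sbt-dependency-graph plugin; the `coordinator` module name is an inference from the log paths, not a confirmed detail of this build:

```scala
// project/plugins.sbt
// sbt-dependency-graph adds a dependencyTree task that prints, per module,
// which version of each artifact Ivy actually resolved.
addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.8.2")
```

From the sbt prompt, something like `coordinator/dependencyTree` (grepping the output for `akka`) should reveal whether `akka-actor` resolves to an older version than `akka-remote` and `akka-cluster`, for example one pulled in transitively by another dependency.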

velvia commented 7 years ago

Would you have more info?

Currently tests on Travis on master are passing.

shukla2009 commented 7 years ago

```
[info] Loading global plugins from /home/synerzip/.sbt/0.13/plugins
[info] Loading project definition from /home/synerzip/code-base/junk/FiloDB/project
[info] Set current project to filodb (in build file:/home/synerzip/code-base/junk/FiloDB/)
[info] scalastyle using config /home/synerzip/code-base/junk/FiloDB/scalastyle-config.xml
... (scalastyle output for the spark, core, cli, jmh, cassandra, coordinator and stress modules elided: 0 errors, 2 warnings)
[warn] /home/synerzip/code-base/junk/FiloDB/jmh/src/main/scala/filodb.jmh/SparkReadBenchmark.scala:20:60: Non ascii characters are not allowed
[warn] /home/synerzip/code-base/junk/FiloDB/coordinator/src/main/scala/filodb.coordinator/DatasetCoordinatorActor.scala:85:1: Non ascii characters are not allowed
[info] Compiling 15 Scala sources to /home/synerzip/code-base/junk/FiloDB/coordinator/target/scala-2.11/classes...
[info] Compiling 6 Scala sources to /home/synerzip/code-base/junk/FiloDB/coordinator/target/scala-2.11/test-classes...
... (all 132 core tests pass: DatasetSpec, ColumnSpec, TypesSpec, ComputedColumnSpec, InMemoryMetaStoreSpec, BinaryRecordSpec, ChunkSetInfoSpec, MemTableMemoryTest, WriteAheadLogFileSpec, KeyFilterSpec, ReprojectorSpec, InMemoryColumnStoreSpec, PartitionChunkIndexSpec, SegmentSpec, FiloMemTableSpec, ChunkHeaderSpec, ProjectionSpec)
[info] Run completed in 24 seconds, 560 milliseconds.
[info] Total number of tests run: 132
[info] Suites: completed 17, aborted 0
[info] Tests: succeeded 132, failed 0, canceled 0, ignored 1, pending 1
[info] All tests passed.
[INFO] [05/10/2017 20:00:11.199] [pool-1-thread-1] [Remoting] Starting remoting
[INFO] [05/10/2017 20:00:12.379] [pool-1-thread-1] [Remoting] Remoting started; listening on addresses :[akka.tcp://test@127.0.1.1:42192]
... (Akka cluster node startup and shutdown messages elided throughout)
[info] DatasetCoordinatorActorSpec:
... (7 tests pass)
[info] NodeCoordinatorActorSpec:
... (11 tests pass, 1 ignored)
[info] PartitionMapperSpec:
... (4 tests pass)
[info] SerializationSpec:
... (3 tests pass)
[info] RowSourceSpec:
[info] - should fail if cannot parse input RowReader
[ERROR] [05/10/2017 20:00:44.245] [test-akka.actor.default-dispatcher-18] [akka://test/user/$a/ds-coord-gdelt-0] foo!
java.lang.RuntimeException: foo!
  at filodb.coordinator.TestSegmentStateCache.getSegmentState(RowSourceSpec.scala:32)
  ... (same stack trace as quoted above)
[info] - should fail fast if NodeCoordinatorActor bombs in middle of ingestion
[info] - should ingest all rows and handle memtable flush cycle properly
[info] - should ingest all rows and handle memtable full properly
[info] * filodb.coordinator.NodeClusterSpec
[JVM-1] RUN ABORTED
[JVM-1]   java.lang.NoSuchMethodError: akka.util.Helpers$.toRootLowerCase(Ljava/lang/String;)Ljava/lang/String;
[JVM-1]   at akka.remote.RemoteSettings.<init>(RemoteSettings.scala:33)
  ... (same stack trace as quoted above; JVM-2 aborts identically)
[error] Failed: filodb.coordinator.NodeClusterSpecMultiJvmNode1
[error] Failed: filodb.coordinator.NodeClusterSpecMultiJvmNode2
[info] * filodb.coordinator.RowSourceClusterSpec
... (both JVMs abort with the same NoSuchMethodError)
[error] Failed: filodb.coordinator.RowSourceClusterSpecMultiJvmNode1
[error] Failed: filodb.coordinator.RowSourceClusterSpecMultiJvmNode2
[info] DatasetTableSpec:
[info] DatasetTable
[info] - should create a dataset successfully, then return AlreadyExists
[info] - should delete a dataset
[info] - should return NotFoundError when trying to get nonexisting dataset
[info] - should return the Dataset if it exists
[info] CassandraMetaStoreSpec:
[info] dataset API
[info] - should create a new Dataset if one not there
^CException in thread "Thread-27" java.io.EOFException
  at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2903)
  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1502)
  at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
  at org.scalatest.tools.Framework$ScalaTestRunner$Skeleton$1$React.react(Framework.scala:953)
  at org.scalatest.tools.Framework$ScalaTestRunner$Skeleton$1.run(Framework.scala:942)
  at java.lang.Thread.run(Thread.java:745)
^C[info] ScalaTest
[info] Run completed in 49 seconds, 667 milliseconds.
[info] Total number of tests run: 29
[info] Suites: completed 5, aborted 0
[info] Tests: succeeded 29, failed 0, canceled 0, ignored 1, pending 0
[info] All tests passed.
[info] multi-jvm
[info]   filodb.coordinator.NodeClusterSpec
[info] multi-jvm
[info]   filodb.coordinator.RowSourceClusterSpec
^C[error] Failed: Total 29, Failed 0, Errors 0, Passed 29, Ignored 1
^Csynerzip@ULTP-438:~/code-base/junk/FiloDB$ sbt test
[info] Loading global plugins from /home/synerzip/.sbt/0.13/plugins
[info] Loading project definition from /home/synerzip/code-base/junk/FiloDB/project
[info] Set current project to filodb (in build file:/home/synerzip/code-base/junk/FiloDB/)
... (same scalastyle output and the same two non-ASCII warnings as the first run)
[info] Compiling 38 Scala sources to /home/synerzip/code-base/junk/FiloDB/core/target/scala-2.11/classes...
[info] [Scalaxy] Optimized stream Range.foreach (strategy: safe)   (repeated 9 times)
... (all 132 core tests pass again)
[info] Run completed in 24 seconds, 908 milliseconds.
[info] Total number of tests run: 132
[info] Suites: completed 17, aborted 0
[info] Tests: succeeded 132, failed 0, canceled 0, ignored 1, pending 1
[info] All tests passed.
... (DatasetCoordinatorActorSpec, NodeCoordinatorActorSpec, PartitionMapperSpec, SerializationSpec and RowSourceSpec pass again, with the same "foo!" RuntimeException logged from RowSourceSpec.scala:32)
[info] * filodb.coordinator.NodeClusterSpec
[JVM-1] RUN ABORTED
[JVM-1]   java.lang.NoSuchMethodError: akka.util.Helpers$.toRootLowerCase(Ljava/lang/String;)Ljava/lang/String;
  ... (JVM-2 aborts identically; filodb.coordinator.RowSourceClusterSpec then aborts the same way on both JVMs)
[error] Failed: filodb.coordinator.NodeClusterSpecMultiJvmNode1
[error] Failed: filodb.coordinator.NodeClusterSpecMultiJvmNode2
[error] Failed: filodb.coordinator.RowSourceClusterSpecMultiJvmNode1
[error] Failed: filodb.coordinator.RowSourceClusterSpecMultiJvmNode2
[info] ScalaTest
[info] Run completed in 47 seconds, 926 milliseconds.
[info] Total number of tests run: 29
[info] Suites: completed 5, aborted 0
[info] Tests: succeeded 29, failed 0, canceled 0, ignored 1, pending 0
[info] All tests passed.
[error] Failed: Total 29, Failed 0, Errors 0, Passed 29, Ignored 1
[info] DatasetTableSpec:
... (DatasetTable and CassandraMetaStoreSpec dataset API tests run again)
[info] Compiling 5 Scala sources to /home/synerzip/code-base/junk/FiloDB/stress/target/scala-2.11/classes...
```
[info] - should create a new Dataset if one not there [info] - should return AlreadyExists if dataset already exists [info] - should return NotFound if getDataset on nonexisting dataset [info] - should return all datasets created [info] column API [info] - should return IllegalColumnChange if an invalid column addition submitted [info] - should be able to create a Column and get the Schema [info] - should return IllegalColumnChange if some column additions invalid [info] - should be able to add many new columns at once [info] - deleteDatasets should delete both dataset and columns [info] CassandraColumnStoreSpec: [info] appendSegment [info] - should NOOP if the segment is empty [info] - should append new rows successfully [info] Exception encountered when attempting to run a suite with class name: filodb.cassandra.columnstore.CassandraColumnStoreSpec ABORTED [info] Exception encountered when attempting to run a suite with class name: filodb.cassandra.columnstore.CassandraColumnStoreSpec (ColumnStoreSpec.scala:42) [info] ColumnTableSpec: [info] ColumnTable [info] - should return empty schema if a dataset does not exist in columns table [info] - should add the first column and read it back as a schema [info] - should return MetadataException if illegal column type encoded in Cassandra [info] IngestionStateTableSpec: [info] initialize [info] - should create ingestion_state table successfully [info] insertIngestionState [info] - should create an entry into table, then return already exists [info] updateIngestionState [info] - should modify state of ingestion for a given actor, dataset , walfilename [info] getIngestionStateByDataset [info] - should fetch entry for a given dataset [info] getIngestionStateByActor [info] - should fetch entry for a given node actor path [info] deleteIngestationStateByDataset [info] - should fetch entry for a given dataset [info] deleteIngestationStateByActor [info] - should remove entry for a given node actor path [info] clearAll [info] - should truncate ingestion_state table successfully [info] dropTable [info] - should delete ingestion_state table successfully [info] Run completed in 2 minutes, 31 seconds. [info] Total number of tests run: 27 [info] Suites: completed 4, aborted 1 [info] Tests: succeeded 27, failed 0, canceled 0, ignored 0, pending 0 [info] 1 SUITE ABORTED [error] Error during tests: [error] filodb.cassandra.columnstore.CassandraColumnStoreSpec [2017-05-10 20:16:04,963] WARN o.a.h.u.NativeCodeLoader [] - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable [2017-05-10 20:16:05,401] WARN o.apache.spark.util.Utils [] - Your hostname, ULTP-438 resolves to a loopback address: 127.0.1.1; using 172.25.30.61 instead (on interface eth1) [2017-05-10 20:16:05,411] WARN o.apache.spark.util.Utils [] - Set SPARK_LOCAL_IP if you need to bind to another address [2017-05-10 20:16:08,227] WARN o.a.spark.SparkContext [] - Use an existing SparkContext, some configuration may not take effect. [INFO] [05/10/2017 20:16:09.873] [pool-1-thread-1] [StatsDExtension(akka://kamon)] Starting the Kamon(StatsD) extension [info] InMemoryStoreTest:
[info] - should be able to write to InMemoryColumnStore with multi-column partition keys [2017-05-10 20:16:44,845] WARN o.a.h.u.NativeCodeLoader [] - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable [2017-05-10 20:16:45,017] WARN o.apache.spark.util.Utils [] - Your hostname, ULTP-438 resolves to a loopback address: 127.0.1.1; using 172.25.30.61 instead (on interface eth1) [2017-05-10 20:16:45,018] WARN o.apache.spark.util.Utils [] - Set SPARK_LOCAL_IP if you need to bind to another address [2017-05-10 20:16:46,056] WARN o.a.spark.SparkContext [] - Use an existing SparkContext, some configuration may not take effect. [INFO] [05/10/2017 20:16:46.814] [pool-1-thread-1] [StatsDExtension(akka://kamon)] Starting the Kamon(StatsD) extension [info] StreamingTest:
[info] - should ingest successive streaming RDDs as DataFrames... [2017-05-10 20:17:26,142] WARN o.a.h.u.NativeCodeLoader [] - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable [2017-05-10 20:17:26,271] WARN o.apache.spark.util.Utils [] - Your hostname, ULTP-438 resolves to a loopback address: 127.0.1.1; using 172.25.30.61 instead (on interface eth1) [2017-05-10 20:17:26,271] WARN o.apache.spark.util.Utils [] - Set SPARK_LOCAL_IP if you need to bind to another address [2017-05-10 20:17:27,284] WARN o.a.spark.SparkContext [] - Use an existing SparkContext, some configuration may not take effect. [INFO] [05/10/2017 20:17:27.973] [pool-1-thread-1] [StatsDExtension(akka://kamon)] Starting the Kamon(StatsD) extension [info] SaveAsFiloTest:
[info] - should create missing columns and partitions and write table [info] - should throw ColumnTypeMismatch if existing columns are not same type [info] - should throw BadSchemaError if illegal computed column specification or bad schema [info] - should not delete original metadata if overwrite with bad schema definition [info] - should write table if there are existing matching columns [info] - should throw error in ErrorIfExists mode if dataset already exists [info] - should write and read using DF write() and read() APIs [info] - should write and read to another keyspace using DF write() and read() APIs [info] - should overwrite existing data if mode=Overwrite
[info] - should append data in Append mode [info] - should append data using SQL INSERT INTO statements
[info] - should be able to write with a user-specified partitioning column [info] - should be able to write with multi-column partition keys
[info] - should be able to parse and use partition filters in queries
[info] - should be able to parse and use partition filters even if partition has computed column [info] - should be able to parse and use single partition query
[info] - should be able to parse and use multipartition query
[info] - should be able to filter by row key and multiple partitions
[info] - should be able do full table scan when all partition keys are not part of the filters [info] - should be able to write with multi-column row keys and filter by row key [info] - should be able to ingest Spark Timestamp columns and query them
[info] - should be able to ingest records using a hash partition key and filter by hashed key [info] Run completed in 12 minutes, 20 seconds. [info] Total number of tests run: 24 [info] Suites: completed 3, aborted 0 [info] Tests: succeeded 24, failed 0, canceled 0, ignored 0, pending 0 [info] All tests passed. [info] Run completed in 132 milliseconds. [info] Total number of tests run: 0 [info] Suites: completed 0, aborted 0 [info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0 [info] No tests were executed. [info] Run completed in 55 milliseconds. [info] Total number of tests run: 0 [info] Suites: completed 0, aborted 0 [info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0 [info] No tests were executed. [error] (coordinator/test:test) sbt.TestsFailedException: Tests unsuccessful [error] (cassandra/test:test) sbt.TestsFailedException: Tests unsuccessful [error] Total time: 1059 s, completed 10 May, 2017 8:28:21 PM

velvia commented 7 years ago

Hmmm interesting. This is in the multi-jvm tests, which launch separate JVM nodes. Something is wrong with the classpath there. Maybe try this:

multi-jvm:clean

The default clean does not clean the multi-JVM targets. BTW, the important thing is that the main tests run; the multi-jvm tests will be slightly more flaky.
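For example, a minimal sbt 0.13 session for this (just a sketch — it assumes the MultiJvm config lives in the coordinator sub-project, which the failing filodb.coordinator.* spec names suggest; adjust the project name to whatever your build actually defines):

coordinator/multi-jvm:clean
coordinator/multi-jvm:test

If the NoSuchMethodError on akka.util.Helpers$.toRootLowerCase still shows up after a clean, it usually means the forked JVMs picked up two different Akka versions on their classpath; the built-in evicted task (sbt 0.13.6+) or show coordinator/test:fullClasspath can help spot conflicting akka-actor / akka-remote jars.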

-Evan

On May 10, 2017, at 11:29 PM, Rahul Shukla notifications@github.com wrote:

SBT test command that you ran: sbt test More of the log/console output which shows which test this is failing: pasted below Branch and commit number: Master/4feff59 Your SBT version and environment: 0.13.11 [info] Loading global plugins from /home/synerzip/.sbt/0.13/plugins [info] Loading project definition from /home/synerzip/code-base/junk/FiloDB/project [info] Set current project to filodb (in build file:/home/synerzip/code-base/junk/FiloDB/) [info] scalastyle using config /home/synerzip/code-base/junk/FiloDB/scalastyle-config.xml [info] Processed 8 file(s) [info] Found 0 errors [info] Found 0 warnings [info] Found 0 infos [info] Finished in 28 ms [success] created output: /home/synerzip/code-base/junk/FiloDB/spark/target [info] scalastyle using config /home/synerzip/code-base/junk/FiloDB/scalastyle-config.xml [info] Processed 38 file(s) [info] Found 0 errors [info] Found 0 warnings [info] Found 0 infos [info] Finished in 3 ms [success] created output: /home/synerzip/code-base/junk/FiloDB/core/target [info] scalastyle using config /home/synerzip/code-base/junk/FiloDB/scalastyle-config.xml [info] Processed 2 file(s) [info] Found 0 errors [info] Found 0 warnings [info] Found 0 infos [info] Finished in 0 ms [success] created output: /home/synerzip/code-base/junk/FiloDB/cli/target [info] scalastyle using config /home/synerzip/code-base/junk/FiloDB/scalastyle-config.xml [warn] /home/synerzip/code-base/junk/FiloDB/jmh/src/main/scala/filodb.jmh/SparkReadBenchmark.scala:20:60: Non ascii characters are not allowed [info] Processed 7 file(s) [info] Found 0 errors [info] Found 1 warnings [info] Found 0 infos [info] Finished in 5 ms [success] created output: /home/synerzip/code-base/junk/FiloDB/jmh/target [info] scalastyle using config /home/synerzip/code-base/junk/FiloDB/scalastyle-config.xml [info] Processed 13 file(s) [info] Found 0 errors [info] Found 0 warnings [info] Found 0 infos [info] Finished in 2 ms [success] created output: /home/synerzip/code-base/junk/FiloDB/cassandra/target [info] scalastyle using config /home/synerzip/code-base/junk/FiloDB/scalastyle-config.xml [warn] /home/synerzip/code-base/junk/FiloDB/coordinator/src/main/scala/filodb.coordinator/DatasetCoordinatorActor.scala:85:1: Non ascii characters are not allowed [info] Processed 15 file(s) [info] Found 0 errors [info] Found 1 warnings [info] Found 0 infos [info] Finished in 2 ms [success] created output: /home/synerzip/code-base/junk/FiloDB/coordinator/target [info] scalastyle using config /home/synerzip/code-base/junk/FiloDB/scalastyle-config.xml [info] Processed 5 file(s) [info] Found 0 errors [info] Found 0 warnings [info] Found 0 infos [info] Finished in 2 ms [success] created output: /home/synerzip/code-base/junk/FiloDB/stress/target [info] Compiling 15 Scala sources to /home/synerzip/code-base/junk/FiloDB/coordinator/target/scala-2.11/classes... [info] Compiling 6 Scala sources to /home/synerzip/code-base/junk/FiloDB/coordinator/target/scala-2.11/test-classes... 
[info] DatasetSpec: [info] DatasetOptions serialization [info] - should serialize options successfully [info] ColumnSpec: [info] Column.schemaFold [info] - should add new columns to the schema [info] - should remove deleted columns from the schema [info] - should replace updated column defs in the schema [info] Column.invalidateNewColumn [info] - should check that regular column names don't have : in front [info] - should check that column names cannot contain illegal chars [info] - should check that cannot add columns at lower versions [info] - should check that added columns change some property [info] - should check that new columns are not deleted [info] - should return no reasons for a valid new column [info] Column serialization [info] - should serialize and deserialize properly [info] TypesSpec: [info] ByteVectorOrdering [info] - should compare by length if contents equal [info] - should compare by unsigned bytes [info] KeyTypes [info] - should compare CompositeKeyTypes using ordering trait [info] - getKeyFunc should resolve null values to default values [info] ComputedColumnSpec: [info] :getOrElse [info] - should return WrongNumberArguments when # args not 2 [info] - should return BadArgument if source column not found [info] - should return BadArgument if cannot parse non-string default value [info] - should parse normal (non-null) value and pass it through [info] - should parse null value and pass through default value [info] :round [info] - should return BadArgument if rounding value different type than source column [info] - should return BadArgument if attempt to use :round with unsupported type [info] - should round long value [info] - should round double value [info] :timeslice [info] - should return BadArgument if time duration string not formatted properly [info] - should timeslice long values as milliseconds [info] - should timeslice Timestamp values [info] :monthOfYear [info] - should return month of year for timestamp column [info] :stringPrefix [info] - should take string prefix [info] - should return empty string if column value null [info] :hash [info] - should hash different string values to int between 0 and N [info] - should hash long values to int between 0 and N [info] InMemoryMetaStoreSpec: [info] dataset API [info] - should create a new Dataset if one not there [info] - should return AlreadyExists if dataset already exists [info] - should return NotFound if getDataset on nonexisting dataset [info] - should return all datasets created [info] column API [info] - should return IllegalColumnChange if an invalid column addition submitted [info] - should be able to create a Column and get the Schema [info] - should return IllegalColumnChange if some column additions invalid [info] - should be able to add many new columns at once [info] - deleteDatasets should delete both dataset and columns [info] BinaryRecordSpec: [info] - should create and extract individual fields and match when all fields present [info] - should create and extract fields and check notNull correctly [info] - should get default values back for null fields [info] - should get bytes out and get back same BinaryRecord [info] - should generate same hashcode for different instances of the same RecordSchema [info] - should produce shorter BinaryRecords if smaller number of items fed [info] - should semantically compare BinaryRecords field by field [info] - should semantically compare BinaryRecord Int and Long fields correctly (pending) [info] - should produce sortable ByteArrays from BinaryRecords 
[info] - should serialize and deserialize RecordSchema and BinaryRecordWrapper [info] ChunkSetInfoSpec: [info] - should serialize and deserialize ChunkSetInfo and no skips [info] - should serialize and deserialize ChunkSetInfo and skips [info] - should find intersection range of composite keys with strings [info] - should not find intersection if key1 is greater than key2 [info] - should find intersection range of keys with timestamps [info] - should return None if error with one of the RowReaders [info] MemTableMemoryTest: Start: free memory = 155834392 init = 2555904(2496K) used = 32475736(31714K) committed = 32899072(32128K) max = -1(-1K) End: free memory = 450301992 elapsed = 1836 ms init = 2555904(2496K) used = 34513904(33704K) committed = 35127296(34304K) max = -1(-1K) [info] - should add tons of rows without overflowing memory and taking too long [info] WriteAheadLogFileSpec: [info] - creates memory mapped file with no data [info] - creates memory mapped buffer for an existing file [info] - write header to the file [info] - write filochunks indicator to the file !!! IGNORED !!! [info] - write filochunks to the file [info] - Able to write chunk data greater than the size of the mapped byte buffer [info] - Able to write large header greater than the size of the mapped byte buffer [info] - Valid WAL header [info] - Invalid file identifier in header [info] - Invalid column definition header [info] - Invalid column count indicator [info] - Invalid column definitions size [info] - Able to read filo chunks successfully [info] - Able to read filo chunks for GdeltTestData successfully [info] KeyFilterSpec: [info] - should parse values for regular KeyTypes [info] - should validate equalsFunc for string and other types [info] - should validate inFunc for string and other types [info] - should parse values for computed KeyTypes [info] ReprojectorSpec: [info] - should write out new chunkSet in sorted rowKey order [info] - should reuse segment metadata on successive flushes [info] - should reload segment metadata if state no longer in cache [info] - should reload segment metadata and replace previous chunk rows successfully [info] InMemoryColumnStoreSpec: [info] appendSegment [info] - should NOOP if the segment is empty [info] - should append new rows successfully [info] - should replace rows to a previous chunk successfully [info] - should replace rows with multi row keys to an uncached segment [info] scanChunks SinglePartitionScan [info] - should read chunks back that were written [info] - should return empty iterator if cannot find chunk (SinglePartitionRangeScan) [info] - should return empty iterator if cannot find partition or version [info] - should return empty chunks if cannot find some columns [info] scanRows [info] - should read back rows that were written [info] - should read back rows written in another database [info] - should read back rows written with multi-column row keys [info] - should filter rows written with single partition key [info] - should range scan by row keys and filter rows with single partition key [info] - should range scan by row keys (SinglePartitionRowKeyScan) [info] - should filter rows written with multiple column partition keys [info] PartitionChunkIndexSpec: [info] RowkeyPartitionChunkIndex [info] - should add out of order chunks and return in rowkey order [info] - should return no chunks if rowKeyRange startKey is greater than endKey [info] ChunkIDPartitionChunkIndex [info] - should add out of order chunks and return in chunkID order [info] - should handle 
skips [info] SegmentSpec: [info] - SegmentState should add chunk info properly and update state for append only [info] - SegmentState should add chunk info properly when SegmentState prepopulated [info] - SegmentState should add skip lists properly when new rows replace previous chunks [info] - SegmentState should not add skip lists if detectSkips=false [info] - RowWriter and RowReader should work for rows with string row keys [info] - RowWriter and RowReader should work for rows with multi-column row keys [info] FiloMemTableSpec: [info] insertRows, readRows with forced flush [info] - should insert out of order rows and read them back in order [info] - should replace rows and read them back in order [info] - should insert/replace rows with multiple partition keys and read them back in order [info] - should insert/replace rows with multiple row keys and read them back in order [info] - should ingest into multiple partitions using partition column [info] - should ingest BinaryRecords with Timestamp partition column [info] - should keep ingesting rows with null partition col value [info] - should not throw error if :getOrElse computed column used with null partition col value [info] ChunkHeaderSpec: [info] - create UTF8 string with FiloWAL of 8 bytes [info] - create column identifer in 2 bytes [info] - Add no of columns to header of 2 bytes [info] - Single column definition [info] - Multi column definitions [info] - Order of methods to write full header [info] ProjectionSpec: [info] RichProjection [info] - should get MissingColumnNames if cannot find row key or segment key [info] - should get MissingColumnNames if projection columns are missing from schema [info] - should get NoColumnsSpecified if key columns or partition columns are empty [info] - should get MissingColumnNames if cannot find partitioning column [info] - should get back NoSuchFunction if computed column function not found [info] - should return RowKeyComputedColumns err if try to use computed columns in row key [info] - should get back partitioning func for default key if partitioning column is default [info] - should change database with withDatabase [info] - apply() should throw exception for bad schema [info] - should get RichProjection back with proper dataset and schema [info] - should get RichProjection back with multiple partition and row key columns [info] - should create RichProjection properly for String row key column [info] - should (de)serialize to/from readOnlyProjectionStrings [info] - should deserialize readOnlyProjectionStrings with empty columns [info] - should deserialize readOnlyProjectionStrings with database specified [info] Compiling 2 Scala sources to /home/synerzip/code-base/junk/FiloDB/cli/target/scala-2.11/classes... [info] Run completed in 24 seconds, 560 milliseconds. [info] Total number of tests run: 132 [info] Suites: completed 17, aborted 0 [info] Tests: succeeded 132, failed 0, canceled 0, ignored 1, pending 1 [info] All tests passed. [INFO] [05/10/2017 20:00:11.199] [pool-1-thread-1] [Remoting] Starting remoting [INFO] [05/10/2017 20:00:12.379] [pool-1-thread-1] [Remoting] Remoting started; listening on addresses :[akka.tcp://test@127.0.1.1:42192] [INFO] [05/10/2017 20:00:12.463] [pool-1-thread-1] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:42192] - Starting up... 
[INFO] [05/10/2017 20:00:13.003] [pool-1-thread-1] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:42192] - Registered cluster JMX MBean [akka:type=Cluster] [INFO] [05/10/2017 20:00:13.003] [pool-1-thread-1] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:42192] - Started up successfully [INFO] [05/10/2017 20:00:13.030] [test-akka.actor.default-dispatcher-5] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:42192] - Metrics collection has started successfully [INFO] [05/10/2017 20:00:13.078] [test-akka.actor.default-dispatcher-3] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:42192] - No seed-nodes configured, manual cluster join required [info] DatasetCoordinatorActorSpec: [info] Compiling 8 Scala sources and 1 Java source to /home/synerzip/code-base/junk/FiloDB/spark/target/scala-2.11/classes... [info] - should respond to GetStats with no flushes and no rows [info] - should not flush if datasets not reached limit yet [info] - should automatically flush after ingesting enough rows

[info] - should send back Nack if over maximum number of rows or Nack sent before with no CheckCanIngest [info] - StartFlush should initiate flush even if # rows not reached trigger yet

[info] - StartFlush should initiate flush when there is no write activity after few seconds [info] - should automatically delete memtable wal files once flush is complete successfully [INFO] [05/10/2017 20:00:25.768] [test-akka.remote.default-remote-dispatcher-20] [akka.tcp://test@127.0.1.1:42192/system/remoting-terminator] Shutting down remote daemon. [INFO] [05/10/2017 20:00:25.798] [test-akka.remote.default-remote-dispatcher-20] [akka.tcp://test@127.0.1.1:42192/system/remoting-terminator] Remote daemon shut down; proceeding with flushing remote transports. [INFO] [05/10/2017 20:00:25.853] [test-akka.remote.default-remote-dispatcher-20] [akka.tcp://test@127.0.1.1:42192/system/remoting-terminator] Remoting shut down. [INFO] [05/10/2017 20:00:26.162] [pool-1-thread-1] [Remoting] Starting remoting [INFO] [05/10/2017 20:00:26.215] [pool-1-thread-1] [Remoting] Remoting started; listening on addresses :[akka.tcp://test@127.0.1.1:35619] [INFO] [05/10/2017 20:00:26.218] [pool-1-thread-1] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:35619] - Starting up... [INFO] [05/10/2017 20:00:26.222] [pool-1-thread-1] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:35619] - Registered cluster JMX MBean [akka:type=Cluster] [INFO] [05/10/2017 20:00:26.222] [pool-1-thread-1] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:35619] - Started up successfully [INFO] [05/10/2017 20:00:26.222] [test-akka.actor.default-dispatcher-3] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:35619] - Metrics collection has started successfully [INFO] [05/10/2017 20:00:26.224] [test-akka.actor.default-dispatcher-5] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:35619] - No seed-nodes configured, manual cluster join required [info] NodeCoordinatorActorSpec: [info] NodeCoordinatorActor SetupIngestion verification [info] - should return UnknownDataset when dataset missing or no columns defined [info] - should return UndefinedColumns if trying to ingest undefined columns [info] - should return BadSchema if dataset definition bazooka [info] Compiling 2 Scala sources to /home/synerzip/code-base/junk/FiloDB/coordinator/target/scala-2.11/multi-jvm-classes... [info] - should get IngestionReady if try to set up concurrently for same dataset/version [info] - should add new entry for ingestion state for a given dataset/version, only first time [info] NodeCoordinatorActor DatasetOps commands [info] - should be able to create new dataset [info] - should return DatasetAlreadyExists creating dataset that already exists [info] - should be able to drop a dataset [info] - should be able to start ingestion, send rows, and get an ack back

[info] - should stop datasetActor if error occurs and prevent further ingestion

[info] - should reload dataset coordinator actors once the nodes are up [info] - should be able to create new WAL files once the reload and flush is complete !!! IGNORED !!! [INFO] [05/10/2017 20:00:39.138] [test-akka.remote.default-remote-dispatcher-8] [akka.tcp://test@127.0.1.1:35619/system/remoting-terminator] Shutting down remote daemon. [INFO] [05/10/2017 20:00:39.138] [test-akka.remote.default-remote-dispatcher-8] [akka.tcp://test@127.0.1.1:35619/system/remoting-terminator] Remote daemon shut down; proceeding with flushing remote transports. [INFO] [05/10/2017 20:00:39.142] [test-akka.remote.default-remote-dispatcher-8] [akka.tcp://test@127.0.1.1:35619/system/remoting-terminator] Remoting shut down. [info] PartitionMapperSpec: [info] - should be able to add one node at a time immutably [info] - should be able to remove nodes [info] - should get an exception if try to lookup coordinator for empty mapper [info] - should get back coordRefs for different partition key hashes [info] SerializationSpec: [info] - should be able to serialize different IngestionCommands messages [info] - should be able to serialize a PartitionMapper [info] - should be able to serialize and deserialize IngestRows with BinaryRecords [INFO] [05/10/2017 20:00:39.432] [pool-1-thread-1] [Remoting] Starting remoting [INFO] [05/10/2017 20:00:39.446] [pool-1-thread-1] [Remoting] Remoting started; listening on addresses :[akka.tcp://test@127.0.1.1:33670] [INFO] [05/10/2017 20:00:39.447] [pool-1-thread-1] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:33670] - Starting up... [INFO] [05/10/2017 20:00:39.454] [pool-1-thread-1] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:33670] - Registered cluster JMX MBean [akka:type=Cluster] [INFO] [05/10/2017 20:00:39.454] [pool-1-thread-1] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:33670] - Started up successfully [INFO] [05/10/2017 20:00:39.456] [test-akka.actor.default-dispatcher-14] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:33670] - Metrics collection has started successfully [INFO] [05/10/2017 20:00:39.458] [test-akka.actor.default-dispatcher-15] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:33670] - No seed-nodes configured, manual cluster join required [info] RowSourceSpec: [info] - should fail if cannot parse input RowReader [ERROR] [05/10/2017 20:00:44.245] [test-akka.actor.default-dispatcher-18] [akka://test/user/$a/ds-coord-gdelt-0] foo! java.lang.RuntimeException: foo! 
at filodb.coordinator.TestSegmentStateCache.getSegmentState(RowSourceSpec.scala:32) at filodb.core.reprojector.DefaultReprojector$$anonfun$toSegments$1$$anonfun$apply$2$$anonfun$2.apply(Reprojector.scala:85) at filodb.core.reprojector.DefaultReprojector$$anonfun$toSegments$1$$anonfun$apply$2$$anonfun$2.apply(Reprojector.scala:85) at kamon.trace.TraceContext$class.withNewSegment(TraceContext.scala:53) at kamon.trace.MetricsOnlyContext.withNewSegment(MetricsOnlyContext.scala:28) at filodb.core.Perftools$.subtrace(Perftools.scala:26) at filodb.core.reprojector.DefaultReprojector$$anonfun$toSegments$1$$anonfun$apply$2.apply(Reprojector.scala:82) at filodb.core.reprojector.DefaultReprojector$$anonfun$toSegments$1$$anonfun$apply$2.apply(Reprojector.scala:79) at kamon.trace.Tracer$$anonfun$withNewContext$1.apply(TracerModule.scala:62) at kamon.trace.Tracer$.withContext(TracerModule.scala:53) at kamon.trace.Tracer$.withNewContext(TracerModule.scala:61) at kamon.trace.Tracer$.withNewContext(TracerModule.scala:77) at filodb.core.reprojector.DefaultReprojector$$anonfun$toSegments$1.apply(Reprojector.scala:79) at filodb.core.reprojector.DefaultReprojector$$anonfun$toSegments$1.apply(Reprojector.scala:78) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) at scala.collection.immutable.List.foreach(List.scala:381) at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at scala.collection.immutable.List.map(List.scala:285) at filodb.core.reprojector.DefaultReprojector.toSegments(Reprojector.scala:78) at filodb.core.reprojector.DefaultReprojector$$anonfun$reproject$1$$anonfun$apply$3.apply(Reprojector.scala:118) at filodb.core.reprojector.DefaultReprojector$$anonfun$reproject$1$$anonfun$apply$3.apply(Reprojector.scala:117) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at scala.collection.AbstractTraversable.map(Traversable.scala:104) at filodb.core.reprojector.DefaultReprojector$$anonfun$reproject$1.apply(Reprojector.scala:117) at filodb.core.reprojector.DefaultReprojector$$anonfun$reproject$1.apply(Reprojector.scala:117) at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24) at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)

[info] - should fail fast if NodeCoordinatorActor bombs in middle of ingestion [info] - should ingest all rows and handle memtable flush cycle properly

[info] - should ingest all rows and handle memtable full properly [INFO] [05/10/2017 20:00:54.183] [test-akka.remote.default-remote-dispatcher-4] [akka.tcp://test@127.0.1.1:33670/system/remoting-terminator] Shutting down remote daemon. [INFO] [05/10/2017 20:00:54.183] [test-akka.remote.default-remote-dispatcher-4] [akka.tcp://test@127.0.1.1:33670/system/remoting-terminator] Remote daemon shut down; proceeding with flushing remote transports. [INFO] [05/10/2017 20:00:54.186] [test-akka.remote.default-remote-dispatcher-4] [akka.tcp://test@127.0.1.1:33670/system/remoting-terminator] Remoting shut down. [info] Compiling 1 Scala source to /home/synerzip/code-base/junk/FiloDB/spark/target/scala-2.11/test-classes... [info] * filodb.coordinator.NodeClusterSpec [JVM-1] RUN ABORTED [JVM-1] java.lang.NoSuchMethodError: akka.util.Helpers$.toRootLowerCase(Ljava/lang/String;)Ljava/lang/String; [JVM-1] at akka.remote.RemoteSettings.(RemoteSettings.scala:33) [JVM-1] at akka.remote.RemoteActorRefProvider.(RemoteActorRefProvider.scala:136) [JVM-1] at akka.cluster.ClusterActorRefProvider.(ClusterActorRefProvider.scala:54) [JVM-1] at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) [JVM-1] at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) [JVM-1] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) [JVM-1] at java.lang.reflect.Constructor.newInstance(Constructor.java:423) [JVM-1] at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$2.apply(DynamicAccess.scala:78) [JVM-1] at scala.util.Try$.apply(Try.scala:192) [JVM-1] at akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:73) [JVM-1] ... [JVM-2] RUN ABORTED [JVM-2] java.lang.NoSuchMethodError: akka.util.Helpers$.toRootLowerCase(Ljava/lang/String;)Ljava/lang/String; [JVM-2] at akka.remote.RemoteSettings.(RemoteSettings.scala:33) [JVM-2] at akka.remote.RemoteActorRefProvider.(RemoteActorRefProvider.scala:136) [JVM-2] at akka.cluster.ClusterActorRefProvider.(ClusterActorRefProvider.scala:54) [JVM-2] at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) [JVM-2] at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) [JVM-2] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) [JVM-2] at java.lang.reflect.Constructor.newInstance(Constructor.java:423) [JVM-2] at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$2.apply(DynamicAccess.scala:78) [JVM-2] at scala.util.Try$.apply(Try.scala:192) [JVM-2] at akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:73) [JVM-2] ... 
[error] Failed: filodb.coordinator.NodeClusterSpecMultiJvmNode1 [error] Failed: filodb.coordinator.NodeClusterSpecMultiJvmNode2 [info] * filodb.coordinator.RowSourceClusterSpec [JVM-1] RUN ABORTED [JVM-1] java.lang.NoSuchMethodError: akka.util.Helpers$.toRootLowerCase(Ljava/lang/String;)Ljava/lang/String; [JVM-1] at akka.remote.RemoteSettings.(RemoteSettings.scala:33) [JVM-1] at akka.remote.RemoteActorRefProvider.(RemoteActorRefProvider.scala:136) [JVM-1] at akka.cluster.ClusterActorRefProvider.(ClusterActorRefProvider.scala:54) [JVM-1] at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) [JVM-1] at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) [JVM-1] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) [JVM-1] at java.lang.reflect.Constructor.newInstance(Constructor.java:423) [JVM-1] at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$2.apply(DynamicAccess.scala:78) [JVM-1] at scala.util.Try$.apply(Try.scala:192) [JVM-1] at akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:73) [JVM-1] ... [JVM-2] RUN ABORTED [JVM-2] java.lang.NoSuchMethodError: akka.util.Helpers$.toRootLowerCase(Ljava/lang/String;)Ljava/lang/String; [JVM-2] at akka.remote.RemoteSettings.(RemoteSettings.scala:33) [JVM-2] at akka.remote.RemoteActorRefProvider.(RemoteActorRefProvider.scala:136) [JVM-2] at akka.cluster.ClusterActorRefProvider.(ClusterActorRefProvider.scala:54) [JVM-2] at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) [JVM-2] at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) [JVM-2] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) [JVM-2] at java.lang.reflect.Constructor.newInstance(Constructor.java:423) [JVM-2] at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$2.apply(DynamicAccess.scala:78) [JVM-2] at scala.util.Try$.apply(Try.scala:192) [JVM-2] at akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:73) [JVM-2] ... [error] Failed: filodb.coordinator.RowSourceClusterSpecMultiJvmNode1 [error] Failed: filodb.coordinator.RowSourceClusterSpecMultiJvmNode2 [info] DatasetTableSpec: [info] DatasetTable [info] - should create a dataset successfully, then return AlreadyExists [info] - should delete a dataset [info] - should return NotFoundError when trying to get nonexisting dataset [info] - should return the Dataset if it exists [info] CassandraMetaStoreSpec: [info] dataset API [info] - should create a new Dataset if one not there ^CException in thread "Thread-27" java.io.EOFException at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2903) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1502) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422) at org.scalatest.tools.Framework$ScalaTestRunner$Skeleton$1$React.react(Framework.scala:953) at org.scalatest.tools.Framework$ScalaTestRunner$Skeleton$1.run(Framework.scala:942) at java.lang.Thread.run(Thread.java:745) ^C[info] ScalaTest [info] Run completed in 49 seconds, 667 milliseconds. [info] Total number of tests run: 29 [info] Suites: completed 5, aborted 0 [info] Tests: succeeded 29, failed 0, canceled 0, ignored 1, pending 0 [info] All tests passed. 
[info] multi-jvm [info] filodb.coordinator.NodeClusterSpec [info] multi-jvm [info] filodb.coordinator.RowSourceClusterSpec ^C[error] Failed: Total 29, Failed 0, Errors 0, Passed 29, Ignored 1 ^Csynerzip@ULTP-438:/code-base/junk/FiloDB$ ^C synerzip@ULTP-438:/code-base/junk/FiloDB$ ^C synerzip@ULTP-438:/code-base/junk/FiloDB$ ^C synerzip@ULTP-438:/code-base/junk/FiloDB$ ^C synerzip@ULTP-438:/code-base/junk/FiloDB$ ^C synerzip@ULTP-438:/code-base/junk/FiloDB$ sbt test [info] Loading global plugins from /home/synerzip/.sbt/0.13/plugins [info] Loading project definition from /home/synerzip/code-base/junk/FiloDB/project [info] Set current project to filodb (in build file:/home/synerzip/code-base/junk/FiloDB/) [info] scalastyle using config /home/synerzip/code-base/junk/FiloDB/scalastyle-config.xml [warn] /home/synerzip/code-base/junk/FiloDB/coordinator/src/main/scala/filodb.coordinator/DatasetCoordinatorActor.scala:85:1: Non ascii characters are not allowed [info] Processed 15 file(s) [info] Found 0 errors [info] Found 1 warnings [info] Found 0 infos [info] Finished in 15 ms [success] created output: /home/synerzip/code-base/junk/FiloDB/coordinator/target [info] scalastyle using config /home/synerzip/code-base/junk/FiloDB/scalastyle-config.xml [info] Processed 5 file(s) [info] Found 0 errors [info] Found 0 warnings [info] Found 0 infos [info] Finished in 1 ms [success] created output: /home/synerzip/code-base/junk/FiloDB/stress/target [info] scalastyle using config /home/synerzip/code-base/junk/FiloDB/scalastyle-config.xml [info] Processed 2 file(s) [info] Found 0 errors [info] Found 0 warnings [info] Found 0 infos [info] Finished in 4 ms [success] created output: /home/synerzip/code-base/junk/FiloDB/cli/target [info] scalastyle using config /home/synerzip/code-base/junk/FiloDB/scalastyle-config.xml [warn] /home/synerzip/code-base/junk/FiloDB/jmh/src/main/scala/filodb.jmh/SparkReadBenchmark.scala:20:60: Non ascii characters are not allowed [info] Processed 7 file(s) [info] Found 0 errors [info] Found 1 warnings [info] Found 0 infos [info] Finished in 6 ms [success] created output: /home/synerzip/code-base/junk/FiloDB/jmh/target [info] scalastyle using config /home/synerzip/code-base/junk/FiloDB/scalastyle-config.xml [info] Processed 8 file(s) [info] Found 0 errors [info] Found 0 warnings [info] Found 0 infos [info] Finished in 1 ms [success] created output: /home/synerzip/code-base/junk/FiloDB/spark/target [info] scalastyle using config /home/synerzip/code-base/junk/FiloDB/scalastyle-config.xml [info] Processed 38 file(s) [info] Found 0 errors [info] Found 0 warnings [info] Found 0 infos [info] Finished in 6 ms [success] created output: /home/synerzip/code-base/junk/FiloDB/core/target [info] scalastyle using config /home/synerzip/code-base/junk/FiloDB/scalastyle-config.xml [info] Processed 13 file(s) [info] Found 0 errors [info] Found 0 warnings [info] Found 0 infos [info] Finished in 1 ms [success] created output: /home/synerzip/code-base/junk/FiloDB/cassandra/target [info] Compiling 38 Scala sources to /home/synerzip/code-base/junk/FiloDB/core/target/scala-2.11/classes... 
[info] [Scalaxy] Optimized stream Range.foreach (strategy: safe) [info] [Scalaxy] Optimized stream Range.foreach (strategy: safe) [info] [Scalaxy] Optimized stream Range.foreach (strategy: safe) [info] [Scalaxy] Optimized stream Range.foreach (strategy: safe) [info] [Scalaxy] Optimized stream Range.foreach (strategy: safe) [info] [Scalaxy] Optimized stream Range.foreach (strategy: safe) [info] [Scalaxy] Optimized stream Range.foreach (strategy: safe) [info] [Scalaxy] Optimized stream Range.foreach (strategy: safe) [info] [Scalaxy] Optimized stream Range.foreach (strategy: safe) [info] Compiling 15 Scala sources to /home/synerzip/code-base/junk/FiloDB/coordinator/target/scala-2.11/classes... [info] Compiling 20 Scala sources to /home/synerzip/code-base/junk/FiloDB/core/target/scala-2.11/test-classes... [info] Compiling 13 Scala sources to /home/synerzip/code-base/junk/FiloDB/cassandra/target/scala-2.11/classes... [info] DatasetSpec: [info] DatasetOptions serialization [info] - should serialize options successfully [info] ColumnSpec: [info] Column.schemaFold [info] - should add new columns to the schema [info] - should remove deleted columns from the schema [info] - should replace updated column defs in the schema [info] Column.invalidateNewColumn [info] - should check that regular column names don't have : in front [info] - should check that column names cannot contain illegal chars [info] - should check that cannot add columns at lower versions [info] - should check that added columns change some property [info] - should check that new columns are not deleted [info] - should return no reasons for a valid new column [info] Column serialization [info] - should serialize and deserialize properly [info] TypesSpec: [info] ByteVectorOrdering [info] - should compare by length if contents equal [info] - should compare by unsigned bytes [info] KeyTypes [info] - should compare CompositeKeyTypes using ordering trait [info] - getKeyFunc should resolve null values to default values [info] ComputedColumnSpec: [info] :getOrElse [info] - should return WrongNumberArguments when # args not 2 [info] - should return BadArgument if source column not found [info] - should return BadArgument if cannot parse non-string default value [info] - should parse normal (non-null) value and pass it through [info] - should parse null value and pass through default value [info] :round [info] - should return BadArgument if rounding value different type than source column [info] - should return BadArgument if attempt to use :round with unsupported type [info] - should round long value [info] - should round double value [info] :timeslice [info] - should return BadArgument if time duration string not formatted properly [info] - should timeslice long values as milliseconds [info] - should timeslice Timestamp values [info] :monthOfYear [info] - should return month of year for timestamp column [info] :stringPrefix [info] - should take string prefix [info] - should return empty string if column value null [info] :hash [info] - should hash different string values to int between 0 and N [info] - should hash long values to int between 0 and N [info] InMemoryMetaStoreSpec: [info] dataset API [info] - should create a new Dataset if one not there [info] - should return AlreadyExists if dataset already exists [info] - should return NotFound if getDataset on nonexisting dataset [info] - should return all datasets created [info] column API [info] - should return IllegalColumnChange if an invalid column addition submitted [info] Compiling 6 
Scala sources to /home/synerzip/code-base/junk/FiloDB/coordinator/target/scala-2.11/test-classes... [info] - should be able to create a Column and get the Schema [info] - should return IllegalColumnChange if some column additions invalid [info] - should be able to add many new columns at once [info] - deleteDatasets should delete both dataset and columns [info] BinaryRecordSpec: [info] - should create and extract individual fields and match when all fields present [info] - should create and extract fields and check notNull correctly [info] - should get default values back for null fields [info] - should get bytes out and get back same BinaryRecord [info] - should generate same hashcode for different instances of the same RecordSchema [info] - should produce shorter BinaryRecords if smaller number of items fed [info] - should semantically compare BinaryRecords field by field [info] - should semantically compare BinaryRecord Int and Long fields correctly (pending) [info] - should produce sortable ByteArrays from BinaryRecords [info] - should serialize and deserialize RecordSchema and BinaryRecordWrapper [info] ChunkSetInfoSpec: [info] - should serialize and deserialize ChunkSetInfo and no skips [info] - should serialize and deserialize ChunkSetInfo and skips [info] - should find intersection range of composite keys with strings [info] - should not find intersection if key1 is greater than key2 [info] - should find intersection range of keys with timestamps [info] - should return None if error with one of the RowReaders [info] MemTableMemoryTest: Start: free memory = 212156408 init = 2555904(2496K) used = 32621608(31857K) committed = 33128448(32352K) max = -1(-1K) End: free memory = 142805248 elapsed = 5455 ms init = 2555904(2496K) used = 34808248(33992K) committed = 35422208(34592K) max = -1(-1K) [info] - should add tons of rows without overflowing memory and taking too long [info] WriteAheadLogFileSpec: [info] - creates memory mapped file with no data [info] - creates memory mapped buffer for an existing file [info] - write header to the file [info] - write filochunks indicator to the file !!! IGNORED !!! 
[info] - write filochunks to the file [info] - Able to write chunk data greater than the size of the mapped byte buffer [info] - Able to write large header greater than the size of the mapped byte buffer [info] - Valid WAL header [info] - Invalid file identifier in header [info] - Invalid column definition header [info] - Invalid column count indicator [info] - Invalid column definitions size [info] - Able to read filo chunks successfully [info] - Able to read filo chunks for GdeltTestData successfully [info] KeyFilterSpec: [info] - should parse values for regular KeyTypes [info] - should validate equalsFunc for string and other types [info] - should validate inFunc for string and other types [info] - should parse values for computed KeyTypes [info] ReprojectorSpec: [info] - should write out new chunkSet in sorted rowKey order [info] - should reuse segment metadata on successive flushes [info] - should reload segment metadata if state no longer in cache [info] - should reload segment metadata and replace previous chunk rows successfully [info] InMemoryColumnStoreSpec: [info] appendSegment [info] - should NOOP if the segment is empty [info] - should append new rows successfully [info] - should replace rows to a previous chunk successfully [info] - should replace rows with multi row keys to an uncached segment [info] scanChunks SinglePartitionScan [info] - should read chunks back that were written [info] - should return empty iterator if cannot find chunk (SinglePartitionRangeScan) [info] - should return empty iterator if cannot find partition or version [info] - should return empty chunks if cannot find some columns [info] scanRows [info] - should read back rows that were written [info] - should read back rows written in another database [info] - should read back rows written with multi-column row keys [info] - should filter rows written with single partition key [info] - should range scan by row keys and filter rows with single partition key [info] - should range scan by row keys (SinglePartitionRowKeyScan) [info] - should filter rows written with multiple column partition keys [info] PartitionChunkIndexSpec: [info] RowkeyPartitionChunkIndex [info] - should add out of order chunks and return in rowkey order [info] - should return no chunks if rowKeyRange startKey is greater than endKey [info] ChunkIDPartitionChunkIndex [info] - should add out of order chunks and return in chunkID order [info] - should handle skips [info] SegmentSpec: [info] - SegmentState should add chunk info properly and update state for append only [info] - SegmentState should add chunk info properly when SegmentState prepopulated [info] - SegmentState should add skip lists properly when new rows replace previous chunks [info] - SegmentState should not add skip lists if detectSkips=false [info] - RowWriter and RowReader should work for rows with string row keys [info] - RowWriter and RowReader should work for rows with multi-column row keys [info] FiloMemTableSpec: [info] insertRows, readRows with forced flush [info] - should insert out of order rows and read them back in order [info] - should replace rows and read them back in order [info] - should insert/replace rows with multiple partition keys and read them back in order [info] - should insert/replace rows with multiple row keys and read them back in order [info] - should ingest into multiple partitions using partition column [info] - should ingest BinaryRecords with Timestamp partition column [info] - should keep ingesting rows with null partition col value [info] 
- should not throw error if :getOrElse computed column used with null partition col value [info] ChunkHeaderSpec: [info] - create UTF8 string with FiloWAL of 8 bytes [info] - create column identifer in 2 bytes [info] - Add no of columns to header of 2 bytes [info] - Single column definition [info] - Multi column definitions [info] - Order of methods to write full header [info] ProjectionSpec: [info] RichProjection [info] - should get MissingColumnNames if cannot find row key or segment key [info] - should get MissingColumnNames if projection columns are missing from schema [info] - should get NoColumnsSpecified if key columns or partition columns are empty [info] - should get MissingColumnNames if cannot find partitioning column [info] - should get back NoSuchFunction if computed column function not found [info] - should return RowKeyComputedColumns err if try to use computed columns in row key [info] - should get back partitioning func for default key if partitioning column is default [info] - should change database with withDatabase [info] - apply() should throw exception for bad schema [info] - should get RichProjection back with proper dataset and schema [info] - should get RichProjection back with multiple partition and row key columns [info] - should create RichProjection properly for String row key column [info] - should (de)serialize to/from readOnlyProjectionStrings [info] - should deserialize readOnlyProjectionStrings with empty columns [info] - should deserialize readOnlyProjectionStrings with database specified [info] Run completed in 24 seconds, 908 milliseconds. [info] Total number of tests run: 132 [info] Suites: completed 17, aborted 0 [info] Tests: succeeded 132, failed 0, canceled 0, ignored 1, pending 1 [info] All tests passed. [info] Compiling 6 Scala sources to /home/synerzip/code-base/junk/FiloDB/cassandra/target/scala-2.11/test-classes... [INFO] [05/10/2017 20:12:29.342] [pool-1-thread-1] [Remoting] Starting remoting [info] Compiling 2 Scala sources to /home/synerzip/code-base/junk/FiloDB/cli/target/scala-2.11/classes... [INFO] [05/10/2017 20:12:30.072] [pool-1-thread-1] [Remoting] Remoting started; listening on addresses :[akka.tcp://test@127.0.1.1:34764] [INFO] [05/10/2017 20:12:30.213] [pool-1-thread-1] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:34764] - Starting up... [INFO] [05/10/2017 20:12:30.510] [pool-1-thread-1] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:34764] - Registered cluster JMX MBean [akka:type=Cluster] [INFO] [05/10/2017 20:12:30.510] [pool-1-thread-1] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:34764] - Started up successfully [INFO] [05/10/2017 20:12:30.528] [test-akka.actor.default-dispatcher-2] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:34764] - Metrics collection has started successfully [INFO] [05/10/2017 20:12:30.560] [test-akka.actor.default-dispatcher-5] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:34764] - No seed-nodes configured, manual cluster join required [info] DatasetCoordinatorActorSpec: [info] - should respond to GetStats with no flushes and no rows [info] - should not flush if datasets not reached limit yet [info] Compiling 8 Scala sources and 1 Java source to /home/synerzip/code-base/junk/FiloDB/spark/target/scala-2.11/classes... [info] - should automatically flush after ingesting enough rows

[info] - should send back Nack if over maximum number of rows or Nack sent before with no CheckCanIngest
[info] - StartFlush should initiate flush even if # rows not reached trigger yet

[info] - StartFlush should initiate flush when there is no write activity after few seconds
[info] - should automatically delete memtable wal files once flush is complete successfully
[INFO] [05/10/2017 20:12:41.804] [test-akka.remote.default-remote-dispatcher-20] [akka.tcp://test@127.0.1.1:34764/system/remoting-terminator] Shutting down remote daemon.
[INFO] [05/10/2017 20:12:41.809] [test-akka.remote.default-remote-dispatcher-20] [akka.tcp://test@127.0.1.1:34764/system/remoting-terminator] Remote daemon shut down; proceeding with flushing remote transports.
[INFO] [05/10/2017 20:12:41.882] [test-akka.remote.default-remote-dispatcher-21] [akka.tcp://test@127.0.1.1:34764/system/remoting-terminator] Remoting shut down.
[INFO] [05/10/2017 20:12:42.979] [pool-1-thread-1] [Remoting] Starting remoting
[INFO] [05/10/2017 20:12:43.071] [pool-1-thread-1] [Remoting] Remoting started; listening on addresses :[akka.tcp://test@127.0.1.1:34008]
[INFO] [05/10/2017 20:12:43.072] [pool-1-thread-1] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:34008] - Starting up...
[INFO] [05/10/2017 20:12:43.100] [pool-1-thread-1] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:34008] - Registered cluster JMX MBean [akka:type=Cluster]
[INFO] [05/10/2017 20:12:43.100] [pool-1-thread-1] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:34008] - Started up successfully
[INFO] [05/10/2017 20:12:43.102] [test-akka.actor.default-dispatcher-6] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:34008] - No seed-nodes configured, manual cluster join required
[INFO] [05/10/2017 20:12:43.104] [test-akka.actor.default-dispatcher-4] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:34008] - Metrics collection has started successfully
[info] NodeCoordinatorActorSpec:
[info] NodeCoordinatorActor SetupIngestion verification
[info] - should return UnknownDataset when dataset missing or no columns defined
[info] - should return UndefinedColumns if trying to ingest undefined columns
[info] - should return BadSchema if dataset definition bazooka
[info] - should get IngestionReady if try to set up concurrently for same dataset/version
[info] - should add new entry for ingestion state for a given dataset/version, only first time
[info] NodeCoordinatorActor DatasetOps commands
[info] - should be able to create new dataset
[info] - should return DatasetAlreadyExists creating dataset that already exists
[info] - should be able to drop a dataset
[info] - should be able to start ingestion, send rows, and get an ack back

[info] - should stop datasetActor if error occurs and prevent further ingestion
[info] Compiling 2 Scala sources to /home/synerzip/code-base/junk/FiloDB/coordinator/target/scala-2.11/multi-jvm-classes...

[info] - should reload dataset coordinator actors once the nodes are up
[info] - should be able to create new WAL files once the reload and flush is complete !!! IGNORED !!!
[INFO] [05/10/2017 20:12:55.479] [test-akka.remote.default-remote-dispatcher-8] [akka.tcp://test@127.0.1.1:34008/system/remoting-terminator] Shutting down remote daemon.
[INFO] [05/10/2017 20:12:55.479] [test-akka.remote.default-remote-dispatcher-8] [akka.tcp://test@127.0.1.1:34008/system/remoting-terminator] Remote daemon shut down; proceeding with flushing remote transports.
[INFO] [05/10/2017 20:12:55.512] [test-akka.remote.default-remote-dispatcher-8] [akka.tcp://test@127.0.1.1:34008/system/remoting-terminator] Remoting shut down.
[info] PartitionMapperSpec:
[info] - should be able to add one node at a time immutably
[info] - should be able to remove nodes
[info] - should get an exception if try to lookup coordinator for empty mapper
[info] - should get back coordRefs for different partition key hashes
[info] SerializationSpec:
[info] - should be able to serialize different IngestionCommands messages
[info] - should be able to serialize a PartitionMapper
[info] - should be able to serialize and deserialize IngestRows with BinaryRecords
[INFO] [05/10/2017 20:12:55.872] [pool-1-thread-1] [Remoting] Starting remoting
[INFO] [05/10/2017 20:12:55.906] [pool-1-thread-1] [Remoting] Remoting started; listening on addresses :[akka.tcp://test@127.0.1.1:36985]
[INFO] [05/10/2017 20:12:55.907] [pool-1-thread-1] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:36985] - Starting up...
[INFO] [05/10/2017 20:12:55.918] [pool-1-thread-1] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:36985] - Registered cluster JMX MBean [akka:type=Cluster]
[INFO] [05/10/2017 20:12:55.930] [pool-1-thread-1] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:36985] - Started up successfully
[INFO] [05/10/2017 20:12:55.934] [test-akka.actor.default-dispatcher-2] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:36985] - Metrics collection has started successfully
[INFO] [05/10/2017 20:12:55.936] [test-akka.actor.default-dispatcher-3] [Cluster(akka://test)] Cluster Node [akka.tcp://test@127.0.1.1:36985] - No seed-nodes configured, manual cluster join required
[info] RowSourceSpec:
[info] - should fail if cannot parse input RowReader
[ERROR] [05/10/2017 20:13:00.670] [test-akka.actor.default-dispatcher-2] [akka://test/user/$a/ds-coord-gdelt-0] foo!
java.lang.RuntimeException: foo!

[info] - should fail fast if NodeCoordinatorActor bombs in middle of ingestion
[info] - should ingest all rows and handle memtable flush cycle properly

[info] - should ingest all rows and handle memtable full properly
[INFO] [05/10/2017 20:13:10.534] [test-akka.remote.default-remote-dispatcher-9] [akka.tcp://test@127.0.1.1:36985/system/remoting-terminator] Shutting down remote daemon.
[INFO] [05/10/2017 20:13:10.534] [test-akka.remote.default-remote-dispatcher-9] [akka.tcp://test@127.0.1.1:36985/system/remoting-terminator] Remote daemon shut down; proceeding with flushing remote transports.
[INFO] [05/10/2017 20:13:10.543] [test-akka.remote.default-remote-dispatcher-9] [akka.tcp://test@127.0.1.1:36985/system/remoting-terminator] Remoting shut down.
[info] * filodb.coordinator.NodeClusterSpec
[JVM-1] RUN ABORTED
[JVM-1] java.lang.NoSuchMethodError: akka.util.Helpers$.toRootLowerCase(Ljava/lang/String;)Ljava/lang/String;
[JVM-1] at akka.remote.RemoteSettings.<init>(RemoteSettings.scala:33)
[JVM-1] at akka.remote.RemoteActorRefProvider.<init>(RemoteActorRefProvider.scala:136)
[JVM-1] at akka.cluster.ClusterActorRefProvider.<init>(ClusterActorRefProvider.scala:54)
[JVM-1] at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
[JVM-1] at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
[JVM-1] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
[JVM-1] at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
[JVM-1] at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$2.apply(DynamicAccess.scala:78)
[JVM-1] at scala.util.Try$.apply(Try.scala:192)
[JVM-1] at akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:73)
[JVM-1] ...
[JVM-2] RUN ABORTED
[JVM-2] java.lang.NoSuchMethodError: akka.util.Helpers$.toRootLowerCase(Ljava/lang/String;)Ljava/lang/String;
[JVM-2] at akka.remote.RemoteSettings.<init>(RemoteSettings.scala:33)
[JVM-2] at akka.remote.RemoteActorRefProvider.<init>(RemoteActorRefProvider.scala:136)
[JVM-2] at akka.cluster.ClusterActorRefProvider.<init>(ClusterActorRefProvider.scala:54)
[JVM-2] at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
[JVM-2] at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
[JVM-2] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
[JVM-2] at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
[JVM-2] at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$2.apply(DynamicAccess.scala:78)
[JVM-2] at scala.util.Try$.apply(Try.scala:192)
[JVM-2] at akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:73)
[JVM-2] ...
[error] Failed: filodb.coordinator.NodeClusterSpecMultiJvmNode1
[error] Failed: filodb.coordinator.NodeClusterSpecMultiJvmNode2
[info] * filodb.coordinator.RowSourceClusterSpec
[JVM-2] RUN ABORTED
[JVM-2] java.lang.NoSuchMethodError: akka.util.Helpers$.toRootLowerCase(Ljava/lang/String;)Ljava/lang/String;
[JVM-2] at akka.remote.RemoteSettings.<init>(RemoteSettings.scala:33)
[JVM-2] at akka.remote.RemoteActorRefProvider.<init>(RemoteActorRefProvider.scala:136)
[JVM-2] at akka.cluster.ClusterActorRefProvider.<init>(ClusterActorRefProvider.scala:54)
[JVM-2] at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
[JVM-2] at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
[JVM-2] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
[JVM-2] at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
[JVM-2] at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$2.apply(DynamicAccess.scala:78)
[JVM-2] at scala.util.Try$.apply(Try.scala:192)
[JVM-2] at akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:73)
[JVM-2] ...
[JVM-1] RUN ABORTED
[JVM-1] java.lang.NoSuchMethodError: akka.util.Helpers$.toRootLowerCase(Ljava/lang/String;)Ljava/lang/String;
[JVM-1] at akka.remote.RemoteSettings.<init>(RemoteSettings.scala:33)
[JVM-1] at akka.remote.RemoteActorRefProvider.<init>(RemoteActorRefProvider.scala:136)
[JVM-1] at akka.cluster.ClusterActorRefProvider.<init>(ClusterActorRefProvider.scala:54)
[JVM-1] at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
[JVM-1] at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
[JVM-1] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
[JVM-1] at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
[JVM-1] at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$2.apply(DynamicAccess.scala:78)
[JVM-1] at scala.util.Try$.apply(Try.scala:192)
[JVM-1] at akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:73)
[JVM-1] ...
[error] Failed: filodb.coordinator.RowSourceClusterSpecMultiJvmNode1
[error] Failed: filodb.coordinator.RowSourceClusterSpecMultiJvmNode2
[info] Compiling 7 Scala sources to /home/synerzip/code-base/junk/FiloDB/jmh/target/scala-2.11/classes...
[info] ScalaTest
[info] Run completed in 47 seconds, 926 milliseconds.
[info] Total number of tests run: 29
[info] Suites: completed 5, aborted 0
[info] Tests: succeeded 29, failed 0, canceled 0, ignored 1, pending 0
[info] All tests passed.
[info] multi-jvm
[info] filodb.coordinator.NodeClusterSpec
[info] multi-jvm
[info] filodb.coordinator.RowSourceClusterSpec
[error] Failed: Total 29, Failed 0, Errors 0, Passed 29, Ignored 1
[info] [Scalaxy] Optimized stream Range.foreach (strategy: safe)
[info] Compiling 4 Scala sources to /home/synerzip/code-base/junk/FiloDB/spark/target/scala-2.11/test-classes...
[info] DatasetTableSpec:
[info] DatasetTable
[info] - should create a dataset successfully, then return AlreadyExists
[info] - should delete a dataset
[info] - should return NotFoundError when trying to get nonexisting dataset
[info] - should return the Dataset if it exists
[info] CassandraMetaStoreSpec:
[info] dataset API
[info] Compiling 5 Scala sources to /home/synerzip/code-base/junk/FiloDB/stress/target/sc

shukla2009 commented 7 years ago

`sbt multi-jvm:clean` worked for me.
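
For anyone who lands here later: `java.lang.NoSuchMethodError: akka.util.Helpers$.toRootLowerCase` is usually a symptom of two different Akka versions meeting on one classpath, e.g. akka-remote/akka-cluster jars that expect a newer akka-actor than the one actually loaded. That would also explain why the clean helps: it throws away stale multi-jvm test classes compiled against an older Akka. If a clean alone doesn't cure it, pinning every Akka module to a single version is the usual fix. Below is a minimal `build.sbt` sketch, not FiloDB's actual build; the version number and the `dependencyOverrides` usage are illustrative assumptions (sbt 0.13 syntax):

```scala
// build.sbt -- illustrative sketch only, NOT FiloDB's real build definition.
// Goal: make every Akka artifact resolve to the same version, so
// binary-incompatible calls like Helpers.toRootLowerCase cannot occur.

val akkaVersion = "2.4.18" // illustrative; use whatever Akka line the project targets

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor"   % akkaVersion,
  "com.typesafe.akka" %% "akka-remote"  % akkaVersion,
  "com.typesafe.akka" %% "akka-cluster" % akkaVersion
)

// Force transitively-pulled Akka modules onto the pinned version as well.
// (Set-based syntax is sbt 0.13; sbt 1.x takes a Seq here.)
dependencyOverrides ++= Set(
  "com.typesafe.akka" %% "akka-actor"   % akkaVersion,
  "com.typesafe.akka" %% "akka-remote"  % akkaVersion,
  "com.typesafe.akka" %% "akka-cluster" % akkaVersion
)
```

With the versions aligned, `sbt multi-jvm:clean multi-jvm:test` (or a full `sbt clean test`) should rebuild the multi-JVM suites against a consistent Akka and get past the `RUN ABORTED`.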