Closed JArma19 closed 3 years ago
Authorization failed for http://dl.bintray.com/spark-packages/maven/neo4j-contrib/neo4j-spark-connector/2.4.5-M1/neo4j-spark-connector-2.4.5-M1.pom 403 Forbidden
This issue occurs because the repository dl.bintray.com is having problems and can no longer be accessed. If you want to use Exchange, you can download it from the Maven Central repository instead: https://repo1.maven.org/maven2/com/vesoft/nebula-exchange/
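For example, a released jar can be fetched directly from Maven Central. This is a sketch: the version directory (2.0.0 here, matching the jar used later in this thread) is an assumption; browse the repository URL above to pick the release you need.

```shell
# Compose the Maven Central URL for a given Exchange release.
# VERSION is an example value; list the repository directory to see all releases.
VERSION=2.0.0
URL="https://repo1.maven.org/maven2/com/vesoft/nebula-exchange/${VERSION}/nebula-exchange-${VERSION}.jar"
echo "$URL"
# Then download it, e.g.:
# wget "$URL"
```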
Thanks. However, I'm now getting a new connection-refused error when I run the command:
$SPARK_HOME/bin/spark-submit --class com.vesoft.nebula.exchange.Exchange --master local nebula-exchange-2.0.0.jar -c /path/to/application.conf
21/05/11 17:37:47 WARN Utils: Your hostname, justin-VirtualBox resolves to a loopback address: 127.0.1.1; using 10.0.2.15 instead (on interface enp0s3)
21/05/11 17:37:47 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
21/05/11 17:37:48 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
log4j:WARN No appenders could be found for logger (com.vesoft.nebula.exchange.config.Configs$).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
21/05/11 17:37:50 INFO SparkContext: Running Spark version 2.4.4
21/05/11 17:37:50 INFO SparkContext: Submitted application: com.vesoft.nebula.exchange.Exchange
21/05/11 17:37:50 INFO SecurityManager: Changing view acls to: justin
21/05/11 17:37:50 INFO SecurityManager: Changing modify acls to: justin
21/05/11 17:37:50 INFO SecurityManager: Changing view acls groups to:
21/05/11 17:37:50 INFO SecurityManager: Changing modify acls groups to:
21/05/11 17:37:50 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(justin); groups with view permissions: Set(); users with modify permissions: Set(justin); groups with modify permissions: Set()
21/05/11 17:37:51 INFO Utils: Successfully started service 'sparkDriver' on port 41951.
21/05/11 17:37:51 INFO SparkEnv: Registering MapOutputTracker
21/05/11 17:37:51 INFO SparkEnv: Registering BlockManagerMaster
21/05/11 17:37:51 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
21/05/11 17:37:51 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
21/05/11 17:37:51 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-d7a53684-f1a2-47e1-b55e-757304d52260
21/05/11 17:37:51 INFO MemoryStore: MemoryStore started with capacity 413.9 MB
21/05/11 17:37:51 INFO SparkEnv: Registering OutputCommitCoordinator
21/05/11 17:37:51 INFO Utils: Successfully started service 'SparkUI' on port 4040.
21/05/11 17:37:51 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.0.2.15:4040
21/05/11 17:37:51 INFO SparkContext: Added JAR file:/home/justin/Desktop/progetto_dm/nebula-spark-utils/nebula-exchange/nebula-exchange-2.0.0.jar at spark://10.0.2.15:41951/jars/nebula-exchange-2.0.0.jar with timestamp 1620747471916
21/05/11 17:37:52 INFO Executor: Starting executor ID driver on host localhost
21/05/11 17:37:52 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 39825.
21/05/11 17:37:52 INFO NettyBlockTransferService: Server created on 10.0.2.15:39825 21/05/11 17:37:52 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy 21/05/11 17:37:52 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.0.2.15, 39825, None) 21/05/11 17:37:52 INFO BlockManagerMasterEndpoint: Registering block manager 10.0.2.15:39825 with 413.9 MB RAM, BlockManagerId(driver, 10.0.2.15, 39825, None) 21/05/11 17:37:52 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.0.2.15, 39825, None) 21/05/11 17:37:52 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.0.2.15, 39825, None) 21/05/11 17:37:53 INFO Exchange$: Processing Tag user 21/05/11 17:37:53 INFO Exchange$: field keys: id, screen_name, followers_count, friends_count, created_at 21/05/11 17:37:53 INFO Exchange$: nebula keys: id, screen_name, followers_count, friends_count, created_at 21/05/11 17:37:53 INFO Exchange$: Loading from neo4j config: Neo4J source address: bolt://127.0.0.1:7687, user: neo4j, password: dataman, encryption: false, checkPointPath: Some(/tmp/test), exec: match (n:user) return n.id as id, n.name as screen_name, n.followers_count as followers_count, n.friends_count as friends_count, n.created_at as created_at order by id, parallel: 10, database: None 21/05/11 17:37:53 INFO Driver: Direct driver instance 757332719 created for server address 127.0.0.1:7687 21/05/11 17:37:55 INFO Neo4JReader: user offsets: Offset(0,1),Offset(1,0),Offset(1,0),Offset(1,0),Offset(1,0),Offset(1,0),Offset(1,0),Offset(1,0),Offset(1,0),Offset(1,0) 21/05/11 17:37:56 INFO SparkContext: Starting job: isEmpty at ServerBaseReader.scala:161 21/05/11 17:37:56 INFO DAGScheduler: Got job 0 (isEmpty at ServerBaseReader.scala:161) with 1 output partitions 21/05/11 17:37:56 INFO DAGScheduler: Final stage: ResultStage 0 (isEmpty at ServerBaseReader.scala:161) 21/05/11 17:37:56 INFO DAGScheduler: Parents of 
final stage: List() 21/05/11 17:37:56 INFO DAGScheduler: Missing parents: List() 21/05/11 17:37:56 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at flatMap at ServerBaseReader.scala:137), which has no missing parents 21/05/11 17:37:56 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 5.9 KB, free 413.9 MB) 21/05/11 17:37:57 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 3.3 KB, free 413.9 MB) 21/05/11 17:37:57 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.0.2.15:39825 (size: 3.3 KB, free: 413.9 MB) 21/05/11 17:37:57 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1161 21/05/11 17:37:57 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at flatMap at ServerBaseReader.scala:137) (first 15 tasks are for partitions Vector(0)) 21/05/11 17:37:57 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks 21/05/11 17:37:57 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 7813 bytes) 21/05/11 17:37:57 INFO Executor: Running task 0.0 in stage 0.0 (TID 0) 21/05/11 17:37:57 INFO Executor: Fetching spark://10.0.2.15:41951/jars/nebula-exchange-2.0.0.jar with timestamp 1620747471916 21/05/11 17:37:57 INFO TransportClientFactory: Successfully created connection to /10.0.2.15:41951 after 88 ms (0 ms spent in bootstraps) 21/05/11 17:37:57 INFO Utils: Fetching spark://10.0.2.15:41951/jars/nebula-exchange-2.0.0.jar to /tmp/spark-3ba56012-b106-489c-a21c-9b085c60959e/userFiles-0d3b11ab-8a6f-4c1a-99e2-fe6f950644f1/fetchFileTemp3802964306432942093.tmp 21/05/11 17:38:05 INFO Executor: Adding file:/tmp/spark-3ba56012-b106-489c-a21c-9b085c60959e/userFiles-0d3b11ab-8a6f-4c1a-99e2-fe6f950644f1/nebula-exchange-2.0.0.jar to class loader 21/05/11 17:38:05 INFO Driver: Direct driver instance 126150493 created for server address 127.0.0.1:7687 21/05/11 17:38:06 INFO 
Executor: Finished task 0.0 in stage 0.0 (TID 0). 1268 bytes result sent to driver 21/05/11 17:38:06 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 8875 ms on localhost (executor driver) (1/1) 21/05/11 17:38:06 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 21/05/11 17:38:06 INFO DAGScheduler: ResultStage 0 (isEmpty at ServerBaseReader.scala:161) finished in 9.900 s 21/05/11 17:38:06 INFO DAGScheduler: Job 0 finished: isEmpty at ServerBaseReader.scala:161, took 10.152718 s 21/05/11 17:38:06 INFO SparkContext: Starting job: first at ServerBaseReader.scala:164 21/05/11 17:38:06 INFO DAGScheduler: Registering RDD 2 (repartition at ServerBaseReader.scala:164) 21/05/11 17:38:06 INFO DAGScheduler: Got job 1 (first at ServerBaseReader.scala:164) with 1 output partitions 21/05/11 17:38:06 INFO DAGScheduler: Final stage: ResultStage 2 (first at ServerBaseReader.scala:164) 21/05/11 17:38:06 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1) 21/05/11 17:38:06 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1) 21/05/11 17:38:06 INFO DAGScheduler: Submitting ShuffleMapStage 1 (MapPartitionsRDD[2] at repartition at ServerBaseReader.scala:164), which has no missing parents 21/05/11 17:38:06 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 7.5 KB, free 413.9 MB) 21/05/11 17:38:06 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 4.2 KB, free 413.9 MB) 21/05/11 17:38:06 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.0.2.15:39825 (size: 4.2 KB, free: 413.9 MB) 21/05/11 17:38:06 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1161 21/05/11 17:38:06 INFO DAGScheduler: Submitting 10 missing tasks from ShuffleMapStage 1 (MapPartitionsRDD[2] at repartition at ServerBaseReader.scala:164) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)) 21/05/11 17:38:06 INFO TaskSchedulerImpl: 
Adding task set 1.0 with 10 tasks 21/05/11 17:38:06 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, executor driver, partition 0, PROCESS_LOCAL, 7802 bytes) 21/05/11 17:38:06 INFO Executor: Running task 0.0 in stage 1.0 (TID 1) 21/05/11 17:38:06 INFO Driver: Direct driver instance 1466926750 created for server address 127.0.0.1:7687 21/05/11 17:38:07 INFO Driver: Closing driver instance 1466926750 21/05/11 17:38:07 INFO ConnectionPool: Closing connection pool towards 127.0.0.1:7687 21/05/11 17:38:07 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 866 bytes result sent to driver 21/05/11 17:38:07 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 2, localhost, executor driver, partition 1, PROCESS_LOCAL, 7802 bytes) 21/05/11 17:38:07 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 1260 ms on localhost (executor driver) (1/10) 21/05/11 17:38:07 INFO Executor: Running task 1.0 in stage 1.0 (TID 2) 21/05/11 17:38:08 INFO Driver: Direct driver instance 1584753441 created for server address 127.0.0.1:7687 21/05/11 17:38:08 INFO Driver: Closing driver instance 1584753441 21/05/11 17:38:08 INFO ConnectionPool: Closing connection pool towards 127.0.0.1:7687 21/05/11 17:38:08 INFO Executor: Finished task 1.0 in stage 1.0 (TID 2). 694 bytes result sent to driver 21/05/11 17:38:08 INFO TaskSetManager: Starting task 2.0 in stage 1.0 (TID 3, localhost, executor driver, partition 2, PROCESS_LOCAL, 7802 bytes) 21/05/11 17:38:08 INFO Executor: Running task 2.0 in stage 1.0 (TID 3) 21/05/11 17:38:08 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 2) in 672 ms on localhost (executor driver) (2/10) 21/05/11 17:38:08 INFO Driver: Direct driver instance 1338872740 created for server address 127.0.0.1:7687 21/05/11 17:38:08 INFO Driver: Closing driver instance 1338872740 21/05/11 17:38:08 INFO ConnectionPool: Closing connection pool towards 127.0.0.1:7687 21/05/11 17:38:09 INFO Executor: Finished task 2.0 in stage 1.0 (TID 3). 
694 bytes result sent to driver 21/05/11 17:38:09 INFO TaskSetManager: Starting task 3.0 in stage 1.0 (TID 4, localhost, executor driver, partition 3, PROCESS_LOCAL, 7802 bytes) 21/05/11 17:38:09 INFO TaskSetManager: Finished task 2.0 in stage 1.0 (TID 3) in 551 ms on localhost (executor driver) (3/10) 21/05/11 17:38:09 INFO Executor: Running task 3.0 in stage 1.0 (TID 4) 21/05/11 17:38:09 INFO Driver: Direct driver instance 1522021564 created for server address 127.0.0.1:7687 21/05/11 17:38:09 INFO Driver: Closing driver instance 1522021564 21/05/11 17:38:09 INFO ConnectionPool: Closing connection pool towards 127.0.0.1:7687 21/05/11 17:38:09 INFO Executor: Finished task 3.0 in stage 1.0 (TID 4). 780 bytes result sent to driver 21/05/11 17:38:09 INFO TaskSetManager: Starting task 4.0 in stage 1.0 (TID 5, localhost, executor driver, partition 4, PROCESS_LOCAL, 7802 bytes) 21/05/11 17:38:09 INFO TaskSetManager: Finished task 3.0 in stage 1.0 (TID 4) in 426 ms on localhost (executor driver) (4/10) 21/05/11 17:38:09 INFO Executor: Running task 4.0 in stage 1.0 (TID 5) 21/05/11 17:38:09 INFO Driver: Direct driver instance 1774796877 created for server address 127.0.0.1:7687 21/05/11 17:38:09 INFO Driver: Closing driver instance 1774796877 21/05/11 17:38:09 INFO ConnectionPool: Closing connection pool towards 127.0.0.1:7687 21/05/11 17:38:09 INFO Executor: Finished task 4.0 in stage 1.0 (TID 5). 
694 bytes result sent to driver 21/05/11 17:38:09 INFO TaskSetManager: Starting task 5.0 in stage 1.0 (TID 6, localhost, executor driver, partition 5, PROCESS_LOCAL, 7802 bytes) 21/05/11 17:38:09 INFO TaskSetManager: Finished task 4.0 in stage 1.0 (TID 5) in 441 ms on localhost (executor driver) (5/10) 21/05/11 17:38:09 INFO Executor: Running task 5.0 in stage 1.0 (TID 6) 21/05/11 17:38:10 INFO Driver: Direct driver instance 330945294 created for server address 127.0.0.1:7687 21/05/11 17:38:10 INFO Driver: Closing driver instance 330945294 21/05/11 17:38:10 INFO ConnectionPool: Closing connection pool towards 127.0.0.1:7687 21/05/11 17:38:10 INFO Executor: Finished task 5.0 in stage 1.0 (TID 6). 694 bytes result sent to driver 21/05/11 17:38:10 INFO TaskSetManager: Starting task 6.0 in stage 1.0 (TID 7, localhost, executor driver, partition 6, PROCESS_LOCAL, 7802 bytes) 21/05/11 17:38:10 INFO TaskSetManager: Finished task 5.0 in stage 1.0 (TID 6) in 509 ms on localhost (executor driver) (6/10) 21/05/11 17:38:10 INFO Executor: Running task 6.0 in stage 1.0 (TID 7) 21/05/11 17:38:10 INFO Driver: Direct driver instance 2001761940 created for server address 127.0.0.1:7687 21/05/11 17:38:10 INFO Driver: Closing driver instance 2001761940 21/05/11 17:38:10 INFO ConnectionPool: Closing connection pool towards 127.0.0.1:7687 21/05/11 17:38:10 INFO Executor: Finished task 6.0 in stage 1.0 (TID 7). 
737 bytes result sent to driver 21/05/11 17:38:10 INFO TaskSetManager: Starting task 7.0 in stage 1.0 (TID 8, localhost, executor driver, partition 7, PROCESS_LOCAL, 7802 bytes) 21/05/11 17:38:10 INFO TaskSetManager: Finished task 6.0 in stage 1.0 (TID 7) in 380 ms on localhost (executor driver) (7/10) 21/05/11 17:38:10 INFO Executor: Running task 7.0 in stage 1.0 (TID 8) 21/05/11 17:38:10 INFO Driver: Direct driver instance 1712919471 created for server address 127.0.0.1:7687 21/05/11 17:38:10 INFO Driver: Closing driver instance 1712919471 21/05/11 17:38:10 INFO ConnectionPool: Closing connection pool towards 127.0.0.1:7687 21/05/11 17:38:11 INFO Executor: Finished task 7.0 in stage 1.0 (TID 8). 694 bytes result sent to driver 21/05/11 17:38:11 INFO TaskSetManager: Starting task 8.0 in stage 1.0 (TID 9, localhost, executor driver, partition 8, PROCESS_LOCAL, 7802 bytes) 21/05/11 17:38:11 INFO TaskSetManager: Finished task 7.0 in stage 1.0 (TID 8) in 423 ms on localhost (executor driver) (8/10) 21/05/11 17:38:11 INFO Executor: Running task 8.0 in stage 1.0 (TID 9) 21/05/11 17:38:11 INFO Driver: Direct driver instance 1549678249 created for server address 127.0.0.1:7687 21/05/11 17:38:11 INFO Driver: Closing driver instance 1549678249 21/05/11 17:38:11 INFO ConnectionPool: Closing connection pool towards 127.0.0.1:7687 21/05/11 17:38:11 INFO Executor: Finished task 8.0 in stage 1.0 (TID 9). 
737 bytes result sent to driver 21/05/11 17:38:11 INFO TaskSetManager: Starting task 9.0 in stage 1.0 (TID 10, localhost, executor driver, partition 9, PROCESS_LOCAL, 7802 bytes) 21/05/11 17:38:11 INFO TaskSetManager: Finished task 8.0 in stage 1.0 (TID 9) in 562 ms on localhost (executor driver) (9/10) 21/05/11 17:38:11 INFO Executor: Running task 9.0 in stage 1.0 (TID 10) 21/05/11 17:38:11 INFO Driver: Direct driver instance 1070463594 created for server address 127.0.0.1:7687 21/05/11 17:38:11 INFO Driver: Closing driver instance 1070463594 21/05/11 17:38:11 INFO ConnectionPool: Closing connection pool towards 127.0.0.1:7687 21/05/11 17:38:12 INFO Executor: Finished task 9.0 in stage 1.0 (TID 10). 737 bytes result sent to driver 21/05/11 17:38:12 INFO TaskSetManager: Finished task 9.0 in stage 1.0 (TID 10) in 569 ms on localhost (executor driver) (10/10) 21/05/11 17:38:12 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 21/05/11 17:38:12 INFO DAGScheduler: ShuffleMapStage 1 (repartition at ServerBaseReader.scala:164) finished in 5.751 s 21/05/11 17:38:12 INFO DAGScheduler: looking for newly runnable stages 21/05/11 17:38:12 INFO DAGScheduler: running: Set() 21/05/11 17:38:12 INFO DAGScheduler: waiting: Set(ResultStage 2) 21/05/11 17:38:12 INFO DAGScheduler: failed: Set() 21/05/11 17:38:12 INFO DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[5] at repartition at ServerBaseReader.scala:164), which has no missing parents 21/05/11 17:38:12 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 3.9 KB, free 413.9 MB) 21/05/11 17:38:12 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 2.3 KB, free 413.9 MB) 21/05/11 17:38:12 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 10.0.2.15:39825 (size: 2.3 KB, free: 413.9 MB) 21/05/11 17:38:12 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1161 21/05/11 17:38:12 INFO DAGScheduler: 
Submitting 1 missing tasks from ResultStage 2 (MapPartitionsRDD[5] at repartition at ServerBaseReader.scala:164) (first 15 tasks are for partitions Vector(0)) 21/05/11 17:38:12 INFO TaskSchedulerImpl: Adding task set 2.0 with 1 tasks 21/05/11 17:38:12 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 11, localhost, executor driver, partition 0, ANY, 7938 bytes) 21/05/11 17:38:12 INFO Executor: Running task 0.0 in stage 2.0 (TID 11) 21/05/11 17:38:12 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks including 1 local blocks and 0 remote blocks 21/05/11 17:38:12 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 20 ms 21/05/11 17:38:12 INFO Executor: Finished task 0.0 in stage 2.0 (TID 11). 1225 bytes result sent to driver 21/05/11 17:38:12 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 11) in 169 ms on localhost (executor driver) (1/1) 21/05/11 17:38:12 INFO DAGScheduler: ResultStage 2 (first at ServerBaseReader.scala:164) finished in 0.246 s 21/05/11 17:38:12 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool 21/05/11 17:38:12 INFO DAGScheduler: Job 1 finished: first at ServerBaseReader.scala:164, took 6.102447 s 21/05/11 17:38:12 INFO BlockManagerInfo: Removed broadcast_2_piece0 on 10.0.2.15:39825 in memory (size: 2.3 KB, free: 413.9 MB) 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 44 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 14 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 69 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 53 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 48 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 66 21/05/11 17:38:13 INFO BlockManagerInfo: Removed broadcast_1_piece0 on 10.0.2.15:39825 in memory (size: 4.2 KB, free: 413.9 MB) 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 34 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 28 21/05/11 17:38:13 INFO ContextCleaner: 
Cleaned accumulator 71 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 51 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 64 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 26 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 11 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 35 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 8 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 39 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 13 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 37 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 0 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 27 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 73 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 6 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 49 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 63 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 10 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 24 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 67 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 32 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 2 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 5 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 30 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 65 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 21 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 7 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 52 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 17 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 4 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 3 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 18 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 45 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 1 
21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 61 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 59 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 12 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 55 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 36 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 74 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 46 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 62 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 9 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 60 21/05/11 17:38:13 INFO ContextCleaner: Cleaned shuffle 0 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 31 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 15 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 72 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 22 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 50 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 58 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 33 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 23 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 70 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 57 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 40 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 43 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 68 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 42 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 29 21/05/11 17:38:13 INFO BlockManagerInfo: Removed broadcast_0_piece0 on 10.0.2.15:39825 in memory (size: 3.3 KB, free: 413.9 MB) 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 56 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 41 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 25 21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 47 21/05/11 17:38:13 
INFO ContextCleaner: Cleaned accumulator 16
21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 54
21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 38
21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 19
21/05/11 17:38:13 INFO ContextCleaner: Cleaned accumulator 20
21/05/11 17:38:16 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/home/justin/Desktop/progetto_dm/nebula-spark-utils/nebula-exchange/spark-warehouse').
21/05/11 17:38:16 INFO SharedState: Warehouse path is 'file:/home/justin/Desktop/progetto_dm/nebula-spark-utils/nebula-exchange/spark-warehouse'.
21/05/11 17:38:17 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
Exception in thread "main" com.facebook.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
	at com.facebook.thrift.transport.TSocket.open(TSocket.java:204)
	at com.vesoft.nebula.client.meta.MetaClient.doConnect(MetaClient.java:97)
	at com.vesoft.nebula.client.meta.MetaClient.connect(MetaClient.java:86)
	at com.vesoft.nebula.exchange.MetaProvider.<init>(MetaProvider.scala:32)
	at com.vesoft.nebula.exchange.processor.VerticesProcessor.process(VerticesProcessor.scala:109)
	at com.vesoft.nebula.exchange.Exchange$$anonfun$main$2.apply(Exchange.scala:145)
	at com.vesoft.nebula.exchange.Exchange$$anonfun$main$2.apply(Exchange.scala:122)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at com.vesoft.nebula.exchange.Exchange$.main(Exchange.scala:122)
	at com.vesoft.nebula.exchange.Exchange.main(Exchange.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:607)
	at com.facebook.thrift.transport.TSocket.open(TSocket.java:199)
The point is, I don't understand what the cause could be.
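For reference, the stack trace shows Exchange's MetaProvider failing to open a Thrift connection to the meta address taken from `application.conf`. A sketch of the relevant fragment is below (field layout follows the Exchange README; the addresses and credentials shown are placeholder assumptions to adapt):

```
nebula: {
  address: {
    # Both addresses must be reachable from the machine running spark-submit.
    graph: ["127.0.0.1:9669"]
    # 9559 is the default metad port; Docker deployments often map it to a
    # different host port, and that mapped port is what must go here.
    meta: ["127.0.0.1:9559"]
  }
  user: root
  pswd: nebula
  ...
}
```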
First, please check the status of your Nebula metad service. If the service is healthy, then check whether the machine executing `spark-submit` can access the meta service.
Execute this command on your Spark cluster machine: telnet your_metad_ip your_metad_port
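If telnet isn't installed, the same reachability check can be scripted. A minimal Python sketch (the host and port below are placeholders to replace with your metad address):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers connection refused, timeouts, unreachable hosts
        return False

# Example: check the metad address used in application.conf
# print(can_connect("127.0.0.1", 9559))
```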
We suggest downloading the 2.0.0 jar from Maven; see:
https://github.com/vesoft-inc/nebula-spark-utils/blob/v2.0.0/nebula-exchange/README.md#how-to-get
@Nicole00 Actually, the metad services are working correctly, as shown here:
As for whether the machine can access the meta service: I don't know where I should run telnet your_metad_ip your_metad_port, nor which metad_ip and metad_port I should use, since there are several meta services.
Thanks for your help
You can use 127.0.0.1 as the IP, and 49159, 49160, or 49161 as the metad port.
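All three candidate ports can be probed in one go with a small bash loop (a sketch; it assumes bash's /dev/tcp redirection and the coreutils `timeout` command are available):

```shell
# Probe each candidate metad port on localhost and report reachability.
for port in 49159 49160 49161; do
  if timeout 3 bash -c "echo > /dev/tcp/127.0.0.1/${port}" 2>/dev/null; then
    echo "metad reachable on 127.0.0.1:${port}"
  else
    echo "cannot connect to 127.0.0.1:${port}"
  fi
done
```

Whichever port reports reachable is the one to put in the `meta` address of application.conf.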
According to the README instructions, once I've cloned the nebula-spark-utils repo I should just run:
mvn clean package -Dmaven.test.skip=true -Dgpg.skip -Dmaven.javadoc.skip=true
Nevertheless, there's no way to get it to build; I always get a BUILD FAILURE error. Here is the output I get:
Am I missing something?