JohnSnowLabs / spark-nlp

State of the Art Natural Language Processing
https://sparknlp.org/
Apache License 2.0

Problematic frame: C [libtensorflow_framework.so.1+0x744da9] _GLOBAL__sub_I_loader.cc+0x99 #923

Closed. FedericoF93 closed this issue 4 years ago.

FedericoF93 commented 4 years ago

Description

I need to run a Spark job, which uses the recognize_entities_dl pretrained pipeline, on a dockerized Mesos cluster. The command is as follows:

/opt/spark/spark-2.4.5-bin-hadoop2.7/bin/spark-submit --packages com.johnsnowlabs.nlp:spark-nlp_2.11:2.5.0,com.couchbase.client:spark-connector_2.11:2.3.0 --master mesos://zk://remote_ip:2181/mesos --deploy-mode client --class tags_extraction.tags_extraction_eng /opt/sparkscala_2.11-0.1.jar

This is the code:

val (sparkSession, sc) = start_spark_session()

def start_spark_session(): (SparkSession, SparkContext) = {

  val sparkSession = SparkSession.builder()
      .master("mesos://zk://remote-ip:32181/mesos")
      .config("spark.mesos.executor.home", "/opt/spark/spark-2.4.5-bin-hadoop2.7")

      .config("spark.jars",
        "/opt/sparkscala_2.11-0.1.jar," +
          "https://repo1.maven.org/maven2/com/couchbase/client/java-client/2.7.6/java-client-2.7.6.jar," +
          "https://repo1.maven.org/maven2/com/couchbase/client/core-io/1.7.6/core-io-1.7.6.jar," +
          "https://repo1.maven.org/maven2/com/couchbase/client/spark-connector_2.11/2.3.0/spark-connector_2.11-2.3.0.jar," +
          "https://repo1.maven.org/maven2/io/opentracing/opentracing-api/0.31.0/opentracing-api-0.31.0.jar," +
          "https://repo1.maven.org/maven2/io/reactivex/rxjava/1.3.8/rxjava-1.3.8.jar," +
          "https://repo1.maven.org/maven2/io/reactivex/rxscala_2.11/0.26.5/rxscala_2.11-0.26.5.jar," +

          // I tried each of these (one at a time) and they give the same error
          "https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/spark-nlp-assembly-2.5.0.jar"
          // "https://repo1.maven.org/maven2/com/johnsnowlabs/nlp/spark-nlp_2.11/2.5.0/spark-nlp_2.11-2.5.0.jar"
      )
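      // NOTE: the entries below are joined with no ":" separator, so they
      // collapse into a single invalid library path (see the value of
      // spark.executor.extraLibraryPath in the driver's stdout below)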
      .config("spark.executor.extraLibraryPath",
        "/sparkscala_2.11-0.1.jar" +
          "/java-client-2.7.6.jar" +
          "/core-io-1.7.6.jar" +
          "/spark-connector_2.11-2.3.0.jar" +
          "/opentracing-api-0.31.0.jar" +
          "/rxjava-1.3.8.jar" +
          "/rxscala_2.11-0.26.5.jar" +
          "/core-1.1.2.jar" +
          "/spark-streaming-kafka-0-10_2.11-2.4.5.jar" +
          "/spark-sql-kafka-0-10_2.11-2.4.5.jar" +
          "/kafka-clients-2.4.0.jar" +
          "/kafka_2.11-2.4.1.jar" +
          "/spark-nlp-assembly-2.5.0.jar" +
          "/spark-nlp_2.11-2.5.0.jar"
      )
      .getOrCreate()

    sparkSession.sparkContext.setLogLevel("DEBUG")

    val sc = sparkSession.sparkContext
    sc.getConf.getAll.foreach(println)

    (sparkSession, sc)
  }

def main(args: Array[String]) {

    val feeds_df = sparkSession.read.couchbase(schema = feedSchema, options = Map("bucket" -> "feeds"))

    val pipeline = new PretrainedPipeline("recognize_entities_dl", "en")

    println("PIPELINE LOADED") // not printed

    val feeds_tags = pipeline.transform(feeds_df)
      .selectExpr("author_id", "id", "category", "text", "entities.result as tags")

    feeds_tags.printSchema()
    println(feeds_tags)
    println(feeds_tags.getClass.toString)
    println(SizeEstimator.estimate(feeds_tags))
     println("COUNT", feeds_tags.count)

    feeds_tags.show()

    sparkSession.close()
  }

}

After the pipeline is downloaded, this error is raised while loading stage 4:

# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGILL (0x4) at pc=0x00007f8c09bc2da9, pid=4192, tid=0x00007f8d51343700
#
# JRE version: OpenJDK Runtime Environment (8.0_252-b09) (build 1.8.0_252-8u252-b09-1~16.04-b09)
# Java VM: OpenJDK 64-Bit Server VM (25.252-b09 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C  [libtensorflow_framework.so.1+0x744da9]  _GLOBAL__sub_I_loader.cc+0x99
#
# Core dump written. Default location: /var/lib/mesos/slaves/fb88a3ad-d32c-41ae-be67-36517a272bcb-S0/frameworks/fb88a3ad-d32c-41ae-be67-36517a272bcb-0000/executors/ct:1591367792198:0:tags_extraction_eng:/runs/2a41d953-7343-4dd5-a59b-2e253f0cda55/core or core.4192
#
# An error report file with more information is saved as:
# /var/lib/mesos/slaves/fb88a3ad-d32c-41ae-be67-36517a272bcb-S0/frameworks/fb88a3ad-d32c-41ae-be67-36517a272bcb-0000/executors/ct:1591367792198:0:tags_extraction_eng:/runs/2a41d953-7343-4dd5-a59b-2e253f0cda55/hs_err_pid4192.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#

Expected Behavior

Download the pretrained pipeline with:

val pipeline = new PretrainedPipeline("recognize_entities_dl", "en")

Current Behavior

Driver's stdout:

(spark.repl.local.jars,file:///root/.ivy2/jars/com.johnsnowlabs.nlp_spark-nlp_2.11-2.5.0.jar,file:///root/.ivy2/jars/com.couchbase.client_spark-connector_2.11-2.3.0.jar,file:///root/.ivy2/jars/com.typesafe_config-1.3.0.jar,file:///root/.ivy2/jars/org.rocksdb_rocksdbjni-6.5.3.jar,file:///root/.ivy2/jars/org.apache.hadoop_hadoop-aws-3.2.0.jar,file:///root/.ivy2/jars/com.amazonaws_aws-java-sdk-core-1.11.603.jar,file:///root/.ivy2/jars/com.amazonaws_aws-java-sdk-s3-1.11.603.jar,file:///root/.ivy2/jars/com.github.universal-automata_liblevenshtein-3.0.0.jar,file:///root/.ivy2/jars/com.navigamez_greex-1.0.jar,file:///root/.ivy2/jars/org.json4s_json4s-ext_2.11-3.5.3.jar,file:///root/.ivy2/jars/org.tensorflow_tensorflow-1.15.0.jar,file:///root/.ivy2/jars/net.sf.trove4j_trove4j-3.0.3.jar,file:///root/.ivy2/jars/commons-logging_commons-logging-1.1.3.jar,file:///root/.ivy2/jars/org.apache.httpcomponents_httpclient-4.5.9.jar,file:///root/.ivy2/jars/software.amazon.ion_ion-java-1.0.2.jar,file:///root/.ivy2/jars/com.fasterxml.jackson.dataformat_jackson-dataformat-cbor-2.6.7.jar,file:///root/.ivy2/jars/org.apache.httpcomponents_httpcore-4.4.11.jar,file:///root/.ivy2/jars/commons-codec_commons-codec-1.11.jar,file:///root/.ivy2/jars/com.amazonaws_aws-java-sdk-kms-1.11.603.jar,file:///root/.ivy2/jars/com.amazonaws_jmespath-java-1.11.603.jar,file:///root/.ivy2/jars/com.fasterxml.jackson.core_jackson-databind-2.6.7.2.jar,file:///root/.ivy2/jars/com.fasterxml.jackson.core_jackson-annotations-2.6.0.jar,file:///root/.ivy2/jars/com.fasterxml.jackson.core_jackson-core-2.6.7.jar,file:///root/.ivy2/jars/com.google.code.findbugs_annotations-3.0.1.jar,file:///root/.ivy2/jars/com.google.protobuf_protobuf-java-util-3.0.0-beta-3.jar,file:///root/.ivy2/jars/com.google.protobuf_protobuf-java-3.0.0-beta-3.jar,file:///root/.ivy2/jars/it.unimi.dsi_fastutil-7.0.12.jar,file:///root/.ivy2/jars/org.projectlombok_lombok-1.16.8.jar,file:///root/.ivy2/jars/org.slf4j_slf4j-api-1.7.21.jar,file:///root/.ivy2/jars/net.jcip_jcip-annotations-1.0.jar,file:///root/.ivy2/jars/com.google.code.findbugs_jsr305-3.0.1.jar,file:///root/.ivy2/jars/com.google.code.gson_gson-2.3.jar,file:///root/.ivy2/jars/dk.brics.automaton_automaton-1.11-8.jar,file:///root/.ivy2/jars/joda-time_joda-time-2.9.5.jar,file:///root/.ivy2/jars/org.joda_joda-convert-1.8.1.jar,file:///root/.ivy2/jars/org.tensorflow_libtensorflow-1.15.0.jar,file:///root/.ivy2/jars/org.tensorflow_libtensorflow_jni-1.15.0.jar,file:///root/.ivy2/jars/com.couchbase.client_java-client-2.7.6.jar,file:///root/.ivy2/jars/com.couchbase.client_dcp-client-0.23.0.jar,file:///root/.ivy2/jars/io.reactivex_rxscala_2.11-0.26.5.jar,file:///root/.ivy2/jars/org.apache.logging.log4j_log4j-api-2.2.jar,file:///root/.ivy2/jars/com.couchbase.client_core-io-1.7.6.jar,file:///root/.ivy2/jars/io.reactivex_rxjava-1.3.8.jar,file:///root/.ivy2/jars/io.opentracing_opentracing-api-0.31.0.jar)
(spark.sql.execution.arrow.enabled,true)
(spark.couchbase.nodes,couchbase://remote_ip)
(com.couchbase.connectTimeout,300000)
(spark.jars,/opt/sparkscala_2.11-0.1.jar,https://repo1.maven.org/maven2/com/couchbase/client/java-client/2.7.6/java-client-2.7.6.jar,https://repo1.maven.org/maven2/com/couchbase/client/core-io/1.7.6/core-io-1.7.6.jar,https://repo1.maven.org/maven2/com/couchbase/client/spark-connector_2.11/2.3.0/spark-connector_2.11-2.3.0.jar,https://repo1.maven.org/maven2/io/opentracing/opentracing-api/0.31.0/opentracing-api-0.31.0.jar,https://repo1.maven.org/maven2/io/reactivex/rxjava/1.3.8/rxjava-1.3.8.jar,https://repo1.maven.org/maven2/io/reactivex/rxscala_2.11/0.26.5/rxscala_2.11-0.26.5.jar,https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/spark-nlp-assembly-2.5.0.jar)
(spark.executor.id,driver)
(spark.driver.port,41651)
(spark.couchbase.bucket.feeds,)
(spark.couchbase.bucket.users,)
(spark.driver.memory,1g)
(spark.serializer,org.apache.spark.serializer.KryoSerializer)
(com.couchbase.username,apps)
(spark.cores.max,1)
(spark.sql.tungsten.enabled,true)
(spark.driver.host,mesos-slave)
(spark.executor.memory,1g)
(spark.couchbase.bucket.action_sink,)
(com.couchbase.password,password)
(spark.master,mesos://zk://remote_ip:2181/mesos)
(com.couchbase.socketConnect,300000)
(spark.mesos.executor.home,/opt/spark/spark-2.4.5-bin-hadoop2.7)
(spark.submit.deployMode,client)
(spark.app.name,tags_extraction_eng)
(spark.app.id,fb88a3ad-d32c-41ae-be67-36517a272bcb-0005)
(spark.ui.showConsoleProgress,true)
(spark.worker.cleanup.enabled,true)
(spark.executor.extraLibraryPath,/sparkscala_2.11-0.1.jar/java-client-2.7.6.jar/core-io-1.7.6.jar/spark-connector_2.11-2.3.0.jar/opentracing-api-0.31.0.jar/rxjava-1.3.8.jar/rxscala_2.11-0.26.5.jar/core-1.1.2.jar/spark-streaming-kafka-0-10_2.11-2.4.5.jar/spark-sql-kafka-0-10_2.11-2.4.5.jar/kafka-clients-2.4.0.jar/kafka_2.11-2.4.1.jar/spark-nlp-assembly-2.5.0.jar/spark-nlp_2.11-2.5.0.jar)

recognize_entities_dl download started this may take some time.
Approximate size to download 159 MB
Download done! Loading the resource.

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGILL (0x4) at pc=0x00007f8c09bc2da9, pid=4192, tid=0x00007f8d51343700
#
# JRE version: OpenJDK Runtime Environment (8.0_252-b09) (build 1.8.0_252-8u252-b09-1~16.04-b09)
# Java VM: OpenJDK 64-Bit Server VM (25.252-b09 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C  [libtensorflow_framework.so.1+0x744da9]  _GLOBAL__sub_I_loader.cc+0x99
#
# Core dump written. Default location: /var/lib/mesos/slaves/fb88a3ad-d32c-41ae-be67-36517a272bcb-S0/frameworks/fb88a3ad-d32c-41ae-be67-36517a272bcb-0000/executors/ct:1591367792198:0:tags_extraction_eng:/runs/2a41d953-7343-4dd5-a59b-2e253f0cda55/core or core.4192
#
# An error report file with more information is saved as:
# /var/lib/mesos/slaves/fb88a3ad-d32c-41ae-be67-36517a272bcb-S0/frameworks/fb88a3ad-d32c-41ae-be67-36517a272bcb-0000/executors/ct:1591367792198:0:tags_extraction_eng:/runs/2a41d953-7343-4dd5-a59b-2e253f0cda55/hs_err_pid4192.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#

Executor's Logs:

...
20/06/05 14:40:01 INFO CoarseGrainedExecutorBackend: Got assigned task 17
20/06/05 14:40:01 INFO Executor: Running task 0.0 in stage 14.0 (TID 17)
20/06/05 14:40:01 INFO TorrentBroadcast: Started reading broadcast variable 26
20/06/05 14:40:01 INFO MemoryStore: Block broadcast_26_piece0 stored as bytes in memory (estimated size 2.2 KB, free 362.9 MB)
20/06/05 14:40:01 INFO TorrentBroadcast: Reading broadcast variable 26 took 11 ms
20/06/05 14:40:01 INFO MemoryStore: Block broadcast_26 stored as values in memory (estimated size 3.7 KB, free 362.9 MB)
20/06/05 14:40:01 INFO HadoopRDD: Input split: file:/root/cache_pretrained/recognize_entities_dl_en_2.4.3_2.4_1584626752821/stages/4_NerDLModel_d4424c9af5f4/metadata/part-00000:0+408
20/06/05 14:40:01 INFO TorrentBroadcast: Started reading broadcast variable 25
20/06/05 14:40:01 INFO MemoryStore: Block broadcast_25_piece0 stored as bytes in memory (estimated size 23.1 KB, free 362.8 MB)
20/06/05 14:40:01 INFO TorrentBroadcast: Reading broadcast variable 25 took 25 ms
20/06/05 14:40:01 INFO MemoryStore: Block broadcast_25 stored as values in memory (estimated size 322.8 KB, free 362.5 MB)
20/06/05 14:40:01 INFO Executor: Finished task 0.0 in stage 14.0 (TID 17). 1209 bytes result sent to driver
20/06/05 14:40:01 INFO CoarseGrainedExecutorBackend: Got assigned task 18
20/06/05 14:40:01 INFO Executor: Running task 0.0 in stage 15.0 (TID 18)
20/06/05 14:40:01 INFO TorrentBroadcast: Started reading broadcast variable 28
20/06/05 14:40:01 INFO MemoryStore: Block broadcast_28_piece0 stored as bytes in memory (estimated size 2.2 KB, free 362.5 MB)
20/06/05 14:40:01 INFO TorrentBroadcast: Reading broadcast variable 28 took 13 ms
20/06/05 14:40:01 INFO MemoryStore: Block broadcast_28 stored as values in memory (estimated size 3.7 KB, free 362.5 MB)
20/06/05 14:40:01 INFO HadoopRDD: Input split: file:/root/cache_pretrained/recognize_entities_dl_en_2.4.3_2.4_1584626752821/stages/4_NerDLModel_d4424c9af5f4/metadata/part-00000:0+408
20/06/05 14:40:01 INFO TorrentBroadcast: Started reading broadcast variable 27
20/06/05 14:40:01 INFO MemoryStore: Block broadcast_27_piece0 stored as bytes in memory (estimated size 23.1 KB, free 362.5 MB)
20/06/05 14:40:01 INFO TorrentBroadcast: Reading broadcast variable 27 took 11 ms
20/06/05 14:40:01 INFO MemoryStore: Block broadcast_27 stored as values in memory (estimated size 322.8 KB, free 362.2 MB)
20/06/05 14:40:01 INFO Executor: Finished task 0.0 in stage 15.0 (TID 18). 1166 bytes result sent to driver
20/06/05 14:40:01 INFO CoarseGrainedExecutorBackend: Got assigned task 19
20/06/05 14:40:01 INFO Executor: Running task 0.0 in stage 16.0 (TID 19)
20/06/05 14:40:01 INFO TorrentBroadcast: Started reading broadcast variable 30
20/06/05 14:40:01 INFO MemoryStore: Block broadcast_30_piece0 stored as bytes in memory (estimated size 2.4 KB, free 362.2 MB)
20/06/05 14:40:01 INFO TorrentBroadcast: Reading broadcast variable 30 took 11 ms
20/06/05 14:40:01 INFO MemoryStore: Block broadcast_30 stored as values in memory (estimated size 3.9 KB, free 362.2 MB)
20/06/05 14:40:01 INFO HadoopRDD: Input split: file:/root/cache_pretrained/recognize_entities_dl_en_2.4.3_2.4_1584626752821/stages/4_NerDLModel_d4424c9af5f4/fields/datasetParams/part-00005:0+95
20/06/05 14:40:01 INFO TorrentBroadcast: Started reading broadcast variable 29
20/06/05 14:40:01 INFO MemoryStore: Block broadcast_29_piece0 stored as bytes in memory (estimated size 23.1 KB, free 362.1 MB)
20/06/05 14:40:01 INFO TorrentBroadcast: Reading broadcast variable 29 took 17 ms
20/06/05 14:40:01 INFO MemoryStore: Block broadcast_29 stored as values in memory (estimated size 322.8 KB, free 361.8 MB)
20/06/05 14:40:01 INFO Executor: Finished task 0.0 in stage 16.0 (TID 19). 765 bytes result sent to driver
20/06/05 14:40:01 INFO CoarseGrainedExecutorBackend: Got assigned task 20
20/06/05 14:40:01 INFO Executor: Running task 0.0 in stage 17.0 (TID 20)
20/06/05 14:40:01 INFO TorrentBroadcast: Started reading broadcast variable 31
20/06/05 14:40:01 INFO MemoryStore: Block broadcast_31_piece0 stored as bytes in memory (estimated size 2.4 KB, free 362.1 MB)
20/06/05 14:40:01 INFO TorrentBroadcast: Reading broadcast variable 31 took 19 ms
20/06/05 14:40:01 INFO MemoryStore: Block broadcast_31 stored as values in memory (estimated size 3.9 KB, free 362.2 MB)
20/06/05 14:40:01 INFO HadoopRDD: Input split: file:/root/cache_pretrained/recognize_entities_dl_en_2.4.3_2.4_1584626752821/stages/4_NerDLModel_d4424c9af5f4/fields/datasetParams/part-00007:0+95
20/06/05 14:40:01 INFO Executor: Finished task 0.0 in stage 17.0 (TID 20). 765 bytes result sent to driver
20/06/05 14:40:01 INFO CoarseGrainedExecutorBackend: Got assigned task 21
20/06/05 14:40:01 INFO Executor: Running task 1.0 in stage 17.0 (TID 21)
20/06/05 14:40:01 INFO HadoopRDD: Input split: file:/root/cache_pretrained/recognize_entities_dl_en_2.4.3_2.4_1584626752821/stages/4_NerDLModel_d4424c9af5f4/fields/datasetParams/part-00011:0+2000
20/06/05 14:40:01 INFO Executor: Finished task 1.0 in stage 17.0 (TID 21). 2146 bytes result sent to driver
20/06/05 14:40:01 INFO CoarseGrainedExecutorBackend: Got assigned task 22
20/06/05 14:40:01 INFO Executor: Running task 2.0 in stage 17.0 (TID 22)
20/06/05 14:40:01 INFO HadoopRDD: Input split: file:/root/cache_pretrained/recognize_entities_dl_en_2.4.3_2.4_1584626752821/stages/4_NerDLModel_d4424c9af5f4/fields/datasetParams/part-00011:2000+831
20/06/05 14:40:01 INFO Executor: Finished task 2.0 in stage 17.0 (TID 22). 808 bytes result sent to driver
20/06/05 14:40:01 INFO CoarseGrainedExecutorBackend: Got assigned task 23
20/06/05 14:40:01 INFO Executor: Running task 3.0 in stage 17.0 (TID 23)
20/06/05 14:40:01 INFO HadoopRDD: Input split: file:/root/cache_pretrained/recognize_entities_dl_en_2.4.3_2.4_1584626752821/stages/4_NerDLModel_d4424c9af5f4/fields/datasetParams/part-00009:0+95
20/06/05 14:40:01 INFO Executor: Finished task 3.0 in stage 17.0 (TID 23). 765 bytes result sent to driver
I0605 14:40:04.482619  4374 exec.cpp:445] Executor asked to shutdown
I0605 14:40:04.482844  4374 executor.cpp:184] Received SHUTDOWN event
I0605 14:40:04.482877  4374 executor.cpp:800] Shutting down
I0605 14:40:04.482920  4374 executor.cpp:913] Sending SIGTERM to process tree at pid 4382
20/06/05 14:40:04 ERROR CoarseGrainedExecutorBackend: Executor self-exiting due to : Driver mesos-slave:41651 disassociated! Shutting down.
I0605 14:40:04.489429  4374 executor.cpp:926] Sent SIGTERM to the following process trees:
[ 
-+- 4382 sh -c LD_LIBRARY_PATH="/sparkscala_2.11-0.1.jar/java-client-2.7.6.jar/core-io-1.7.6.jar/spark-connector_2.11-2.3.0.jar/opentracing-api-0.31.0.jar/rxjava-1.3.8.jar/rxscala_2.11-0.26.5.jar/core-1.1.2.jar/spark-streaming-kafka-0-10_2.11-2.4.5.jar/spark-sql-kafka-0-10_2.11-2.4.5.jar/kafka-clients-2.4.0.jar/kafka_2.11-2.4.1.jar/spark-nlp-assembly-2.5.0.jar/spark-nlp_2.11-2.5.0.jar:$LD_LIBRARY_PATH" "/opt/spark/spark-2.4.5-bin-hadoop2.7/./bin/spark-class" org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@mesos-slave:41651 --executor-id 0 --cores 1 --app-id fb88a3ad-d32c-41ae-be67-36517a272bcb-0005 --hostname mesos-slave 
 \--- 4383 /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -cp /opt/spark/spark-2.4.5-bin-hadoop2.7/conf/:/opt/spark/spark-2.4.5-bin-hadoop2.7/jars/* -Xmx1024m org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@mesos-slave:41651 --executor-id 0 --cores 1 --app-id fb88a3ad-d32c-41ae-be67-36517a272bcb-0005 --hostname mesos-slave 
]
I0605 14:40:04.489470  4374 executor.cpp:930] Scheduling escalation to SIGKILL in 88secs from now
20/06/05 14:40:04 ERROR CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM
20/06/05 14:40:04 INFO DiskBlockManager: Shutdown hook called
20/06/05 14:40:04 INFO CouchbaseConnection: Performing Couchbase SDK Shutdown
20/06/05 14:40:04 INFO ShutdownHookManager: Shutdown hook called
20/06/05 14:40:04 INFO ShutdownHookManager: Deleting directory /var/lib/mesos/slaves/fb88a3ad-d32c-41ae-be67-36517a272bcb-S0/frameworks/fb88a3ad-d32c-41ae-be67-36517a272bcb-0005/executors/0/runs/50383a32-eafb-45cd-ab6b-3be4f5d790a4/spark-e87c68df-00c0-4d18-acc5-684a42cab22b
20/06/05 14:40:04 INFO ConfigurationProvider: Closed bucket feeds
20/06/05 14:40:04 INFO Node: Disconnected from Node remote_ip/datanode1
I0605 14:40:04.540186  4379 executor.cpp:998] Command terminated with signal Terminated (pid: 4382)
20/06/05 14:40:04 INFO CoreEnvironment: Shutdown IoPool: success 
20/06/05 14:40:04 INFO CoreEnvironment: Shutdown kvIoPool: success 
20/06/05 14:40:04 INFO CoreEnvironment: Shutdown viewIoPool: success 
20/06/05 14:40:04 INFO CoreEnvironment: Shutdown queryIoPool: success 
20/06/05 14:40:04 INFO CoreEnvironment: Shutdown searchIoPool: success 
20/06/05 14:40:04 INFO CoreEnvironment: Shutdown Core Scheduler: success 
20/06/05 14:40:04 INFO CoreEnvironment: Shutdown Runtime Metrics Collector: success 
20/06/05 14:40:04 INFO CoreEnvironment: Shutdown Latency Metrics Collector: success 
20/06/05 14:40:04 INFO CoreEnvironment: Shutdown analyticsIoPool: success 
20/06/05 14:40:04 INFO CoreEnvironment: Shutdown Netty: success 
20/06/05 14:40:04 INFO CoreEnvironment: Shutdown Tracer: success 
20/06/05 14:40:04 INFO CoreEnvironment: Shutdown OrphanReporter: success 
I0605 14:40:05.542169  4381 process.cpp:927] Stopped the socket accept loop

Your Environment

Docker environment:

Versions:

maziyarpanahi commented 4 years ago

It seems there is a global symbol table conflict with TensorFlow's protobuf usage.

So if it's not one of the dependencies causing this conflict, my next best guess is sparkscala_2.11-0.1.jar. What's inside it? Do you have any C++ code being executed by Pipe? Any Java code?

NOTE: I've seen this error on the TensorFlow GitHub, and they mostly closed such issues as infeasible, since something else had a conflict with their use of protobuf.

FedericoF93 commented 4 years ago

"What's inside it? Do you have any C++ code being executed by Pipe? Any Java code?"

No, I don't. sparkscala_2.11-0.1.jar is produced with sbt package, so it isn't a fat JAR and contains only my Scala code. I have also tried spark-nlp version 2.4.5, but nothing changed.

maziyarpanahi commented 4 years ago

It's not really a Spark NLP error, so it doesn't matter which version you try: as long as what you use has TensorFlow inside, it will give you that conflict.

For spark.jars, please use the -assembly JAR, which is the fat JAR; the one from Maven is not a fat JAR, so it requires the other dependencies to be present. Also, please remove the spark-nlp jar from spark.executor.extraLibraryPath; it is not required by us, and spark.jars will distribute the JAR correctly. Repeating it there may have caused the conflict, though I am not sure about this.

If that doesn't work, the only way to narrow down the actual cause is to remove everything one by one until there is nothing left but Spark NLP and the few lines of code that use the pretrained pipeline. We don't have a way to reproduce this and we've never seen an error like it, so the only way is to create a simple package with only Spark NLP, a simple DataFrame without any external source, and try to run it without any other configs/dependencies except the spark-nlp jar.

I do this with my clusters all the time. Then you can add your components back one by one and see when it crashes; that is the cause, and I may be able to help if spark-nlp turns out to be it.
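For reference, a minimal test along these lines might look like the sketch below (the object name and sample sentence are placeholders, not taken from your setup):

import org.apache.spark.sql.SparkSession
import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline

object MinimalNlpTest {
  def main(args: Array[String]): Unit = {
    // Only Spark NLP: no Couchbase, Kafka, or extra library paths
    val spark = SparkSession.builder()
      .appName("minimal-spark-nlp-test")
      .getOrCreate()

    import spark.implicits._
    // A simple in-memory DataFrame instead of an external source
    val df = Seq("Google has announced a new office in Milan.").toDF("text")

    val pipeline = PretrainedPipeline("recognize_entities_dl", lang = "en")
    pipeline.transform(df).select("entities.result").show(truncate = false)

    spark.stop()
  }
}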

FedericoF93 commented 4 years ago

I removed all the dependencies as you suggested, but the error is still raised (always while loading the NerDLModel in stage 4).

CMD:

/opt/spark/spark-2.4.5-bin-hadoop2.7/bin/spark-submit --packages com.johnsnowlabs.nlp:spark-nlp_2.11:2.5.0 --master mesos://zk://remote_ip:2181/mesos --deploy-mode client --class tags_extraction.tags_extraction_eng /opt/sparkscala_2.11-0.1.jar

val (sparkSession, sc) = start_spark_session()

def start_spark_session(): (SparkSession, SparkContext) = {

  val sparkSession = SparkSession.builder()
      .master("mesos://zk://remote-ip:32181/mesos")
      .config("spark.mesos.executor.home", "/opt/spark/spark-2.4.5-bin-hadoop2.7")
      .config("spark.jars",
          "https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/spark-nlp-assembly-2.5.0.jar"+
      )
      .getOrCreate()

    sparkSession.sparkContext.setLogLevel("DEBUG")

    val sc = sparkSession.sparkContext
    sc.getConf.getAll.foreach(println)

    (sparkSession, sc)
  }

def main(args: Array[String]) {
    val pipeline = new PretrainedPipeline("recognize_entities_dl", "en")

    println("PIPELINE LOADED") // not printed

    sparkSession.close()
  }

}

Driver's stdout:

(spark.sql.execution.arrow.enabled,true)
(spark.driver.port,36341)
(spark.executor.id,driver)
(spark.jars,https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/spark-nlp-assembly-2.5.0.jar)
(spark.driver.memory,1g)
(spark.sql.tungsten.enabled,true)
(spark.serializer,org.apache.spark.serializer.KryoSerializer)
(spark.app.name,tags_extraction_ita)
(spark.driver.host,mesos-slave)
(spark.master,mesos://zk://remote_ip:32181/mesos)
(spark.submit.deployMode,client)
(spark.worker.cleanup.enabled,true)
(spark.repl.local.jars,file:///root/.ivy2/jars/com.johnsnowlabs.nlp_spark-nlp_2.11-2.4.5.jar,file:///root/.ivy2/jars/com.typesafe_config-1.3.0.jar,file:///root/.ivy2/jars/org.rocksdb_rocksdbjni-6.5.3.jar,file:///root/.ivy2/jars/org.apache.hadoop_hadoop-aws-3.2.0.jar,file:///root/.ivy2/jars/com.amazonaws_aws-java-sdk-core-1.11.603.jar,file:///root/.ivy2/jars/com.amazonaws_aws-java-sdk-s3-1.11.603.jar,file:///root/.ivy2/jars/com.github.universal-automata_liblevenshtein-3.0.0.jar,file:///root/.ivy2/jars/com.navigamez_greex-1.0.jar,file:///root/.ivy2/jars/org.json4s_json4s-ext_2.11-3.5.3.jar,file:///root/.ivy2/jars/org.tensorflow_tensorflow-1.15.0.jar,file:///root/.ivy2/jars/net.sf.trove4j_trove4j-3.0.3.jar,file:///root/.ivy2/jars/commons-logging_commons-logging-1.1.3.jar,file:///root/.ivy2/jars/org.apache.httpcomponents_httpclient-4.5.9.jar,file:///root/.ivy2/jars/software.amazon.ion_ion-java-1.0.2.jar,file:///root/.ivy2/jars/com.fasterxml.jackson.dataformat_jackson-dataformat-cbor-2.6.7.jar,file:///root/.ivy2/jars/org.apache.httpcomponents_httpcore-4.4.11.jar,file:///root/.ivy2/jars/commons-codec_commons-codec-1.11.jar,file:///root/.ivy2/jars/com.amazonaws_aws-java-sdk-kms-1.11.603.jar,file:///root/.ivy2/jars/com.amazonaws_jmespath-java-1.11.603.jar,file:///root/.ivy2/jars/com.fasterxml.jackson.core_jackson-databind-2.6.7.2.jar,file:///root/.ivy2/jars/com.fasterxml.jackson.core_jackson-annotations-2.6.0.jar,file:///root/.ivy2/jars/com.fasterxml.jackson.core_jackson-core-2.6.7.jar,file:///root/.ivy2/jars/com.google.code.findbugs_annotations-3.0.1.jar,file:///root/.ivy2/jars/com.google.protobuf_protobuf-java-util-3.0.0-beta-3.jar,file:///root/.ivy2/jars/com.google.protobuf_protobuf-java-3.0.0-beta-3.jar,file:///root/.ivy2/jars/it.unimi.dsi_fastutil-7.0.12.jar,file:///root/.ivy2/jars/org.projectlombok_lombok-1.16.8.jar,file:///root/.ivy2/jars/org.slf4j_slf4j-api-1.7.21.jar,file:///root/.ivy2/jars/net.jcip_jcip-annotations-1.0.jar,file:///root/.ivy2/jars/com.google.code.findbugs_jsr305-3.0.1.jar,file:///root/.ivy2/jars/com.google.code.gson_gson-2.3.jar,file:///root/.ivy2/jars/dk.brics.automaton_automaton-1.11-8.jar,file:///root/.ivy2/jars/joda-time_joda-time-2.9.5.jar,file:///root/.ivy2/jars/org.joda_joda-convert-1.8.1.jar,file:///root/.ivy2/jars/org.tensorflow_libtensorflow-1.15.0.jar,file:///root/.ivy2/jars/org.tensorflow_libtensorflow_jni-1.15.0.jar)
(spark.app.id,b0a8e440-4b3e-4443-b5eb-dff62be9544e-0002)
Pipeline START
recognize_entities_dl download started this may take some time.
Approximate size to download 159 MB
Download done! Loading the resource.
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGILL (0x4) at pc=0x00007f998d9dada9, pid=1404, tid=0x00007f9ace6a1700
#
# JRE version: OpenJDK Runtime Environment (8.0_252-b09) (build 1.8.0_252-8u252-b09-1~16.04-b09)
# Java VM: OpenJDK 64-Bit Server VM (25.252-b09 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C  [libtensorflow_framework.so.1+0x744da9]  _GLOBAL__sub_I_loader.cc+0x99
#
# Core dump written. Default location: /var/lib/mesos/slaves/b0a8e440-4b3e-4443-b5eb-dff62be9544e-S0/frameworks/b0a8e440-4b3e-4443-b5eb-dff62be9544e-0000/executors/ct:1591432638402:0:tags_extraction_ita:/runs/195211d6-928d-4575-a2cc-3278ea76f5f0/core or core.1404
#
# An error report file with more information is saved as:
# /var/lib/mesos/slaves/b0a8e440-4b3e-4443-b5eb-dff62be9544e-S0/frameworks/b0a8e440-4b3e-4443-b5eb-dff62be9544e-0000/executors/ct:1591432638402:0:tags_extraction_ita:/runs/195211d6-928d-4575-a2cc-3278ea76f5f0/hs_err_pid1404.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#

Executor's Logs:

I0606 08:18:44.896095  1513 exec.cpp:162] Version: 1.6.2
I0606 08:18:44.901351  1518 exec.cpp:236] Executor registered on agent 3c19bc4d-91a0-4a83-8b36-9818b8b430c4-S0
I0606 08:18:44.903355  1519 executor.cpp:184] Received SUBSCRIBED event
I0606 08:18:44.903748  1519 executor.cpp:188] Subscribed executor on mesos-slave
I0606 08:18:44.903859  1519 executor.cpp:184] Received LAUNCH event
I0606 08:18:44.904917  1519 executor.cpp:683] Starting task 0
I0606 08:18:44.914978  1519 executor.cpp:697] Forked command at 1524
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/06/06 08:18:46 INFO CoarseGrainedExecutorBackend: Started daemon with process name: 1525@mesos-slave
20/06/06 08:18:46 INFO SignalUtils: Registered signal handler for TERM
20/06/06 08:18:46 INFO SignalUtils: Registered signal handler for HUP
20/06/06 08:18:46 INFO SignalUtils: Registered signal handler for INT
20/06/06 08:18:47 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/06/06 08:18:47 INFO SecurityManager: Changing view acls to: root
20/06/06 08:18:47 INFO SecurityManager: Changing modify acls to: root
20/06/06 08:18:47 INFO SecurityManager: Changing view acls groups to: 
20/06/06 08:18:47 INFO SecurityManager: Changing modify acls groups to: 
20/06/06 08:18:47 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
20/06/06 08:18:47 INFO TransportClientFactory: Successfully created connection to mesos-slave/10.128.48.5:38543 after 159 ms (0 ms spent in bootstraps)
20/06/06 08:18:48 INFO SecurityManager: Changing view acls to: root
20/06/06 08:18:48 INFO SecurityManager: Changing modify acls to: root
20/06/06 08:18:48 INFO SecurityManager: Changing view acls groups to: 
20/06/06 08:18:48 INFO SecurityManager: Changing modify acls groups to: 
20/06/06 08:18:48 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
20/06/06 08:18:48 INFO TransportClientFactory: Successfully created connection to mesos-slave/10.128.48.5:38543 after 5 ms (0 ms spent in bootstraps)
20/06/06 08:18:48 INFO DiskBlockManager: Created local directory at /var/lib/mesos/slaves/3c19bc4d-91a0-4a83-8b36-9818b8b430c4-S0/frameworks/3c19bc4d-91a0-4a83-8b36-9818b8b430c4-0002/executors/0/runs/e87dd512-7d20-4a6c-b1f9-8cfc841812ba/blockmgr-112ec0b6-f982-4d23-b81d-cd07c1e633dc
20/06/06 08:18:48 INFO MemoryStore: MemoryStore started with capacity 912.3 MB
20/06/06 08:18:48 INFO CoarseGrainedExecutorBackend: Connecting to driver: spark://CoarseGrainedScheduler@mesos-slave:38543
20/06/06 08:18:49 INFO CoarseGrainedExecutorBackend: Successfully registered with driver
20/06/06 08:18:49 INFO Executor: Starting executor ID 0 on host mesos-slave
20/06/06 08:18:49 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 40379.
20/06/06 08:18:49 INFO NettyBlockTransferService: Server created on mesos-slave:40379
20/06/06 08:18:49 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/06/06 08:18:49 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(0, mesos-slave, 40379, None)
20/06/06 08:18:49 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(0, mesos-slave, 40379, None)
20/06/06 08:18:49 INFO BlockManager: Initialized BlockManager: BlockManagerId(0, mesos-slave, 40379, None)
20/06/06 08:20:45 INFO CoarseGrainedExecutorBackend: Got assigned task 0
20/06/06 08:20:45 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
20/06/06 08:20:45 INFO Executor: Fetching https://repo1.maven.org/maven2/com/johnsnowlabs/nlp/spark-nlp_2.11/2.5.0/spark-nlp_2.11-2.5.0.jar with timestamp 1591431523770
20/06/06 08:20:46 INFO Utils: Fetching https://repo1.maven.org/maven2/com/johnsnowlabs/nlp/spark-nlp_2.11/2.5.0/spark-nlp_2.11-2.5.0.jar to /var/lib/mesos/slaves/3c19bc4d-91a0-4a83-8b36-9818b8b430c4-S0/frameworks/3c19bc4d-91a0-4a83-8b36-9818b8b430c4-0002/executors/0/runs/e87dd512-7d20-4a6c-b1f9-8cfc841812ba/spark-43f503e1-08d0-4f1a-a2b7-1966294839d9/fetchFileTemp8018430873864746604.tmp
20/06/06 08:20:52 INFO Utils: Copying /var/lib/mesos/slaves/3c19bc4d-91a0-4a83-8b36-9818b8b430c4-S0/frameworks/3c19bc4d-91a0-4a83-8b36-9818b8b430c4-0002/executors/0/runs/e87dd512-7d20-4a6c-b1f9-8cfc841812ba/spark-43f503e1-08d0-4f1a-a2b7-1966294839d9/-5517677921591431523770_cache to /var/lib/mesos/slaves/3c19bc4d-91a0-4a83-8b36-9818b8b430c4-S0/frameworks/3c19bc4d-91a0-4a83-8b36-9818b8b430c4-0002/executors/0/runs/e87dd512-7d20-4a6c-b1f9-8cfc841812ba/./spark-nlp_2.11-2.5.0.jar
20/06/06 08:20:52 INFO Executor: Adding file:/var/lib/mesos/slaves/3c19bc4d-91a0-4a83-8b36-9818b8b430c4-S0/frameworks/3c19bc4d-91a0-4a83-8b36-9818b8b430c4-0002/executors/0/runs/e87dd512-7d20-4a6c-b1f9-8cfc841812ba/./spark-nlp_2.11-2.5.0.jar to class loader
20/06/06 08:20:53 INFO TorrentBroadcast: Started reading broadcast variable 1
20/06/06 08:20:53 INFO TransportClientFactory: Successfully created connection to mesos-slave/10.128.48.5:45809 after 4 ms (0 ms spent in bootstraps)
20/06/06 08:20:53 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.2 KB, free 912.3 MB)
20/06/06 08:20:53 INFO TorrentBroadcast: Reading broadcast variable 1 took 279 ms
20/06/06 08:20:54 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.6 KB, free 912.3 MB)
20/06/06 08:20:54 INFO HadoopRDD: Input split: file:/root/cache_pretrained/entity_recognizer_md_it_2.4.0_2.4_1579722834033/metadata/part-00000:0+344
20/06/06 08:20:54 INFO TorrentBroadcast: Started reading broadcast variable 0
20/06/06 08:20:54 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 22.9 KB, free 912.3 MB)
20/06/06 08:20:54 INFO TorrentBroadcast: Reading broadcast variable 0 took 20 ms
20/06/06 08:20:54 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 320.9 KB, free 912.0 MB)
20/06/06 08:20:54 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1188 bytes result sent to driver
20/06/06 08:20:54 INFO CoarseGrainedExecutorBackend: Got assigned task 1
20/06/06 08:20:54 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
20/06/06 08:20:54 INFO TorrentBroadcast: Started reading broadcast variable 3
20/06/06 08:20:54 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 2.2 KB, free 912.0 MB)
20/06/06 08:20:54 INFO TorrentBroadcast: Reading broadcast variable 3 took 20 ms
20/06/06 08:20:54 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 3.7 KB, free 912.0 MB)
20/06/06 08:20:54 INFO HadoopRDD: Input split: file:/root/cache_pretrained/entity_recognizer_md_it_2.4.0_2.4_1579722834033/stages/0_document_de752c7794e3/metadata/part-00000:0+252
20/06/06 08:20:54 INFO TorrentBroadcast: Started reading broadcast variable 2
20/06/06 08:20:54 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 22.9 KB, free 911.9 MB)
20/06/06 08:20:54 INFO TorrentBroadcast: Reading broadcast variable 2 took 17 ms
20/06/06 08:20:55 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 320.9 KB, free 911.6 MB)
20/06/06 08:20:55 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 1010 bytes result sent to driver
20/06/06 08:20:55 INFO CoarseGrainedExecutorBackend: Got assigned task 2
20/06/06 08:20:55 INFO Executor: Running task 0.0 in stage 2.0 (TID 2)
20/06/06 08:20:55 INFO TorrentBroadcast: Started reading broadcast variable 5
20/06/06 08:20:55 INFO MemoryStore: Block broadcast_5_piece0 stored as bytes in memory (estimated size 2.2 KB, free 911.6 MB)
20/06/06 08:20:55 INFO TorrentBroadcast: Reading broadcast variable 5 took 14 ms
20/06/06 08:20:55 INFO MemoryStore: Block broadcast_5 stored as values in memory (estimated size 3.7 KB, free 911.6 MB)
20/06/06 08:20:55 INFO HadoopRDD: Input split: file:/root/cache_pretrained/entity_recognizer_md_it_2.4.0_2.4_1579722834033/stages/0_document_de752c7794e3/metadata/part-00000:0+252
20/06/06 08:20:55 INFO TorrentBroadcast: Started reading broadcast variable 4
20/06/06 08:20:55 INFO MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 22.9 KB, free 911.6 MB)
20/06/06 08:20:55 INFO TorrentBroadcast: Reading broadcast variable 4 took 14 ms
20/06/06 08:20:55 INFO MemoryStore: Block broadcast_4 stored as values in memory (estimated size 320.9 KB, free 911.3 MB)
20/06/06 08:20:55 INFO Executor: Finished task 0.0 in stage 2.0 (TID 2). 1053 bytes result sent to driver
20/06/06 08:20:55 INFO CoarseGrainedExecutorBackend: Got assigned task 3
20/06/06 08:20:55 INFO Executor: Running task 0.0 in stage 3.0 (TID 3)
20/06/06 08:20:55 INFO TorrentBroadcast: Started reading broadcast variable 7
20/06/06 08:20:55 INFO MemoryStore: Block broadcast_7_piece0 stored as bytes in memory (estimated size 2.2 KB, free 911.3 MB)
20/06/06 08:20:55 INFO TorrentBroadcast: Reading broadcast variable 7 took 14 ms
20/06/06 08:20:55 INFO MemoryStore: Block broadcast_7 stored as values in memory (estimated size 3.7 KB, free 911.3 MB)
20/06/06 08:20:55 INFO HadoopRDD: Input split: file:/root/cache_pretrained/entity_recognizer_md_it_2.4.0_2.4_1579722834033/stages/1_SENTENCE_b7aed5120ded/metadata/part-00000:0+362
20/06/06 08:20:55 INFO TorrentBroadcast: Started reading broadcast variable 6
20/06/06 08:20:55 INFO MemoryStore: Block broadcast_6_piece0 stored as bytes in memory (estimated size 22.9 KB, free 911.2 MB)
20/06/06 08:20:55 INFO TorrentBroadcast: Reading broadcast variable 6 took 14 ms
20/06/06 08:20:55 INFO MemoryStore: Block broadcast_6 stored as values in memory (estimated size 320.9 KB, free 910.9 MB)
20/06/06 08:20:55 INFO Executor: Finished task 0.0 in stage 3.0 (TID 3). 1120 bytes result sent to driver
20/06/06 08:20:55 INFO CoarseGrainedExecutorBackend: Got assigned task 4
20/06/06 08:20:55 INFO Executor: Running task 0.0 in stage 4.0 (TID 4)
20/06/06 08:20:55 INFO TorrentBroadcast: Started reading broadcast variable 9
20/06/06 08:20:55 INFO MemoryStore: Block broadcast_9_piece0 stored as bytes in memory (estimated size 2.2 KB, free 910.9 MB)
20/06/06 08:20:55 INFO TorrentBroadcast: Reading broadcast variable 9 took 14 ms
20/06/06 08:20:55 INFO MemoryStore: Block broadcast_9 stored as values in memory (estimated size 3.7 KB, free 910.9 MB)
20/06/06 08:20:55 INFO HadoopRDD: Input split: file:/root/cache_pretrained/entity_recognizer_md_it_2.4.0_2.4_1579722834033/stages/1_SENTENCE_b7aed5120ded/metadata/part-00000:0+362
20/06/06 08:20:55 INFO TorrentBroadcast: Started reading broadcast variable 8
20/06/06 08:20:55 INFO MemoryStore: Block broadcast_8_piece0 stored as bytes in memory (estimated size 22.9 KB, free 910.9 MB)
20/06/06 08:20:55 INFO TorrentBroadcast: Reading broadcast variable 8 took 12 ms
20/06/06 08:20:55 INFO MemoryStore: Block broadcast_8 stored as values in memory (estimated size 320.9 KB, free 910.6 MB)
20/06/06 08:20:55 INFO Executor: Finished task 0.0 in stage 4.0 (TID 4). 1163 bytes result sent to driver
20/06/06 08:20:55 INFO CoarseGrainedExecutorBackend: Got assigned task 5
20/06/06 08:20:55 INFO Executor: Running task 0.0 in stage 5.0 (TID 5)
20/06/06 08:20:55 INFO TorrentBroadcast: Started reading broadcast variable 11
20/06/06 08:20:55 INFO MemoryStore: Block broadcast_11_piece0 stored as bytes in memory (estimated size 2.2 KB, free 910.6 MB)
20/06/06 08:20:55 INFO TorrentBroadcast: Reading broadcast variable 11 took 26 ms
20/06/06 08:20:56 INFO MemoryStore: Block broadcast_11 stored as values in memory (estimated size 3.7 KB, free 910.6 MB)
20/06/06 08:20:56 INFO HadoopRDD: Input split: file:/root/cache_pretrained/entity_recognizer_md_it_2.4.0_2.4_1579722834033/stages/2_REGEX_TOKENIZER_4db179212c5d/metadata/part-00000:0+395
20/06/06 08:20:56 INFO TorrentBroadcast: Started reading broadcast variable 10
20/06/06 08:20:56 INFO MemoryStore: Block broadcast_10_piece0 stored as bytes in memory (estimated size 22.9 KB, free 910.6 MB)
20/06/06 08:20:56 INFO TorrentBroadcast: Reading broadcast variable 10 took 15 ms
20/06/06 08:20:56 INFO MemoryStore: Block broadcast_10 stored as values in memory (estimated size 320.9 KB, free 910.3 MB)
20/06/06 08:20:56 INFO Executor: Finished task 0.0 in stage 5.0 (TID 5). 1153 bytes result sent to driver
20/06/06 08:20:56 INFO CoarseGrainedExecutorBackend: Got assigned task 6
20/06/06 08:20:56 INFO Executor: Running task 0.0 in stage 6.0 (TID 6)
20/06/06 08:20:56 INFO TorrentBroadcast: Started reading broadcast variable 13
20/06/06 08:20:56 INFO MemoryStore: Block broadcast_13_piece0 stored as bytes in memory (estimated size 2.2 KB, free 910.2 MB)
20/06/06 08:20:56 INFO TorrentBroadcast: Reading broadcast variable 13 took 24 ms
20/06/06 08:20:56 INFO MemoryStore: Block broadcast_13 stored as values in memory (estimated size 3.7 KB, free 910.2 MB)
20/06/06 08:20:56 INFO HadoopRDD: Input split: file:/root/cache_pretrained/entity_recognizer_md_it_2.4.0_2.4_1579722834033/stages/2_REGEX_TOKENIZER_4db179212c5d/metadata/part-00000:0+395
20/06/06 08:20:56 INFO TorrentBroadcast: Started reading broadcast variable 12
20/06/06 08:20:56 INFO MemoryStore: Block broadcast_12_piece0 stored as bytes in memory (estimated size 22.9 KB, free 910.2 MB)
20/06/06 08:20:56 INFO TorrentBroadcast: Reading broadcast variable 12 took 22 ms
20/06/06 08:20:56 INFO MemoryStore: Block broadcast_12 stored as values in memory (estimated size 320.9 KB, free 909.9 MB)
20/06/06 08:20:56 INFO Executor: Finished task 0.0 in stage 6.0 (TID 6). 1196 bytes result sent to driver
20/06/06 08:20:56 INFO CoarseGrainedExecutorBackend: Got assigned task 7
20/06/06 08:20:56 INFO Executor: Running task 0.0 in stage 7.0 (TID 7)
20/06/06 08:20:56 INFO TorrentBroadcast: Started reading broadcast variable 15
20/06/06 08:20:56 INFO MemoryStore: Block broadcast_15_piece0 stored as bytes in memory (estimated size 2.6 KB, free 909.9 MB)
20/06/06 08:20:56 INFO TorrentBroadcast: Reading broadcast variable 15 took 14 ms
20/06/06 08:20:56 INFO MemoryStore: Block broadcast_15 stored as values in memory (estimated size 4.3 KB, free 909.9 MB)
20/06/06 08:20:56 INFO HadoopRDD: Input split: file:/root/cache_pretrained/entity_recognizer_md_it_2.4.0_2.4_1579722834033/stages/2_REGEX_TOKENIZER_4db179212c5d/fields/rules/part-00005:0+95
20/06/06 08:20:56 INFO TorrentBroadcast: Started reading broadcast variable 14
20/06/06 08:20:56 INFO MemoryStore: Block broadcast_14_piece0 stored as bytes in memory (estimated size 22.9 KB, free 909.9 MB)
20/06/06 08:20:56 INFO TorrentBroadcast: Reading broadcast variable 14 took 15 ms
20/06/06 08:20:56 INFO MemoryStore: Block broadcast_14 stored as values in memory (estimated size 320.9 KB, free 909.6 MB)
20/06/06 08:20:56 INFO Executor: Finished task 0.0 in stage 7.0 (TID 7). 757 bytes result sent to driver
20/06/06 08:20:56 INFO CoarseGrainedExecutorBackend: Got assigned task 8
20/06/06 08:20:56 INFO CoarseGrainedExecutorBackend: Got assigned task 9
20/06/06 08:20:56 INFO Executor: Running task 0.0 in stage 8.0 (TID 8)
20/06/06 08:20:56 INFO Executor: Running task 1.0 in stage 8.0 (TID 9)
20/06/06 08:20:56 INFO TorrentBroadcast: Started reading broadcast variable 16
20/06/06 08:20:56 INFO MemoryStore: Block broadcast_16_piece0 stored as bytes in memory (estimated size 2.6 KB, free 909.6 MB)
20/06/06 08:20:56 INFO TorrentBroadcast: Reading broadcast variable 16 took 18 ms
20/06/06 08:20:56 INFO MemoryStore: Block broadcast_16 stored as values in memory (estimated size 4.3 KB, free 909.6 MB)
20/06/06 08:20:56 INFO HadoopRDD: Input split: file:/root/cache_pretrained/entity_recognizer_md_it_2.4.0_2.4_1579722834033/stages/2_REGEX_TOKENIZER_4db179212c5d/fields/rules/part-00007:0+95
20/06/06 08:20:56 INFO HadoopRDD: Input split: file:/root/cache_pretrained/entity_recognizer_md_it_2.4.0_2.4_1579722834033/stages/2_REGEX_TOKENIZER_4db179212c5d/fields/rules/part-00011:0+2802
20/06/06 08:20:56 INFO Executor: Finished task 0.0 in stage 8.0 (TID 8). 757 bytes result sent to driver
20/06/06 08:20:56 INFO CoarseGrainedExecutorBackend: Got assigned task 10
20/06/06 08:20:56 INFO Executor: Running task 2.0 in stage 8.0 (TID 10)
20/06/06 08:20:56 INFO HadoopRDD: Input split: file:/root/cache_pretrained/entity_recognizer_md_it_2.4.0_2.4_1579722834033/stages/2_REGEX_TOKENIZER_4db179212c5d/fields/rules/part-00011:2802+1757
20/06/06 08:20:56 INFO Executor: Finished task 2.0 in stage 8.0 (TID 10). 757 bytes result sent to driver
20/06/06 08:20:56 INFO Executor: Finished task 1.0 in stage 8.0 (TID 9). 2213 bytes result sent to driver
20/06/06 08:20:56 INFO CoarseGrainedExecutorBackend: Got assigned task 11
20/06/06 08:20:56 INFO Executor: Running task 3.0 in stage 8.0 (TID 11)
20/06/06 08:20:56 INFO HadoopRDD: Input split: file:/root/cache_pretrained/entity_recognizer_md_it_2.4.0_2.4_1579722834033/stages/2_REGEX_TOKENIZER_4db179212c5d/fields/rules/part-00009:0+95
20/06/06 08:20:56 INFO Executor: Finished task 3.0 in stage 8.0 (TID 11). 757 bytes result sent to driver
20/06/06 08:20:57 INFO CoarseGrainedExecutorBackend: Got assigned task 12
20/06/06 08:20:57 INFO Executor: Running task 0.0 in stage 9.0 (TID 12)
20/06/06 08:20:57 INFO TorrentBroadcast: Started reading broadcast variable 19
20/06/06 08:20:57 INFO MemoryStore: Block broadcast_19_piece0 stored as bytes in memory (estimated size 2.2 KB, free 909.6 MB)
20/06/06 08:20:57 INFO TorrentBroadcast: Reading broadcast variable 19 took 13 ms
20/06/06 08:20:57 INFO MemoryStore: Block broadcast_19 stored as values in memory (estimated size 3.7 KB, free 909.6 MB)
20/06/06 08:20:57 INFO HadoopRDD: Input split: file:/root/cache_pretrained/entity_recognizer_md_it_2.4.0_2.4_1579722834033/stages/3_WORD_EMBEDDINGS_MODEL_9641998d2b4c/metadata/part-00000:0+379
20/06/06 08:20:57 INFO TorrentBroadcast: Started reading broadcast variable 18
20/06/06 08:20:57 INFO MemoryStore: Block broadcast_18_piece0 stored as bytes in memory (estimated size 22.9 KB, free 909.5 MB)
20/06/06 08:20:57 INFO TorrentBroadcast: Reading broadcast variable 18 took 22 ms
20/06/06 08:20:57 INFO MemoryStore: Block broadcast_18 stored as values in memory (estimated size 320.9 KB, free 909.2 MB)
20/06/06 08:20:57 INFO Executor: Finished task 0.0 in stage 9.0 (TID 12). 1137 bytes result sent to driver
20/06/06 08:20:57 INFO CoarseGrainedExecutorBackend: Got assigned task 13
20/06/06 08:20:57 INFO Executor: Running task 0.0 in stage 10.0 (TID 13)
20/06/06 08:20:57 INFO TorrentBroadcast: Started reading broadcast variable 21
20/06/06 08:20:57 INFO MemoryStore: Block broadcast_21_piece0 stored as bytes in memory (estimated size 2.2 KB, free 909.2 MB)
20/06/06 08:20:57 INFO TorrentBroadcast: Reading broadcast variable 21 took 13 ms
20/06/06 08:20:57 INFO MemoryStore: Block broadcast_21 stored as values in memory (estimated size 3.7 KB, free 909.2 MB)
20/06/06 08:20:57 INFO HadoopRDD: Input split: file:/root/cache_pretrained/entity_recognizer_md_it_2.4.0_2.4_1579722834033/stages/3_WORD_EMBEDDINGS_MODEL_9641998d2b4c/metadata/part-00000:0+379
20/06/06 08:20:57 INFO TorrentBroadcast: Started reading broadcast variable 20
20/06/06 08:20:57 INFO MemoryStore: Block broadcast_20_piece0 stored as bytes in memory (estimated size 22.9 KB, free 909.2 MB)
20/06/06 08:20:57 INFO TorrentBroadcast: Reading broadcast variable 20 took 18 ms
20/06/06 08:20:57 INFO MemoryStore: Block broadcast_20 stored as values in memory (estimated size 320.9 KB, free 908.9 MB)
20/06/06 08:20:57 INFO Executor: Finished task 0.0 in stage 10.0 (TID 13). 1137 bytes result sent to driver
20/06/06 08:20:59 INFO CoarseGrainedExecutorBackend: Got assigned task 14
20/06/06 08:20:59 INFO Executor: Running task 0.0 in stage 11.0 (TID 14)
20/06/06 08:20:59 INFO TorrentBroadcast: Started reading broadcast variable 23
20/06/06 08:20:59 INFO MemoryStore: Block broadcast_23_piece0 stored as bytes in memory (estimated size 2.2 KB, free 908.9 MB)
20/06/06 08:20:59 INFO TorrentBroadcast: Reading broadcast variable 23 took 10 ms
20/06/06 08:20:59 INFO MemoryStore: Block broadcast_23 stored as values in memory (estimated size 3.7 KB, free 908.9 MB)
20/06/06 08:20:59 INFO HadoopRDD: Input split: file:/root/cache_pretrained/entity_recognizer_md_it_2.4.0_2.4_1579722834033/stages/4_NerDLModel_8c51dd2286c2/metadata/part-00000:0+363
20/06/06 08:20:59 INFO TorrentBroadcast: Started reading broadcast variable 22
20/06/06 08:20:59 INFO MemoryStore: Block broadcast_22_piece0 stored as bytes in memory (estimated size 22.9 KB, free 908.8 MB)
20/06/06 08:20:59 INFO TorrentBroadcast: Reading broadcast variable 22 took 11 ms
20/06/06 08:20:59 INFO MemoryStore: Block broadcast_22 stored as values in memory (estimated size 320.9 KB, free 908.5 MB)
20/06/06 08:20:59 INFO Executor: Finished task 0.0 in stage 11.0 (TID 14). 1121 bytes result sent to driver
20/06/06 08:20:59 INFO CoarseGrainedExecutorBackend: Got assigned task 15
20/06/06 08:20:59 INFO Executor: Running task 0.0 in stage 12.0 (TID 15)
20/06/06 08:20:59 INFO TorrentBroadcast: Started reading broadcast variable 25
20/06/06 08:20:59 INFO MemoryStore: Block broadcast_25_piece0 stored as bytes in memory (estimated size 2.2 KB, free 909.5 MB)
20/06/06 08:20:59 INFO TorrentBroadcast: Reading broadcast variable 25 took 12 ms
20/06/06 08:20:59 INFO MemoryStore: Block broadcast_25 stored as values in memory (estimated size 3.7 KB, free 909.5 MB)
20/06/06 08:20:59 INFO HadoopRDD: Input split: file:/root/cache_pretrained/entity_recognizer_md_it_2.4.0_2.4_1579722834033/stages/4_NerDLModel_8c51dd2286c2/metadata/part-00000:0+363
20/06/06 08:20:59 INFO TorrentBroadcast: Started reading broadcast variable 24
20/06/06 08:20:59 INFO MemoryStore: Block broadcast_24_piece0 stored as bytes in memory (estimated size 22.9 KB, free 909.5 MB)
20/06/06 08:20:59 INFO TorrentBroadcast: Reading broadcast variable 24 took 15 ms
20/06/06 08:20:59 INFO MemoryStore: Block broadcast_24 stored as values in memory (estimated size 320.9 KB, free 909.5 MB)
20/06/06 08:20:59 INFO Executor: Finished task 0.0 in stage 12.0 (TID 15). 1164 bytes result sent to driver
20/06/06 08:21:00 INFO CoarseGrainedExecutorBackend: Got assigned task 16
20/06/06 08:21:00 INFO Executor: Running task 0.0 in stage 13.0 (TID 16)
20/06/06 08:21:00 INFO TorrentBroadcast: Started reading broadcast variable 27
20/06/06 08:21:00 INFO MemoryStore: Block broadcast_27_piece0 stored as bytes in memory (estimated size 2.4 KB, free 911.3 MB)
20/06/06 08:21:00 INFO TorrentBroadcast: Reading broadcast variable 27 took 11 ms
20/06/06 08:21:00 INFO MemoryStore: Block broadcast_27 stored as values in memory (estimated size 3.9 KB, free 911.6 MB)
20/06/06 08:21:00 INFO HadoopRDD: Input split: file:/root/cache_pretrained/entity_recognizer_md_it_2.4.0_2.4_1579722834033/stages/4_NerDLModel_8c51dd2286c2/fields/datasetParams/part-00005:0+95
20/06/06 08:21:00 INFO TorrentBroadcast: Started reading broadcast variable 26
20/06/06 08:21:00 INFO MemoryStore: Block broadcast_26_piece0 stored as bytes in memory (estimated size 22.9 KB, free 911.6 MB)
20/06/06 08:21:00 INFO TorrentBroadcast: Reading broadcast variable 26 took 15 ms
20/06/06 08:21:00 INFO MemoryStore: Block broadcast_26 stored as values in memory (estimated size 320.9 KB, free 911.3 MB)
20/06/06 08:21:00 INFO Executor: Finished task 0.0 in stage 13.0 (TID 16). 765 bytes result sent to driver
20/06/06 08:21:00 INFO CoarseGrainedExecutorBackend: Got assigned task 17
20/06/06 08:21:00 INFO CoarseGrainedExecutorBackend: Got assigned task 18
20/06/06 08:21:00 INFO Executor: Running task 0.0 in stage 14.0 (TID 17)
20/06/06 08:21:00 INFO Executor: Running task 1.0 in stage 14.0 (TID 18)
20/06/06 08:21:00 INFO TorrentBroadcast: Started reading broadcast variable 28
20/06/06 08:21:00 INFO MemoryStore: Block broadcast_28_piece0 stored as bytes in memory (estimated size 2.4 KB, free 911.3 MB)
20/06/06 08:21:00 INFO TorrentBroadcast: Reading broadcast variable 28 took 14 ms
20/06/06 08:21:00 INFO MemoryStore: Block broadcast_28 stored as values in memory (estimated size 3.9 KB, free 911.3 MB)
20/06/06 08:21:00 INFO HadoopRDD: Input split: file:/root/cache_pretrained/entity_recognizer_md_it_2.4.0_2.4_1579722834033/stages/4_NerDLModel_8c51dd2286c2/fields/datasetParams/part-00007:0+95
20/06/06 08:21:00 INFO HadoopRDD: Input split: file:/root/cache_pretrained/entity_recognizer_md_it_2.4.0_2.4_1579722834033/stages/4_NerDLModel_8c51dd2286c2/fields/datasetParams/part-00011:0+4552
20/06/06 08:21:00 INFO Executor: Finished task 0.0 in stage 14.0 (TID 17). 765 bytes result sent to driver
20/06/06 08:21:00 INFO CoarseGrainedExecutorBackend: Got assigned task 19
20/06/06 08:21:00 INFO Executor: Finished task 1.0 in stage 14.0 (TID 18). 4872 bytes result sent to driver
20/06/06 08:21:00 INFO Executor: Running task 2.0 in stage 14.0 (TID 19)
20/06/06 08:21:00 INFO CoarseGrainedExecutorBackend: Got assigned task 20
20/06/06 08:21:00 INFO Executor: Running task 3.0 in stage 14.0 (TID 20)
20/06/06 08:21:00 INFO HadoopRDD: Input split: file:/root/cache_pretrained/entity_recognizer_md_it_2.4.0_2.4_1579722834033/stages/4_NerDLModel_8c51dd2286c2/fields/datasetParams/part-00011:4552+3507
20/06/06 08:21:00 INFO HadoopRDD: Input split: file:/root/cache_pretrained/entity_recognizer_md_it_2.4.0_2.4_1579722834033/stages/4_NerDLModel_8c51dd2286c2/fields/datasetParams/part-00009:0+95
20/06/06 08:21:00 INFO Executor: Finished task 3.0 in stage 14.0 (TID 20). 765 bytes result sent to driver
20/06/06 08:21:00 INFO Executor: Finished task 2.0 in stage 14.0 (TID 19). 765 bytes result sent to driver
I0606 08:21:02.771113  1516 exec.cpp:445] Executor asked to shutdown
I0606 08:21:02.771495  1516 executor.cpp:184] Received SHUTDOWN event
I0606 08:21:02.771529  1516 executor.cpp:800] Shutting down
I0606 08:21:02.771569  1516 executor.cpp:913] Sending SIGTERM to process tree at pid 1524
20/06/06 08:21:02 ERROR CoarseGrainedExecutorBackend: Executor self-exiting due to : Driver mesos-slave:38543 disassociated! Shutting down.
I0606 08:21:02.779719  1516 executor.cpp:926] Sent SIGTERM to the following process trees:
[ 
-+- 1524 sh -c  "/opt/spark/spark-2.4.5-bin-hadoop2.7/./bin/spark-class" org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@mesos-slave:38543 --executor-id 0 --cores 2 --app-id 3c19bc4d-91a0-4a83-8b36-9818b8b430c4-0002 --hostname mesos-slave 
 \--- 1525 /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -cp /opt/spark/spark-2.4.5-bin-hadoop2.7/conf/:/opt/spark/spark-2.4.5-bin-hadoop2.7/jars/* -Xmx2048m org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@mesos-slave:38543 --executor-id 0 --cores 2 --app-id 3c19bc4d-91a0-4a83-8b36-9818b8b430c4-0002 --hostname mesos-slave 
]
I0606 08:21:02.779753  1516 executor.cpp:930] Scheduling escalation to SIGKILL in 88secs from now
20/06/06 08:21:02 ERROR CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM
20/06/06 08:21:02 INFO DiskBlockManager: Shutdown hook called
20/06/06 08:21:02 INFO ShutdownHookManager: Shutdown hook called
20/06/06 08:21:02 INFO ShutdownHookManager: Deleting directory /var/lib/mesos/slaves/3c19bc4d-91a0-4a83-8b36-9818b8b430c4-S0/frameworks/3c19bc4d-91a0-4a83-8b36-9818b8b430c4-0002/executors/0/runs/e87dd512-7d20-4a6c-b1f9-8cfc841812ba/spark-43f503e1-08d0-4f1a-a2b7-1966294839d9
I0606 08:21:02.878827  1515 executor.cpp:998] Command terminated with signal Terminated (pid: 1524)

maziyarpanahi commented 4 years ago

OK, this is good; we have a clean way of testing now. The error is TensorFlow complaining about a conflict, and it is normal that it happens during NerDL, since that is the annotator that uses TensorFlow.

I see 2.5.0 in spark.jars, but I see another jar with a different version coming from local. Could you please remove the spark.repl.local.jars config entirely from your Spark session and startup? There are many other unnecessary jars in there, especially a different version of spark-nlp and TensorFlow, which is not required and might be the reason why there is a conflict.

FedericoF93 commented 4 years ago

spark.repl.local.jars is set by --packages com.johnsnowlabs.nlp:spark-nlp_2.11:2.5.0. If I remove it from spark-submit, the driver produces this error:

Exception in thread "main" java.lang.NoClassDefFoundError: com/johnsnowlabs/nlp/pretrained/PretrainedPipeline
    at tags_extraction.nlp_clean$.main(nlp_clean.scala:70)
    at tags_extraction.nlp_clean.main(nlp_clean.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: com.johnsnowlabs.nlp.pretrained.PretrainedPipeline
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)

It seems like spark.jars only affects the executors, presumably because the driver JVM is already running by the time the SparkSession builder sets it, so those jars never make it onto the driver's own classpath.

maziyarpanahi commented 4 years ago

You are using spark.jars with the fat jar, which comes with everything, so first of all there is no need for --packages. Second, where is this --packages? I don't see it in what you pasted. And if --packages is used somewhere, why is it downloading 2.4.5 instead of 2.5.0, which is the version of the fat jar being used? I think some configs are missing from this thread; we are narrowing down to an issue of different jars/versions being loaded at the same time.

maziyarpanahi commented 4 years ago

Ok, I just saw it: it's in the spark-submit. That still doesn't answer why a 2.5.0 run has a 2.4.5 jar in its logs. In your spark-submit, please try to use --jars and point it to the same fat jar as well, remove --packages, and see what happens.

FedericoF93 commented 4 years ago

That still doesn't answer why a 2.5.0 run has a 2.4.5 jar in its logs. My fault: I ran with both 2.4.5 and 2.5.0 and attached the wrong logs. The error is the same either way.

In your spark-submit, please try to use --jars and point it to the same fat jar as well, remove --packages, and see what happens. Okay, I will try it as soon as I can.

FedericoF93 commented 4 years ago

I tried using the fat jar in spark-submit, but it keeps giving me the same error.

maziyarpanahi commented 4 years ago

This is good; then the issue is in your code. Could you please also paste the code and your sbt packaging setup? We keep coming back to what's in that jar, and we need to see what is in there and how it is being used.

PS: You are using PretrainedPipeline with new. We never mention that anywhere. This is the correct way:

val pipeline = PretrainedPipeline("recognize_entities_dl", lang = "en")

Now that you have a clean environment, and the spark-submit and SparkSession are in sync with nothing else in conflict, please try the correct code and also mention anything else you have in your code. (Is it possible to launch spark-shell with the same config to access your Spark cluster? It's an easier way to keep trying things and to see results and logs, as opposed to spark-submit.)

FedericoF93 commented 4 years ago

SBT:

name := "SparkScala"

version := "0.1"

scalaVersion := "2.11.12"
//crossScalaVersions := Seq("2.11.9", "2.12.9")

val sparkVersion = "2.4.5"
val igniteVersion = "2.8.0"
val couchVersion = "2.3.0"
val nlpVersion = "2.5.0"

libraryDependencies += "org.apache.spark" %% "spark-core" % sparkVersion
libraryDependencies += "org.apache.spark" %% "spark-streaming" % sparkVersion
libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka-0-10" % sparkVersion
libraryDependencies += "org.apache.spark" %% "spark-sql-kafka-0-10" % sparkVersion
libraryDependencies += "org.apache.spark" %% "spark-sql" % sparkVersion
libraryDependencies += "org.apache.spark" %% "spark-mllib" % sparkVersion
libraryDependencies += "org.apache.spark" %% "spark-mesos" % sparkVersion

libraryDependencies += "com.couchbase.client" %% "spark-connector" % couchVersion

libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp" % nlpVersion

libraryDependencies += "org.apache.ignite" % "ignite-spark" % igniteVersion

libraryDependencies += "com.github.fommil.netlib" % "all" % "1.1.2"

//libraryDependencies += "org.apache.mesos" % "mesos" % "1.6.2"

Is it possible to launch spark-shell with the same config to access your Spark cluster? I don't think it's possible to use spark-shell through Chronos.

Anyway, I have tried the correct code, but it keeps giving me the same error.

maziyarpanahi commented 4 years ago

OK, what if we use spark.jars.packages in both spark-submit (--packages) and the SparkSession?
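A sketch of what that would look like on the SparkSession side; the coordinate mirrors the one in your spark-submit, and keeping both places on exactly the same coordinate is the point:

import org.apache.spark.sql.SparkSession

// Same Maven coordinate as --packages in spark-submit, so the driver and
// the executors resolve exactly one spark-nlp version.
val spark = SparkSession.builder()
  .config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.11:2.5.0")
  .getOrCreate()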

maziyarpanahi commented 4 years ago

Second question: in your build.sbt I don't see any provided scope or an assembly merge strategy, so is your sbt package a fat jar that includes Apache Spark? It won't use the Apache Spark provided by your cluster, am I correct?

An example of how I package my code for execution on an Apache Spark cluster that already provides Apache Spark, so I don't have to include it: https://github.com/multivacplatform/multivac-pubmed/blob/master/build.sbt
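For instance, a minimal sketch of marking the cluster-provided dependencies as provided in build.sbt; the versions are taken from your file, but whether each one is actually supplied by your containers is an assumption you should verify:

val sparkVersion = "2.4.5"

// Spark itself ships with the cluster, so keep it out of the packaged jar
libraryDependencies += "org.apache.spark" %% "spark-core" % sparkVersion % "provided"
libraryDependencies += "org.apache.spark" %% "spark-sql" % sparkVersion % "provided"
libraryDependencies += "org.apache.spark" %% "spark-mllib" % sparkVersion % "provided"

// Spark NLP is delivered via --jars / spark.jars, so it can be provided too
libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp" % "2.5.0" % "provided"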

FedericoF93 commented 4 years ago

OK, what if we use spark.jars.packages in both spark-submit (--packages) and the SparkSession? Now I'll try it, but I don't think it depends on this. The library is imported correctly; it is during the loading of the pipeline stages that things go wrong.

Second question: in your build.sbt I don't see any provided scope or an assembly merge strategy, so is your sbt package a fat jar that includes Apache Spark? sbt package doesn't include the dependency jars; my project's jar contains only class files. I have Spark installed directly in the mesos-slave containers.

maziyarpanahi commented 4 years ago

Now I'll try it, but I don't think it depends on this. The library is imported correctly; it is during the loading of the pipeline stages that things go wrong. OK, let's say it only fails in pretrained and TensorFlow-related operations. How about this code:

import com.johnsnowlabs.nlp.base._       // DocumentAssembler, Finisher
import com.johnsnowlabs.nlp.annotator._  // Tokenizer, Normalizer
import org.apache.spark.ml.Pipeline

// No pretrained models and no TensorFlow here, to isolate the crash
val document = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val token = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")

val normalizer = new Normalizer()
  .setInputCols("token")
  .setOutputCol("normal")

val finisher = new Finisher()
  .setInputCols("normal")

val pipeline = new Pipeline().setStages(Array(document, token, normalizer, finisher))

pipeline.fit(YOUR_DATAFRAME).transform(YOUR_DATAFRAME).show()
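If this pipeline runs cleanly on the same cluster, that would confirm the failure is isolated to loading the TensorFlow native library inside NerDLModel, not to the basic Spark NLP annotators.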
maziyarpanahi commented 4 years ago

Also, please share how you package your last jar, sparkscala_2.11-0.1.jar, so that it doesn't include other dependencies. The error clearly says it cannot find something that should be present, unless it was excluded somewhere. In addition, if the pipeline I mentioned above fails, please copy the error and the logs again.

maziyarpanahi commented 4 years ago

The last thing is to build and use this repo as a test: https://github.com/maziyarpanahi/spark-nlp-starter

FedericoF93 commented 4 years ago

Ok, the code works.

Also, please share how you package your last jar sparkscala_2.11-0.1.jar that doesn't include other dependencies. The error clearly says it cannot find something which should be presented unless somewhere it was excluded.

sparkscala_2.11-0.1.jar contains a directory for each package in the Spark project's src/main/scala; each directory contains only .class files.

FedericoF93 commented 4 years ago

I have tried to use the code in the repo, but I get this error:

Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Stopwatch.elapsedMillis()J
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:245)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:204)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
    at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1388)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
    at org.apache.spark.rdd.RDD.take(RDD.scala:1382)
    at org.apache.spark.rdd.RDD$$anonfun$first$1.apply(RDD.scala:1423)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
    at org.apache.spark.rdd.RDD.first(RDD.scala:1422)
    at org.apache.spark.ml.util.DefaultParamsReader$.loadMetadata(ReadWrite.scala:615)
    at org.apache.spark.ml.util.DefaultParamsReader.load(ReadWrite.scala:493)
    at com.johnsnowlabs.nlp.FeaturesReader.load(ParamsAndFeaturesReadable.scala:12)
    at com.johnsnowlabs.nlp.FeaturesReader.load(ParamsAndFeaturesReadable.scala:8)
    at com.johnsnowlabs.nlp.pretrained.ResourceDownloader$.downloadModel(ResourceDownloader.scala:358)
    at com.johnsnowlabs.nlp.pretrained.ResourceDownloader$.downloadModel(ResourceDownloader.scala:352)
    at com.johnsnowlabs.nlp.HasPretrained$class.pretrained(HasPretrained.scala:27)
    at com.johnsnowlabs.nlp.embeddings.WordEmbeddingsModel$.com$johnsnowlabs$nlp$embeddings$ReadablePretrainedWordEmbeddings$$super$pretrained(WordEmbeddingsModel.scala:156)
    at com.johnsnowlabs.nlp.embeddings.ReadablePretrainedWordEmbeddings$class.pretrained(WordEmbeddingsModel.scala:122)
    at com.johnsnowlabs.nlp.embeddings.WordEmbeddingsModel$.pretrained(WordEmbeddingsModel.scala:156)
    at com.johnsnowlabs.nlp.embeddings.WordEmbeddingsModel$.pretrained(WordEmbeddingsModel.scala:156)
    at com.johnsnowlabs.nlp.HasPretrained$class.pretrained(HasPretrained.scala:34)
    at com.johnsnowlabs.nlp.embeddings.WordEmbeddingsModel$.com$johnsnowlabs$nlp$embeddings$ReadablePretrainedWordEmbeddings$$super$pretrained(WordEmbeddingsModel.scala:156)
    at com.johnsnowlabs.nlp.embeddings.ReadablePretrainedWordEmbeddings$class.pretrained(WordEmbeddingsModel.scala:119)
    at com.johnsnowlabs.nlp.embeddings.WordEmbeddingsModel$.pretrained(WordEmbeddingsModel.scala:156)
    at tags_extraction.nlp_clean$.main(nlp_clean.scala:105)
    at tags_extraction.nlp_clean.main(nlp_clean.scala)
maziyarpanahi commented 4 years ago

The code in the repo is a solid and simple example of how you can use spark-nlp in your app and package it for a cluster/external Apache Spark. If it fails, then it's about how you package it or how you use it in your setup. It's really hard to diagnose this, as we have no way to reproduce it, and you have a different way of packaging/delivering Spark NLP code, which may or may not be the issue. I'll mark this as won't fix.
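One note on that last trace: NoSuchMethodError on com.google.common.base.Stopwatch.elapsedMillis() usually points to a Guava version conflict, since that method was removed in newer Guava releases while Hadoop 2.x still calls it, so a newer Guava pulled in by another dependency can shadow the old one Hadoop expects. A sketch of pinning it in sbt; the 11.0.2 pin is an assumption that matches Hadoop 2.7, verify it against your cluster:

// A sketch, not a guaranteed fix: force the old Guava that Hadoop 2.x expects.
// If another library needs a newer Guava, shading with sbt-assembly is the
// more robust route.
dependencyOverrides += "com.google.guava" % "guava" % "11.0.2"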

miloradtrninic commented 4 years ago

Any success in resolving this issue? I have the same problem.

cgpeter96 commented 3 years ago

I also have the same problem:

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGILL (0x4) at pc=0x00007fa24ad44da9, pid=29831, tid=0x00007fa35fef1700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_271-b09) (build 1.8.0_271-b09)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.271-b09 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C  [libtensorflow_framework.so.1+0x744da9]  _GLOBAL__sub_I_loader.cc+0x99
#
# Core dump written. Default location: /da1/s/zhengyuhang/tqparser/core or core.29831
#
# An error report file with more information is saved as:
# /da1/s/zhengyuhang/tqparser/hs_err_pid29831.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
leeivan commented 3 years ago

Same problem here. Is there any solution?

maziyarpanahi commented 3 years ago

Many things can cause this error, and this issue was closed without a solution. That being said, your issue might show the same error but may not be related. In order to get any help, please create a new issue, fill in the entire requested template (code, full error), and give step-by-step instructions so we can reproduce the error on our side.