RedisLabs / spark-redis

A connector for Spark that allows reading from and writing to a Redis cluster
BSD 3-Clause "New" or "Revised" License

JedisConnectionException: Could not get a resource from the pool #237

Open terrynice opened 4 years ago

terrynice commented 4 years ago

2020-04-30 20:37:04,644 [task-result-getter-0] WARN -[org.apache.spark.scheduler.TaskSetManager]-[WARN]org.apache.spark.internal.Logging$class.logWarning(Logging.scala:66)- Lost task 0.0 in stage 0.0 (TID 0, fs-hiido-dn-12-8-137.hiido.host.yydevops.com, executor 1): redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
    at redis.clients.jedis.util.Pool.getResource(Pool.java:59)
    at redis.clients.jedis.JedisPool.getResource(JedisPool.java:234)
    at com.redislabs.provider.redis.ConnectionPool$.connect(ConnectionPool.scala:33)
    at com.redislabs.provider.redis.RedisEndpoint.connect(RedisConfig.scala:69)
    at com.redislabs.provider.redis.RedisContext$$anonfun$setKVs$4.apply(redisFunctions.scala:372)
    at com.redislabs.provider.redis.RedisContext$$anonfun$setKVs$4.apply(redisFunctions.scala:371)
    at scala.collection.MapLike$MappedValues$$anonfun$foreach$3.apply(MapLike.scala:245)
    at scala.collection.MapLike$MappedValues$$anonfun$foreach$3.apply(MapLike.scala:245)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
    at scala.collection.immutable.Map$Map1.foreach(Map.scala:116)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
    at scala.collection.MapLike$MappedValues.foreach(MapLike.scala:245)
    at com.redislabs.provider.redis.RedisContext$.setKVs(redisFunctions.scala:371)
    at com.redislabs.provider.redis.RedisContext$$anonfun$toRedisKV$1.apply(redisFunctions.scala:252)
    at com.redislabs.provider.redis.RedisContext$$anonfun$toRedisKV$1.apply(redisFunctions.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:403)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:409)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: Failed connecting to host localhost:6379


val spark = SparkSession.builder()
        .appName("SparkReadRedis")
        .master("local[*]")
        .config("spark.redis.host", "10.21.44.147")
        .config("spark.redis.port", "6379")
        .config("spark.redis.auth","")
        //.config("spark.redis.db","0")
        .getOrCreate()

spark.sparkContext.toRedisKV(rddData, expireSeconds)
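
(Editor's note.) The executor-side cause above, "Failed connecting to host localhost:6379", suggests the spark.redis.* settings never reached the executors, so the connector fell back to its localhost default. These settings are read from the SparkConf, so they only take effect if they are present on the conf that actually creates the SparkContext. A minimal sketch of that setup, assuming a standalone job where this builder really is the first thing to create the context (host, port, TTL, and the sample data are placeholders):

import org.apache.spark.sql.SparkSession
import com.redislabs.provider.redis._ // adds toRedisKV to SparkContext

object RedisKVWriteSketch {
  def main(args: Array[String]): Unit = {
    // spark.redis.* must be on the conf before the SparkContext exists;
    // setting it afterwards (or on the copy returned by getConf) has no effect.
    val spark = SparkSession.builder()
      .appName("SparkWriteRedis")
      .config("spark.redis.host", "10.21.44.147") // placeholder Redis host
      .config("spark.redis.port", "6379")
      .getOrCreate()

    val rddData = spark.sparkContext.parallelize(Seq(("k1", "v1"), ("k2", "v2")))
    spark.sparkContext.toRedisKV(rddData, 3600) // write with a one-hour TTL
  }
}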

fe2s commented 4 years ago

Hi @terrynice, how do you run the application? Do you submit it to the cluster or through some notebook (e.g. Zeppelin)?

terrynice commented 4 years ago

> Hi @terrynice, how do you run the application? Do you submit it to the cluster or through some notebook (e.g. Zeppelin)?

exec nohup "${SPARK_HOME}"/bin/spark-submit \
  --class com.xxxxxxxx.PigeonServer \
  --master yarn \
  --deploy-mode client \
  --principal hdfs@XXXXXX.COM \
  --keytab /dp/conf/spark/hdfs.keytab \
  --queue dataservice \
  --driver-class-path ${DRIVER_CLASSPATH} \
  --name PigeonServer-${SERVERIP}${PIGEON_SERVER_PORT} \
  ${BASE_DIR}/lib/${main_jar} > ${LOG_DIR}/pigeonserver.out 2>&1 &

terrynice commented 4 years ago

> Hi @terrynice, how do you run the application? Do you submit it to the cluster or through some notebook (e.g. Zeppelin)?

def execute(redisLoadInfo: RedisLoadInfo, rep: Response): Unit = {

SparkSQLEnv.sparkContext.setLocalProperty(SparkSQLEnv.SPARK_SCHEDULER_POOL, "load")
SparkSQLEnv.sparkContext.setLocalProperty(SparkContext.SPARK_JOB_DESCRIPTION, redisLoadInfo.getJobName)
SparkSQLEnv.sparkContext.setLocalProperty(SparkContext.SPARK_JOB_GROUP_ID, redisLoadInfo.actionId)
val sparkSession = SparkSQLEnv.session
val sqls = SQLProcessor.getExecuteCmds(redisLoadInfo.hql)
val cipher = redisLoadInfo.cipher

val host = redisLoadInfo.account.server.split(";")(0);
val keyPrefix = redisLoadInfo.keyPrefix;
val keyIndex = redisLoadInfo.keyIndex;
val valueIndex = redisLoadInfo.valueIndex;
val expireSeconds = redisLoadInfo.expireHours * 3600

val spark = SparkSession.builder()
        .appName("SparkReadRedis")
        .master("local[*]")
        .config("spark.redis.host", "10.21.44.147")
        .config("spark.redis.port", "6379")
        .config("spark.redis.auth","")
        //.config("spark.redis.db","0")
        .getOrCreate()
spark.sparkContext.getConf.set("spark.redis.host", "10.21.44.147")
spark.sparkContext.getConf.set("spark.redis.port", "6379")

var client: CloseableHttpClient = null
var response: CloseableHttpResponse = null
var reader: BufferedReader = null
var seqList = Seq[RedisKV]()
var seqString = Seq[(String, String)]()

try {
  var fileInfoList = getFileInfos(redisLoadInfo.hdfsSrc)

    for (i <- 0 until fileInfoList.size) {
    var fileInfo:FileInfo = fileInfoList.get(i)
    var path: String = fileInfo.path

    var request: HttpPost = new HttpPost(ServerConf.API_URL_GET_DATA)
    request.setConfig(RequestConfig.custom().setConnectTimeout(60000).setConnectionRequestTimeout(60000).setSocketTimeout(60000).build())
    request.setEntity(new StringEntity("{\"file\":\"" + path + "\"}"))
    client = HttpClients.createDefault()
    response = client.execute(request)

    var code: Int = response.getStatusLine().getStatusCode()

    if (code != 200) {
      throw new RuntimeException("get data failed,response code:" + code)
    }

    var entity: HttpEntity = response.getEntity()
    reader = new BufferedReader(new InputStreamReader(entity.getContent(), "UTF-8"))

    var readTotal = 0;
    var writeTotal = 0;
    var line = ""
    var redisKey = ""
    var redisValue = ""
    var rddData: RDD[(String, String)] = null

    breakable(
      while (true) {
        line = reader.readLine()
        if (line == null) {
          break
        }
        var tuple = line.split(RedisLoadTask2.FIELD_SEPARATOR)
        if (tuple.length >= Math.max(keyIndex, valueIndex)) {
          redisKey = keyPrefix + tuple(keyIndex)
          redisValue = "" + tuple(valueIndex)
          seqString = seqString :+ (redisKey -> redisValue)

          readTotal = readTotal + 1

        }
      }
    )

    rddData = spark.sparkContext.parallelize(seqString)

    if (expireSeconds <= 0) {
      spark.sparkContext.toRedisKV(rddData, -1)
    } else {

      spark.sparkContext.toRedisKV(rddData, expireSeconds)
    }
    writeTotal += readTotal;

    HttpUtil.closeReader(reader);
    HttpUtil.closeResponse(response);
    HttpUtil.closeHttpClient(client);
  }

} catch {
  case ex: Exception =>
    logInfo(s"e123: ${ex.toString}")
} finally {
  HttpUtil.closeReader(reader);
  HttpUtil.closeResponse(response);
  HttpUtil.closeHttpClient(client);
}

}
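
(Editor's note.) In the snippet above, SparkSQLEnv.sparkContext already exists before SparkSession.builder()...getOrCreate() runs, so the .config("spark.redis.*", ...) calls do not end up in the existing context's SparkConf (which is what toRedisKV reads), and sparkContext.getConf.set(...) only mutates a copy of the conf. One way to sidestep this, sketched below and not a confirmed fix from this thread (constructor details may differ slightly between spark-redis versions), is to hand toRedisKV an explicit RedisConfig instead of relying on the Spark conf:

import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import com.redislabs.provider.redis._ // RedisConfig, RedisEndpoint, and the toRedisKV implicit

object RedisConfigSketch {
  // Writes key/value pairs without depending on spark.redis.* being set on the SparkConf.
  def writeToRedis(sc: SparkContext, rddData: RDD[(String, String)], expireSeconds: Int): Unit = {
    // An implicit RedisConfig in scope takes precedence over the default one
    // that toRedisKV would otherwise derive from sc.getConf (localhost:6379 here).
    implicit val redisConfig: RedisConfig =
      new RedisConfig(new RedisEndpoint("10.21.44.147", 6379)) // placeholder host/port

    sc.toRedisKV(rddData, expireSeconds)
  }
}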

terrynice commented 4 years ago

> Hi @terrynice, how do you run the application? Do you submit it to the cluster or through some notebook (e.g. Zeppelin)?

jedis-3.0.0-SNAPSHOT.jar spark-redis_2.11-2.4.2.jar

diguid commented 4 years ago

I'm getting the same thing, but I'm running a notebook backed by AWS EMR and trying to connect to ElastiCache. Both are in the same VPC, and the security group allows all traffic for both.

Redis running in ElastiCache is version 5.0.6; Spark running on EMR is version 2.4.5.

%%configure -f 
{
  "jars": [
      "s3://mybucket/spark-redis-2.4.0-SNAPSHOT.jar",
      "s3://mybucket/jar_test/jedis-3.1.0.jar",
      "s3://mybucket/jar_test/commons-pool2-2.0.jar"
  ]
}

from pyspark.sql import *
from pyspark.sql.types import *

spark = (
    SparkSession.builder
        .appName('SparkRedisApp')
        .config('spark.jar.packages', 'com.redislabs.provider.redis:spark-redis-2.4.0-SNAPSHOT')
        .config('spark.redis.host', 'myurl')
        .config('spark.redis.port', '6379')
        .config("spark.redis.auth","")
        .getOrCreate()
)

from pyspark.sql.types import StructType, StructField, StringType
schema = StructType([
    StructField("id", StringType(), True),
    StructField("colA", StringType(), True),
    StructField("colB", StringType(), True)
])

data = [
    ['1', '8', '2'],
    ['2', '5', '3'],
    ['3', '3', '1'],
    ['4', '7', '2']
]
df = spark.createDataFrame(data, schema=schema)
df.show()

(
    df.
    write.
    format("org.apache.spark.sql.redis").
    option("table", "mytable").
    option("key.column", "id").
    save()
)

Traceback (most recent call last):
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 737, in save
    self._jwrite.save()
  File "/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o112.save.
: redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
    at redis.clients.jedis.util.Pool.getResource(Pool.java:59)
    at redis.clients.jedis.JedisPool.getResource(JedisPool.java:234)
    at com.redislabs.provider.redis.ConnectionPool$.connect(ConnectionPool.scala:33)
    at com.redislabs.provider.redis.RedisEndpoint.connect(RedisConfig.scala:69)
    at com.redislabs.provider.redis.RedisConfig.clusterEnabled(RedisConfig.scala:182)
    at com.redislabs.provider.redis.RedisConfig.getNodes(RedisConfig.scala:293)
    at com.redislabs.provider.redis.RedisConfig.getHosts(RedisConfig.scala:209)
    at com.redislabs.provider.redis.RedisConfig.<init>(RedisConfig.scala:132)
    at org.apache.spark.sql.redis.RedisSourceRelation.<init>(RedisSourceRelation.scala:36)
    at org.apache.spark.sql.redis.DefaultSource.createRelation(DefaultSource.scala:21)
    at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:173)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:169)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:197)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:194)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:169)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:114)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:112)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
    at org.apache.spark.sql.execution.SQLExecution$.org$apache$spark$sql$execution$SQLExecution$$executeQuery$1(SQLExecution.scala:83)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1$$anonfun$apply$1.apply(SQLExecution.scala:94)
    at org.apache.spark.sql.execution.QueryExecutionMetrics$.withMetrics(QueryExecutionMetrics.scala:141)
    at org.apache.spark.sql.execution.SQLExecution$.org$apache$spark$sql$execution$SQLExecution$$withMetrics(SQLExecution.scala:178)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:93)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:200)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:92)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: Failed connecting to host test-fs.gtqvl9.ng.0001.use1.cache.amazonaws.com:6379
    at redis.clients.jedis.Connection.connect(Connection.java:204)
    at redis.clients.jedis.BinaryClient.connect(BinaryClient.java:100)
    at redis.clients.jedis.BinaryJedis.connect(BinaryJedis.java:1866)
    at redis.clients.jedis.JedisFactory.makeObject(JedisFactory.java:117)
    at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:819)
    at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:429)
    at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:360)
    at redis.clients.jedis.util.Pool.getResource(Pool.java:50)
    ... 44 more
Caused by: java.net.SocketTimeoutException: connect timed out
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:607)
    at redis.clients.jedis.Connection.connect(Connection.java:181)
    ... 51 more

Also, I tried adding only spark-redis-2.4.0-jar-with-dependencies.jar rather than the separate packages, but then I couldn't even establish the session:

The code failed because of a fatal error:
    Session 9 unexpectedly reached final status 'dead'. See logs:
stdout: 

stderr: 
20/08/03 21:00:45 WARN RSCConf: Set livy.rsc.rpc.server.address if you need to bind to another address.
20/08/03 21:00:45 INFO RSCDriver: Received job request ba5d2241-7d88-4e46-a13e-e3c00aa97410
20/08/03 21:00:45 INFO RSCDriver: SparkContext not yet up, queueing job request.
20/08/03 21:00:48 INFO SparkEntries: Starting Spark context...
20/08/03 21:00:48 INFO SparkContext: Running Spark version 2.4.5-amzn-0
20/08/03 21:00:48 INFO SparkContext: Submitted application: livy-session-9
20/08/03 21:00:48 INFO SecurityManager: Changing view acls to: livy
20/08/03 21:00:48 INFO SecurityManager: Changing modify acls to: livy
20/08/03 21:00:48 INFO SecurityManager: Changing view acls groups to: 
20/08/03 21:00:48 INFO SecurityManager: Changing modify acls groups to: 
20/08/03 21:00:48 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(livy); groups with view permissions: Set(); users  with modify permissions: Set(livy); groups with modify permissions: Set()
20/08/03 21:00:48 INFO Utils: Successfully started service 'sparkDriver' on port 36841.
20/08/03 21:00:48 INFO SparkEnv: Registering MapOutputTracker
20/08/03 21:00:48 INFO SparkEnv: Registering BlockManagerMaster
20/08/03 21:00:48 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/08/03 21:00:48 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/08/03 21:00:48 INFO DiskBlockManager: Created local directory at /mnt/tmp/blockmgr-61086f49-fab1-4681-ba96-d68a75cd63e6
20/08/03 21:00:48 INFO MemoryStore: MemoryStore started with capacity 1028.8 MB
20/08/03 21:00:49 INFO SparkEnv: Registering OutputCommitCoordinator
20/08/03 21:00:49 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/08/03 21:00:49 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://ip-172-31-35-225.ec2.internal:4040
20/08/03 21:00:49 INFO SparkContext: Added JAR file:/usr/lib/livy/rsc-jars/livy-api-0.7.0-incubating.jar at spark://ip-172-31-35-225.ec2.internal:36841/jars/livy-api-0.7.0-incubating.jar with timestamp 1596488449347
20/08/03 21:00:49 INFO SparkContext: Added JAR file:/usr/lib/livy/rsc-jars/livy-rsc-0.7.0-incubating.jar at spark://ip-172-31-35-225.ec2.internal:36841/jars/livy-rsc-0.7.0-incubating.jar with timestamp 1596488449348
20/08/03 21:00:49 INFO SparkContext: Added JAR file:/usr/lib/livy/rsc-jars/netty-all-4.1.17.Final.jar at spark://ip-172-31-35-225.ec2.internal:36841/jars/netty-all-4.1.17.Final.jar with timestamp 1596488449348
20/08/03 21:00:49 INFO SparkContext: Added JAR s3://diego-athena-testing/jar_test/spark-redis_2.11-2.5.0-SNAPSHOT-jar-with-dependencies.jar at s3://diego-athena-testing/jar_test/spark-redis_2.11-2.5.0-SNAPSHOT-jar-with-dependencies.jar with timestamp 1596488449348
20/08/03 21:00:49 INFO SparkContext: Added JAR file:/usr/lib/livy/repl_2.11-jars/commons-codec-1.9.jar at spark://ip-172-31-35-225.ec2.internal:36841/jars/commons-codec-1.9.jar with timestamp 1596488449349
20/08/03 21:00:49 INFO SparkContext: Added JAR file:/usr/lib/livy/repl_2.11-jars/livy-core_2.11-0.7.0-incubating.jar at spark://ip-172-31-35-225.ec2.internal:36841/jars/livy-core_2.11-0.7.0-incubating.jar with timestamp 1596488449349
20/08/03 21:00:49 INFO SparkContext: Added JAR file:/usr/lib/livy/repl_2.11-jars/livy-repl_2.11-0.7.0-incubating.jar at spark://ip-172-31-35-225.ec2.internal:36841/jars/livy-repl_2.11-0.7.0-incubating.jar with timestamp 1596488449349
20/08/03 21:00:49 INFO Utils: Using initial executors = 50, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
20/08/03 21:00:49 INFO RMProxy: Connecting to ResourceManager at ip-172-31-35-225.ec2.internal/172.31.35.225:8032
20/08/03 21:00:50 INFO Client: Requesting a new application from cluster with 2 NodeManagers
20/08/03 21:00:50 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (12288 MB per container)
20/08/03 21:00:50 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
20/08/03 21:00:50 INFO Client: Setting up container launch context for our AM
20/08/03 21:00:50 INFO Client: Setting up the launch environment for our AM container
20/08/03 21:00:50 INFO Client: Preparing resources for our AM container
20/08/03 21:00:50 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
20/08/03 21:00:51 INFO Client: Uploading resource file:/mnt/tmp/spark-55beed7c-f3c7-4ae3-a32d-9562e00618bd/__spark_libs__5758553492355086018.zip -> hdfs://ip-172-31-35-225.ec2.internal:8020/user/livy/.sparkStaging/application_1596484451999_0010/__spark_libs__5758553492355086018.zip
20/08/03 21:00:52 INFO Client: Uploading resource file:/usr/lib/livy/rsc-jars/livy-api-0.7.0-incubating.jar -> hdfs://ip-172-31-35-225.ec2.internal:8020/user/livy/.sparkStaging/application_1596484451999_0010/livy-api-0.7.0-incubating.jar
20/08/03 21:00:52 INFO Client: Uploading resource file:/usr/lib/livy/rsc-jars/livy-rsc-0.7.0-incubating.jar -> hdfs://ip-172-31-35-225.ec2.internal:8020/user/livy/.sparkStaging/application_1596484451999_0010/livy-rsc-0.7.0-incubating.jar
20/08/03 21:00:52 INFO Client: Uploading resource file:/usr/lib/livy/rsc-jars/netty-all-4.1.17.Final.jar -> hdfs://ip-172-31-35-225.ec2.internal:8020/user/livy/.sparkStaging/application_1596484451999_0010/netty-all-4.1.17.Final.jar
20/08/03 21:00:52 INFO Client: Uploading resource s3://diego-athena-testing/jar_test/spark-redis_2.11-2.5.0-SNAPSHOT-jar-with-dependencies.jar -> hdfs://ip-172-31-35-225.ec2.internal:8020/user/livy/.sparkStaging/application_1596484451999_0010/spark-redis_2.11-2.5.0-SNAPSHOT-jar-with-dependencies.jar
20/08/03 21:00:52 INFO S3NativeFileSystem: Opening 's3://diego-athena-testing/jar_test/spark-redis_2.11-2.5.0-SNAPSHOT-jar-with-dependencies.jar' for reading
20/08/03 21:00:54 INFO Client: Uploading resource file:/usr/lib/livy/repl_2.11-jars/commons-codec-1.9.jar -> hdfs://ip-172-31-35-225.ec2.internal:8020/user/livy/.sparkStaging/application_1596484451999_0010/commons-codec-1.9.jar
20/08/03 21:00:54 INFO Client: Uploading resource file:/usr/lib/livy/repl_2.11-jars/livy-core_2.11-0.7.0-incubating.jar -> hdfs://ip-172-31-35-225.ec2.internal:8020/user/livy/.sparkStaging/application_1596484451999_0010/livy-core_2.11-0.7.0-incubating.jar
20/08/03 21:00:55 INFO Client: Uploading resource file:/usr/lib/livy/repl_2.11-jars/livy-repl_2.11-0.7.0-incubating.jar -> hdfs://ip-172-31-35-225.ec2.internal:8020/user/livy/.sparkStaging/application_1596484451999_0010/livy-repl_2.11-0.7.0-incubating.jar
20/08/03 21:00:55 INFO Client: Uploading resource file:/etc/spark/conf/hive-site.xml -> hdfs://ip-172-31-35-225.ec2.internal:8020/user/livy/.sparkStaging/application_1596484451999_0010/hive-site.xml
20/08/03 21:00:55 INFO Client: Uploading resource file:/usr/lib/spark/R/lib/sparkr.zip#sparkr -> hdfs://ip-172-31-35-225.ec2.internal:8020/user/livy/.sparkStaging/application_1596484451999_0010/sparkr.zip
20/08/03 21:00:55 INFO Client: Uploading resource file:/usr/lib/spark/python/lib/pyspark.zip -> hdfs://ip-172-31-35-225.ec2.internal:8020/user/livy/.sparkStaging/application_1596484451999_0010/pyspark.zip
20/08/03 21:00:56 INFO Client: Uploading resource file:/usr/lib/spark/python/lib/py4j-0.10.7-src.zip -> hdfs://ip-172-31-35-225.ec2.internal:8020/user/livy/.sparkStaging/application_1596484451999_0010/py4j-0.10.7-src.zip
20/08/03 21:00:56 WARN Client: Same name resource file:///usr/lib/spark/python/lib/pyspark.zip added multiple times to distributed cache
20/08/03 21:00:56 WARN Client: Same name resource file:///usr/lib/spark/python/lib/py4j-0.10.7-src.zip added multiple times to distributed cache
20/08/03 21:00:56 INFO Client: Uploading resource file:/mnt/tmp/spark-55beed7c-f3c7-4ae3-a32d-9562e00618bd/__spark_conf__6337522731186081855.zip -> hdfs://ip-172-31-35-225.ec2.internal:8020/user/livy/.sparkStaging/application_1596484451999_0010/__spark_conf__.zip
20/08/03 21:00:56 INFO SecurityManager: Changing view acls to: livy
20/08/03 21:00:56 INFO SecurityManager: Changing modify acls to: livy
20/08/03 21:00:56 INFO SecurityManager: Changing view acls groups to: 
20/08/03 21:00:56 INFO SecurityManager: Changing modify acls groups to: 
20/08/03 21:00:56 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(livy); groups with view permissions: Set(); users  with modify permissions: Set(livy); groups with modify permissions: Set()
20/08/03 21:00:57 INFO Client: Submitting application application_1596484451999_0010 to ResourceManager
20/08/03 21:00:57 INFO YarnClientImpl: Submitted application application_1596484451999_0010
20/08/03 21:00:57 INFO SchedulerExtensionServices: Starting Yarn extension services with app application_1596484451999_0010 and attemptId None
20/08/03 21:00:58 INFO Client: Application report for application_1596484451999_0010 (state: ACCEPTED)
20/08/03 21:00:58 INFO Client: 
     client token: N/A
     diagnostics: AM container is launched, waiting for AM container to Register with RM
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1596488457725
     final status: UNDEFINED
     tracking URL: http://ip-172-31-35-225.ec2.internal:20888/proxy/application_1596484451999_0010/
     user: livy
20/08/03 21:00:59 INFO Client: Application report for application_1596484451999_0010 (state: ACCEPTED)
20/08/03 21:01:00 INFO Client: Application report for application_1596484451999_0010 (state: ACCEPTED)
20/08/03 21:01:01 INFO Client: Application report for application_1596484451999_0010 (state: ACCEPTED)
20/08/03 21:01:02 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> ip-172-31-35-225.ec2.internal, PROXY_URI_BASES -> http://ip-172-31-35-225.ec2.internal:20888/proxy/application_1596484451999_0010), /proxy/application_1596484451999_0010
20/08/03 21:01:02 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark-client://YarnAM)
20/08/03 21:01:02 INFO Client: Application report for application_1596484451999_0010 (state: RUNNING)
20/08/03 21:01:02 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 172.31.43.106
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1596488457725
     final status: UNDEFINED
     tracking URL: http://ip-172-31-35-225.ec2.internal:20888/proxy/application_1596484451999_0010/
     user: livy
20/08/03 21:01:02 INFO YarnClientSchedulerBackend: Application application_1596484451999_0010 has started running.
20/08/03 21:01:02 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 43075.
20/08/03 21:01:02 INFO NettyBlockTransferService: Server created on ip-172-31-35-225.ec2.internal:43075
20/08/03 21:01:02 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/08/03 21:01:02 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, ip-172-31-35-225.ec2.internal, 43075, None)
20/08/03 21:01:02 INFO BlockManagerMasterEndpoint: Registering block manager ip-172-31-35-225.ec2.internal:43075 with 1028.8 MB RAM, BlockManagerId(driver, ip-172-31-35-225.ec2.internal, 43075, None)
20/08/03 21:01:02 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, ip-172-31-35-225.ec2.internal, 43075, None)
20/08/03 21:01:02 INFO BlockManager: external shuffle service port = 7337
20/08/03 21:01:02 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, ip-172-31-35-225.ec2.internal, 43075, None)
20/08/03 21:01:03 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /metrics/json..

Some things to try:
a) Make sure Spark has enough available resources for Jupyter to create a Spark context.
b) Contact your Jupyter administrator to make sure the Spark magics library is configured correctly.
c) Restart the kernel.

diguid commented 4 years ago

Edit: nvm about not being able to connect.

Even though I had the security group for the ElastiCache cluster set to allow all traffic from anywhere, for some reason it only started working once I edited it to open just port 6379 to anywhere.

That said, the problem with the jar-with-dependencies build is still there.