Azure Data Studio Version:
Version: 1.7.0 (system setup)
Commit: e1280022d69b651cfff04b30e830904575c8acda
Date: 2019-05-08T00:55:40.928Z
VS Code 1.33.1
Electron: 3.1.8
Chrome: 66.0.3359.181
Node.js: 10.2.0
V8: 6.6.346.32
OS: Windows_NT x64 10.0.17134
Steps to Reproduce:
I'm connected to the Big Data Cluster. I can view the Spark History, submit jobs, etc.
If I select "Analyze in Notebook" on a file in HDFS, a new Notebook with the PySpark3 kernel is created and attached to the cluster. When I run the notebook, after a long time, I get this error:
Starting Spark application
The code failed because of a fatal error:
Invalid status code '500' from https://x.x.x.x:30443/gateway/default/livy/v1/sessions/0 with error payload:
Error 500 Server Error
HTTP ERROR 500
Problem accessing /gateway/default/livy/v1/sessions/0. Reason:
Server Error
Powered by Jetty://
.
Some things to try:
a) Make sure Spark has enough available resources for Jupyter to create a Spark context.
b) Contact your Jupyter administrator to make sure the Spark magics library is configured correctly.
c) Restart the kernel.
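For what it's worth, the gateway's Livy endpoint can also be queried directly to inspect the failing session, which may surface the underlying Spark/YARN error behind the HTTP 500. A minimal diagnostic sketch (assuming basic auth against the Knox gateway with placeholder credentials; the host, port, and session id are taken from the error above):

```python
# Diagnostic sketch only -- credentials are placeholders, and verify=False is
# used because the gateway typically presents a self-signed certificate.
import requests

GATEWAY = "https://x.x.x.x:30443/gateway/default/livy/v1"
AUTH = ("knox-user", "knox-password")  # hypothetical credentials

# List Livy sessions and their states; a session stuck in "error" or "dead"
# often explains the 500 returned to the notebook.
resp = requests.get(f"{GATEWAY}/sessions", auth=AUTH, verify=False)
print(resp.status_code, resp.json())

# Fetch the driver log for session 0 (the session referenced in the error).
log = requests.get(f"{GATEWAY}/sessions/0/log", auth=AUTH, verify=False)
print(log.json())
```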