-
Would it be possible to use nbdime/nbdiff inside PySpark notebooks? These notebooks often have a specialized Jupyter notebook format, e.g. Azure Synapse Analytics notebooks.
-
### Missing functionality
Since Databricks Runtime 14, the DataFrame type in notebooks has changed: it used to be `pyspark.sql.dataframe.DataFrame`, but it is now `pyspark.sql.connect.dataframe.DataFrame`.
…
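One way to cope with this change in shared code (a sketch, not an official Databricks recommendation) is to match on the class's module path instead of importing a specific DataFrame class, so the same check works on both the classic and the Spark Connect runtimes:

```python
# Hedged sketch: detect a Spark DataFrame without importing a specific
# class, so the check works both before Databricks Runtime 14 (classic
# pyspark.sql.dataframe.DataFrame) and after (Spark Connect's
# pyspark.sql.connect.dataframe.DataFrame).
def is_spark_dataframe(obj) -> bool:
    cls = type(obj)
    return cls.__name__ == "DataFrame" and cls.__module__ in (
        "pyspark.sql.dataframe",          # classic API
        "pyspark.sql.connect.dataframe",  # Spark Connect API
    )
```

An `isinstance` check against a single imported class would fail on one runtime or the other; matching on the module path sidesteps that without requiring both modules to be importable.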
-
Hi 👋
We are currently experimenting with using `sparkdantic` on our Spark schema definitions in our pipelines inside Databricks. However, based on our current configuration, we are bound to install…
-
Hi all, I'm a new user of mosaicml-streaming on Databricks who stumbled upon Mosaic ML (and Petastorm) for loading large data from PySpark to PyTorch tensors. Here is an example [jupyter notebook](htt…
-
We are working with notebooks that use the PySpark kernel. Very often we only need to look inside a notebook, not run it, but clicking on the notebook starts a kernel as well…
-
jupyter/pyspark-notebook:latest
-
spark:python3-java17
bitnami/spark:3.5.2
quay.io/jupyter/pyspark-notebook:latest
-
Hello everyone. I am using Jupyter Enterprise Gateway with PySpark sessions on Kubernetes. The elyra/kernel-spark-py:3.2.3 image works as expected.
I modified the image and rebuilt it to upgrade t…
-
**System information**
* Runtime: Databricks-VSCode (Databricks Runtime 13.3.x Scala 2.12)
* PySpark version: 3.4.2
* Python version: 3.10.1
* Operating system: Windows 10 Build 19045
**Code …
-
## Description
Since Spark 3.4, [spark-connect](https://spark.apache.org/docs/latest/spark-connect-overview.html) (and the equivalent [databricks-connect v2](https://docs.databricks.com/en/dev-tools/…