-
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
### Search before asking
- [X] I have searched in the [issue…
-
https://docs.google.com/document/d/17X6-P5H2522SnE-gF1BVwyildp_PDX8oXD-4l9vqQmA/edit#heading=h.oml6js5zq00v
https://docs.google.com/document/d/1Mnl6jmGszixLW4KcJU5j9IgpG9-UabS0dcM6PM2XGDc/edit#head…
-
### Describe the proposal
This issue tracks the work of adding Apache Paimon table support to the Gravitino Spark connector, enabling the connector to operate on Paimon tables.
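As a hypothetical illustration of what this support would enable (the catalog, database, and table names below are placeholders, not part of the proposal), operating a Paimon table through the connector could look like:
```python
from pyspark.sql import SparkSession

# Placeholder names throughout; this is an illustrative sketch only.
spark = SparkSession.builder.appName("gravitino-paimon-demo").getOrCreate()

# Basic DDL/DML routed through a Gravitino-managed Paimon catalog.
spark.sql("CREATE TABLE paimon_catalog.db.users (id BIGINT, name STRING)")
spark.sql("INSERT INTO paimon_catalog.db.users VALUES (1, 'alice')")
spark.sql("SELECT * FROM paimon_catalog.db.users").show()
```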
### Task lis…
-
### What's the use case?
[Spark Connect](https://spark.apache.org/docs/latest/spark-connect-overview.html) is a decoupled client-server architecture for Spark that's now used by some vendor runtimes, like [Databricks …
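For context, connecting a PySpark client to a Spark Connect server looks roughly like this (the endpoint below is a placeholder for a locally started server on the default gRPC port):
```python
from pyspark.sql import SparkSession

# "sc://localhost:15002" is a placeholder endpoint; 15002 is the default
# port of a locally started Spark Connect server.
spark = SparkSession.builder.remote("sc://localhost:15002").getOrCreate()

df = spark.range(10)
print(df.count())  # planned on the client, executed on the remote server
```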
-
# Description
The Spark-Greenplum connector does not work correctly in Spark local mode (local master).
Read operations incur a significant wait time, and write operations …
-
I was looking into running a shared Spark cluster using RayDP and Spark Connect. I tried a few things, but none of them worked; any suggestions on how this can be done?
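One untested direction, sketched below under the assumption that RayDP's `init_spark` passes Spark configs through unchanged: start the Spark session on Ray and enable the Spark Connect server plugin on it, then point clients at the exposed gRPC port. Resource sizes and the port are placeholders.
```python
import ray
import raydp

ray.init(address="auto")  # assumes an existing Ray cluster is reachable

# Start Spark on Ray via RayDP with the Spark Connect server plugin enabled,
# so remote clients can share this session's cluster resources. Untested sketch.
spark = raydp.init_spark(
    app_name="shared-spark-connect",
    num_executors=2,
    executor_cores=2,
    executor_memory="4GB",
    configs={
        "spark.plugins": "org.apache.spark.sql.connect.SparkConnectPlugin",
        "spark.connect.grpc.binding.port": "15002",
    },
)

# A client elsewhere could then try:
#   SparkSession.builder.remote("sc://<driver-host>:15002").getOrCreate()
```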
-
Since Spark 3.5, a new PySpark module has been added: `pyspark.ml.connect`. It supports a few ML algorithms that run in Spark Connect mode. This is the design doc:
https://docs.go…
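For illustration, here is a rough sketch of training one of these estimators; it follows the Spark 3.5 `pyspark.ml.connect` module, but treat the exact parameter names and accepted feature types as approximate, and note that the estimators rely on a local torch installation.
```python
from pyspark.sql import SparkSession
from pyspark.ml.connect.classification import LogisticRegression

# Placeholder Spark Connect endpoint; assumes a server is already running.
spark = SparkSession.builder.remote("sc://localhost:15002").getOrCreate()

# Tiny toy dataset: features as an array column plus an integer label.
train = spark.createDataFrame(
    [([1.0, 2.0], 1), ([2.0, 1.0], 0), ([-1.0, -2.0], 1), ([-2.0, -1.0], 0)],
    schema=["features", "label"],
)

lr = LogisticRegression(maxIter=10)   # distributed, torch-based implementation
model = lr.fit(train)
model.transform(train).show()
```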
-
### SynapseML version
synapseml_2.12:1.0.8
### System information
- **Language version** (python 3.8, scala 2.12):
- **Spark Version** (3.5.0):
- **Spark Platform** (Databricks):
### Describe th…
-
I installed `spark-4.0.0-preview2` and would like to use `sparklyr` with it.
Unfortunately, it doesn't seem to be supported.
```r
library(sparklyr)
# get the default config
conf
-
```python
import daft
from daft import col, lit
# Create DataFrame from range
df = daft.from_pydict({"col": list(range(10))})
# Method 1: Using DataFrame API
result = df.agg([lit(1).count()])
…