-
I have a problem changing the Flink application in my pipeline. The problem is this: for example, I want to change the amount of memory for the task manager in one of the job clusters. I change the d…
-
Hi,
I'm adding the operator and SparkApplication Helm charts to my auto-deploy script, among other components (e.g. Tomcat, ZooKeeper, etc.). Since the SparkApplication CRDs are defined by the operator,…
-
This is probably mostly the same as https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/issues/594, which has since been marked as closed; however, the solution described there doesn't seem to…
-
### Issues Policy acknowledgement
- [X] I have read and agree to submit bug reports in accordance with the [issues policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md)
### Willi…
-
Trying to run the Spark operator, I am using the `pi.py` and `spark-py-pi.yaml` files:
```
import sys
from random import random
from operator import add
from pyspark.sql import SparkSession…
-
### Description
We are replacing Presto in the system with Velox, but we have done a lot of our own research and performance optimization on top of Presto. We found that there is a performance gap b…
-
If we use the `prometheus operator`, we can easily configure the target pods we want to collect metrics from using the `PodMonitor` CRD with a label selector, and then deploy the `prometheus server`.
The approxima…
-
In order to deploy a PVC with every Spark executor, multiple configuration options mentioned in the Spark documentation are needed.
From the Spark documentation, these are the needed configurations to use…
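For reference, a minimal sketch of how such settings could look when set from a PySpark session. The volume name `data`, the storage class, the size limit, and the mount path below are illustrative assumptions; the keys are the `spark.kubernetes.executor.volumes.persistentVolumeClaim.*` options from the Spark-on-Kubernetes documentation, with `claimName=OnDemand` so a fresh PVC is created for each executor:
```
from pyspark.sql import SparkSession

# Sketch only: "data" is a placeholder volume name; adjust the storage class,
# size limit, and mount path for your cluster. With claimName=OnDemand,
# Spark creates a new PersistentVolumeClaim for every executor pod.
prefix = "spark.kubernetes.executor.volumes.persistentVolumeClaim.data"

spark = (
    SparkSession.builder
    .appName("pvc-per-executor")
    .config(f"{prefix}.options.claimName", "OnDemand")
    .config(f"{prefix}.options.storageClass", "standard")
    .config(f"{prefix}.options.sizeLimit", "10Gi")
    .config(f"{prefix}.mount.path", "/data")
    .config(f"{prefix}.mount.readOnly", "false")
    .getOrCreate()
)
```
In practice the same keys would typically be passed as `--conf` flags to `spark-submit` or set in the application spec; the option names are the same either way.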
-
ETA: 2024-06-30
We want to use the IPv4 addresses of SPARK nodes as the scarce resource that makes it expensive for a single party to run many nodes. At the moment, we rely on the trusted spark-api service to re…
-
## Description
Unable to start Spark job in Kubernetes
- [x] ✋ I have searched the open/closed issues and my issue is not listed.
## Reproduction Code [Required]
Steps to reproduce the be…