exacaster / lighter

REST API for Apache Spark on K8S or YARN
MIT License

Permanent Session not creating #738

Open jhaswati opened 11 months ago

jhaswati commented 11 months ago
      containers:
      - env:
        - name: LIGHTER_KUBERNETES_ENABLED
          value: "true"
        - name: LIGHTER_MAX_RUNNING_JOBS
          value: "15"
        - name: LIGHTER_KUBERNETES_SERVICE_ACCOUNT
          value: lighter
        - name: LIGHTER_URL
          value: http://lighter-lighter-chart-lighter.sf.svc.cluster.local:8080
        - name: KUBERNETES_CLUSTER_DOMAIN
          value: cluster.local
        - name: LIGHTER_SESSION_TIMEOUT_MINUTES
          value: "90000"
        - name: LIGHTER_SESSION_PERMANENT_SESSIONS
          valueFrom:
            configMapKeyRef:
              key: LIGHTER_SESSION_PERMANENT_SESSIONS
              name: lighter-session
        image: ghcr.io/exacaster/lighter:0.0.45-spark3.4.0
        imagePullPolicy: IfNotPresent

I have added the above env vars to the Lighter deployment manifest. Below is the ConfigMap:

Namespace:    sf
Labels:       <none>
Annotations:  <none>

Data
====
LIGHTER_SESSION_PERMANENT_SESSIONS:
----
[
  {
    "id": "permanent-id-used-on-api-calls",
    "submit-params": {
      "name": "Session Name",
      "numExecutors": 1,
      "executorCores": 1,
      "executorMemory": "1G",
      "driverCores": 1,
      "driverMemory": "1G"
    }
  }
]

BinaryData
====

Events:  <none>

However, I don't see any Lighter sessions getting created. How should I pass the permanent session value in the Lighter deployment manifest?

Minutis commented 11 months ago

Hello,

it looks like the current documentation is incorrect, and it's not possible to pass LIGHTER_SESSION_PERMANENT_SESSIONS at the moment.

As a workaround, you could provide the whole Lighter application config using the following example:

        - name: MICRONAUT_APPLICATION_JSON
          value: '{"lighter":{"session":{"permanent-sessions":[{"id":"permanentId2","submit-params":{"driverCores":"1","executorCores":"1","conf":{"spark.kubernetes.container":"<...>"}}}]}}}'
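Since hand-writing nested JSON inside a single-quoted YAML scalar is error-prone, one option is to generate the value programmatically. This is just a sketch, not part of Lighter itself; the `micronaut_config` helper name is hypothetical, and the session definition mirrors the configmap from the issue:

```python
import json

# Hypothetical helper: wrap a list of permanent-session definitions in the
# lighter.session.permanent-sessions structure expected by
# MICRONAUT_APPLICATION_JSON, and serialize it to a compact JSON string.
def micronaut_config(sessions):
    return json.dumps(
        {"lighter": {"session": {"permanent-sessions": sessions}}},
        separators=(",", ":"),
    )

value = micronaut_config([
    {
        "id": "permanent-id-used-on-api-calls",
        "submit-params": {
            "name": "Session Name",
            "numExecutors": 1,
            "executorCores": 1,
            "executorMemory": "1G",
            "driverCores": 1,
            "driverMemory": "1G",
        },
    },
])
print(value)  # paste this string into the env var's value field
```

Because the string is produced by `json.dumps`, it is guaranteed to be valid JSON, which avoids the unbalanced-bracket mistakes that are easy to make when editing the value by hand.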

We will look into the approach you used later and fix the documentation.