SickHub / ark-server-charts

A helm chart for an ARK Survival Evolved Cluster
GNU General Public License v3.0

Installation folder #49

Closed ntrehout closed 6 months ago

ntrehout commented 7 months ago

Hello,

I hope this message finds you well. First, thank you for your work on this chart; it's greatly appreciated.

I'm currently encountering difficulties deploying a cluster and have exhausted several troubleshooting methods without success.

At present, my deployment relies on a basic setup where I've copied and pasted the values.yaml file. However, I've noticed an issue: the ArkServer is being downloaded to /home/steam/.steam, which, unfortunately, is not a designated volume for this purpose.

Shouldn't the ArkServer be downloaded to /arkserver? At present, the server redownloads into /home/steam/.steam on every restart, repeating the full download unnecessarily.

Any guidance or insights you could provide on resolving this issue would be immensely helpful. Thank you for your attention to this matter.

My values.yaml:

nameOverride: ""
fullnameOverride: ""
commonLabels: { }
commonAnnotations: { }
nodeSelector: { }
affinity: { }
tolerations: [ ]
securityContext: { }
# need to adjust image...
#  runAsUser: 1000
#  fsGroup: 1000
podLabels: { }
podAnnotations: { }
podSecurityContext: { }
#  capabilities:
#    drop:
#      - ALL
#    add:
#      - CHOWN
#      - NET_BIND_SERVICE
#  readOnlyRootFilesystem: true
#  runAsNonRoot: false
topologySpreadConstraints: { }

image:
  registry: docker.io
  repository: drpsychick/arkserver
  tag: focal
  pullSecrets: [ ]
  pullPolicy: Always

updateStrategy:
  type: Recreate

restartPolicy: Always

# Time for the server to shut down gracefully
terminationGracePeriodSeconds: 60

# Defaults to 1; we set this to 0 so servers don't start automatically after deploy
#
replicaCount: 0

# Attach pod directly to host network
# Implies: `service.enabled: false`
# Mutually exclusive: `hostPort`
hostNetwork: false

# Attach specific ports to host node.
# Implies: `service.enabled: false`
# Mutually exclusive: `hostNetwork`
hostPort: true

# set to "ClusterFirstWithHostNet" when hostNetwork=true
#
# dnsPolicy: ClusterFirst

# Cluster name
clusterName: arkcluster

# Mods available in the cluster and enabled by default on all servers.
# Mods are updated with the game and can be overwritten per server.
mods: []
#  - "731604991"
#  - "889745138"
#  - "893904615"
#  - "1404697612"
#  - "621154190"
#  - "564895376"
#  - "931434275"

# Set RCON password for the whole cluster
rcon:
  password: "outworld"

# Global extraEnvVars for all servers
extraEnvVars: [ ]
#  - name: am_arkwarnminutes
#    value: "15"

# Global custom game settings can be overwritten per server
customConfigMap:
  GameIni:
  GameUserSettingsIni:
  EngineIni:

# Servers in the ARK cluster
servers:
  theisland:
    message: "Welcome to The Island"
    sessionName: "[OTWLD] The Island - Hx5 Tx5 Ex3"
    updateOnStart: true
    map: TheIsland
    password: "outworld"
    maxPlayers: 5
    ports:
      queryudp: 27010
      gameudp: 7770
      rcon: 32330
#  # one entry for each server in the cluster
#  extinction:
#    # updateOnStart should be enabled only on the first server
#    updateOnStart: true
#    sessionName: "Extinction"
#    message: "Welcome to Extinction"
#    # map: TheIsland, Ragnarok, CrystalIsles, Aberration_P, ScorchedEarth_P, Extinction, ...
#    map: Extinction
#    password: ""
#    maxPlayers: 10
#    # xpMultiplier is added to default GameUserSettings.ini
#    # if you use `customConfigMap.GameUserSettingsIni` make sure to include it there
#    xpMultiplier: 6
#    extraEnvVars:
#      - name: am_arkwarnminutes
#        value: "30"
#    # ports must be the same externally and internally
#    # we don't need a service abstraction, as every pod is a single server with dedicated ports
#    # a service with nodePort would be the "right" configuration, but for latency reasons I'd skip it.
#    # with service: public:30200 -> [nodeport:30200] -> service:$gameudp -> pod:30200
#    # hostNetwork (no service!): public:30200 -> node=pod:30200
#    # pod hostPort (through service): public:30200 -> node=pod:30200
#    # difference between hostPort and hostNetwork: hostPort only exposes a single port
#    ports:
#      queryudp: 27010
#      gameudp: 7770
#      rcon: 32330
#    # override mods for a single server
#    # mods: []
#    rcon:
#      password: "foobar"
#    resources:
#      requests:
#        cpu: 1
#        memory: 4Gi
#      limits:
#        cpu: 1.5
#        memory: 6Gi
#    customConfigMap:
#      GameIni: |
#        # Extinction Game.ini
#      GameUserSettingsIni: |
#        # Extinction GameUserSettings.ini
#        [ServerSettings]
#        XPMultiplier=6
#      EngineIni: |
#        # Extinction Engine.ini

# Containers' resource requests and limits
# ref: http://kubernetes.io/docs/user-guide/compute-resources/
#
resources:
  limits:
    cpu: 1.5
    memory: 8Gi
  requests:
    cpu: 1
    memory: 6Gi

# Default ports used by the container. Can be set per server.
# ARK communicates ports to the client, so make sure the container port matches the external port!
# Using these default settings for a cluster only makes sense if you have an IP for each server.
containerPorts:
  gameudp: 7777
  queryudp: 27015
  rcon: 32330

service:
  # enable a service per server
  enabled: false
  # externalTrafficPolicy: Local
  type: NodePort
  # Enable metallb shared ip
  metallb_shared_ip: false

# Enable persistence using Persistent Volume Claims
# ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
#
persistence:
  enabled: true

  # use generic existing volume names prefixed with <cluster-name>: -game, -cluster, -<server-name>
  #
  existingVolumes: false

  # game files from steam, the largest volume, includes installed mods
  game:
    accessModes:
      - ReadWriteMany
    size: 100Gi
    annotations: { }

    # mountPath is used regardless of whether `persistence` is enabled
    # this is the mount point in the pod image
    mountPath: /arkserver

  # shared cluster files
  cluster:
    accessModes:
      - ReadWriteMany
    size: 200Mi
    annotations: { }
    mountPath: /arkserver/ShooterGame/Saved/clusters

  # contains the world save game and configuration files
  # keeping a backup of this is enough to get your server back up
  save:
    # A named, existing Persistent Volume
    #
    # existingVolume:

    # A manually managed Persistent Volume Claim, if none given, the PVC
    # will be created with the name: <cluster-name>-<server-name>
    #
    # existingClaim:

    # volumeMode defines what type of volume is required by the claim.
    # Value of Filesystem is implied when not included in claim spec.
    #
    # volumeMode: Filesystem

    # PV Storage Class
    # If defined, storageClassName: <storageClass>
    # If set to "-", storageClassName: "", which disables dynamic provisioning
    # If undefined (the default) or set to null, no storageClassName spec is
    # set, choosing the default provisioner.
    #
    # storageClass: "local-storage"

    # PVC Access Modes
    accessModes:
      - ReadWriteOnce

    # PVC size
    size: 2Gi

    annotations: { }

    # The path the volume will be mounted at
    mountPath: /arkserver/ShooterGame/Saved

# Startup and Liveness probe values
# Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
startupProbe:
  # 120s + 600*10s = 6120s max
  initialDelaySeconds: 120
  failureThreshold: 600
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
livenessProbe:
  # unhealthy after max 3*10s = 30s
  initialDelaySeconds: 10
  failureThreshold: 3
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: { }
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

extraInitContainers: [ ]
extraVolumeMounts: [ ]
extraVolumes: [ ]


Best regards,

ntrehout commented 7 months ago

I added this:

securityContext:
  runAsUser: 1000
  fsGroup: 1000

and now it finally downloads to /arkserver. There weren't many logs pointing me in this direction, so it should probably be documented somewhere.

I'm using Longhorn.
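
For reference, this is the block hinted at (commented out) near the top of the chart's values.yaml. A likely explanation, though not confirmed in this thread: with fsGroup set, the kubelet makes the mounted volumes group-writable for GID 1000, and runAsUser: 1000 matches the image's non-root steam user, so the install lands on the /arkserver volume instead of under /home/steam. A minimal annotated sketch:

securityContext:
  runAsUser: 1000  # run as the image's steam user (UID 1000, per the values.yaml hint)
  fsGroup: 1000    # mounted volumes are made group-writable for this GID, so /arkserver is writable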

Dracozny commented 7 months ago

Just a heads-up: I had speed issues when using Longhorn. The install took forever and would time out; I had to extend the probe time at the bottom of the yaml to over 5 minutes for it to complete.
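
The relevant knobs are the startupProbe values at the bottom of the values.yaml. Roughly what such an adjustment looks like (the numbers here are illustrative, not the exact ones I used):

startupProbe:
  initialDelaySeconds: 300  # give a slow storage backend like Longhorn a 5-minute head start
  failureThreshold: 600     # up to 300s + 600*10s before the pod is restarted
  periodSeconds: 10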


DrPsychick commented 6 months ago

You're right, the values.yaml even states that the securityContext needs an image change, and by now the image uses the steam user by default.

Have you already tested this with the default image? If so, I'd make the securityContext a default as well.