factoriotools / factorio-docker

Factorio headless server in a Docker container
https://hub.docker.com/r/factoriotools/factorio/
MIT License

Mod update mechanism #234

Closed: proegssilb closed this issue 5 years ago

proegssilb commented 5 years ago

It'd be really handy to have some kind of mechanism for automatically updating all mods (or heck, even installing mods). I'm in Kubernetes, so one possible use case for me is creating a scheduled job to update the mods and bounce the server twice-weekly at a time when I know everyone is offline.

Based on this reddit post, I found this repo, and this other repo.

Both are getting kinda old, so I'm not sure they'd still work. Also, one requires Ruby and the other requires Python, so they'd weigh down the container a bit. I consider that a fair trade, but others might disagree.

If this is something worth doing, and we want to include it in the main Factorio server container, I can probably pull together a PR. I did want to make sure this was something the rest of the community was on-board with first.

Thoughts?

patschi commented 5 years ago

There's already a dedicated branch for something like this available: see https://github.com/dtandersen/docker_factorio_server/tree/modupdater. This is also published to Docker Hub under dtandersen/factorio:modupdater.

I've done some tests and it's working for me. It hasn't been merged into the master branch yet; it should be tested on a few setups first.
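
For anyone who wants to kick the tires locally, something like this should exercise the updater. This is just a sketch, assuming the /docker-update-mods.sh entrypoint and /factorio volume that come up later in this thread; adjust the host path to your own data directory:

docker run --rm \
  -v /path/to/factorio:/factorio \
  dtandersen/factorio:modupdater \
  /docker-update-mods.sh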

proegssilb commented 5 years ago

Cool, I'll work on getting that integrated with my setup in the next week or two.

paraschenko commented 5 years ago

Would it be possible to have a configuration file somewhere pinning mod versions to specific values?

For example, the Seablock modpack contains a lot of mods from different authors. When one mod gets updated, it often breaks the pack until the other mods catch up. While that's happening, it's not a good idea to update the tricky mods (updating the other, simpler mods might be fine, though).

I think it would be useful to be able to have a subset of mods pinned to specific versions.

Such a file could, in theory, also be used for the initial installation of mods.
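
To make that concrete, here's a hypothetical sketch building on the format of Factorio's existing mods/mod-list.json. The "version" field and the version number are made up for illustration; nothing reads them today:

{
  "mods": [
    { "name": "base", "enabled": true },
    { "name": "angelsrefining", "enabled": true, "version": "0.11.2" },
    { "name": "bobplates", "enabled": true }
  ]
}

Mods carrying a "version" would stay pinned there; mods without one would be free to update. The same file could drive the initial install.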

SuperSandro2000 commented 5 years ago

@paraschenko You should open a new issue for that, as it is quite a bit more advanced than the current PR we have.

proegssilb commented 5 years ago

I tried factoriotools/factorio:modupdater in a Kubernetes Job. Of course it required the corresponding Factorio server to be offline (more work for me to do), but it also errored out without explaining why. Here are the logs I have to work with:

Checking for update of mod auto-research...
Checking for update of mod AutoDeconstruct...
Checking for update of mod Bottleneck...
Checking for update of mod OutpostPlanner...
Checking for update of mod PlannerCore...
Checking for update of mod Warehousing...
Checking for update of mod YARM...

I can share the Pod YAML that wound up being created, but the TL;DR is that I overrode the entrypoint and mounted my volumes the same as normal.

spec:
  containers:
  - command:
    - /docker-update-mods.sh

But, without my usual init container, it wouldn't have access to the normal config files.

    volumeMounts:
    - mountPath: /factorio
      name: white-plasma-data
    - mountPath: /config-in
      name: white-plasma-config
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-gsxpw
      readOnly: true

So, obviously I need to make a few adjustments, but before I do, is it expected that this configuration should work? Do you need me to inspect anything?

The job will re-run at 4 AM every day, so I'll have a fresh answer the day after any updates are posted for the image.

Fank commented 5 years ago

Could you please link the Pod YAML so I can take a look and maybe help?

proegssilb commented 5 years ago

CronJob:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  annotations:
    field.cattle.io/creatorId: user-2s5h9
  creationTimestamp: 2019-05-06T01:58:43Z
  labels:
    cattle.io/creator: norman
  name: modupdater
  namespace: factorio
  resourceVersion: "28915797"
  selfLink: /apis/batch/v1beta1/namespaces/factorio/cronjobs/modupdater
  uid: 7a5a8969-6fa2-11e9-aadc-80c16e250730
spec:
  concurrencyPolicy: Allow
  failedJobsHistoryLimit: 10
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      template:
        metadata:
          annotations:
            cattle.io/timestamp: 2019-05-06T01:58:43Z
          creationTimestamp: null
        spec:
          containers:
          - command:
            - /docker-update-mods.sh
            image: factoriotools/factorio:modupdater
            imagePullPolicy: Always
            name: modupdater
            resources: {}
            securityContext:
              allowPrivilegeEscalation: false
              capabilities: {}
              privileged: false
              procMount: Default
              readOnlyRootFilesystem: false
              runAsNonRoot: false
              runAsUser: 1503
            stdin: true
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            tty: true
            volumeMounts:
            - mountPath: /factorio
              name: white-plasma-data
            - mountPath: /config-in
              name: white-plasma-config
          dnsPolicy: ClusterFirstWithHostNet
          restartPolicy: Never
          schedulerName: default-scheduler
          securityContext:
            fsGroup: 1503
          terminationGracePeriodSeconds: 30
          volumes:
          - name: white-plasma-data
            persistentVolumeClaim:
              claimName: white-plasma-data
          - configMap:
              defaultMode: 256
              name: white-plasma-config
              optional: false
            name: white-plasma-config
  schedule: 0 4 * * *
  successfulJobsHistoryLimit: 10
  suspend: false
status:
  lastScheduleTime: 2019-05-06T04:00:00Z

Pod:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    cattle.io/timestamp: 2019-05-06T01:58:43Z
  creationTimestamp: 2019-05-06T12:57:40Z
  generateName: modupdater-1557115200-
  labels:
    controller-uid: 6dbf9c9c-6fb3-11e9-aadc-80c16e250730
    job-name: modupdater-1557115200
  name: modupdater-1557115200-dtfx2
  namespace: factorio
  ownerReferences:
  - apiVersion: batch/v1
    blockOwnerDeletion: true
    controller: true
    kind: Job
    name: modupdater-1557115200
    uid: 6dbf9c9c-6fb3-11e9-aadc-80c16e250730
  resourceVersion: "28913417"
  selfLink: /api/v1/namespaces/factorio/pods/modupdater-1557115200-dtfx2
  uid: 8841dfbd-6ffe-11e9-aadc-80c16e250730
spec:
  containers:
  - command:
    - /docker-update-mods.sh
    image: factoriotools/factorio:modupdater
    imagePullPolicy: Always
    name: modupdater
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities: {}
      privileged: false
      procMount: Default
      readOnlyRootFilesystem: false
      runAsNonRoot: false
      runAsUser: 1503
    stdin: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    tty: true
    volumeMounts:
    - mountPath: /factorio
      name: white-plasma-data
    - mountPath: /config-in
      name: white-plasma-config
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-gsxpw
      readOnly: true
  dnsPolicy: ClusterFirstWithHostNet
  nodeName: block2
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1503
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: white-plasma-data
    persistentVolumeClaim:
      claimName: white-plasma-data
  - configMap:
      defaultMode: 256
      name: white-plasma-config
      optional: false
    name: white-plasma-config
  - name: default-token-gsxpw
    secret:
      defaultMode: 420
      secretName: default-token-gsxpw
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2019-05-06T12:57:41Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2019-05-06T12:59:41Z
    message: 'containers with unready status: [modupdater]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2019-05-06T12:59:41Z
    message: 'containers with unready status: [modupdater]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: 2019-05-06T12:57:41Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://f723f6a57bc0fcbeec3e3a7c16fb8a5eb2b79f3e8f6ed173a67aef74860c583c
    image: factoriotools/factorio:modupdater
    imageID: docker-pullable://factoriotools/factorio@sha256:530ce26885b79928b6be8ebcb87ff68f23fd89b3106688fba1d95f6da18e2aaf
    lastState: {}
    name: modupdater
    ready: false
    restartCount: 0
    state:
      terminated:
        containerID: docker://f723f6a57bc0fcbeec3e3a7c16fb8a5eb2b79f3e8f6ed173a67aef74860c583c
        exitCode: 1
        finishedAt: 2019-05-06T12:59:39Z
        reason: Error
        startedAt: 2019-05-06T12:59:38Z
  hostIP: 192.168.5.5
  phase: Failed
  podIP: 10.233.65.81
  qosClass: BestEffort
  startTime: 2019-05-06T12:57:41Z

proegssilb commented 5 years ago

I'm working on getting the init container to work, hoping that this is some kind of issue with access to the relevant config files.

As noted in #254, if the mod updater is broken out into a separate container, I'm OK with that.

proegssilb commented 5 years ago

I added the init container to match the config of how I'm running the Factorio server container, and it's still failing, with no real change in the logs.
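
For context, the init container is a quirk of my setup rather than anything from this image; it just copies the mounted ConfigMap into the data volume before the main container starts. Roughly like this (the busybox image and the /factorio/config destination are illustrative):

initContainers:
- name: copy-config
  image: busybox
  command: ["sh", "-c", "cp /config-in/* /factorio/config/"]
  volumeMounts:
  - mountPath: /factorio
    name: white-plasma-data
  - mountPath: /config-in
    name: white-plasma-config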

SuperSandro2000 commented 5 years ago

You could add set -x to the corresponding script and then post the more detailed logs to get a better idea why it is failing.
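
Something like this near the top of the script, assuming it's a plain POSIX shell script:

#!/bin/sh
# Trace every command as it runs so the failing step shows up in the logs
set -x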

proegssilb commented 5 years ago

You could add set -x to the corresponding script and then post the more detailed logs to get a better idea why it is failing.

I initially tried setting debug mode via sh arguments instead (roughly the sketch below), and that didn't immediately work (no extra output was generated). I'm in the middle of redoing the Kubernetes cluster, so it's going to be some time before I can poke at this again.
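
For reference, the sh-arguments attempt was roughly this: the same entrypoint override as before, just run under sh -x instead of invoking the script directly:

spec:
  containers:
  - command:
    - /bin/sh
    - -x
    - /docker-update-mods.sh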

SuperSandro2000 commented 5 years ago

This was implemented in 28598a42a33530a1d687c1dbbcfcf742f605dc02.