medic / cht-upgrade-service


Cannot install a new build of the same branch #44

Open latin-panda opened 5 days ago

latin-panda commented 5 days ago

We have an EKS instance with a branch deployed. This instance can upgrade between 4.X.X releases, and between different branches, without a problem.

When upgrading to the same branch but a different build, it keeps installing the old build and shows no errors.

https://care-teams-demo.dev.medicmobile.org/admin/#/upgrade is on 4.11.0-9327-update-nav.10714293548 (the old build).
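One quick way to confirm which build the instance is actually running (a sketch, assuming the standard `/api/deploy-info` endpoint and an authenticated user; the credentials below are placeholders):

```sh
# Ask the running API which build it was deployed from.
# NOTE: /api/deploy-info is assumed here and typically requires an authenticated (online) user.
curl -s -u "$CHT_USER:$CHT_PASS" \
  https://care-teams-demo.dev.medicmobile.org/api/deploy-info
# Before the upgrade this is expected to report 4.11.0-9327-update-nav.10714293548 (the old build).
```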

The CI published the new branch:

```
Publishing doc...
medic:medic:9327-update-nav published!
regctl image copy 720541322708.dkr.ecr.eu-west-2.amazonaws.com/medic/cht-api:4.11.0-9327-update-nav.10815901430 public.ecr.aws/medic/cht-api:4.11.0-9327-update-nav
public.ecr.aws/medic/cht-api:4.11.0-9327-update-nav
```
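Since `4.11.0-9327-update-nav` is a mutable branch tag, it may be worth confirming that the public tag actually moved to the new build. A hedged sketch using the same regctl tool as the CI step (the private ECR check needs AWS credentials):

```sh
# Compare the digest behind the public branch tag with the digest of the new build-specific tag.
# If they differ, the regctl copy did not actually update the public branch tag.
regctl image digest public.ecr.aws/medic/cht-api:4.11.0-9327-update-nav
regctl image digest 720541322708.dkr.ecr.eu-west-2.amazonaws.com/medic/cht-api:4.11.0-9327-update-nav.10815901430
```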

This build is on the staging server (see screenshot: Screenshot 2024-09-12 at 4:45:24 PM).
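To see exactly what the admin upgrade page is being offered, the published build document can also be fetched straight from the staging builds database (a sketch, using the `builds_url` from the values file and the doc ID printed by CI):

```sh
# Fetch the branch's build doc; its build metadata should reference the new CI run
# (...10815901430), not the older build the instance keeps installing.
curl -s "https://staging.dev.medicmobile.org/_couch/builds_4/medic:medic:9327-update-nav"
```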

mrjones-plip commented 5 days ago

@latin-panda - can you list the steps to reproduce this problem? Specifically, are you using the CHT deploy script, and which values yaml file are you using? (Copy everything in here except the password, secret, etc. ;) )

thanks!

latin-panda commented 4 days ago

Sure.

This is the values.yml:

```yml
project_name: 
namespace: 
chtversion: 4.10.0

upstream_servers:
  docker_registry: "public.ecr.aws/medic"
  builds_url: "https://staging.dev.medicmobile.org/_couch/builds_4"

# CouchDB Settings
couchdb:
  password: 
  secret: 
  user: 
  uuid: 
  clusteredCouch_enabled: false
  couchdb_node_storage_size: 100Mi

clusteredCouch:
  noOfCouchDBNodes: 3

toleration:
  key: "dev-couchdb-only"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"

ingress:
  annotations:
    groupname: "dev-cht-alb"
    tags: "Environment=dev,Team=QA"
    certificate: 
  host: ".dev.medicmobile.org"
  hosted_zone_id: 
  load_balancer: "dualstack.k8s-devchtalb-.eu-west-2.elb.amazonaws.com"

environment: "remote"  # "local", "remote"
cluster_type: "eks"  # "eks" or "k3s-k3d"
cert_source: "eks-medic"  # "eks-medic" or "specify-file-path" or "my-ip-co"
certificate_crt_file_path: "/path/to/certificate.crt"  # Only required if cert_source is "specify-file-path"
certificate_key_file_path: "/path/to/certificate.key"  # Only required if cert_source is "specify-file-path"

nodes:
  # If using clustered couchdb, add the nodes here: node-1: name-of-first-node, node-2: name-of-second-node, etc.
  # Add equal number of nodes as specified in clusteredCouch.noOfCouchDBNodes
  node-1: "-cht-couchdb-1"  # This is the name of the first node where couchdb will be deployed
  node-2: "-cht-couchdb-2"  # This is the name of the second node where couchdb will be deployed
  node-3: "-cht-couchdb-3"  # This is the name of the third node where couchdb will be deployed

# For single couchdb node, use the following:
# Leave it commented out if you don't know what it means.
# Leave it commented out if you want to let kubernetes deploy this on any available node. (Recommended)
# single_node_deploy: "gamma-cht-node"  # This is the name of the node where all components will be deployed - for non-clustered configuration.

# Applicable only if using k3s
k3s_use_vSphere_storage_class: "false"  # "true" or "false"

# vSphere specific configurations. If you set "true" for k3s_use_vSphere_storage_class, fill in the details below.
vSphere:
  datastoreName: "DatastoreName"  # Replace with your datastore name
  diskPath: "path/to/disk"  # Replace with your disk path

# -----------------------------------------
# Pre-existing data section
# -----------------------------------------
couchdb_data:
  preExistingDataAvailable: "false"  # If this is false, you don't have to fill in details in local_storage or remote.

# If preExistingDataAvailable is true, fill in the details below.
# For local_storage, fill in the details if you are using k3s-k3d cluster type.
local_storage:  # If using k3s-k3d cluster type and you already have existing data.
  preExistingDiskPath-1: "/var/lib/couchdb1"  # If node1 has pre-existing data.
  preExistingDiskPath-2: "/var/lib/couchdb2"  # If node2 has pre-existing data.
  preExistingDiskPath-3: "/var/lib/couchdb3"  # If node3 has pre-existing data.

# For ebs storage when using eks cluster type, fill in the details below.
ebs:
  preExistingEBSVolumeID: "vol-0123456789abcdefg"  # If you have already created the EBS volume, put the ID here.
  preExistingEBSVolumeSize: "100Gi"  # The size of the EBS volume.
```
  1. Deploy using `./cht-deploy -f values.yaml`
  2. Go to admin app > upgrade page
  3. Upgrade to a branch - now CHT is on that branch version
  4. Make changes in the branch and get it published
  5. Go to admin app > upgrade page
  6. Upgrade to the same branch again
  7. See that the upgrade page still shows the same build from step 3

mrjones-plip commented 4 days ago

Ooohh, gotcha! So you're upgrading from within the admin UI.

As a workaround: instead of step 5, does it work to run cht-deploy again, but with chtversion: 4.11.0-9327-update-nav.10714293548 (or whatever branch you want)?
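If that route is worth trying, a minimal sketch of what the re-deploy could look like (assuming the values.yaml shown above, GNU sed, and the new build tag from the CI log):

```sh
# Pin chtversion to the specific new build instead of the mutable branch tag,
# then re-run the deploy script against the same values file.
sed -i 's/^chtversion: .*/chtversion: 4.11.0-9327-update-nav.10815901430/' values.yaml
./cht-deploy -f values.yaml
```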

latin-panda commented 1 day ago

> As a workaround: instead of step 5, does it work to run cht-deploy again, but with chtversion: 4.11.0-9327-update-nav.10714293548 (or whatever branch you want)?

Will that delete the data in CouchDB?

I think the most straightforward workaround for us is to create a new branch whenever we need to deploy a new version of the work, while keeping the data in CouchDB. <-- I'll try it this week.
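A rough sketch of that workaround (branch names here are hypothetical; it assumes CI publishes a build for any pushed branch, as it did for 9327-update-nav):

```sh
# Publish the same work under a fresh branch name so it shows up as a new version
# on the upgrade page; this only touches git/CI, not the CouchDB data.
git checkout 9327-update-nav
git checkout -b 9327-update-nav-take2
git push origin 9327-update-nav-take2
```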