kartoza / WBR-SEMP


mainstream g3W site and publish outputs #74

Open gubuntu opened 3 years ago

gubuntu commented 3 years ago

Admire set up this site a while ago http://maps.kartoza.com:8082/en/map/waterberg/qdjango/1/

NyakudyaA commented 3 years ago

> Admire set up this site a while ago http://maps.kartoza.com:8082/en/map/waterberg/qdjango/1/

@lucernae I fixed this up so it's now running again.

NyakudyaA commented 3 years ago

@lucernae Can you tackle this task this week? The deadline for this is coming up this Friday.

NyakudyaA commented 3 years ago

@lucernae We need the following

lucernae commented 3 years ago

@NyakudyaA are we supposed to have a specific domain name for this, since you mention SSL?

NyakudyaA commented 3 years ago

> @NyakudyaA are we supposed to have a specific domain name for this, since you mention SSL?

It can be under kartoza, i.e. https://maps.wbr.kartoza.com for now, unless @gubuntu already has a domain for this.

lucernae commented 3 years ago

Since it seems everyone is blocked by this, I deployed the app with legacy recipes here in Rancher 2's shared cluster:

https://maps.wbr.kartoza.com

Progress

Comments

Regarding node migration

The old service was deployed on a shared node at maps.kartoza.com. We need to decide between these options:

Regarding Helm charts

Currently I deployed it ad hoc, by manually modifying the docker-compose recipes on the maps.kartoza.com server. I will store these manifests for safekeeping in kartoza/devops. However, a better deployment approach would be to create a Helm chart for g3w-suite, then contribute the necessary changes to the Docker images back upstream to g3w-suite. For subsequent g3w-suite deployments, we could then be confident of deploying easily with the Helm chart.

Since it seems we need the service online ASAP, I went with the ad-hoc deployment. Then, if permitted, I will create the Helm charts. When the charts are ready, we swap the deployment (fairly trivial to do if the image version and data don't change).
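
As a rough illustration of what such a chart could expose, a values sketch (the chart structure, keys, and image name below are hypothetical, not an existing g3w-suite chart):

```yaml
# Hypothetical values.yaml sketch for a g3w-suite Helm chart.
# Image name, tag, and keys are assumptions for illustration only.
image:
  repository: g3wsuite/g3w-suite   # assumed image name
  tag: "3.5"                       # pin a release tag, not a dev tag
ingress:
  enabled: true
  host: maps.wbr.kartoza.com
  tls:
    enabled: true                  # LetsEncrypt via cert-manager
persistence:
  data:
    size: 20Gi                     # PVC replacing host bind mounts
postgis:
  enabled: true
```

With the image tag and persistent data unchanged, swapping from the ad-hoc manifests to a chart release would mostly be a matter of pointing the chart at the same volumes and Ingress host.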

Regarding Upstream Changes

The current g3w-suite recipes rely on development images and host bind mounts. We need to suggest a better, more generic approach to them to avoid future breaking changes in the recipes and images, as we already did for the SPCGeoNode deployment.
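
For example, the kind of change we would propose upstream might look like this (service names, tag, and volume paths are illustrative, not the actual g3w-suite recipe):

```yaml
# Sketch: pin a released image and use a named volume
# instead of a development tag plus a host bind mount.
# All names here are assumptions for illustration.
services:
  g3w-suite:
    image: g3wsuite/g3w-suite:3.5   # pinned release tag, not :dev
    volumes:
      - g3w_data:/shared-volume     # named volume, portable across hosts
volumes:
  g3w_data:
```

Pinned tags and named volumes make the recipe reproducible on any host, which is what a Helm chart would need anyway.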

Regarding data migration

I think we need to set up a session for this. It's easier to do using Rancher's ability to share the k8s environment, but I need to demonstrate it first. For now, we can list all possible data to migrate. Here's what I found from just reading the recipes:

If you think that there are any other files unmentioned, then fill me in.

Regarding Data Sync

If this is for us, I think we can just use rsync in general, or FileZilla, or VSCode. If this is for the client, I'm not sure at the moment. I haven't tried btsync in k8s yet; since btsync is peer-to-peer, there might be something we need to change in the network environment. Minio is also possible if you want to consider it.

gubuntu commented 3 years ago

@NyakudyaA how are we doing here? As much as possible of our layers must be available via this portal this coming week. Also, the portal needs to be public i.e. anyone can see and interact with the map without having to register or log in.

Use of service endpoints and downloading should be free and easy, but should ideally require registration. Registration should if possible include a question about why the person wants the data

NyakudyaA commented 3 years ago

> @NyakudyaA how are we doing here? As much as possible of our layers must be available via this portal this coming week. Also, the portal needs to be public i.e. anyone can see and interact with the map without having to register or log in.
>
> Use of service endpoints and downloading should be free and easy, but should ideally require registration. Registration should if possible include a question about why the person wants the data

An update on this: @zacharlie has been testing the workflow and has encountered some bugs. Some of them have been rectified by Rizky. I will be working on this tomorrow to try to debug the issues raised by Charlie and the login issue you mentioned above.

lucernae commented 3 years ago

Status update in this issue so we remember the state:

  1. Created maps.wbr.kartoza.com and staging.maps.wbr.kartoza.com, piggybacking on the kartoza-shared cluster (no new node provisioned). Deployed in Rancher 2 with LetsEncrypt SSL.
  2. No Helm charts created yet (not enough time). Manifests stored in kartoza/devops.
  3. No data sync strategy has been decided yet.

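For reference, the LetsEncrypt part of item 1 typically comes down to a cert-manager annotation on the Ingress. A sketch (the issuer, service, and secret names below are assumptions, not the stored manifests):

```yaml
# Sketch of an Ingress with cert-manager-issued LetsEncrypt TLS.
# Issuer, service, and secret names are illustrative assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: g3w-suite
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
    - host: maps.wbr.kartoza.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: g3w-suite
                port:
                  number: 80
  tls:
    - hosts:
        - maps.wbr.kartoza.com
      secretName: g3w-suite-tls
```
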
zacharlie commented 3 years ago

Some site performance issues are outlined in #91, #92, & #93

zacharlie commented 3 years ago

Once performance issues have been addressed we will need to evaluate whether the current infrastructure is suitable for the platform needs. If we are intending to use the platform for large numbers of users or for marketing purposes we will likely need to increase resources to ensure that the service is highly performant.

gubuntu commented 1 year ago

@NyakudyaA @tharanathkartoza was all the above taken into account when doing the recent migration to R2/k8s?

Please close if all done