Closed: Murali-Cloudbridge closed this issue 1 year ago.
I installed an ingress controller as well, since it was a prerequisite.
```
nginx-ingress   nginx-ingress   NodePort   172.20.206.25
```
I am also trying to install an older version, Tackle v1.2, on a different EKS cluster to compare.
Latest update: I was able to fix it myself by checking the logs and correcting the errors. I will prepare a doc for installing it on EKS.
I can see the login page, but the credentials provided are not working.
admin / Passw0rd!
Once I enter the credentials, it displays the page below.
I was able to see the user admin being created in the logs.
Could you please confirm whether these credentials are correct, or suggest anything else I should check for this issue?
As per @Murali-Cloudbridge in https://github.com/konveyor/tackle2-operator/issues/169#issuecomment-1485823629
I have deployed tackle in the EKS cluster in an AWS environment.
Yes, I was not sure how to access it; based on the documentation, I tried to use the proxy:
```
kubectl port-forward svc/ 9090 -n my-tackle-operator
kubectl port-forward svc/tackle-hub 8080 -n my-tackle-operator
kubectl port-forward svc/tackle-ui 8080 -n my-tackle-operator
```
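As an aside, running the last two forwards above at the same time would collide on local port 8080; an explicit `local:remote` mapping avoids that (same service and namespace as above):

```
kubectl port-forward svc/tackle-ui 7080:8080 -n my-tackle-operator
```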
@Murali-Cloudbridge is attempting to log into the UI after forwarding `tackle-ui`; he begins to log in and is redirected to tackle-keycloak-sso.my-tackle-operator.svc, which isn't reachable.
I think the fix for this is to get the Ingress resource working on the EKS cluster @Murali-Cloudbridge is using.
@fbladilo @jmontleon @djzager @ibolton336, is there any advice you can share to help @Murali-Cloudbridge?
From: https://github.com/konveyor/tackle2-operator/issues/169#issuecomment-1485854794
Thanks a lot, John. I will try to modify the CR and check.
It would be great if I could access it via an Ingress resource.
One more thing to report: once Tackle was installed, the keycloak pod was stuck in Pending because its PV was not bound. I had to install the EBS-related driver add-on in the EKS cluster. I also hit some permission issues, shown below, and the pod went into CrashLoopBackOff.
```
mkdir: cannot create directory '/var/lib/pgsql/data/userdata': Permission denied
```
Then I updated the deployment YAML and added the following securityContext:

```yaml
securityContext:
  fsGroup: 2000
  runAsNonRoot: true
  runAsUser: 1000
```
That solved the issue for the keycloak and postgres pods.
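For reference, the same securityContext change can be applied without hand-editing the manifest; a sketch using `kubectl patch` (the deployment name is inferred from the pod name above and may differ, and the operator may reconcile manual changes away):

```
kubectl patch deployment tackle-keycloak-postgresql -n my-tackle-operator \
  --type merge \
  -p '{"spec":{"template":{"spec":{"securityContext":{"fsGroup":2000,"runAsNonRoot":true,"runAsUser":1000}}}}}'
```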
@Murali-Cloudbridge, I wanted to thank you for sharing the above. I am working through an EKS setup in off hours to learn more about this so we can get docs for EKS usage, and I have run into the same issues as you.
We need to configure the EBS-CSI add-on for EKS: as of Kubernetes 1.23 the default has changed to the EBS CSI driver instead of the in-tree provisioner, so the EBS-CSI add-on is required.
Created some notes here for the EBS-CSI AddOn install for others: https://gist.github.com/jwmatthews/d701da13eda5d57d4e8d2adf594fc4f2
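For reference, the add-on itself can be installed with the AWS CLI; a minimal sketch (the cluster name is a placeholder, and the CSI driver also needs an IAM role for its service account, which the notes linked above cover):

```
aws eks create-addon \
  --cluster-name my-cluster \
  --addon-name aws-ebs-csi-driver
```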
tackle-pathfinder-postgresql in CrashLoopBackOff
```
tackle-pathfinder-postgresql-695b9798d6-pbq56   0/1   CrashLoopBackOff   9 (2m21s ago)   11h

$ kubectl logs tackle-pathfinder-postgresql-695b9798d6-pbq56 -n konveyor-tackle
mkdir: cannot create directory '/var/lib/pgsql/data': Permission denied
```
I don't have much else to share just yet, but wanted to let you know I will continue to work through this so I can help to dig into the auth case and determine what we need to get Ingress working.
We have automation now for deploying EKS clusters for testing with ALB ingress configured. https://github.com/konveyor/hack_env_helpers/tree/main/aws/eks
With PR #197, non-auth will be working if the Tackle CR is created as:

```yaml
kind: Tackle
apiVersion: tackle.konveyor.io/v1alpha1
metadata:
  name: tackle
  namespace: konveyor-tackle
spec:
  ui_ingress_class_name: alb
  ui_ingress_path_type: Prefix
```

Note that `rwx_supported` and `feature_auth_required` are now both set to `false` by default.

The last piece I need to test is the workflow when `feature_auth_required: true` is set.
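Applying the CR above and checking that the operator created an ALB-backed Ingress might look like this (the filename is a placeholder):

```
kubectl apply -f tackle-cr.yaml
kubectl get ingress -n konveyor-tackle
```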
The fix to https://github.com/konveyor/tackle2-operator/issues/167#issuecomment-1482627378 will be in https://github.com/konveyor/tackle2-ui/pull/822
The issue is that we needed to add a few forwarding header parameters to our reverse proxy; without them it wasn't possible to use our auth workflow with `kubectl port-forward svc/tackle-ui 7080:8080 -n konveyor-tackle`.
This issue didn't present itself when running on OpenShift clusters, because when we accessed the UI via an OpenShift Route the header parameters were added for us.
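The typical shape of such a fix is to set the standard `X-Forwarded-*` headers in the reverse proxy; the following nginx-style sketch is an illustration only (the location path and upstream name are assumptions, not the actual tackle-ui configuration):

```
location /auth/ {
    # Tell the upstream how the client actually reached us, so that
    # redirects are built against the forwarded host and scheme
    # rather than the pod-internal address.
    proxy_set_header X-Forwarded-Host  $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_pass http://tackle-keycloak-sso:8080;
}
```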
I have installed the latest version of Tackle, but some pods are in a Pending state.
```
NAMESPACE            NAME                                         READY   STATUS      RESTARTS   AGE
default              nginx-deployment-7fb96c846b-cvf4t            1/1     Running     0          5h42m
default              nginx-deployment-7fb96c846b-nhp6d            1/1     Running     0          5h42m
default              nginx-deployment-7fb96c846b-wbx9h            1/1     Running     0          5h42m
konveyor-tackle      konveyor-tackle-wr9w2                        1/1     Running     0          5h28m
kube-system          aws-node-9nqp7                               1/1     Running     0          6h51m
kube-system          aws-node-tsmq4                               1/1     Running     0          6h51m
kube-system          cluster-autoscaler-7d58d9969d-75mtf          1/1     Running     0          4h26m
kube-system          coredns-7cf95b74cc-hdk2z                     1/1     Running     0          6h57m
kube-system          coredns-7cf95b74cc-x6twf                     1/1     Running     0          6h57m
kube-system          kube-proxy-t5h5m                             1/1     Running     0          6h51m
kube-system          kube-proxy-xx8vj                             1/1     Running     0          6h51m
my-tackle-operator   tackle-keycloak-postgresql-f455ffdf8-b5qn9   0/1     Pending     0          5h19m
my-tackle-operator   tackle-operator-7fc8bd6bb7-87ks7             1/1     Running     0          5h18m
olm                  a42881ea1507f1361d60825127fbe483b60b751c575a69ef734fcd24e6ctlfq   0/1   Completed   0   6h20m
olm                  catalog-operator-655fb46cd4-5jnkb            1/1     Running     0          6h22m
olm                  olm-operator-67fdb4b99d-txv6c                1/1     Running     0          6h22m
olm                  packageserver-5f55f478ff-s2fv4               1/1     Running     0          6h21m
```
tackle-keycloak-postgresql: it has no events or errors, but it is stuck in a Pending state.
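A Pending pod with no events usually means the scheduler is still waiting on a resource; given the earlier unbound-PV problem, checking the pod's events and PVC binding is a reasonable first step (pod name taken from the listing above):

```
kubectl describe pod tackle-keycloak-postgresql-f455ffdf8-b5qn9 -n my-tackle-operator
kubectl get pvc -n my-tackle-operator
```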
It says Succeeded. I have installed this on the latest version of EKS, 1.25.

```
$ kubectl get csv -n my-tackle-operator
NAME                     DISPLAY           VERSION   REPLACES                 PHASE
tackle-operator.v2.1.3   Tackle Operator   2.1.3     tackle-operator.v2.1.2   Succeeded
```
It has not created any Ingress; how do I access the application? Please suggest.
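To confirm whether any Ingress exists and to see why the operator might have skipped creating one, something like the following should help (the deployment name is inferred from the pod listing above):

```
kubectl get ingress --all-namespaces
kubectl logs deployment/tackle-operator -n my-tackle-operator
```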