Closed lukelin2048 closed 3 years ago
I installed it on microk8s. What exact issue are you facing?
@noorul Hi, did you follow all the steps? I used a clean Debian with microk8s, but my error log shows something like "webhook failed". I installed it on my own server, a small x64 PC, using the microk8s addons ingress and metallb.
@lukelin2048 Which step is giving you this error? Also I would suggest you join https://kubernetes.slack.com/archives/C9MBGQJRH
Okay, I retried it and I think it was my fault. My PC's private IP is 10.10.10.20 and I set MetalLB to a private range like 10.10.10.200-10.10.10.240, but my router will not forward that range to 10.10.10.20...
And if I set MetalLB to 10.10.10.20-10.10.10.20, jx admin operator
installs without error, but the jx deployment stays pending forever (READY: 0/1).
So is it possible to install jx3 on a single node with a single private IP (i.e. without LB support)?
I don't think metallb will cause any issue with the installation process. If the pods are in a Pending state, you can check the reason using kubectl describe pod.
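For example, something like this (a sketch; I'm assuming the pods live in the `jx` namespace, so adjust accordingly):

```shell
# List pods and spot the one stuck in Pending
kubectl get pods -n jx

# The Events section at the bottom of the describe output
# usually explains why the scheduler cannot start the pod
kubectl describe pod <pending-pod-name> -n jx

# Events can also be listed directly, newest last
kubectl get events -n jx --sort-by=.metadata.creationTimestamp
```

Typical causes shown in the events are insufficient CPU/memory, an unbound PersistentVolumeClaim, or an image pull failure.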
Okay ... it looks like this:
If I enable the microk8s ingress addon, then jx admin operator
installs its own nginx reverse proxy, so ports 80/443 collide between the two services (the microk8s "ingress" addon is also an nginx reverse proxy listening on 80/443).
If I disable the microk8s ingress addon, then jx admin operator
loops, because I installed gitea in the cluster with helm (I wrote a Service mapping its ports to 30080 and 30022) and pointed jx3 at that gitea repo. I think it might work if I used GitHub, but I want to build everything on a single k8s cluster...
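The 80/443 collision can be confirmed with checks along these lines (a sketch; service names will differ on your cluster):

```shell
# Show every Service across namespaces and look for two
# controllers both claiming ports 80/443
kubectl get svc -A -o wide | grep -E '80|443'

# Or check the host directly for processes already
# listening on ports 80/443
sudo ss -tlnp | grep -E ':(80|443)\s'
```

Only one ingress controller should own the host's 80/443; the second one to start will either fail to bind or sit behind a LoadBalancer IP that never gets traffic.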
Partial error log:
...
jx gitops helmfile move --output-dir config-root --dir /tmp/generate --dir-includes-release-name
jx secret convert --source-dir config-root -r jx-vault
jx secret replicate --selector secret.jenkins-x.io/replica-source=true
VAULT_ADDR=https://vault.jx-vault:8200 VAULT_NAMESPACE=jx-vault jx secret populate --source filesystem --secret-namespace jx-vault
Error: failed to populate secrets: failed to save properties key: tekton-container-registry-auth properties: .dockerconfigjson on ExternalSecret tekton-container-registry-auth: failed to replicate Secret for local backend: failed to create Secret tekton-container-registry-auth in namespace jx-production: namespaces "jx-production" not found
Usage:
populate [flags]
Examples:
jx-secret populate
Flags:
-b, --batch-mode Runs in batch mode without prompting for user input
--boot-secret-namespace string the namespace to that contains the boot secret used to populate git secrets from
-d, --dir string the directory to look for the .jx/secret/mapping/secret-mappings.yaml file (default ".")
-f, --filter string the filter to filter on ExternalSecret names
--helm-secrets-dir string the directory where the helm secrets live with a folder per namespace and a file with a '.yaml' extension for each secret name. Defaults to $JX_HELM_SECRET_FOLDER
-h, --help help for populate
--log-level string Sets the logging level. If not specified defaults to $JX_LOG_LEVEL
--no-wait disables waiting for the secret store (e.g. vault) to be available
-n, --ns string the namespace to filter the ExternalSecret resources
--secret-namespace string the namespace in which secret infrastructure resides such as Hashicorp Vault (default "jx-vault")
-s, --source string the source location for the ExternalSecrets, valid values include filesystem or kubernetes (default "kubernetes")
--verbose Enables verbose output. The environment variable JX_LOG_LEVEL has precedence over this flag and allows setting the logging level to any value of: panic, fatal, error, warn, info, debug, trace
-w, --wait duration the maximum time period to wait for the vault pod to be ready if using the vault backendType (default 2h0m0s)
error: failed to populate secrets: failed to save properties key: tekton-container-registry-auth properties: .dockerconfigjson on ExternalSecret tekton-container-registry-auth: failed to replicate Secret for local backend: failed to create Secret tekton-container-registry-auth in namespace jx-production: namespaces "jx-production" not found
make[1]: [versionStream/src/Makefile.mk:122: fetch] Error 1 (ignored)
jx gitops namespace --dir-mode --dir config-root/namespaces
jx gitops helmfile report
namespace jx
...
jx gitops annotate --dir config-root/namespaces --kind Deployment --selector app=pusher-wave --invert-selector wave.pusher.com/update-on-config-change=true
git add --all
git commit -m "chore: regenerated" -m "/pipeline cancel"
make regen-phase-3
[master 6fbd8df] chore: regenerated
2 files changed, 104 insertions(+)
create mode 100644 config-root/namespaces/jx/jx-preview/jx-preview-0.0.181-release.yaml
create mode 100644 config-root/namespaces/jx/lighthouse/lighthouse-1.0.36-release.yaml
make[1]: Leaving directory '/workspace/source'
make[1]: Entering directory '/workspace/source'
Already up to date.
remote: invalid credentials from 10.10.10.10:42584
fatal: Authentication failed for 'http://10.10.10.10:30080/gitea_admin/jx3-kubernetes.git/'
make[1]: *** [versionStream/src/Makefile.mk:341: push] Error 128
error: failed to regenerate phase 3: failed to run 'make regen-phase-3' command in directory '.', output: ''
make[1]: Leaving directory '/workspace/source'
make: *** [versionStream/src/Makefile.mk:240: regen-check] Error 1
tailing boot Job pod jx-boot-954cdf2d-1b44-47f0-9f90-ea15a29f0f08-ds6vh
jx gitops git setup
generated Git credentials file: /workspace/xdg_config/git/credentials with username: gitea_admin email:
jx gitops apply
found last commit message: fix domain
make regen-phase-1
make[1]: Entering directory '/workspace/source'
...
jx gitops annotate --dir config-root/namespaces --kind Deployment --selector app=pusher-wave --invert-selector wave.pusher.com/update-on-config-change=true
git add --all
git commit -m "chore: regenerated" -m "/pipeline cancel"
On branch master
Your branch is ahead of 'origin/master' by 1 commit.
(use "git push" to publish your local commits)
nothing to commit, working tree clean
make[1]: [versionStream/src/Makefile.mk:316: commit] Error 1 (ignored)
make[1]: Leaving directory '/workspace/source'
make regen-phase-3
make[1]: Entering directory '/workspace/source'
Already up to date.
remote: invalid credentials from 10.10.10.10:34544
fatal: Authentication failed for 'http://10.10.10.10:30080/gitea_admin/jx3-kubernetes.git/'
make[1]: *** [versionStream/src/Makefile.mk:341: push] Error 128
error: failed to regenerate phase 3: failed to run 'make regen-phase-3' command in directory '.', output: ''
make[1]: Leaving directory '/workspace/source'
make: *** [versionStream/src/Makefile.mk:240: regen-check] Error 1
boot Job pod jx-boot-954cdf2d-1b44-47f0-9f90-ea15a29f0f08-nmhzd has Failed
pod jx-boot-954cdf2d-1b44-47f0-9f90-ea15a29f0f08-g4vl9 has status Ready
tailing boot Job pod jx-boot-954cdf2d-1b44-47f0-9f90-ea15a29f0f08-g4vl9
jx gitops git setup
generated Git credentials file: /workspace/xdg_config/git/credentials with username: gitea_admin email:
jx gitops apply
found last commit message: fix domain
make regen-phase-1
make[1]: Entering directory '/workspace/source'
...(loop)
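A note on the first failure in the log above (`namespaces "jx-production" not found`): it suggests the production environment namespace was never created. A possible workaround (an assumption on my part; normally jx creates environment namespaces itself) is to create it by hand and watch the boot job retry:

```shell
# Create the missing environment namespace referenced by the
# tekton-container-registry-auth ExternalSecret
kubectl create namespace jx-production

# Tail the boot job again to see whether the populate step passes
jx admin log
```

The later `Authentication failed for 'http://10.10.10.10:30080/...'` errors are a separate problem: the generated git credentials for gitea_admin are being rejected by gitea, so the push in regen-phase-3 keeps failing and the job loops.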
And I used this Service to expose the gitea ports (my PC's private IP in this demo is 10.10.10.10):
apiVersion: v1
kind: Service
metadata:
  name: gitea-out
spec:
  ports:
  - port: 30080
    targetPort: 3000
    nodePort: 30080
    name: gitea-http
  - port: 30022
    targetPort: 22
    nodePort: 30022
    name: gitea-ssh
  selector:
    app: gitea
  type: NodePort
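Assuming that manifest is saved as gitea-out.yaml (the filename is illustrative), it can be applied and sanity-checked like this:

```shell
# Create the NodePort Service (add -n <namespace> if gitea
# is not installed in the namespace of your current context)
kubectl apply -f gitea-out.yaml

# Verify both NodePorts are exposed and an endpoint was matched
kubectl get svc gitea-out -o wide
kubectl get endpoints gitea-out

# Quick reachability test against the HTTP NodePort
curl -I http://10.10.10.10:30080
```

Note the `app: gitea` selector must match the labels the gitea helm chart actually puts on its pods; if `kubectl get endpoints gitea-out` shows no addresses, the selector is wrong.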
I think jenkins-x installs its own ingress, so I don't think the microk8s ingress addon is required. Here is the output of microk8s status from my instance:
noorul@jxpoc:~$ microk8s status
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
dashboard # The Kubernetes dashboard
dns # CoreDNS
fluentd # Elasticsearch-Fluentd-Kibana logging and monitoring
ha-cluster # Configure high availability on the current node
istio # Core Istio service mesh services
metallb # Loadbalancer for your Kubernetes cluster
metrics-server # K8s Metrics Server for API access to service metrics
registry # Private image registry exposed on localhost:32000
storage # Storage class; allocates storage from host directory
disabled:
ambassador # Ambassador API Gateway and Ingress
cilium # SDN, fast with full network policy
gpu # Automatic enablement of Nvidia CUDA
helm # Helm 2 - the package manager for Kubernetes
helm3 # Helm 3 - Kubernetes package manager
host-access # Allow Pods connecting to Host services smoothly
ingress # Ingress controller for external access
jaeger # Kubernetes Jaeger operator with its simple config
keda # Kubernetes-based Event Driven Autoscaling
knative # The Knative framework on Kubernetes.
kubeflow # Kubeflow for easy ML deployments
linkerd # Linkerd is a service mesh for Kubernetes and other frameworks
multus # Multus CNI enables attaching multiple network interfaces to pods
portainer # Portainer UI for your Kubernetes cluster
prometheus # Prometheus operator for monitoring and logging
rbac # Role-Based Access Control for authorisation
traefik # traefik Ingress controller for external access
noorul@jxpoc:~$
Without any logs I'm guessing, but it sounds like the install itself completed OK and you just didn't get ingress working right? i.e. the jx admin log
returned Successful and you're just missing webhooks?
these docs may help:
Hi, I tried many times to install jx3 on microk8s following the doc https://jenkins-x.io/v3/admin/platforms/on-premise/, but it failed every time... Is there a doc anywhere for installing jx3 on microk8s? I think microk8s is simple and a VM-free option for k8s...