eclipse-che / che

Kubernetes based Cloud Development Environments for Enterprise Teams
http://eclipse.org/che
Eclipse Public License 2.0

What is the best practice for installing Che locally on an M1 Mac? #21908

Closed yuki-ohnaka-cv closed 1 year ago

yuki-ohnaka-cv commented 1 year ago

Summary

I've tried installing Che on an M1 Mac, but it doesn't work. If you have a track record of it working well on an M1 Mac, can you let me know?

Relevant information

AObuchow commented 1 year ago

Eclipse Che is not currently supported on the M1 ARM architecture, as the container images used in Che are not compiled for this architecture. The main reason is that the primary use case is running Che on a cluster, while the M1 ARM architecture currently targets personal computers.

However, making M1 ARM builds of the container images used in Che should be possible - though I do not foresee these builds being officially supported.

I briefly looked into Docker supporting x86 binaries and found this issue, which is still not resolved, unfortunately.

I have a personal M1 MacBook Pro and may be able to help in this area (e.g. configuring builds for M1 ARM & testing them), though I cannot make any promises due to other task priorities.

CC: @l0rd

yuki-ohnaka-cv commented 1 year ago

@AObuchow Thank you for the answer. I'm sorry to hear that Che doesn't support M1 ARM. Have you tried Lima, which can run x86_64 VMs on M1 ARM? (I haven't had much luck with it... #21889)

CC: @l0rd

sgaist commented 1 year ago

Hi,

You can deploy Che on an M1 Mac. The main issue is the che-operator image, but that can easily be circumvented: use the image digest to ensure the amd64 variant is pulled, since Docker on an M1 is capable of running both amd64 and arm64 images. For example: --che-operator-image=quay.io/eclipse/che-operator@sha256:e61d2f3a43edb62c8b06a98d3113b0a14df5cb58b39d9c55a7e235a10262af3c (this is for version 7.59.0 of the operator).
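
Put together with chectl's docker-desktop platform target (used later in this thread), the invocation would look roughly like this; a sketch, and flag spelling may vary across chectl versions:

chectl server:deploy \
    --platform docker-desktop \
    --che-operator-image=quay.io/eclipse/che-operator@sha256:e61d2f3a43edb62c8b06a98d3113b0a14df5cb58b39d9c55a7e235a10262af3c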

For the rest, you will have to adapt the minikube guidelines. You can get some inspiration from this gist written for minikube. While the stack is a bit different, the workflow is the same.

For the Docker Desktop configuration, you will have to go into the VM to edit the kube-apiserver.yaml manifest file, which can be found in /var/lib/kubeadm/manifests (make a copy first).

To copy the certificate for Keycloak, copy and paste it in vi; it's the easiest way. Note that you will also need to add a volume and a volumeMount for it in the kube-apiserver.yaml manifest.
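
For illustration, the kube-apiserver.yaml additions end up looking roughly like this (the OIDC flag values and the certificate path are examples to adapt, not exact values):

spec:
  containers:
    - command:
        - kube-apiserver
        # ... existing flags ...
        - --oidc-issuer-url=https://keycloak.WWW.XXX.YYY.ZZZ.nip.io/realms/che
        - --oidc-client-id=k8s-client
        - --oidc-username-claim=email
        - --oidc-ca-file=/etc/ca-certificates/keycloak/tls.crt
      volumeMounts:
        # ... existing mounts ...
        - name: keycloak-ca
          mountPath: /etc/ca-certificates/keycloak
          readOnly: true
  volumes:
    # ... existing volumes ...
    - name: keycloak-ca
      hostPath:
        path: /etc/ca-certificates/keycloak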

On a side note, the way Keycloak is deployed in the gist means that every time it restarts you'll lose the data in there. If you want something persistent, you should deploy it with a PostgreSQL backend.
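
The persistence-relevant part of such a deployment would be sketched like this (Keycloak 17+ Quarkus distribution; the image tag, service name, and credentials are placeholders):

containers:
  - name: keycloak
    image: quay.io/keycloak/keycloak:latest  # pin a concrete tag in practice
    args: ["start-dev"]
    env:
      - name: KC_DB
        value: postgres
      - name: KC_DB_URL
        value: jdbc:postgresql://postgres.keycloak.svc:5432/keycloak  # placeholder service
      - name: KC_DB_USERNAME
        value: keycloak
      - name: KC_DB_PASSWORD
        value: keycloak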

You can then finally install Che using chectl rather than the helm chart as done in the gist. You will want to use larger timeouts, as the che pod can take a very long time to start. Check the pod states even if chectl claims failure, as it could simply be the startup time.
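
For instance, even when chectl reports a timeout, the pods may simply still be starting; watching them directly shows the actual state:

kubectl get pods -n eclipse-che --watch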

Here is an example CheCluster CRD

apiVersion: org.eclipse.che/v2
kind: CheCluster
metadata:
  name: eclipse-che
spec:
  networking:
    domain: WWW.XXX.YYY.ZZZ.nip.io
    auth:
      identityProviderURL: https://keycloak.WWW.XXX.YYY.ZZZ.nip.io/realms/che
      oAuthClientName: k8s-client
      oAuthSecret: k8s-client

  components:
    cheServer:
      extraProperties:
        CHE_OIDC_USERNAME__CLAIM: 'email'

The oAuthXXX values match the ones you will get if you use the gist to create the Keycloak realm.

You might also want to give Docker more storage and RAM.

yuki-ohnaka-cv commented 1 year ago

Docker Desktop v4.16.2 added a way to run amd64 images with Rosetta: "Use Rosetta for x86/amd64 emulation on Apple Silicon".

By pulling the Docker image in advance using the --platform amd64 option, changing the imagePullPolicy in the template to IfNotPresent, and then executing chectl server:deploy --platform docker-desktop, I confirmed that the Che Operator pod is deployed. But the gateway pod bootstrap fails.
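
Concretely, the pre-pull was along these lines (the operator tag is illustrative and should match the release being deployed):

docker pull --platform amd64 quay.io/eclipse/che-operator:7.59.0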

% cat /var/folders/cx/vtsxjjlj4rz7zlsgd_kbfjhw0000gn/T/chectl-logs/1674470342651/eclipse-che/che-gateway-6b456fb5c6-z6xqx/oauth-proxy.log 
[2023/01/23 10:48:25] [main.go:54] invalid configuration:
  provider missing setting: client-id
  missing setting: client-secret or client-secret-file

From this log, I think Keycloak needs to be deployed and configured.

yuki-ohnaka-cv commented 1 year ago

@sgaist I want to set up Keycloak but am not familiar with k8s. Is there any documentation for setting up Keycloak locally?

Thanks.

yuki-ohnaka-cv commented 1 year ago

I tried with the --che-operator-image=quay.io/eclipse/che-operator@sha256:e61d2f3a43edb62c8b06a98d3113b0a14df5cb58b39d9c55a7e235a10262af3c option, but I got ErrImagePull.

And I tried docker pull, but got an error too... 🙃

% docker pull quay.io/eclipse/che-operator@sha256:e61d2f3a43edb62c8b06a98d3113b0a14df5cb58b39d9c55a7e235a10262af3c
quay.io/eclipse/che-operator@sha256:e61d2f3a43edb62c8b06a98d3113b0a14df5cb58b39d9c55a7e235a10262af3c: Pulling from eclipse/che-operator
no matching manifest for linux/arm64/v8 in the manifest list entries

sgaist commented 1 year ago

@yuki-ohnaka-cv for the Keycloak deployment, see the deploy_keycloak function in the gist I mentioned in my original post. As explained earlier, it's a development environment that you will need to reconfigure if restarted, but it should at least get you started testing Che.

What information do you get if you call kubectl describe pods <name of the failing pod>?

If you want to pull through docker, you are missing --platform linux/amd64.
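
That is:

docker pull --platform linux/amd64 quay.io/eclipse/che-operator@sha256:e61d2f3a43edb62c8b06a98d3113b0a14df5cb58b39d9c55a7e235a10262af3c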

yuki-ohnaka-cv commented 1 year ago

@sgaist

Thank you for support!

What information do you get if you call kubectl describe pods <name of the failing pod>?

I got this result.

% kubectl describe pods che-gateway-8667bd4c55-4c5rr -n eclipse-che 
Name:             che-gateway-8667bd4c55-4c5rr
Namespace:        eclipse-che
Priority:         0
Service Account:  che-gateway
Node:             docker-desktop/192.168.65.4
Start Time:       Mon, 23 Jan 2023 21:25:15 +0900
Labels:           app=che
                  app.kubernetes.io/component=che-gateway
                  app.kubernetes.io/instance=che
                  app.kubernetes.io/managed-by=che-operator
                  app.kubernetes.io/name=che
                  app.kubernetes.io/part-of=che.eclipse.org
                  component=che-gateway
                  pod-template-hash=8667bd4c55
Annotations:      <none>
Status:           Running
IP:               10.1.0.30
IPs:
  IP:           10.1.0.30
Controlled By:  ReplicaSet/che-gateway-8667bd4c55
Containers:
  gateway:
    Container ID:   docker://f00e33d0a6805b058508ef92d9f85035f3cb6088794556701bfa9f81c34bef23
    Image:          quay.io/eclipse/che--traefik:v2.8.1-4e52a5e2495484f5e19a49edfd2f652b0bce7b3603fa0df545ed90168ffae1c3
    Image ID:       docker-pullable://quay.io/eclipse/che--traefik@sha256:4e52a5e2495484f5e19a49edfd2f652b0bce7b3603fa0df545ed90168ffae1c3
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 23 Jan 2023 21:25:15 +0900
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  4Gi
    Requests:
      cpu:        100m
      memory:     128Mi
    Environment:  <none>
    Mounts:
      /dynamic-config from dynamic-config (rw)
      /etc/traefik from static-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wtwfh (ro)
  configbump:
    Container ID:   docker://2d2a06174105844a5dcc35592c6ca78c15b0f283ae86a6d0d132343a95ca41db
    Image:          quay.io/che-incubator/configbump:0.1.4
    Image ID:       docker-pullable://quay.io/che-incubator/configbump@sha256:175ff2ba1bd74429de192c0a9facf39da5699c6da9f151bd461b3dc8624dd532
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 23 Jan 2023 21:25:15 +0900
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  256Mi
    Requests:
      cpu:     50m
      memory:  64Mi
    Environment:
      CONFIG_BUMP_DIR:        /dynamic-config
      CONFIG_BUMP_LABELS:     app=che,component=che-gateway-config
      CONFIG_BUMP_NAMESPACE:  eclipse-che (v1:metadata.namespace)
    Mounts:
      /dynamic-config from dynamic-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wtwfh (ro)
  oauth-proxy:
    Container ID:  docker://db37aa660036b1c04cce1842eca3a9dc56c3371117e8eaf5b690f7c64cc2cc6c
    Image:         quay.io/oauth2-proxy/oauth2-proxy:v7.4.0
    Image ID:      docker-pullable://quay.io/oauth2-proxy/oauth2-proxy@sha256:393e63c3b924e3f78a5b592ad647417af4ea229398b7bebbbd7ef3d6181aceb5
    Port:          8080/TCP
    Host Port:     0/TCP
    Args:
      --config=/etc/oauth-proxy/oauth-proxy.cfg
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 24 Jan 2023 00:01:07 +0900
      Finished:     Tue, 24 Jan 2023 00:01:07 +0900
    Ready:          False
    Restart Count:  9
    Limits:
      cpu:     500m
      memory:  512Mi
    Requests:
      cpu:     100m
      memory:  64Mi
    Environment:
      http_proxy:   
      https_proxy:  
      no_proxy:     .svc
      CM_REVISION:  11370
    Mounts:
      /etc/oauth-proxy from oauth-proxy-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wtwfh (ro)
  kube-rbac-proxy:
    Container ID:  docker://5b7536d225f9fd61b7afc5ec1fa0d721a3d62ad861d0c41cee93790ec3d7a195
    Image:         quay.io/brancz/kube-rbac-proxy:v0.11.0
    Image ID:      docker-pullable://quay.io/brancz/kube-rbac-proxy@sha256:b62289c3f3f883ee76dd4e8879042dd19abff743340e451cb59f9654fc472e4f
    Port:          <none>
    Host Port:     <none>
    Args:
      --insecure-listen-address=0.0.0.0:8089
      --upstream=http://127.0.0.1:8090/ping
      --logtostderr=true
      --config-file=/etc/kube-rbac-proxy/authorization-config.yaml
    State:          Running
      Started:      Mon, 23 Jan 2023 21:25:15 +0900
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  512Mi
    Requests:
      cpu:        100m
      memory:     64Mi
    Environment:  <none>
    Mounts:
      /etc/kube-rbac-proxy from kube-rbac-proxy-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wtwfh (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  static-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      che-gateway-config
    Optional:  false
  dynamic-config:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  oauth-proxy-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      che-gateway-config-oauth-proxy
    Optional:  false
  kube-rbac-proxy-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      che-gateway-config-kube-rbac-proxy
    Optional:  false
  kube-api-access-wtwfh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Started    157m                  kubelet            Started container configbump
  Normal   Scheduled  157m                  default-scheduler  Successfully assigned eclipse-che/che-gateway-8667bd4c55-4c5rr to docker-desktop
  Normal   Created    157m                  kubelet            Created container gateway
  Normal   Started    157m                  kubelet            Started container gateway
  Normal   Pulled     157m                  kubelet            Container image "quay.io/che-incubator/configbump:0.1.4" already present on machine
  Normal   Created    157m                  kubelet            Created container configbump
  Normal   Pulled     157m                  kubelet            Container image "quay.io/eclipse/che--traefik:v2.8.1-4e52a5e2495484f5e19a49edfd2f652b0bce7b3603fa0df545ed90168ffae1c3" already present on machine
  Normal   Started    157m                  kubelet            Started container kube-rbac-proxy
  Normal   Pulled     157m                  kubelet            Container image "quay.io/brancz/kube-rbac-proxy:v0.11.0" already present on machine
  Normal   Created    157m                  kubelet            Created container kube-rbac-proxy
  Normal   Created    156m (x4 over 157m)   kubelet            Created container oauth-proxy
  Normal   Pulled     156m (x4 over 157m)   kubelet            Container image "quay.io/oauth2-proxy/oauth2-proxy:v7.4.0" already present on machine
  Normal   Started    156m (x4 over 157m)   kubelet            Started container oauth-proxy
  Warning  BackOff    152m (x25 over 157m)  kubelet            Back-off restarting failed container

yuki-ohnaka-cv commented 1 year ago

If you want to pull through docker, you are missing --platform linux/amd64.

I understood that it was necessary to pull in advance because I was using --platform=docker-desktop instead of --platform=minikube.

sgaist commented 1 year ago

Ok, so in fact, your deployment is "working correctly". What you are missing now is Keycloak (or Dex, but I haven't tested that one yet, so I can't comment on it).

As stated earlier, you can reuse the gist code to deploy Keycloak on your cluster. Extract that part or nuke the unrelated parts.
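
If you just want the smallest possible starting point, a throwaway dev-mode Keycloak can be sketched like this (namespace, image, and credentials are placeholders; the gist's deploy_keycloak function remains the reference, including the realm and client setup that Che needs):

kubectl create namespace keycloak
kubectl -n keycloak create deployment keycloak \
    --image=quay.io/keycloak/keycloak:latest -- \
    /opt/keycloak/bin/kc.sh start-dev
kubectl -n keycloak set env deployment/keycloak \
    KEYCLOAK_ADMIN=admin KEYCLOAK_ADMIN_PASSWORD=admin
kubectl -n keycloak expose deployment keycloak --port=8080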

yuki-ohnaka-cv commented 1 year ago

Manual setup of Keycloak was too difficult for me... 🫠

yuki-ohnaka-cv commented 1 year ago

I am trying again with minikube (without Lima).

minikube start \
    --driver=docker \
    --cpus=6 \
    --memory=12G \
    --disk-size=50GB \
    --vm=true \
    --addons=ingress

# To prevent ImagePullBackOff from happening
minikube ssh
docker pull quay.io/eclipse/che-operator:7.59.0 --platform amd64
docker pull quay.io/eclipse/che-plugin-registry:7.59.0 --platform amd64
# To prevent ImagePullBackOff from happening

./chectl server:deploy \
    --platform=minikube \
    --k8spodreadytimeout=120000

And I have completed the Che setup!!

  ✔ Show important messages
    ✔ Eclipse Che 7.59.0 has been successfully deployed.
    ✔ Documentation             : https://www.eclipse.org/che/docs/
    ✔ -------------------------------------------------------------------------------
    ✔ Users Dashboard           : https://192.168.49.2.nip.io/dashboard/
    ✔ -------------------------------------------------------------------------------
    ✔ Plug-in Registry          : https://192.168.49.2.nip.io/plugin-registry/v3/
    ✔ Devfile Registry          : https://192.168.49.2.nip.io/devfile-registry/
    ✔ -------------------------------------------------------------------------------
    ✔ Dex user credentials      : che@eclipse.org:admin
    ✔ Dex user credentials      : user1@che:password
    ✔ Dex user credentials      : user2@che:password
    ✔ Dex user credentials      : user3@che:password
    ✔ Dex user credentials      : user4@che:password
    ✔ Dex user credentials      : user5@che:password
    ✔ -------------------------------------------------------------------------------

yuki-ohnaka-cv commented 1 year ago

But I cannot access https://192.168.49.2.nip.io/dashboard/ in my browser... Any advice on this issue?

yuki-ohnaka-cv commented 1 year ago

@AObuchow @sgaist cc @l0rd ↑ Do you have any good solutions?

l0rd commented 1 year ago

@yuki-ohnaka-cv anything interesting in the container logs? Are the server, dashboard, postgres, gateway, plugin-registry and devfile-registry pods up and running? When you say that you cannot access the dashboard what error do you get? Can you share a screenshot with the browser web dev tools opened? Can you share your CheCluster CR?

yuki-ohnaka-cv commented 1 year ago

@l0rd

I got an error (ERR_CONNECTION_TIMED_OUT) in Chrome (Mac).

On the host (M1 Mac)

% minikube version  
minikube version: v1.28.0
commit: 986b1ebd987211ed16f8cc10aed7d2c42fc8392f
% ./chectl --version 
chectl/7.59.0 darwin-arm64 node-v16.13.2
% nslookup 192.168.49.2.nip.io
Server:         192.168.0.1
Address:        192.168.0.1#53

Non-authoritative answer:
Name:   192.168.49.2.nip.io
Address: 192.168.49.2
 % ping 192.168.49.2
PING 192.168.49.2 (192.168.49.2): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
^C
--- 192.168.49.2 ping statistics ---
4 packets transmitted, 0 packets received, 100.0% packet loss
% ifconfig
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
        options=1203<RXCSUM,TXCSUM,TXSTATUS,SW_TIMESTAMP>
        inet 127.0.0.1 netmask 0xff000000 
        inet6 ::1 prefixlen 128 
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 
        nd6 options=201<PERFORMNUD,DAD>
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
anpi2: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        options=400<CHANNEL_IO>
        ether 16:bd:e5:a2:1a:ca 
        inet6 fe80::14bd:e5ff:fea2:1aca%anpi2 prefixlen 64 scopeid 0x4 
        nd6 options=201<PERFORMNUD,DAD>
        media: none
        status: inactive
anpi1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        options=400<CHANNEL_IO>
        ether 16:bd:e5:a2:1a:c9 
        inet6 fe80::14bd:e5ff:fea2:1ac9%anpi1 prefixlen 64 scopeid 0x5 
        nd6 options=201<PERFORMNUD,DAD>
        media: none
        status: inactive
anpi0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        options=400<CHANNEL_IO>
        ether 16:bd:e5:a2:1a:c8 
        inet6 fe80::14bd:e5ff:fea2:1ac8%anpi0 prefixlen 64 scopeid 0x6 
        nd6 options=201<PERFORMNUD,DAD>
        media: none
        status: inactive
en4: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        options=400<CHANNEL_IO>
        ether 16:bd:e5:a2:1a:a8 
        nd6 options=201<PERFORMNUD,DAD>
        media: none
        status: inactive
en5: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        options=400<CHANNEL_IO>
        ether 16:bd:e5:a2:1a:a9 
        nd6 options=201<PERFORMNUD,DAD>
        media: none
        status: inactive
en6: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        options=400<CHANNEL_IO>
        ether 16:bd:e5:a2:1a:aa 
        nd6 options=201<PERFORMNUD,DAD>
        media: none
        status: inactive
en1: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
        options=460<TSO4,TSO6,CHANNEL_IO>
        ether 36:53:dc:5c:8e:80 
        media: autoselect <full-duplex>
        status: inactive
en2: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
        options=460<TSO4,TSO6,CHANNEL_IO>
        ether 36:53:dc:5c:8e:84 
        media: autoselect <full-duplex>
        status: inactive
en3: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
        options=460<TSO4,TSO6,CHANNEL_IO>
        ether 36:53:dc:5c:8e:88 
        media: autoselect <full-duplex>
        status: inactive
bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        options=63<RXCSUM,TXCSUM,TSO4,TSO6>
        ether 36:53:dc:5c:8e:80 
        Configuration:
                id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
                maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
                root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
                ipfilter disabled flags 0x0
        member: en1 flags=3<LEARNING,DISCOVER>
                ifmaxaddr 0 port 10 priority 0 path cost 0
        member: en2 flags=3<LEARNING,DISCOVER>
                ifmaxaddr 0 port 11 priority 0 path cost 0
        member: en3 flags=3<LEARNING,DISCOVER>
                ifmaxaddr 0 port 12 priority 0 path cost 0
        nd6 options=201<PERFORMNUD,DAD>
        media: <unknown type>
        status: inactive
ap1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        options=6463<RXCSUM,TXCSUM,TSO4,TSO6,CHANNEL_IO,PARTIAL_CSUM,ZEROINVERT_CSUM>
        ether be:d0:74:11:d2:a3 
        inet6 fe80::bcd0:74ff:fe11:d2a3%ap1 prefixlen 64 scopeid 0xe 
        nd6 options=201<PERFORMNUD,DAD>
        media: autoselect (<unknown type>)
        status: inactive
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        options=6463<RXCSUM,TXCSUM,TSO4,TSO6,CHANNEL_IO,PARTIAL_CSUM,ZEROINVERT_CSUM>
        ether bc:d0:74:11:d2:a3 
        inet6 fe80::1895:50f:4a05:2f13%en0 prefixlen 64 secured scopeid 0xf 
        inet 192.168.0.124 netmask 0xffffff00 broadcast 192.168.0.255
        nd6 options=201<PERFORMNUD,DAD>
        media: autoselect
        status: active
awdl0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        options=6463<RXCSUM,TXCSUM,TSO4,TSO6,CHANNEL_IO,PARTIAL_CSUM,ZEROINVERT_CSUM>
        ether da:22:a5:25:e4:48 
        inet6 fe80::d822:a5ff:fe25:e448%awdl0 prefixlen 64 scopeid 0x10 
        nd6 options=201<PERFORMNUD,DAD>
        media: autoselect
        status: active
llw0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        options=400<CHANNEL_IO>
        ether da:22:a5:25:e4:48 
        inet6 fe80::d822:a5ff:fe25:e448%llw0 prefixlen 64 scopeid 0x11 
        nd6 options=201<PERFORMNUD,DAD>
        media: autoselect
        status: inactive
utun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 2000
        inet6 fe80::919b:bfdb:5152:505e%utun0 prefixlen 64 scopeid 0x12 
        nd6 options=201<PERFORMNUD,DAD>
utun1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
        inet6 fe80::8af6:7c05:c2a6:817%utun1 prefixlen 64 scopeid 0x13 
        nd6 options=201<PERFORMNUD,DAD>
utun2: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1000
        inet6 fe80::ce81:b1c:bd2c:69e%utun2 prefixlen 64 scopeid 0x14 
        nd6 options=201<PERFORMNUD,DAD>
% kubectl get po -A
NAMESPACE                 NAME                                               READY   STATUS      RESTARTS      AGE
cert-manager              cert-manager-7778d64785-xwr5p                      1/1     Running     1 (42m ago)   43m
cert-manager              cert-manager-cainjector-5c7b85f464-k8z5m           1/1     Running     3 (41m ago)   43m
cert-manager              cert-manager-webhook-58b97ccf69-qrm52              1/1     Running     2 (42m ago)   43m
devworkspace-controller   devworkspace-controller-manager-7dbbb94f8b-88hkr   2/2     Running     0             41m
devworkspace-controller   devworkspace-webhook-server-cc6669c97-mfb2p        2/2     Running     0             41m
dex                       dex-74bb9f7ddd-d8f7z                               1/1     Running     3 (42m ago)   42m
eclipse-che               che-b6d498546-5r5z6                                1/1     Running     0             37m
eclipse-che               che-dashboard-6dcc4d5d49-tdffd                     1/1     Running     0             38m
eclipse-che               che-gateway-7f9644d466-pqxzz                       4/4     Running     0             37m
eclipse-che               che-operator-586b744dc4-bgvlh                      1/1     Running     0             40m
eclipse-che               che-tls-job-r7w9k                                  0/1     Completed   0             40m
eclipse-che               devfile-registry-7f44bd9584-qtznv                  1/1     Running     0             39m
eclipse-che               plugin-registry-7fc7df746c-7hrpj                   1/1     Running     0             38m
eclipse-che               postgres-7cc469f89-qtdcw                           1/1     Running     0             39m
ingress-nginx             ingress-nginx-admission-create-9vd94               0/1     Completed   0             49m
ingress-nginx             ingress-nginx-admission-patch-vvnhl                0/1     Completed   1             49m
ingress-nginx             ingress-nginx-controller-5959f988fd-b255r          1/1     Running     2 (41m ago)   49m
kube-system               coredns-565d847f94-8w4sf                           1/1     Running     3 (41m ago)   49m
kube-system               etcd-minikube                                      1/1     Running     2 (42m ago)   49m
kube-system               kube-apiserver-minikube                            1/1     Running     2 (41m ago)   41m
kube-system               kube-controller-manager-minikube                   1/1     Running     2 (42m ago)   49m
kube-system               kube-proxy-jx4vm                                   1/1     Running     2 (42m ago)   49m
kube-system               kube-scheduler-minikube                            1/1     Running     2 (42m ago)   49m
kube-system               storage-provisioner                                1/1     Running     3 (42m ago)   49m
% kubectl get services -A
NAMESPACE                 NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
cert-manager              cert-manager                              ClusterIP   10.103.230.60    <none>        9402/TCP                     43m
cert-manager              cert-manager-webhook                      ClusterIP   10.102.156.133   <none>        443/TCP                      43m
default                   kubernetes                                ClusterIP   10.96.0.1        <none>        443/TCP                      49m
devworkspace-controller   devworkspace-controller-manager-service   ClusterIP   10.108.93.127    <none>        443/TCP                      41m
devworkspace-controller   devworkspace-controller-metrics           ClusterIP   10.106.85.61     <none>        8443/TCP                     41m
devworkspace-controller   devworkspace-webhookserver                ClusterIP   10.101.47.73     <none>        443/TCP,9443/TCP             41m
dex                       dex                                       ClusterIP   10.96.43.152     <none>        5556/TCP                     43m
eclipse-che               che-dashboard                             ClusterIP   10.109.220.141   <none>        8080/TCP                     38m
eclipse-che               che-gateway                               ClusterIP   10.105.134.145   <none>        8080/TCP,8089/TCP            38m
eclipse-che               che-host                                  ClusterIP   10.98.81.244     <none>        8080/TCP,8087/TCP,8000/TCP   40m
eclipse-che               che-operator-service                      ClusterIP   10.104.170.138   <none>        443/TCP                      41m
eclipse-che               devfile-registry                          ClusterIP   10.110.241.144   <none>        8080/TCP                     39m
eclipse-che               plugin-registry                           ClusterIP   10.104.158.197   <none>        8080/TCP                     38m
eclipse-che               postgres                                  ClusterIP   10.110.195.24    <none>        5432/TCP                     40m
ingress-nginx             ingress-nginx-controller                  NodePort    10.101.42.53     <none>        80:32678/TCP,443:30162/TCP   49m
ingress-nginx             ingress-nginx-controller-admission        ClusterIP   10.108.41.48     <none>        443/TCP                      49m
kube-system               kube-dns                                  ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP       49m
% docker ps
CONTAINER ID   IMAGE                                 COMMAND                  CREATED          STATUS          PORTS                                                                                                                        NAMES
73173965820c   gcr.io/k8s-minikube/kicbase:v0.0.36   "/usr/local/bin/entr…"   49 minutes ago   Up 49 minutes   0.0.0.0:60767->22/tcp, 0.0.0.0:60768->2376/tcp, 0.0.0.0:60770->5000/tcp, 0.0.0.0:60771->8443/tcp, 0.0.0.0:60769->32443/tcp   minikube
% docker network ls
NETWORK ID     NAME       DRIVER    SCOPE
0629100f0942   bridge     bridge    local
72afb4efe4ef   host       host      local
92535f90ba6e   minikube   bridge    local
bce7e988fb32   none       null      local
% docker network inspect minikube
[
    {
        "Name": "minikube",
        "Id": "92535f90ba6e5b5eecd78a5d5fc041f038c277d68fe59ee5f4926bb8ff1efb28",
        "Created": "2023-01-31T03:33:59.141151424Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.49.0/24",
                    "Gateway": "192.168.49.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "73173965820c145f16f2dc82ce7e37ccd045cf6c808837dc0dc9ee8af48eeab7": {
                "Name": "minikube",
                "EndpointID": "7a1a001be7197dfd8ca82c05737f0140ed0adc157c2a15603d1beff383efa7ff",
                "MacAddress": "02:42:c0:a8:31:02",
                "IPv4Address": "192.168.49.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "--icc": "",
            "--ip-masq": "",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {
            "created_by.minikube.sigs.k8s.io": "true",
            "name.minikube.sigs.k8s.io": "minikube"
        }
    }
]
% minikube ip
192.168.49.2

On the minikube container

$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1000
    link/tunnel6 :: brd ::
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:12:ad:de:9e brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
30: vetha59920e@if29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 2a:01:31:ef:b1:33 brd ff:ff:ff:ff:ff:ff link-netnsid 1
32: vethad975af@if31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 2e:79:84:23:32:ce brd ff:ff:ff:ff:ff:ff link-netnsid 2
36: veth779f14f@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 3e:3f:2a:9f:61:94 brd ff:ff:ff:ff:ff:ff link-netnsid 8
38: veth8dd8f39@if37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether f6:04:dd:20:c8:54 brd ff:ff:ff:ff:ff:ff link-netnsid 9
40: veth912882b@if39: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether c2:77:c5:cf:10:a0 brd ff:ff:ff:ff:ff:ff link-netnsid 7
41: eth0@if42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:c0:a8:31:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.49.2/24 brd 192.168.49.255 scope global eth0
       valid_lft forever preferred_lft forever
43: veth0d7fdb0@if42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 9a:37:ee:49:6a:68 brd ff:ff:ff:ff:ff:ff link-netnsid 10
45: vethcef4b3f@if44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 92:bf:4d:43:15:94 brd ff:ff:ff:ff:ff:ff link-netnsid 11
47: veth8c1a354@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether d2:8e:15:4c:6d:73 brd ff:ff:ff:ff:ff:ff link-netnsid 12
49: veth4921a80@if48: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 36:8d:1e:7e:3c:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 5
53: veth41f0f7f@if52: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether ba:a8:c2:7d:90:7a brd ff:ff:ff:ff:ff:ff link-netnsid 3
57: vethe9f60ce@if56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 12:db:5b:05:13:3d brd ff:ff:ff:ff:ff:ff link-netnsid 6
59: veth5e0d255@if58: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 4a:1c:a7:fc:fa:00 brd ff:ff:ff:ff:ff:ff link-netnsid 13
61: vethfd0598d@if60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether a6:5f:e6:b2:31:29 brd ff:ff:ff:ff:ff:ff link-netnsid 4
63: veth147d1b1@if62: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether b6:47:ed:37:f4:7f brd ff:ff:ff:ff:ff:ff link-netnsid 14
65: veth5617ac9@if64: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 7e:ce:f3:62:8c:4a brd ff:ff:ff:ff:ff:ff link-netnsid 15

yuki-ohnaka-cv commented 1 year ago

% minikube service list
|-------------------------|-----------------------------------------|--------------|-----|
|        NAMESPACE        |                  NAME                   | TARGET PORT  | URL |
|-------------------------|-----------------------------------------|--------------|-----|
| cert-manager            | cert-manager                            | No node port |
| cert-manager            | cert-manager-webhook                    | No node port |
| default                 | kubernetes                              | No node port |
| devworkspace-controller | devworkspace-controller-manager-service | No node port |
| devworkspace-controller | devworkspace-controller-metrics         | No node port |
| devworkspace-controller | devworkspace-webhookserver              | No node port |
| dex                     | dex                                     | No node port |
| eclipse-che             | che-dashboard                           | No node port |
| eclipse-che             | che-gateway                             | No node port |
| eclipse-che             | che-host                                | No node port |
| eclipse-che             | che-operator-service                    | No node port |
| eclipse-che             | devfile-registry                        | No node port |
| eclipse-che             | plugin-registry                         | No node port |
| eclipse-che             | postgres                                | No node port |
| ingress-nginx           | ingress-nginx-controller                | http/80      |     |
|                         |                                         | https/443    |     |
| ingress-nginx           | ingress-nginx-controller-admission      | No node port |
| kube-system             | kube-dns                                | No node port |
|-------------------------|-----------------------------------------|--------------|-----|
% minikube service  ingress-nginx-controller -n ingress-nginx --url
http://127.0.0.1:65463
http://127.0.0.1:65464
โ—  Docker ใƒ‰ใƒฉใ‚คใƒใƒผใ‚’ darwin ไธŠใงไฝฟ็”จใ—ใฆใ„ใ‚‹ใŸใ‚ใ€ๅฎŸ่กŒใ™ใ‚‹ใซใฏใ‚ฟใƒผใƒŸใƒŠใƒซใ‚’้–‹ใๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚

I accessed it and got a 404 error (POWERED by Nginx)...

l0rd commented 1 year ago

It looks like you are not able to connect to the minikube IP address from your host. You should solve this first.

yuki-ohnaka-cv commented 1 year ago

@l0rd I know... Do you have any good solutions?

l0rd commented 1 year ago

When I had those kinds of problems I tried deleting (minikube stop && minikube delete) and restarting minikube (minikube start --memory=8192 --vm=true --cpus=4 --addons=ingress). Beyond that, I would investigate whether it's a firewall issue. But the fact that you are running on an M1 makes things harder to reproduce for us (no one on the team has an M1 at the moment).

yuki-ohnaka-cv commented 1 year ago

@l0rd Thanks for your advice. I have tried many times, but the situation remains the same.

I think this is because minikube is running on Docker Desktop (not only on M1 Macs). Do you have any team members for whom this works with an Intel Mac, minikube, and Docker (or Docker Desktop)?

(Sorry for my bad English...)

sgaist commented 1 year ago

Just in case: the instructions I posted were tested with Docker Desktop on an M1 machine, using nip.io for the IP addresses.

No use of minikube required.

yuki-ohnaka-cv commented 1 year ago

@sgaist Your method looks very attractive to me. But I'm not familiar with k8s, so I don't know how to proceed. Could you tell me the specific procedure after enabling Kubernetes in Docker Desktop and installing chectl?

sgaist commented 1 year ago

You have to install the ingress controller as explained in its docs, which basically means following the quick start guide.
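
At the time of writing, that boils down to a single apply (pin whichever version the quick start guide currently gives; the tag below is only an example):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml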

Taking my remarks into account, the rest of the procedure is the same.

yuki-ohnaka-cv commented 1 year ago

@sgaist Manual setup of Keycloak was too difficult for me... 🫠 I think I'll wait until the documentation for setting up with Docker Desktop is in place.

yuki-ohnaka-cv commented 1 year ago

@l0rd What do you think? https://github.com/eclipse/che/issues/21908#issuecomment-1410126006

yuki-ohnaka-cv commented 1 year ago

I installed dnsmasq with brew.

And added this dnsmasq config:

# for minikube
address=/192.168.49.2.nip.io/::1
address=/192.168.49.2.nip.io/127.0.0.1
address=/.192.168.49.2.nip.io/::1
address=/.192.168.49.2.nip.io/127.0.0.1

And executed:

sudo kubectl port-forward svc/ingress-nginx-controller -n ingress-nginx 443:443

I accessed https://192.168.49.2.nip.io/dashboard/ and got the login page (redirected to https://dex.192.168.49.2.nip.io/auth/local/login?back=&state=tyaktwisrt4satvnadbxhjpyy).

I logged in, but got a 500 error page.

(screenshot: 500 error page, 2023-02-01 13:37:48)

% kubectl logs dex-74bb9f7ddd-b4sw6 -n dex
time="2023-02-01T01:38:49Z" level=info msg="config issuer: https://dex.192.168.49.2.nip.io"
time="2023-02-01T01:38:49Z" level=info msg="kubernetes client apiVersion = dex.coreos.com/v1"
time="2023-02-01T01:38:49Z" level=info msg="creating custom Kubernetes resources"
time="2023-02-01T01:38:49Z" level=info msg="checking if custom resource authcodes.dex.coreos.com has already been created..."
time="2023-02-01T01:38:49Z" level=info msg="The custom resource authcodes.dex.coreos.com already available, skipping create"
time="2023-02-01T01:38:49Z" level=info msg="checking if custom resource authrequests.dex.coreos.com has already been created..."
time="2023-02-01T01:38:49Z" level=info msg="The custom resource authrequests.dex.coreos.com already available, skipping create"
time="2023-02-01T01:38:49Z" level=info msg="checking if custom resource oauth2clients.dex.coreos.com has already been created..."
time="2023-02-01T01:38:49Z" level=info msg="The custom resource oauth2clients.dex.coreos.com already available, skipping create"
time="2023-02-01T01:38:49Z" level=info msg="checking if custom resource signingkeies.dex.coreos.com has already been created..."
time="2023-02-01T01:38:49Z" level=info msg="The custom resource signingkeies.dex.coreos.com already available, skipping create"
time="2023-02-01T01:38:49Z" level=info msg="checking if custom resource refreshtokens.dex.coreos.com has already been created..."
time="2023-02-01T01:38:49Z" level=info msg="The custom resource refreshtokens.dex.coreos.com already available, skipping create"
time="2023-02-01T01:38:49Z" level=info msg="checking if custom resource passwords.dex.coreos.com has already been created..."
time="2023-02-01T01:38:49Z" level=info msg="The custom resource passwords.dex.coreos.com already available, skipping create"
time="2023-02-01T01:38:49Z" level=info msg="checking if custom resource offlinesessionses.dex.coreos.com has already been created..."
time="2023-02-01T01:38:49Z" level=info msg="The custom resource offlinesessionses.dex.coreos.com already available, skipping create"
time="2023-02-01T01:38:49Z" level=info msg="checking if custom resource connectors.dex.coreos.com has already been created..."
time="2023-02-01T01:38:49Z" level=info msg="The custom resource connectors.dex.coreos.com already available, skipping create"
time="2023-02-01T01:38:49Z" level=info msg="checking if custom resource devicerequests.dex.coreos.com has already been created..."
time="2023-02-01T01:38:49Z" level=info msg="The custom resource devicerequests.dex.coreos.com already available, skipping create"
time="2023-02-01T01:38:49Z" level=info msg="checking if custom resource devicetokens.dex.coreos.com has already been created..."
time="2023-02-01T01:38:49Z" level=info msg="The custom resource devicetokens.dex.coreos.com already available, skipping create"
time="2023-02-01T01:38:49Z" level=info msg="config storage: kubernetes"
time="2023-02-01T01:38:49Z" level=info msg="config static client: Eclipse Che"
time="2023-02-01T01:38:49Z" level=info msg="config connector: local passwords enabled"
time="2023-02-01T01:38:49Z" level=info msg="config skipping approval screen"
time="2023-02-01T01:38:49Z" level=info msg="config refresh tokens rotation enabled: true"
time="2023-02-01T01:38:49Z" level=info msg="listening (http) on 0.0.0.0:5556"
time="2023-02-01T04:22:49Z" level=info msg="login successful: connector \"local\", username=\"user1\", preferred_username=\"\", email=\"user1@che\", groups=[]"
time="2023-02-01T04:25:08Z" level=info msg="login successful: connector \"local\", username=\"admin\", preferred_username=\"\", email=\"che@eclipse.org\", groups=[]"
time="2023-02-01T04:29:54Z" level=error msg="Invalid 'state' parameter provided: not found"
time="2023-02-01T04:32:22Z" level=error msg="Invalid 'state' parameter provided: not found"
time="2023-02-01T04:37:34Z" level=info msg="login successful: connector \"local\", username=\"admin\", preferred_username=\"\", email=\"che@eclipse.org\", groups=[]"

yuki-ohnaka-cv commented 1 year ago

% kubectl logs --tail=10 che-gateway-8586d86cd6-6tngw  -n eclipse-che -f -c oauth-proxy
[2023/02/02 14:09:55] [oauthproxy.go:823] Error redeeming code during OAuth2 callback: token exchange failed: Post "https://dex.192.168.49.2.nip.io/token": dial tcp 127.0.0.1:443: connect: connection refused

yuki-ohnaka-cv commented 1 year ago

I will list what I did to get the dashboard to display.

# versions
% sw_vers
ProductName:            macOS
ProductVersion:         13.2
BuildVersion:           22D49

% uname -m
arm64

% docker -v
Docker version 20.10.21, build baeda1f82a

 % minikube version
minikube version: v1.29.0
commit: ddac20b4b34a9c8c857fc602203b6ba2679794d3

% chectl --version
chectl/7.60.0 darwin-arm64 node-v16.13.2

deploy che

% minikube start \
    --driver=docker \
    --cpus=6 \
    --memory=12G \
    --disk-size=50GB \
    --vm=true \
    --addons=ingress

% minikube ssh

$ docker pull quay.io/eclipse/che-operator:7.60.0 --platform amd64
$ docker pull quay.io/eclipse/che-plugin-registry:7.60.0 --platform amd64

$ exit

% chectl server:deploy \
    --platform=minikube \
    --k8spodreadytimeout=120000 \
    --debug

fix hosts (on Mac)

# add
127.0.0.1       192.168.49.2.nip.io
127.0.0.1       dex.192.168.49.2.nip.io

fix CoreDNS

% kubectl edit configmap coredns -n kube-system 
---
        hosts {
           192.168.65.2 host.minikube.internal
           192.168.49.2 192.168.49.2.nip.io
           192.168.49.2 dex.192.168.49.2.nip.io
           fallthrough
        }
---
% kubectl rollout restart deploy coredns -n kube-system
% minikube tunnel  

and access https://192.168.49.2.nip.io/

yuki-ohnaka-cv commented 1 year ago

I hope these steps will help someone.

yuki-ohnaka-cv commented 1 year ago

@l0rd @sgaist @AObuchow Please let me know if there are any further questions. Thank you for all your advice.

moonbse commented 1 year ago

@yuki-ohnaka-cv I followed the steps you mentioned above and downloaded the images versioned 7.73.0, since that was the image chectl was trying to download, but running chectl server:deploy --platform=minikube --k8spodreadytimeout=120000 --debug, I am still getting a Back-off pulling image error, with:

❯ Eclipse Che Operator pod bootstrap
  ✔ Scheduling...[OK]
  ✖ Downloading images → Failed to download image, reason: ImagePullBackOff, message: Back-off pulling image …
… Starting
  Create ValidatingWebhookConfiguration org.eclipse.che
  Create MutatingWebhookConfiguration org.eclipse.che
  Create CheCluster Custom Resource
Error: Command server:deploy failed with the error: Failed to download image, reason: ImagePullBackOff, message: Back-off pulling image "quay.io/eclipse/che-operator:7.73.0".

If possible, can you please look at it and see what could be going wrong?

moonbse commented 1 year ago

I think it has something to do with Docker itself: I downloaded the images, but when I do docker images, I don't see any.

sgaist commented 1 year ago

@moonbse Are you using docker or minikube? If using Docker Desktop, you can explicitly pass the operator image to the chectl command. You have to add the hash to the tag to force the use of the amd64 variant. I don't know whether it will also work with minikube.
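
To find the amd64 digest for a given tag, docker manifest inspect lists the per-platform digests of a multi-arch image (the tag here is illustrative):

docker manifest inspect quay.io/eclipse/che-operator:7.73.0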

moonbse commented 1 year ago

@sgaist I am using Docker Desktop to run minikube, as suggested by @yuki-ohnaka-cv here:

% minikube start \
    --driver=docker \
    --cpus=6 \
    --memory=12G \
    --disk-size=50GB \
    --vm=true \
    --addons=ingress

% minikube ssh

$ docker pull quay.io/eclipse/che-operator:7.60.0 --platform amd64
$ docker pull quay.io/eclipse/che-plugin-registry:7.60.0 --platform amd64

$ exit

% chectl server:deploy \
    --platform=minikube \
    --k8spodreadytimeout=120000 \
    --debug

I tried this: chectl server:deploy --platform=minikube --k8spodreadytimeout=120000 --debug --che-operator-image=quay.io/eclipse/che-operator:7.73.0sha256:e13de206addffc583ce7c0b3cbaeb54e8c23e9892134ff024a44adf7ad36c4e9. I think I am doing something wrong in the way I am passing the image hash? It is giving the same ImagePullBackOff.

moonbse commented 1 year ago

@sgaist Tried with chectl server:deploy --platform=minikube --k8spodreadytimeout=120000 --debug --che-operator-image=quay.io/eclipse/che-operator:e13de206addffc583ce7c0b3cbaeb54e8c23e9892134ff024a44adf7ad36c4e9, which I think is the right way; I am getting a different error message, which is a positive I guess:

✖ Downloading images → Failed to download image, reason: ErrImagePull, message: rpc error: code = Unknown desc = …
… Starting
  Create ValidatingWebhookConfiguration org.eclipse.che
  Create MutatingWebhookConfiguration org.eclipse.che
  Create CheCluster Custom Resource
Error: Command server:deploy failed with the error: Failed to download image, reason: ErrImagePull, message: rpc error: code = Unknown desc = no matching manifest for linux/arm64/v8 in the manifest list entries.

It is still looking for arm64 images. Could this be the reason it is not picking up already downloaded images?

sgaist commented 1 year ago

@moonbse my setup is simpler: I use the Kubernetes cluster provided with Docker Desktop. You will have to go inside the Docker Desktop VM to configure the OIDC part for the kube apiserver though. Other than that I have successfully deployed Che on it. Docker Desktop can use AMD64 images if the hash is given alongside the tag.
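
That reference form is tag@digest, for example (reusing the sha256 string quoted above, assuming it really is the amd64 manifest digest for that tag):

--che-operator-image=quay.io/eclipse/che-operator:7.73.0@sha256:e13de206addffc583ce7c0b3cbaeb54e8c23e9892134ff024a44adf7ad36c4e9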

moonbse commented 1 year ago

@sgaist thanks, I will give it a try. Could you please provide a little more info about your setup? I am not very familiar with k8s. I can run Docker Desktop's Kubernetes; where do I go from there?

sgaist commented 1 year ago

@moonbse there's not much more. Once the cluster is started, it should be the default context and you can use kubectl with it.

moonbse commented 1 year ago

@sgaist thanks. Unrelated, but since I am getting an ImagePullBackOff error even though the local registry has those images, I need to set the imagePullPolicy to IfNotPresent. I am not able to find the command or file where I can set this. My setup is minikube with the Docker driver:

minikube start --driver=docker --cpus=6 --memory=12G --disk-size=50GB --vm=true --addons=ingress

I can see the minikube image running on Docker Desktop.
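
For reference, imagePullPolicy is a per-container field on the pod template; once the operator deployment exists, it can be patched in place (the deployment name below is the one seen earlier in this thread, and the container index is an assumption):

kubectl -n eclipse-che patch deployment che-operator --type=json \
    -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "IfNotPresent"}]'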