Closed roullli closed 4 years ago
Is there a reason why you want to install Hono 1.0.2 and not the most recent 1.2.1?
It is just that https://www.eclipse.org/hono/docs/deployment/helm-based-deployment/ mentions that I have to run my own private image registry. I don't want to do that; that's the only reason.
So your Kubernetes cluster cannot pull from the Docker registry?
Yes, it can. But does this problem have anything to do with the Hono version?
I guess this problem is related to the hono-service-device-registry-0 pod, which is unable to start. I don't know the reason.
> Yes, it can. But does this problem have anything to do with the Hono version?
It does insofar as it is unnecessary for you to build the images locally and use the (very old) Helm chart from the 1.0.2 code base. Instead, you should be able to use the current Hono Helm chart and follow the instructions for installing it.
Same problem with version 1.2.0!
Does Kubernetes report any reason for the CrashLoopBackOff, i.e. what is the output of `kubectl describe pod`?
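For reference, the diagnostics suggested above could look like the following against a live cluster; the pod name and the `hono` namespace are taken from elsewhere in this thread, so adjust them to your setup:

```shell
# Show events and status for the failing device-registry pod;
# the Events section usually names the cause of a CrashLoopBackOff.
kubectl describe pod eclipse-hono-service-device-registry-0 -n hono

# Logs of the previously crashed container are often more telling
# than the current (restarting) one.
kubectl logs eclipse-hono-service-device-registry-0 -n hono --previous
```

These commands require access to the running cluster, so there is no offline output to show here.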
Does your Kubernetes cluster support load balancers for ingress? If not, i.e. if it only supports NodePort services, then you should turn off using load balancers during installation.
How much RAM does the worker node have? If you're on a Raspberry Pi it is probably not much, so you might want to deploy the bare minimum only, i.e. start with just a single protocol adapter and leave out the monitoring infrastructure (Prometheus and Grafana).
All this can be done by customizing some configuration properties of the Helm chart. Create a YAML file named custom.yaml:
```yaml
useLoadBalancer: false
adapters:
  amqp:
    enabled: false
  mqtt:
    enabled: false
grafana:
  enabled: false
prometheus:
  createInstance: false
```
Then install using the `-f` switch:

```sh
helm install -f /path/to/custom.yaml eclipse-hono eclipse-iot/hono
```
These are the outputs for each pod:

- eclipse-hono-adapter-amqp-vertx-7cdb58649b-686v2
- eclipse-hono-adapter-http-vertx-77b8599c4-25hst
- eclipse-hono-adapter-mqtt-vertx-6f86cb74f5-5d9t2
- eclipse-hono-artemis-554cb5df84-4jkwp
- eclipse-hono-dispatch-router-5788f959b7-9fvst
- eclipse-hono-grafana-796449745b-p67wc
- eclipse-hono-prometheus-server-776fdcf8cb-hf8hc
- eclipse-hono-service-auth-855bdf9bc9-2ncw8
- eclipse-hono-service-device-registry-0

I'll try to deploy the customized deployment and update with the results. Thank you @sophokles73
@sophokles73 Same issue with the customized deployment.
When I run `kubectl get pvc`, it shows the following output:

There is nothing to show for PVs. I think the problem is a missing PersistentVolume. I'm not familiar with volumes. Can someone guide me on how to write a YAML file to fix this issue? Thanks in advance.
What is the output of `kubectl get storageclass`?
There isn't any storage class available.
That seems to be the problem. Please consult the Kubernetes documentation for details on how to configure dynamic volume provisioning in your cluster.
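As an illustration (not part of the original thread): on a small cluster without a dynamic provisioner, one alternative is to create a no-provisioner StorageClass plus a hostPath PersistentVolume by hand, which a PVC with a matching storageClassName and size can then bind to. The names, path, and size below are made up; the device registry's actual PVC requirements come from the Hono chart:

```yaml
# Manual provisioning sketch (names/paths/sizes are illustrative).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: device-registry-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/device-registry
```

Note that a hostPath volume ties the data to one node, which may be acceptable on a two-node Raspberry Pi cluster but is not a general solution.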
I have fixed the problem with the PersistentVolume, and it is bound to the pod. But there is another issue which I'm pretty sure is related to the CPU architecture, because when I try to run the Docker image separately with this command:

```sh
docker run eclipse/hono-service-device-registry:1.0.0
```

this message prompts:
How can I build the image for the arm64 architecture, given that the image in the Docker registry is amd64?
Good point. I totally forgot about that ... If you have the whole tool chain on your Raspberry Pis, you could build from scratch on your Pi and use its local Docker registry to host the images.
What do you mean by tool chain? What are these tools? Can you give me some tips on how to customize the image for arm64?
Well, the JDK, maven, docker etc ... Building Hono is described in the Building from Source guide.
I built the images. Now I can run them inside my cluster successfully. I hope this will be my final question (if no other issue comes up): how can I edit the deployment files so that they use my preferred images instead of the default ones?
That's great to hear :-) Not sure what you mean with deployment files, though. Are you referring to the Helm chart?
I mean, how can I force Kubernetes to use my local images to deploy Hono, instead of the default images from the Docker repository?
You can set the names of the container images that are being used for Hono's components as configuration properties when installing the chart using helm. This requires that you tag the images with a value that is specific to the arm64 platform, e.g. something like `1.2.2-arm64` instead of just `1.2.2` for the amd64 platform.
You can do that by defining the `docker.image.additional.tag` Maven property when building, e.g.

```sh
mvn clean install -Pbuild-docker-image -Ddocker.image.additional.tag=1.2.2-arm64
```
This will add the `1.2.2-arm64` tag to each of the container images being built.
After the build, create a custom.yaml file:

```yaml
deviceRegistryExample:
  imageName: eclipse/hono-service-device-registry-file:1.2.2-arm64
authServer:
  imageName: eclipse/hono-service-auth:1.2.2-arm64
adapters:
  amqp:
    imageName: eclipse/hono-adapter-amqp-vertx:1.2.2-arm64
  mqtt:
    imageName: eclipse/hono-adapter-mqtt-vertx:1.2.2-arm64
  http:
    imageName: eclipse/hono-adapter-http-vertx:1.2.2-arm64
```
Then install Hono using the chart:

```sh
helm install -f /path/to/custom.yaml hono eclipse-iot/hono
```
Thank you so much. The problem with the Hono images is fixed and the containers are running. But I have the same issue with the images from quay.io/repository: Any idea how I can also build those images locally?
That's a different story. However, I guess the general approach of building the corresponding images on/for arm64 should work as well. The build process will be a different one, though, because Apache ActiveMQ Artemis and Apache Qpid are separate projects. I think you should be able to find information regarding how to build those on their respective project home pages.
Finally, I succeeded in running the images locally:
But I'm facing a new problem which I think is related to the Artemis instance. Here is the output of `kubectl describe pod -n hono` for the Hono registry container:

More info:
I built the Docker image for ActiveMQ Artemis from here. I think it's because the default arguments they used in their Dockerfile mismatch Hono's settings.
How can I force Kubernetes (when running Hono) to deploy Hono on a specific node in my cluster?
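This question is not answered later in the thread, but the standard Kubernetes mechanism for pinning workloads to a node is a nodeSelector (or node affinity) in the pod spec; whether and how the Hono chart of that version exposes such a property would need to be checked in the chart's values. A generic sketch, with a made-up label key and node name:

```yaml
# Generic Kubernetes sketch (label key/value and node name are illustrative).
# First label the target node, e.g.:
#   kubectl label node my-worker-node hono-target=true
# Then restrict a pod (or a deployment's pod template) to nodes with that label:
apiVersion: v1
kind: Pod
metadata:
  name: node-pinning-example
spec:
  nodeSelector:
    hono-target: "true"
  containers:
    - name: example
      image: busybox:1.36
      command: ["sleep", "3600"]
```

The scheduler will only place the pod on nodes carrying the `hono-target=true` label.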
IMHO the Artemis image that we are using in Hono has been built from https://github.com/EnMasseProject/artemis-docker-base
I'm trying to run Hono inside my own Kubernetes cluster made up of 2 Raspberry Pis (one as the master, the other as a worker node). The Hono version I'm trying to install is 1.0.2. I ran `helm install hono eclipse-hono/ --dependency-update --namespace hono` on the master node and it showed the successful-installation prompt. But when I run `kubectl get pods -n hono` to see the running pods, I'm getting this output: Also, here is the output of `kubectl get svc -n hono`: I've been stuck in this situation for 2 days and haven't found any solution anywhere. I would be thankful if anyone could help me.