Closed by ahus1 1 year ago
This is something we've had on the radar for a while, but I don't think our team has access to any M1 Macs or other arm64 devices that we could use to verify these images actually work, without resorting to some virtualization.
https://github.com/uraimo/run-on-arch-action looks useful for setting this up so upstream CI could produce different image archs.
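As a sketch of what such a CI step could produce, a `docker buildx` invocation can emit one manifest list covering both architectures. The builder setup, image name, and tag below are illustrative, not the project's actual CI; the docker commands are commented out since they need a Docker daemon with QEMU binfmt support:

```shell
# Illustrative multi-arch build sketch (image name/tag are hypothetical).
PLATFORMS="linux/amd64,linux/arm64"
IMAGE="quay.io/example/cryostat:2.3.0-snapshot"

# One-time builder setup with QEMU emulation for the foreign architecture:
#   docker run --privileged --rm tonistiigi/binfmt --install all
#   docker buildx create --name multiarch --use

# Build both variants and push them as a single manifest list:
#   docker buildx build --platform "$PLATFORMS" -t "$IMAGE" --push .

echo "would build $IMAGE for $PLATFORMS"
```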
@ebaron ^
This should be fairly easy on the operator side with Golang cross-compilation: https://sdk.operatorframework.io/docs/advanced-topics/multi-arch/
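The cross-compilation side really is simple, since the Go toolchain targets other OS/arch pairs natively. A minimal sketch (the output path and package layout are hypothetical, and the actual `go build` is commented out since it needs the operator's source tree):

```shell
# Sketch: cross-compiling an operator binary for each target architecture.
for arch in amd64 arm64; do
  # CGO disabled means no target-arch C toolchain is required.
  # CGO_ENABLED=0 GOOS=linux GOARCH="$arch" go build -o "bin/manager-linux-$arch" ./main.go
  echo "bin/manager-linux-$arch"
done
```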
@ahus1 would you or somebody else working on keycloak-benchmark have the spare time to try out a Cryostat ARM image? I have the PR here: https://github.com/cryostatio/cryostat/pull/1352 . I can cross-arch build that and push it to my personal quay.io image repo, or I can help walk through the basics of setting up to build the Cryostat image directly. I'd just like confirmation from someone with access to real ARM hardware that the image actually works as expected.
Hi @andrewazores - I think @kami619 would be the one to try it on an M1.
I suggest the following:
Thanks!
@ahus1 @kami619 quay.io/andrewazores/cryostat:2.3.0-snapshot-linux-arm64
is up now for testing
Thanks @andrewazores and @ahus1, I got some time to pick this up. I will test the image and let you know.
@andrewazores it looks like the image with that tag is no longer in the quay.io/andrewazores registry. Would you please push it again? Apologies for not getting to this sooner, right after you created the image for us.
Failed to pull image "quay.io/andrewazores/cryostat:2.3.0-snapshot-linux-arm64": rpc error: code = Unknown desc = Error response from daemon: manifest for quay.io/andrewazores/cryostat:2.3.0-snapshot-linux-arm64 not found: manifest unknown: manifest unknown
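A `manifest unknown` response like this can be checked for before deploying; a minimal sketch, assuming `skopeo` is available (the inspect call is commented out since it needs network access to the registry):

```shell
# Sketch: verify a tag exists in the registry before pointing a Pod at it.
REF="quay.io/andrewazores/cryostat:2.3.0-snapshot-linux-arm64"

# skopeo inspect "docker://$REF" >/dev/null 2>&1 \
#   && echo "manifest found: $REF" \
#   || echo "manifest unknown: $REF"

echo "$REF"
```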
@kami619 ah right, sorry about that. I've rebuilt and pushed the image again.
Thanks for that @andrewazores, now I can pull the image and the pod gets created with it:
> kubectl get events -n keycloak
LAST SEEN TYPE REASON OBJECT MESSAGE
6m18s Normal Scheduled pod/cryostat-7565c57798-rdjc2 Successfully assigned keycloak/cryostat-7565c57798-rdjc2 to minikube
6m16s Normal Pulling pod/cryostat-7565c57798-rdjc2 Pulling image "quay.io/andrewazores/cryostat:2.3.0-snapshot-linux-arm64"
5m40s Normal Pulled pod/cryostat-7565c57798-rdjc2 Successfully pulled image "quay.io/andrewazores/cryostat:2.3.0-snapshot-linux-arm64" in 35.883221762s
5m40s Normal Created pod/cryostat-7565c57798-rdjc2 Created container cryostat
5m40s Normal Started pod/cryostat-7565c57798-rdjc2 Started container cryostat
5m40s Normal Pulling pod/cryostat-7565c57798-rdjc2 Pulling image "quay.io/cryostat/cryostat-grafana-dashboard:2.1.0"
5m13s Normal Pulled pod/cryostat-7565c57798-rdjc2 Successfully pulled image "quay.io/cryostat/cryostat-grafana-dashboard:2.1.0" in 26.370057391s
5m6s Normal Created pod/cryostat-7565c57798-rdjc2 Created container cryostat-grafana
5m6s Normal Started pod/cryostat-7565c57798-rdjc2 Started container cryostat-grafana
5m13s Normal Pulling pod/cryostat-7565c57798-rdjc2 Pulling image "quay.io/cryostat/jfr-datasource:2.1.0"
5m7s Normal Pulled pod/cryostat-7565c57798-rdjc2 Successfully pulled image "quay.io/cryostat/jfr-datasource:2.1.0" in 6.556162334s
5m6s Normal Created pod/cryostat-7565c57798-rdjc2 Created container cryostat-jfr-datasource
5m6s Normal Started pod/cryostat-7565c57798-rdjc2 Started container cryostat-jfr-datasource
5m6s Normal Pulled pod/cryostat-7565c57798-rdjc2 Container image "quay.io/cryostat/cryostat-grafana-dashboard:2.1.0" already present on machine
5m6s Normal Pulled pod/cryostat-7565c57798-rdjc2 Container image "quay.io/cryostat/jfr-datasource:2.1.0" already present on machine
75s Warning BackOff pod/cryostat-7565c57798-rdjc2 Back-off restarting failed container
4m56s Warning BackOff pod/cryostat-7565c57798-rdjc2 Back-off restarting failed container
4m56s Warning Unhealthy pod/cryostat-7565c57798-rdjc2 Startup probe failed: Get "http://172.17.0.17:8181/health": dial tcp 172.17.0.17:8181: connect: connection refused
6m18s Normal SuccessfulCreate replicaset/cryostat-7565c57798 Created pod: cryostat-7565c57798-rdjc2
5m35s Normal Sync ingress/cryostat-grafana Scheduled for sync
5m36s Normal Sync ingress/cryostat-ingress Scheduled for sync
6m18s Normal ExternalProvisioning persistentvolumeclaim/cryostat waiting for a volume to be created, either by external provisioner "k8s.io/minikube-hostpath" or manually created by system administrator
6m18s Normal Provisioning persistentvolumeclaim/cryostat External provisioner is provisioning volume for claim "keycloak/cryostat"
6m18s Normal ProvisioningSucceeded persistentvolumeclaim/cryostat Successfully provisioned volume pvc-128aa435-65c7-49fe-8d45-233af584a513
6m18s Normal ScalingReplicaSet deployment/cryostat Scaled up replica set cryostat-7565c57798 to 1
However, I'm hitting a server-side error for a newish environment variable which we seem to have not defined in our config @ahus1: CRYOSTAT_JMX_CREDENTIALS_DB_PASSWORD. Maybe this was introduced after the 2.1.1 version we were using in our tooling, since the custom image for the ARM chipset is based on 2.3.0.
Apr 05, 2023 7:00:07 PM io.cryostat.core.log.Logger error
SEVERE: Exception thrown
java.lang.RuntimeException: javax.naming.ConfigurationException: Environment variable CRYOSTAT_JMX_CREDENTIALS_DB_PASSWORD must be set and non-blank
at io.cryostat.storage.StorageModule.provideEntityManagerFactory(StorageModule.java:95)
at io.cryostat.storage.StorageModule_ProvideEntityManagerFactoryFactory.provideEntityManagerFactory(StorageModule_ProvideEntityManagerFactoryFactory.java:44)
at io.cryostat.storage.StorageModule_ProvideEntityManagerFactoryFactory.get(StorageModule_ProvideEntityManagerFactoryFactory.java:35)
at io.cryostat.storage.StorageModule_ProvideEntityManagerFactoryFactory.get(StorageModule_ProvideEntityManagerFactoryFactory.java:13)
at dagger.internal.DoubleCheck.get(DoubleCheck.java:47)
at io.cryostat.storage.StorageModule_ProvideEntityManagerFactory.get(StorageModule_ProvideEntityManagerFactory.java:35)
at io.cryostat.storage.StorageModule_ProvideEntityManagerFactory.get(StorageModule_ProvideEntityManagerFactory.java:13)
at dagger.internal.DoubleCheck.get(DoubleCheck.java:47)
at io.cryostat.discovery.DiscoveryModule_ProvidePluginInfoDaoFactory.get(DiscoveryModule_ProvidePluginInfoDaoFactory.java:43)
at io.cryostat.discovery.DiscoveryModule_ProvidePluginInfoDaoFactory.get(DiscoveryModule_ProvidePluginInfoDaoFactory.java:14)
at dagger.internal.DoubleCheck.get(DoubleCheck.java:47)
at io.cryostat.discovery.DiscoveryModule_ProvideDiscoveryStorageFactory.get(DiscoveryModule_ProvideDiscoveryStorageFactory.java:76)
at io.cryostat.discovery.DiscoveryModule_ProvideDiscoveryStorageFactory.get(DiscoveryModule_ProvideDiscoveryStorageFactory.java:21)
at dagger.internal.DoubleCheck.get(DoubleCheck.java:47)
at dagger.internal.DelegateFactory.get(DelegateFactory.java:36)
at io.cryostat.configuration.ConfigurationModule_ProvideCredentialsManagerFactory.get(ConfigurationModule_ProvideCredentialsManagerFactory.java:68)
at io.cryostat.configuration.ConfigurationModule_ProvideCredentialsManagerFactory.get(ConfigurationModule_ProvideCredentialsManagerFactory.java:20)
at dagger.internal.DoubleCheck.get(DoubleCheck.java:47)
at dagger.internal.DelegateFactory.get(DelegateFactory.java:36)
at io.cryostat.DaggerCryostat_Client$ClientImpl.credentialsManager(DaggerCryostat_Client.java:1641)
at io.cryostat.Cryostat.start(Cryostat.java:80)
at io.vertx.core.impl.DeploymentManager.lambda$doDeploy$5(DeploymentManager.java:196)
at io.vertx.core.impl.ContextInternal.dispatch(ContextInternal.java:264)
at io.vertx.core.impl.ContextInternal.dispatch(ContextInternal.java:246)
at io.vertx.core.impl.EventLoopContext.lambda$runOnContext$0(EventLoopContext.java:43)
at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: javax.naming.ConfigurationException: Environment variable CRYOSTAT_JMX_CREDENTIALS_DB_PASSWORD must be set and non-blank
... 33 more
Yes, that's right. This environment variable is used for at-rest encryption of JMX credentials that Cryostat uses to connect to target applications.
In a real production environment this should be a strong passphrase and stored in a Secret. For a testing harness in an ephemeral environment, you can just pick any value you like for that environment variable.
Ok, thanks. I will try to get that secret into the config. If I am not mistaken, this should be a good example for that, right? https://github.com/cryostatio/cryostat-operator/blob/main/docs/config.md?plain=1#L297
That's a good example for creating the Secret, but there is also some Operator automation happening there with the .spec.jmxCredentialsDatabaseOptions.databaseSecretName field, which I think you will be missing. You would also need to configure env-from-secret to make that Secret visible to Cryostat via that environment variable.
For an initial pass just to prove the concept, I think it would suffice to just hardcode the environment variable in your Cryostat deployment. Once that hurdle is cleared and the Cryostat installation is running as expected you can circle back to clean it up with a mounted Secret.
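A minimal sketch of both steps, the quick hardcode and the later Secret-backed cleanup (the namespace and Deployment name match this thread, the Secret name is an assumption, and the kubectl calls are commented out because they need a live cluster):

```shell
# Sketch: unblock testing first, then move the passphrase into a Secret.
NS=keycloak
PASS=$(head -c 24 /dev/urandom | base64)   # any strong throwaway value works here

# Proof of concept: inject the variable directly into the Deployment.
# kubectl -n "$NS" set env deployment/cryostat \
#     CRYOSTAT_JMX_CREDENTIALS_DB_PASSWORD="$PASS"

# Cleanup: store it in a Secret whose key matches the variable name, then
# import it; `kubectl set env --from` maps Secret keys to env var names.
# kubectl -n "$NS" create secret generic cryostat-jmx-db \
#     --from-literal=CRYOSTAT_JMX_CREDENTIALS_DB_PASSWORD="$PASS"
# kubectl -n "$NS" set env deployment/cryostat --from=secret/cryostat-jmx-db

echo "generated a ${#PASS}-character passphrase"
```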
Okey dokey. On it now. @andrewazores thanks much.
Ok, so it looks like we are past that issue. But it's probably failing to connect to a running Keycloak application to instrument, and alas the Keycloak image is also plagued by a missing ARM image, so I will take that up with @ahus1. Sharing the log to confirm my theory from your end @andrewazores:
> kubectl logs -n keycloak cryostat-7c7797d6d4-jjd6t
Defaulted container "cryostat" out of: cryostat, cryostat-grafana, cryostat-jfr-datasource
+------------------------------------------+
| Wed Apr 5 19:37:25 UTC 2023 |
| |
| /truststore is empty; no certificates to import |
+------------------------------------------+
+------------------------------------------+
| Wed Apr 5 19:37:25 UTC 2023 |
| |
| JMX Auth Disabled |
+------------------------------------------+
+------------------------------------------+
| Wed Apr 5 19:37:25 UTC 2023 |
| |
| SSL Disabled |
+------------------------------------------+
+ exec java -XX:+CrashOnOutOfMemoryError -Dcom.sun.management.jmxremote.port=9091 -Dcom.sun.management.jmxremote.rmi.port=9091 -Djavax.net.ssl.trustStore=/opt/cryostat.d/truststore.p12 -Djavax.net.ssl.trustStorePassword=uRt7Nh2JM45qlWldJP3w98m-uxAUTyc8 -Dcom.sun.management.jmxremote.autodiscovery=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.registry.ssl=false -cp '/app/resources:/app/classes:/app/libs/cryostat-core-2.19.1.jar:/app/libs/common-7.1.1.jar:/app/libs/encoder-1.2.2.jar:/app/libs/flightrecorder-7.1.1.jar:/app/libs/flightrecorder.rules-7.1.1.jar:/app/libs/flightrecorder.rules.jdk-7.1.1.jar:/app/libs/nashorn-core-15.4.jar:/app/libs/asm-7.3.1.jar:/app/libs/asm-commons-7.3.1.jar:/app/libs/asm-analysis-7.3.1.jar:/app/libs/asm-tree-7.3.1.jar:/app/libs/asm-util-7.3.1.jar:/app/libs/openshift-client-6.3.1.jar:/app/libs/openshift-client-api-6.3.1.jar:/app/libs/openshift-model-6.3.1.jar:/app/libs/kubernetes-model-common-6.3.1.jar:/app/libs/jackson-annotations-2.14.1.jar:/app/libs/openshift-model-clusterautoscaling-6.3.1.jar:/app/libs/openshift-model-operator-6.3.1.jar:/app/libs/openshift-model-operatorhub-6.3.1.jar:/app/libs/openshift-model-machine-6.3.1.jar:/app/libs/openshift-model-whereabouts-6.3.1.jar:/app/libs/openshift-model-monitoring-6.3.1.jar:/app/libs/openshift-model-storageversionmigrator-6.3.1.jar:/app/libs/openshift-model-tuned-6.3.1.jar:/app/libs/openshift-model-console-6.3.1.jar:/app/libs/openshift-model-config-6.3.1.jar:/app/libs/openshift-model-machineconfig-6.3.1.jar:/app/libs/openshift-model-miscellaneous-6.3.1.jar:/app/libs/openshift-model-hive-6.3.1.jar:/app/libs/openshift-model-installer-6.3.1.jar:/app/libs/generex-1.0.2.jar:/app/libs/automaton-1.11-8.jar:/app/libs/kubernetes-client-6.3.1.jar:/app/libs/kubernetes-client-api-6.3.1.jar:/app/libs/kubernetes-model-core-6.3.1.jar:/app/libs/kubernetes-model-gatewayapi-6.3.1.jar:/app/libs/kubernetes-model-rbac-6.3.1.jar:/
app/libs/kubernetes-model-admissionregistration-6.3.1.jar:/app/libs/kubernetes-model-apps-6.3.1.jar:/app/libs/kubernetes-model-autoscaling-6.3.1.jar:/app/libs/kubernetes-model-apiextensions-6.3.1.jar:/app/libs/kubernetes-model-batch-6.3.1.jar:/app/libs/kubernetes-model-certificates-6.3.1.jar:/app/libs/kubernetes-model-coordination-6.3.1.jar:/app/libs/kubernetes-model-discovery-6.3.1.jar:/app/libs/kubernetes-model-events-6.3.1.jar:/app/libs/kubernetes-model-extensions-6.3.1.jar:/app/libs/kubernetes-model-flowcontrol-6.3.1.jar:/app/libs/kubernetes-model-networking-6.3.1.jar:/app/libs/kubernetes-model-metrics-6.3.1.jar:/app/libs/kubernetes-model-policy-6.3.1.jar:/app/libs/kubernetes-model-scheduling-6.3.1.jar:/app/libs/kubernetes-model-storageclass-6.3.1.jar:/app/libs/kubernetes-model-node-6.3.1.jar:/app/libs/snakeyaml-1.33.jar:/app/libs/jackson-dataformat-yaml-2.14.1.jar:/app/libs/jackson-datatype-jsr310-2.14.1.jar:/app/libs/jackson-databind-2.14.1.jar:/app/libs/jackson-core-2.14.1.jar:/app/libs/kubernetes-httpclient-okhttp-6.3.1.jar:/app/libs/okhttp-3.12.12.jar:/app/libs/okio-1.15.0.jar:/app/libs/logging-interceptor-3.12.12.jar:/app/libs/zjsonpatch-0.3.0.jar:/app/libs/dagger-2.45.jar:/app/libs/javax.inject-1.jar:/app/libs/commons-lang3-3.12.0.jar:/app/libs/commons-codec-1.15.jar:/app/libs/commons-io-2.11.0.jar:/app/libs/commons-validator-1.7.jar:/app/libs/commons-beanutils-1.9.4.jar:/app/libs/commons-digester-2.1.jar:/app/libs/commons-logging-1.2.jar:/app/libs/commons-collections-3.2.2.jar:/app/libs/httpclient-4.5.13.jar:/app/libs/httpcore-4.4.13.jar:/app/libs/vertx-web-4.3.7.jar:/app/libs/vertx-web-common-4.3.7.jar:/app/libs/vertx-auth-common-4.3.7.jar:/app/libs/vertx-bridge-common-4.3.7.jar:/app/libs/vertx-core-4.3.7.jar:/app/libs/netty-common-4.1.86.Final.jar:/app/libs/netty-buffer-4.1.86.Final.jar:/app/libs/netty-transport-4.1.86.Final.jar:/app/libs/netty-handler-4.1.86.Final.jar:/app/libs/netty-transport-native-unix-common-4.1.86.Final.jar:/app/libs/netty-codec-
4.1.86.Final.jar:/app/libs/netty-handler-proxy-4.1.86.Final.jar:/app/libs/netty-codec-socks-4.1.86.Final.jar:/app/libs/netty-codec-http-4.1.86.Final.jar:/app/libs/netty-codec-http2-4.1.86.Final.jar:/app/libs/netty-resolver-4.1.86.Final.jar:/app/libs/netty-resolver-dns-4.1.86.Final.jar:/app/libs/netty-codec-dns-4.1.86.Final.jar:/app/libs/vertx-web-client-4.3.7.jar:/app/libs/vertx-uri-template-4.3.7.jar:/app/libs/vertx-web-graphql-4.3.7.jar:/app/libs/graphql-java-19.2.jar:/app/libs/java-dataloader-3.2.0.jar:/app/libs/reactive-streams-1.0.3.jar:/app/libs/graphql-java-extended-scalars-19.0.jar:/app/libs/nimbus-jose-jwt-9.31.jar:/app/libs/jcip-annotations-1.0-1.jar:/app/libs/bcprov-jdk18on-1.71.jar:/app/libs/jasypt-1.9.3.jar:/app/libs/jasypt-hibernate5-1.9.3.jar:/app/libs/slf4j-jdk14-1.7.36.jar:/app/libs/slf4j-api-1.7.36.jar:/app/libs/gson-2.10.1.jar:/app/libs/caffeine-3.1.1.jar:/app/libs/jsoup-1.15.4.jar:/app/libs/hibernate-core-5.6.14.Final.jar:/app/libs/jboss-logging-3.4.3.Final.jar:/app/libs/javax.persistence-api-2.2.jar:/app/libs/byte-buddy-1.12.18.jar:/app/libs/antlr-2.7.7.jar:/app/libs/jboss-transaction-api_1.2_spec-1.1.1.Final.jar:/app/libs/jandex-2.4.2.Final.jar:/app/libs/classmate-1.5.1.jar:/app/libs/javax.activation-api-1.2.0.jar:/app/libs/hibernate-commons-annotations-5.1.2.Final.jar:/app/libs/jaxb-api-2.3.1.jar:/app/libs/jaxb-runtime-2.3.1.jar:/app/libs/txw2-2.3.1.jar:/app/libs/istack-commons-runtime-3.0.7.jar:/app/libs/stax-ex-1.8.jar:/app/libs/FastInfoset-1.2.15.jar:/app/libs/hibernate-types-55-2.21.1.jar:/app/libs/h2-2.1.214.jar:/app/libs/postgresql-42.5.1.jar:/opt/cryostat.d/clientlib.d/*' @/app/jib-main-class-file
Apr 05, 2023 7:37:26 PM io.cryostat.core.log.Logger info
INFO: Local config path set as /opt/cryostat.d/conf.d
Apr 05, 2023 7:37:26 PM org.hibernate.jpa.internal.util.LogHelper logPersistenceUnitInformation
INFO: HHH000204: Processing PersistenceUnitInfo [name: io.cryostat]
Apr 05, 2023 7:37:26 PM org.hibernate.Version logVersion
INFO: HHH000412: Hibernate ORM core version 5.6.14.Final
Apr 05, 2023 7:37:26 PM org.hibernate.annotations.common.reflection.java.JavaReflectionManager <clinit>
INFO: HCANN000001: Hibernate Commons Annotations {5.1.2.Final}
Apr 05, 2023 7:37:26 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl configure
WARN: HHH10001002: Using Hibernate built-in connection pool (not for production use!)
Apr 05, 2023 7:37:26 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl buildCreator
INFO: HHH10001005: using driver [org.h2.Driver] at URL [jdbc:h2:mem:cryostat;DB_CLOSE_DELAY=-1;INIT=create domain if not exists jsonb as varchar]
Apr 05, 2023 7:37:26 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl buildCreator
INFO: HHH10001001: Connection properties: {password=****, user=cryostat}
Apr 05, 2023 7:37:26 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl buildCreator
INFO: HHH10001003: Autocommit mode: false
Apr 05, 2023 7:37:26 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl$PooledConnections <init>
INFO: HHH000115: Hibernate connection pool size: 20 (min=1)
Apr 05, 2023 7:37:27 PM org.hibernate.dialect.Dialect <init>
INFO: HHH000400: Using dialect: org.hibernate.dialect.H2Dialect
Apr 05, 2023 7:37:28 PM org.hibernate.resource.transaction.backend.jdbc.internal.DdlTransactionIsolatorNonJtaImpl getIsolatedConnection
INFO: HHH10001501: Connection obtained from JdbcConnectionAccess [org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator$ConnectionProviderJdbcConnectionAccess@529ba931] for (non-JTA) DDL execution was not in auto-commit mode; the Connection 'local transaction' will be committed and the Connection will be set into auto-commit mode.
Apr 05, 2023 7:37:28 PM org.hibernate.resource.transaction.backend.jdbc.internal.DdlTransactionIsolatorNonJtaImpl getIsolatedConnection
INFO: HHH10001501: Connection obtained from JdbcConnectionAccess [org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator$ConnectionProviderJdbcConnectionAccess@298dff00] for (non-JTA) DDL execution was not in auto-commit mode; the Connection 'local transaction' will be committed and the Connection will be set into auto-commit mode.
Apr 05, 2023 7:37:28 PM org.hibernate.engine.transaction.jta.platform.internal.JtaPlatformInitiator initiateService
INFO: HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
Apr 05, 2023 7:37:28 PM io.cryostat.core.log.Logger info
INFO: cryostat started, version: v2.0.0-SNAPSHOT-777-g51525c14.
Apr 05, 2023 7:37:28 PM io.cryostat.core.log.Logger info
INFO: Selected NoSSL strategy
Apr 05, 2023 7:37:28 PM io.cryostat.core.log.Logger warn
WARNING: No available SSL certificates. Fallback to plain HTTP.
Apr 05, 2023 7:37:28 PM io.cryostat.core.log.Logger info
INFO: Deploying io.cryostat.net.HttpServer Verticle
Apr 05, 2023 7:37:28 PM io.cryostat.core.log.Logger info
INFO: HTTPS service running on https://cryostat.192.168.105.6.nip.io:443
Apr 05, 2023 7:37:28 PM io.cryostat.core.log.Logger info
INFO: Deployed io.cryostat.net.HttpServer Verticle [bbee9086-cf71-4df9-a0d8-999e549e8b24]
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Selecting platform default AuthManager "io.cryostat.net.NoopAuthManager"
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Local save path for flight recordings set as /opt/cryostat.d/recordings.d
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Deploying io.cryostat.net.web.WebServer Verticle
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Deployed io.cryostat.net.web.WebServer Verticle [ebe093d3-8f3b-4873-9472-4f69fe6089e1]
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Deploying io.cryostat.messaging.MessagingServer Verticle
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Max concurrent WebSocket connections: 2147483647
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Deployed io.cryostat.messaging.MessagingServer Verticle [d5b766f1-9e6f-4f88-89e9-1765dd05cbb9]
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Deploying io.cryostat.rules.RuleProcessor Verticle
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Deployed io.cryostat.rules.RuleProcessor Verticle [583acfb3-5788-4d19-9d61-dd183916d649]
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Deploying io.cryostat.recordings.RecordingMetadataManager Verticle
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Deployed io.cryostat.recordings.RecordingMetadataManager Verticle [5611537b-06de-44e2-a4c9-1f2ce4fb116e]
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Deploying io.cryostat.discovery.DiscoveryStorage Verticle
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Selected KubeApiPlatformStrategy Strategy
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Deploying io.cryostat.discovery.BuiltInDiscovery Verticle
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Starting built-in discovery with KubeApiPlatformClient
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Starting built-in discovery with CustomTargetPlatformClient
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Deployed io.cryostat.discovery.BuiltInDiscovery Verticle [22ac34a0-3637-445f-b712-d4f2bce3f148]
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Deployed io.cryostat.discovery.DiscoveryStorage Verticle [81a7b859-d8de-4f94-b25a-f217185d089f]
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Starting archive migration
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Skipping archive migration: appears to be a special location: file-uploads
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Successfully migrated archives
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Beginning to prune potentially stale metadata...
Apr 05, 2023 7:37:29 PM io.cryostat.core.log.Logger info
INFO: Successfully pruned all stale metadata
Apr 05, 2023 7:37:53 PM io.cryostat.core.log.Logger warn
WARNING: Exception thrown
java.io.IOException: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /127.0.0.1:8080
at io.cryostat.net.web.http.generic.HealthGetHandler.lambda$checkUri$0(HealthGetHandler.java:170)
at io.vertx.ext.web.client.impl.HttpContext.handleFailure(HttpContext.java:393)
at io.vertx.ext.web.client.impl.HttpContext.execute(HttpContext.java:387)
at io.vertx.ext.web.client.impl.HttpContext.next(HttpContext.java:362)
at io.vertx.ext.web.client.impl.HttpContext.fire(HttpContext.java:329)
at io.vertx.ext.web.client.impl.HttpContext.fail(HttpContext.java:310)
That looks like Cryostat trying to talk to its jfr-datasource instance and failing. That container image is not built for ARM yet either, so I think that container is probably failing to run in your environment.
You can try to temporarily work around that issue by removing that env var definition so that Cryostat knows it shouldn't expect to find a jfr-datasource service. Likewise, you probably also need to remove the Grafana dashboard definition for now for similar reasons. This will disable the "View in Grafana ..." feature in the Cryostat UI, but other core Cryostat features will still work.
@andrewazores I can see that the cryostat container is in a running state now; the other two, jfr-datasource and cryostat-grafana, are expected to be in a failure condition:
keycloak cryostat-864b8bfd6d-z6klm 1/3 CrashLoopBackOff 32 (23s ago) 58m
and the logs look normal:
Apr 05, 2023 8:30:40 PM io.cryostat.core.log.Logger info
INFO: Starting built-in discovery with CustomTargetPlatformClient
Apr 05, 2023 8:30:40 PM io.cryostat.core.log.Logger info
INFO: Deployed io.cryostat.discovery.BuiltInDiscovery Verticle [3a67eb89-3c48-4343-b5b0-06c09e33504e]
Apr 05, 2023 8:30:40 PM io.cryostat.core.log.Logger info
INFO: Deployed io.cryostat.discovery.DiscoveryStorage Verticle [f3105476-7cbc-4592-8a35-e50dd90e41d0]
Apr 05, 2023 8:30:40 PM io.cryostat.core.log.Logger info
INFO: Starting archive migration
Apr 05, 2023 8:30:40 PM io.cryostat.core.log.Logger info
INFO: Skipping archive migration: appears to be a special location: file-uploads
Apr 05, 2023 8:30:40 PM io.cryostat.core.log.Logger info
INFO: Successfully migrated archives
Apr 05, 2023 8:30:40 PM io.cryostat.core.log.Logger info
INFO: Beginning to prune potentially stale metadata...
Apr 05, 2023 8:30:40 PM io.cryostat.core.log.Logger info
INFO: Successfully pruned all stale metadata
Apr 05, 2023 8:30:56 PM org.slf4j.impl.JDK14LoggerAdapter fillCallerData
INFO: 172.17.0.1 - - [Wed, 5 Apr 2023 20:30:56 GMT] 10ms "GET /health HTTP/1.1" 200 213 bytes "-" "kube-probe/1.25"
Apr 05, 2023 8:31:06 PM org.slf4j.impl.JDK14LoggerAdapter fillCallerData
INFO: 172.17.0.1 - - [Wed, 5 Apr 2023 20:31:06 GMT] 31ms "GET /health HTTP/1.1" 200 213 bytes "-" "kube-probe/1.25"
Apr 05, 2023 8:31:16 PM org.slf4j.impl.JDK14LoggerAdapter fillCallerData
INFO: 172.17.0.1 - - [Wed, 5 Apr 2023 20:31:16 GMT] 1ms "GET /health HTTP/1.1" 200 213 bytes "-" "kube-probe/1.25"
Apr 05, 2023 8:31:26 PM org.slf4j.impl.JDK14LoggerAdapter fillCallerData
However, I am getting a 503 when I try to access the GUI of the application. Any idea how I could find more info on this?
I think that's OpenShift/k8s responding 503 to you when you try to access Cryostat via its Service/Route. It thinks the whole application is unavailable because the jfr-datasource and grafana-dashboard containers are part of the same Pod as the Cryostat server, so only 1/3 of the Pod's containers are up, and therefore k8s refuses to send traffic into that Pod. If you remove the environment variables from the Cryostat container spec in the Deployment, as well as remove the Container definitions for those two services here and here, then the Pod should only expect to have one Cryostat container, and k8s will send your traffic to it.
In total I think the revised testing Deployment looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: cryostat
    app.kubernetes.io/name: cryostat
    kind: cryostat
  name: cryostat
  namespace: keycloak
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: cryostat
      kind: cryostat
  template:
    metadata:
      labels:
        app: cryostat
        kind: cryostat
      name: cryostat
      namespace: keycloak
    spec:
      containers:
        - env:
            - name: CRYOSTAT_JMX_CREDENTIALS_DB_PASSWORD
              value: "mystrongpassword"
            - name: CRYOSTAT_SSL_PROXIED
              value: "true"
            - name: CRYOSTAT_ALLOW_UNTRUSTED_SSL
              value: "true"
            - name: CRYOSTAT_WEB_PORT
              value: "8181"
            - name: CRYOSTAT_LISTEN_PORT
              value: "9090"
            - name: CRYOSTAT_CONFIG_PATH
              value: /opt/cryostat.d/conf.d
            - name: CRYOSTAT_ARCHIVE_PATH
              value: /opt/cryostat.d/recordings.d
            - name: CRYOSTAT_PROBE_TEMPLATE_PATH
              value: /opt/cryostat.d/templates.d
            - name: CRYOSTAT_CLIENTLIB_PATH
              value: /opt/cryostat.d/clientlib.d
            - name: CRYOSTAT_DISABLE_SSL
              value: "true"
            - name: GRAFANA_DASHBOARD_EXT_URL
              value: https://cryostat-grafana.{{ .Values.hostname }}/
            - name: CRYOSTAT_WEB_HOST
              value: cryostat.{{ .Values.hostname }}
            - name: CRYOSTAT_EXT_WEB_PORT
              value: '443'
            - name: CRYOSTAT_DISABLE_JMX_AUTH
              value: 'true'
          image: quay.io/cryostat/cryostat:2.1.1
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /health
              port: 8181
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: cryostat
          ports:
            - containerPort: 8181
              protocol: TCP
            - containerPort: 9090
              protocol: TCP
            - containerPort: 9091
              protocol: TCP
          resources: {}
          startupProbe:
            failureThreshold: 18
            httpGet:
              path: /health
              port: 8181
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          volumeMounts:
            - mountPath: /opt/cryostat.d/conf.d
              name: cryostat
              subPath: config
            - mountPath: /opt/cryostat.d/recordings.d
              name: cryostat
              subPath: flightrecordings
            - mountPath: /opt/cryostat.d/templates.d
              name: cryostat
              subPath: templates
            - mountPath: /opt/cryostat.d/clientlib.d
              name: cryostat
              subPath: clientlib
            - mountPath: truststore
              name: cryostat
              subPath: truststore
      securityContext:
        fsGroup: 18500
      serviceAccountName: cryostat
      terminationGracePeriodSeconds: 30
      volumes:
        - name: cryostat
          persistentVolumeClaim:
            claimName: cryostat
Thanks @andrewazores, I will test this out and get back to you.
@andrewazores I can confirm now that the cryostat app comes up without issue. I will try to test it against a JVM target and see if it's working as expected, but it looks great so far, with no issues in bringing up the app itself.
And it works against a sample target: it generates JFR recordings, performs JVM analysis, etc. without issue.
A little more about the hardware it was tested on:
Hardware Overview:
Model Name: MacBook Pro
Model Identifier: MacBookPro18,3
Chip: Apple M1 Pro
Total Number of Cores: 10 (8 performance and 2 efficiency)
Memory: 32 GB
System Firmware Version: 8422.120.33
OS Loader Version: 8422.120.33
Awesome, thanks for the help testing out the image @kami619. I'll start moving forward with a setup to get our CI to build and publish images like this one. Would you mind lending a hand to verify the jfr-datasource and grafana-dashboard images down the line as well, once those are ready? It would just entail running the same test setup again, but without the Deployment modifications.
Sure thing, let me know when those images are ready. I'll keep an eye on this ticket for any updates.
- cryostat
- cryostat-reports
- cryostat-grafana-dashboard
- jfr-datasource
- cryostat-operator
@kami619 @ahus1 all of Cryostat's upstream container images are now published on quay.io as multiarch (amd64 + arm64) Linux images. Please do test them out and let me know if anything doesn't work as expected on your benchmarking setup.
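A quick way for testers to confirm a published manifest list covers their machine is to compare its architecture entries against the local arch. The image tag below is illustrative, and the docker call is commented out since it needs registry access; the `uname -m` to OCI architecture-name mapping is the testable part:

```shell
# Sketch: check whether a multi-arch image includes this machine's architecture.
ARCH=$(uname -m | sed -e 's/^x86_64$/amd64/' -e 's/^aarch64$/arm64/')

# docker manifest inspect quay.io/cryostat/cryostat:latest \
#   | grep '"architecture"'   # expect both "amd64" and "arm64" entries

echo "local OCI architecture: $ARCH"
```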
Describe the feature
At the moment, the images of Cryostat seem to be amd64-only. For a good developer experience on M1 Macs, it would be great to also have an arm64 image.
Any other information?
An image can support multiple architectures, as the Keycloak image does, for example: looking at its tags, they show two penguin icons, one for arm64 and one for amd64.
https://quay.io/repository/keycloak/keycloak?tab=tags