stackabletech / issues

This repository is only for issues that concern multiple repositories or don't fit into any specific repository.

chore(tracking): Test demos on nightly versions for 24.11 #658

Closed NickLarsenNZ closed 6 days ago

NickLarsenNZ commented 3 weeks ago

Pre-Release Demo Testing on Nightly

Part of https://github.com/stackabletech/issues/issues/647

This is testing:

  1. That the demos documented in nightly (with the updated product versions) still work.
  2. That the operators can be upgraded from the current release to the nightly release without negatively impacting the products.

> [!NOTE]
> Record any issues or anomalies during the process in a comment on this issue, e.g.:
>
> :green_circle: **airflow-scheduled-job**
>
> The CRD had been updated and I needed to change the following in the manifest:
> ...

Replace the items in the task lists below with the applicable Pull Requests (if any).

### Testing Demos on Nightly
- [ ] https://github.com/stackabletech/demos/pull/119
- [x] [airflow-scheduled-job](https://docs.stackable.tech/home/nightly/demos/airflow-scheduled-job) @xeniape
- [ ] https://github.com/stackabletech/demos/pull/127
- [ ] https://github.com/stackabletech/demos/pull/128
- [ ] https://github.com/stackabletech/hive-operator/pull/539
- [ ] https://github.com/stackabletech/demos/pull/129
- [x] [jupyterhub-pyspark-hdfs-anomaly-detection-taxi-data](https://docs.stackable.tech/home/nightly/demos/jupyterhub-pyspark-hdfs-anomaly-detection-taxi-data) @adwk67
- [x] [logging](https://docs.stackable.tech/home/nightly/demos/logging) @Techassi
- [x] https://github.com/stackabletech/demos/pull/124
- [ ] https://github.com/stackabletech/demos/pull/126
- [x] [signal-processing](https://docs.stackable.tech/home/nightly/demos/signal-processing) @Techassi
- [x] [spark-k8s-anomaly-detection-taxi-data](https://docs.stackable.tech/home/nightly/demos/spark-k8s-anomaly-detection-taxi-data) @adwk67
- [x] [trino-iceberg](https://docs.stackable.tech/home/nightly/demos/trino-iceberg) @adwk67
- [x] [trino-taxi-data](https://docs.stackable.tech/home/nightly/demos/trino-taxi-data) @labrenbe
- [x] Update this template with hints for upgrading helm charts easily (@Techassi)
- [ ] https://github.com/stackabletech/demos/pull/120
- [ ] https://github.com/stackabletech/demos/pull/121
- [ ] https://github.com/stackabletech/demos/pull/130
- [x] After all demo PRs are merged, quickly render the nightly docs to check for any `adoc` formatting issues. @NickLarsenNZ

Instructions

These instructions are for deploying the nightly demo, as well as upgrading the operators and CRDs.

```shell
# Install the demo (stable operators) for the previous release (24.7).
# For now, we have to deploy from the release branch, otherwise we get new changes.
# stackablectl doesn't yet support deploying a demo from a branch.
git checkout release-24.7
git pull
stackablectl --stack-file=stacks/stacks-v2.yaml --demo-file=demos/demos-v2.yaml demo install <DEMO_NAME>

# --- IMPORTANT ---
# Run through the nightly demo instructions (refer to the task list below).

# Get a list of installed operators.
stackablectl operator installed --output=plain

# --- OPTIONAL ---
# Sometimes it is necessary to upgrade Helm charts. Look for other Helm charts
# which might need updating.

# First, see which charts are installed. You can ignore the stackable-operator
# charts, or anything that might have been installed outside of this demo.
helm list

# Next, add the applicable Helm chart repositories. For example:
helm repo add minio https://charts.min.io/
helm repo add bitnami https://charts.bitnami.com/bitnami

# Finally, upgrade the charts to what is defined in `main`.
# These upgrades are being done in https://github.com/stackabletech/demos/pull/119
# For example:
helm upgrade minio minio/minio --version x.x.x
helm upgrade postgresql-hive bitnami/postgresql --version x.x.x
# --- OPTIONAL END ---

# Uninstall the operators.
stackablectl release uninstall 24.7

# Update the CRDs to the nightly version (on main).
# Repeat this for every operator used by the demo (use the list obtained
# earlier, before the operators were uninstalled).
kubectl replace -f https://raw.githubusercontent.com/stackabletech/commons-operator/main/deploy/helm/commons-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/...-operator/main/deploy/helm/...-operator/crds/crds.yaml

# Install the nightly version of the operators (again, use the list obtained
# earlier, before the operators were uninstalled).
stackablectl operator install commons ...

# Optionally update the product versions in the custom resources (to the latest
# non-experimental version for the new release), e.g.:
kubectl patch hbaseclusters/hbase --type='json' -p='[{"op": "replace", "path": "/spec/image/productVersion", "value":"x.x.x"}]'
```
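One way to confirm that the `kubectl replace` of the CRDs actually took effect is to check the served versions afterwards. A minimal sketch; the CRD name below is just one example, substitute the CRDs for the operators your demo actually uses:

```shell
# Print a CRD's name and its served versions (the hbase CRD here is an example).
kubectl get crd hbaseclusters.hbase.stackable.tech \
  -o jsonpath='{.metadata.name}{"\t"}{.spec.versions[*].name}{"\n"}'

# Or list all Stackable CRDs currently installed:
kubectl get crds -o name | grep stackable.tech
```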
xeniape commented 1 week ago

:green_circle: airflow-scheduled-job

Anomalies during upgrade process:

Anomalies during clean installation of nightly version:

sparkapp_dag issue fixed by https://github.com/stackabletech/demos/pull/125

NickLarsenNZ commented 1 week ago

> Anomalies during upgrade process: ... after increasing the memory resources, everything loaded

~~@xeniape, is there a PR for the resource increases? I have seen the OOM problem before, and I believe @sbernauer resolved it with more resources.~~ I see that the clean nightly deployment didn't have this problem, so I guess there is no PR required. But perhaps we need something in the release notes about resources needing bumping?

sbernauer commented 1 week ago

> I have seen the OOM problem before, and I believe @sbernauer resolved it with more resources

Yes, but I cannot find any commit for this anymore... I would be in favor of bumping the memory, maybe even the default resources of airflow-operator. I have seen so many customer requests because of OOMs that I would like to give the best experience, especially to new users trying out the demos (and playing around with them).
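Until the operator defaults are bumped, an individual demo can work around the OOM by overriding the role-group resources on the running cluster. A hedged sketch, assuming an `AirflowCluster` named `airflow` with a `schedulers` role and a `default` role group; the JSON pointer path and the `1Gi` value are illustrative assumptions, not a tested fix:

```shell
# Bump the scheduler role-group memory limit on a running AirflowCluster.
# The field path and the value are assumptions; adjust to the affected role.
kubectl patch airflowclusters/airflow --type='merge' \
  -p '{"spec":{"schedulers":{"roleGroups":{"default":{"config":{"resources":{"memory":{"limit":"1Gi"}}}}}}}}'
```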

Techassi commented 1 week ago

🟢 signal-processing

Findings during initial installation:

Findings after upgrade:

```
IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "_hyper_2_2_chunk_idx_scores_sr"
DETAIL:  Key ("time")=(2024-11-12 08:02:01.188596+00) already exists.
```

It seems like there is a conflict in the data from the first run (before the upgrade). This is expected behaviour according to @adwk67. The notebook keeps running and produces data.

NickLarsenNZ commented 1 week ago

🟢 data-lakehouse-iceberg-trino-spark

🟢 Release 24.7 and upgrade to dev

I believe the demo to be a little flaky:

Attempt 1

- Relatively large cluster provisioned (12 nodes, each with 4 cores, 20GB RAM, 30GB HDD).
- Weird errors appearing when installing the demo (see below).
- Then cluster provisioning failure after failure after failure. Using a different provisioner.
Weird errors

```
❯ stackablectl --version
stackablectl 24.7.1
```

```
❯ stackablectl --stack-file=stacks/stacks-v2.yaml --demo-file=demos/demos-v2.yaml demo install data-lakehouse-iceberg-trino-spark
ERROR Go wrapper function go_install_helm_release encountered an error: create: failed to create: Post "https://cp-d9963670-2e8c-43bc-b2ca-bf27294144f8.k8s.de-fra.ionos.com:13748/api/v1/namespaces/stackable-operators/secrets": unexpected EOF
    at src/helm.rs:290

An unrecoverable error occured: demo command error

Caused by these errors (recent errors listed first):
  1: failed to install demo "data-lakehouse-iceberg-trino-spark"
  2: failed to install stack
  3: failed to install release
  4: failed to install release using Helm
  5: failed to install Helm release
  6: helm FFI library call failed (create: failed to create: Post "https://cp-d9963670-2e8c-43bc-b2ca-bf27294144f8.k8s.de-fra.ionos.com:13748/api/v1/namespaces/stackable-operators/secrets": unexpected EOF)
ERROR Go wrapper function go_install_helm_release encountered an error: could not get server version from Kubernetes: Get "https://cp-d9963670-2e8c-43bc-b2ca-bf27294144f8.k8s.de-fra.ionos.com:13748/version?timeout=32s": net/http: TLS handshake timeout - error from a previous attempt: unexpected EOF
    at src/helm.rs:290
^C
```

```
❯ kubectl get ns
NAME                  STATUS   AGE
default               Active   11h
kube-node-lease       Active   11h
kube-public           Active   11h
kube-system           Active   11h
stackable-operators   Active   2m16s
```

Then again

```
❯ stackablectl --stack-file=stacks/stacks-v2.yaml --demo-file=demos/demos-v2.yaml demo install data-lakehouse-iceberg-trino-spark
WARN Unsuccessful data error parse: 404 page not found
    at src/client/mod.rs:467

An unrecoverable error occured: demo command error

Caused by these errors (recent errors listed first):
  1: failed to create Kubernetes client
  2: failed to run GVK discovery
  3: ApiError: "404 page not found\n": Failed to parse error data (ErrorResponse { status: "404 Not Found", message: "\"404 page not found\\n\"", reason: "Failed to parse error data", code: 404 })
  4: "404 page not found\n": Failed to parse error data
```
Attempt 2

- Provisioned a cluster via Replicated (AKS, Standard_DS5_v2, 6 nodes, each with 56GB RAM, 50GB disk).
- ⚠ I couldn't use the exact instance type in Replicated, so I just chose a suitable one that gave the equivalent amount of RAM as per the docs.
- Deployment successful, however I noticed the script could do with some tidying up.
- ⚠ PR with cleaned-up load-data job.
- Replicated creates nodes on private IPs, so the `stackablectl stacklet list` output isn't useful.
- ⚠ I tried using `kubectl port-forward`, and I can get to Minio, but not Nifi.
- ⚠ I tried using `replicated tunnel port expose`, but it is not supported in the cloud instances (AKS, EKS, GKE).
- I tried with RKE2 as well, but had the same port forwarding issues.
- ⚠ I tried using `kubectl port-forward`, and I can get to Minio, but not Nifi.
- ⚠ I tried using `replicated tunnel port expose`, and I could get to Minio but was not able to list objects. Couldn't connect to Nifi at all.
Steps

```shell
export DEMO=data-lakehouse-iceberg-trino-spark
WHO=$(whoami)

# Spec'd for the demo
replicated cluster create \
  --name "$DEMO" \
  --distribution rke2 \
  --instance-type r1.large \
  --version 1.30 \
  --disk 50 \
  --nodes 8 \
  --ttl 12h \
  --tag "owner=$WHO" \
  --wait 20m; play -n synth 0.4 tri 1000.0

export CLUSTER_ID=$(replicated cluster ls --output json | jq -r --arg name "$DEMO" '.[] | select(.name == $name) | .id')

replicated cluster shell --id "$CLUSTER_ID"

stackablectl --stack-file=stacks/stacks-v2.yaml --demo-file=demos/demos-v2.yaml demo install "$DEMO"

# Demo bits
stackablectl stacklet list

# Instead of stacklet list, I need to port-forward. k8s port forwards drop
# instantly (depending on the service), so...
# NOTE: this command requires the cluster ID, not the name
replicated cluster port expose "$CLUSTER_ID" --port 31427 --protocol http  # minio
replicated cluster port expose "$CLUSTER_ID" --port 31110 --protocol https # nifi

replicated cluster rm --name "$CLUSTER_ID"
```

Attempt 3

Postgres upgrade instructions

```sh
# https://github.com/bitnami/charts/issues/14926#issuecomment-1937770421
kubectl --namespace default exec --stdin postgresql-hive-0 -- sh -c "PGPASSWORD=$FROM_SECRET pg_dumpall --username=postgres --host=127.0.0.1 --port=5432 | base64" > /tmp/postgresql-hive.sql.base64
kubectl --namespace default exec --stdin postgresql-hive-iceberg-0 -- sh -c "PGPASSWORD=$FROM_SECRET pg_dumpall --username=postgres --host=127.0.0.1 --port=5432 | base64" > /tmp/postgresql-hive-iceberg.sql.base64
kubectl --namespace default exec --stdin postgresql-superset-0 -- sh -c "PGPASSWORD=$FROM_SECRET pg_dumpall --username=postgres --host=127.0.0.1 --port=5432 | base64" > /tmp/postgresql-superset.sql.base64

kubectl --namespace default scale --replicas=0 statefulsets/postgresql-hive
kubectl --namespace default scale --replicas=0 statefulsets/postgresql-hive-iceberg
kubectl --namespace default scale --replicas=0 statefulsets/postgresql-superset

kubectl --namespace default delete pvc/data-postgresql-hive-0
kubectl --namespace default delete pvc/data-postgresql-hive-iceberg-0
kubectl --namespace default delete pvc/data-postgresql-superset-0

helm upgrade postgresql-hive bitnami/postgresql --version 16.1.2
helm upgrade postgresql-hive-iceberg bitnami/postgresql --version 16.1.2
helm upgrade postgresql-superset bitnami/postgresql --version 16.1.2

kubectl --namespace default exec --stdin postgresql-hive-0 -- sh -c "base64 -d | PGPASSWORD=$FROM_SECRET psql --username=postgres --host=127.0.0.1 --port=5432 -f -" < /tmp/postgresql-hive.sql.base64
kubectl --namespace default exec --stdin postgresql-hive-iceberg-0 -- sh -c "base64 -d | PGPASSWORD=$FROM_SECRET psql --username=postgres --host=127.0.0.1 --port=5432 -f -" < /tmp/postgresql-hive-iceberg.sql.base64
kubectl --namespace default exec --stdin postgresql-superset-0 -- sh -c "base64 -d | PGPASSWORD=$FROM_SECRET psql --username=postgres --host=127.0.0.1 --port=5432 -f -" < /tmp/postgresql-superset.sql.base64
```
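After restoring, it is worth sanity-checking that the dumps actually made it back into the upgraded instances. A minimal sketch using the same pod names and `$FROM_SECRET` password as above; the `superset` database name and `ab_user` table are assumptions about the demo's schema:

```shell
# List the restored databases on one of the upgraded instances.
kubectl --namespace default exec --stdin postgresql-hive-0 -- \
  sh -c "PGPASSWORD=$FROM_SECRET psql --username=postgres --host=127.0.0.1 --port=5432 -c '\l'"

# Spot-check a known table (database/table names here are assumptions).
kubectl --namespace default exec --stdin postgresql-superset-0 -- \
  sh -c "PGPASSWORD=$FROM_SECRET psql --username=postgres --host=127.0.0.1 --port=5432 -d superset -c 'select count(*) from ab_user;'"
```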

🟢 Clean install of dev

Spark restart instructions

Spark waits until Kafka is healthy, but doesn't wait for the topics to exist. NiFi creates them when it sends data. To restart Spark:

```sh
kubectl delete -f demos/data-lakehouse-iceberg-trino-spark/create-spark-ingestion-job.yaml
kubectl delete sparkapplication spark-ingest-into-lakehouse
kubectl delete job spark-ingest-into-lakehouse
kubectl apply -f demos/data-lakehouse-iceberg-trino-spark/create-spark-ingestion-job.yaml
```
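To avoid restarting Spark before NiFi has created the topics, one can first check what exists in Kafka. A hedged sketch; the broker pod name, container name, script path, and port are assumptions about the demo's Kafka stacklet, not verified values:

```shell
# List existing Kafka topics before restarting the ingestion job.
# Pod/container names and the bootstrap port below are assumptions.
kubectl exec kafka-broker-default-0 -c kafka -- \
  /stackable/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
```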
Techassi commented 1 week ago

🟢 logging

labrenbe commented 1 week ago

🟢 trino-taxi-data

nightkr commented 1 week ago

🟢 nifi-kafka-druid-earthquake-data

Cluster environment: k3s v1.31.0-k3s1 (via k3d)

Upgrade

Upgraded to:

Notes:

Clean install

Notes:

adwk67 commented 1 week ago

🟢 jupyterhub-pyspark-hdfs-anomaly-detection-taxi-data

adwk67 commented 1 week ago

🟢 spark-k8s-anomaly-detection-taxi-data

adwk67 commented 1 week ago

🟢 trino-iceberg

nightkr commented 1 week ago

🟢 nifi-kafka-druid-water-level-data

Cluster environment: k3s v1.31.0-k3s1 (via k3d)

Upgrade

Upgraded to:

Notes:

Clean install

Notes:

NickLarsenNZ commented 1 week ago

🟢 hbase-hdfs-load-cycling-data

distcp-cycling-data-x6zmf

🟢 This has been resolved.

```
Error: Unable to initialize main class org.apache.hadoop.tools.DistCp
```

```
WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
Error: Unable to initialize main class org.apache.hadoop.tools.DistCp
Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/mapreduce/Job
Stream closed EOF for default/distcp-cycling-data-6cb49 (distcp-cycling-data)
```
create-hfile-and-import-to-hbase-cg7wg

🟢 This has been resolved by the previous fix. I believe this just fails because the first job fails.

```
Input path does not exist: hdfs://hdfs/data/raw/demo-cycling-tripdata.csv.gz
```

```
2024-11-15 07:20:11,856 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:(60)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2024-11-15 07:20:12,113 INFO  [ReadOnlyZKClient-zookeeper-server-default-0.zookeeper-server-default.default.svc.cluster.local:2282@0x043b9fd5] zookeeper.ZooKeeper (Environment.java:logEnv(98)) - Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC
[... further "Client environment:" log lines trimmed: host.name=create-hfile-and-import-to-hbase-djmgw, java.version=11.0.25, java.vendor=Eclipse Adoptium, java.home=/usr/lib/jvm/temurin-11-jre, java.class.path=(full HBase 2.4.18 / Hadoop 3.3.6 classpath), java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib, java.io.tmpdir=/tmp, os.name=Linux, os.arch=amd64, os.version=6.8.0-47-generic, user.name=stackable, user.home=/stackable, user.dir=/stackable/hbase-2.4.18, os.memory.free=462MB ...]
```
[ReadOnlyZKClient-zookeeper-server-default-0.zookeeper-server-default.default.svc.cluster.local:2282@0x043b9fd5] zookeeper.ZooKeeper (Environment.java:logEnv(98)) - Client environment:os.memory.max=820MB 2024-11-15 07:20:12,115 INFO [ReadOnlyZKClient-zookeeper-server-default-0.zookeeper-server-default.default.svc.cluster.local:2282@0x043b9fd5] zookeeper.ZooKeeper (Environment.java:logEnv(98)) - Client environment:os.memory.total=502MB 2024-11-15 07:20:12,119 INFO [ReadOnlyZKClient-zookeeper-server-default-0.zookeeper-server-default.default.svc.cluster.local:2282@0x043b9fd5] zookeeper.ZooKeeper (ZooKeeper.java:(637)) - Initiating client connection, connectString=zookeeper-server-default-0.zookeeper-server-default.default.svc.cluster.local:2282 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$73/0x00000001001a8c40@2ccf166a 2024-11-15 07:20:12,123 INFO [ReadOnlyZKClient-zookeeper-server-default-0.zookeeper-server-default.default.svc.cluster.local:2282@0x043b9fd5] common.X509Util (X509Util.java:(78)) - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation 2024-11-15 07:20:12,127 INFO [ReadOnlyZKClient-zookeeper-server-default-0.zookeeper-server-default.default.svc.cluster.local:2282@0x043b9fd5] zookeeper.ClientCnxnSocket (ClientCnxnSocket.java:initProperties(239)) - jute.maxbuffer value is 1048575 Bytes 2024-11-15 07:20:12,134 INFO [ReadOnlyZKClient-zookeeper-server-default-0.zookeeper-server-default.default.svc.cluster.local:2282@0x043b9fd5] zookeeper.ClientCnxn (ClientCnxn.java:initRequestTimeout(1747)) - zookeeper.request.timeout value is 0. 
feature enabled=false 2024-11-15 07:20:12,145 INFO [ReadOnlyZKClient-zookeeper-server-default-0.zookeeper-server-default.default.svc.cluster.local:2282@0x043b9fd5-SendThread(zookeeper-server-default-0.zookeeper-server-default.default.svc.cluster.local:2282)] zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(1177)) - Opening socket connection to server zookeeper-server-default-0.zookeeper-server-default.default.svc.cluster.local/100.105.153.161:2282. 2024-11-15 07:20:12,145 INFO [ReadOnlyZKClient-zookeeper-server-default-0.zookeeper-server-default.default.svc.cluster.local:2282@0x043b9fd5-SendThread(zookeeper-server-default-0.zookeeper-server-default.default.svc.cluster.local:2282)] zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(1179)) - SASL config status: Will not attempt to authenticate using SASL (unknown error) 2024-11-15 07:20:12,151 INFO [ReadOnlyZKClient-zookeeper-server-default-0.zookeeper-server-default.default.svc.cluster.local:2282@0x043b9fd5-SendThread(zookeeper-server-default-0.zookeeper-server-default.default.svc.cluster.local:2282)] zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(1013)) - Socket connection established, initiating session, client: /100.102.90.125:40922, server: zookeeper-server-default-0.zookeeper-server-default.default.svc.cluster.local/100.105.153.161:2282 2024-11-15 07:20:12,158 INFO [ReadOnlyZKClient-zookeeper-server-default-0.zookeeper-server-default.default.svc.cluster.local:2282@0x043b9fd5-SendThread(zookeeper-server-default-0.zookeeper-server-default.default.svc.cluster.local:2282)] zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1453)) - Session establishment complete on server zookeeper-server-default-0.zookeeper-server-default.default.svc.cluster.local/100.105.153.161:2282, session id = 0x1000c4b9c440010, negotiated timeout = 60000 2024-11-15 07:20:13,138 INFO [main] mapreduce.HFileOutputFormat2 (HFileOutputFormat2.java:configureIncrementalLoad(643)) - bulkload locality sensitive enabled 2024-11-15 
07:20:13,138 INFO [main] mapreduce.HFileOutputFormat2 (HFileOutputFormat2.java:getRegionStartKeys(507)) - Looking up current regions for table cycling-tripdata 2024-11-15 07:20:13,178 INFO [main] mapreduce.HFileOutputFormat2 (HFileOutputFormat2.java:configureIncrementalLoad(663)) - Configuring 1 reduce partitions to match current region count for all tables 2024-11-15 07:20:13,178 INFO [main] mapreduce.HFileOutputFormat2 (HFileOutputFormat2.java:writePartitions(531)) - Writing partition information to /user/stackable/hbase-staging/partitions_83f417dc-d5be-426f-83e9-ae2165964c06 2024-11-15 07:20:13,284 INFO [main] compress.CodecPool (CodecPool.java:getCompressor(153)) - Got brand-new compressor [.deflate] 2024-11-15 07:20:13,398 INFO [main] mapreduce.HFileOutputFormat2 (HFileOutputFormat2.java:configureIncrementalLoad(683)) - Incremental output configured for tables: cycling-tripdata 2024-11-15 07:20:13,409 INFO [main] client.ConnectionImplementation (ConnectionImplementation.java:closeMasterService(1973)) - Closing master protocol: MasterService 2024-11-15 07:20:13,446 WARN [main] impl.MetricsConfig (MetricsConfig.java:loadFirst(136)) - Cannot locate configuration: tried hadoop-metrics2-jobtracker.properties,hadoop-metrics2.properties 2024-11-15 07:20:13,503 INFO [main] impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(378)) - Scheduled Metric snapshot period at 10 second(s). 
2024-11-15 07:20:13,503 INFO [main] impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) - JobTracker metrics system started 2024-11-15 07:20:13,513 INFO [ReadOnlyZKClient-zookeeper-server-default-0.zookeeper-server-default.default.svc.cluster.local:2282@0x043b9fd5] zookeeper.ZooKeeper (ZooKeeper.java:close(1232)) - Session: 0x1000c4b9c440010 closed 2024-11-15 07:20:13,514 INFO [ReadOnlyZKClient-zookeeper-server-default-0.zookeeper-server-default.default.svc.cluster.local:2282@0x043b9fd5-EventThread] zookeeper.ClientCnxn (ClientCnxn.java:run(569)) - EventThread shut down for session: 0x1000c4b9c440010 2024-11-15 07:20:13,645 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(260)) - Cleaning up the staging area file:/tmp/hadoop/mapred/staging/stackable890660764/.staging/job_local890660764_0001 Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://hdfs/data/raw/demo-cycling-tripdata.csv.gz at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:340) at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:279) at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:404) at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:310) at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:327) at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:200) at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1678) at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1675) at java.base/java.security.AccessController.doPrivileged(Native Method) at java.base/javax.security.auth.Subject.doAs(Unknown Source) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899) at org.apache.hadoop.mapreduce.Job.submit(Job.java:1675) at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1696) 
at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:772) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:82) at org.apache.hadoop.hbase.mapreduce.ImportTsv.main(ImportTsv.java:784) Caused by: java.io.IOException: Input path does not exist: hdfs://hdfs/data/raw/demo-cycling-tripdata.csv.gz at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:313) ... 15 more Stream closed EOF for default/create-hfile-and-import-to-hbase-djmgw (create-hfile-and-import-to-hbase) ```
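The root cause of the failure above is the `ImportTsv` job not finding its input file in HDFS (`hdfs://hdfs/data/raw/demo-cycling-tripdata.csv.gz`), which suggests the preceding data-load step did not run or did not complete before the import job started. As a quick triage aid, the relevant line can be pulled out of a captured pod log mechanically; a minimal sketch (the sample line is copied verbatim from the log tail above — in practice `log_tail` would be the captured `kubectl logs` output of the failed job pod):

```python
import re

# Sample line copied from the log tail above; in practice this would be
# the full captured log of the failed create-hfile-and-import-to-hbase pod.
log_tail = (
    'Exception in thread "main" '
    "org.apache.hadoop.mapreduce.lib.input.InvalidInputException: "
    "Input path does not exist: hdfs://hdfs/data/raw/demo-cycling-tripdata.csv.gz"
)

# Hadoop's InvalidInputException always names the offending path after this prefix.
match = re.search(r"Input path does not exist: (\S+)", log_tail)
if match:
    print(f"missing input: {match.group(1)}")
# → missing input: hdfs://hdfs/data/raw/demo-cycling-tripdata.csv.gz
```

If the extracted path is the raw-data file, re-running the demo's data-load job before the import job should resolve it.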
xeniape commented 6 days ago

🟡 **end-to-end-security**

Anomalies during the upgrade process:

Anomalies during a clean installation of the nightly version:

NickLarsenNZ commented 6 days ago

Thanks everyone for the help in getting the demos tested. This issue is now resolved. 🚀