damian-aifinyo opened this issue 2 months ago
Did you change the log4jConfig value in the values.yaml file?
Thanks for the hint! I found this in airbyte-commons/src/main/resources/log4j2-s3.xml:
```xml
<!-- Note that logging to S3 will leverage the DefaultAWSCredentialsProviderChain for auth. -->
<Property name="s3-bucket">${sys:STORAGE_BUCKET_LOG:-${env:STORAGE_BUCKET_LOG:-}}</Property>
<Property name="s3-region">${sys:AWS_DEFAULT_REGION:-${env:AWS_DEFAULT_REGION:-}}</Property>
```
So it is using the AWS_DEFAULT_REGION value, but that variable is not set anywhere in the guide. I added this:
```yaml
worker:
  extraEnv:
    - name: AWS_DEFAULT_REGION
      valueFrom:
        secretKeyRef:
          name: airbyte-logs-secrets
          key: S3_LOG_BUCKET_REGION
```
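For completeness, the airbyte-logs-secrets Secret referenced by that secretKeyRef has to carry the region under the S3_LOG_BUCKET_REGION key. A minimal sketch of the Secret I created (the name and key match the values above; the region value is just an example, use your own):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: airbyte-logs-secrets
type: Opaque
stringData:
  # Region of the S3 log bucket; surfaced to the worker as
  # AWS_DEFAULT_REGION via the extraEnv entry above.
  S3_LOG_BUCKET_REGION: eu-west-1  # example region
```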
and it worked, but the next issue was the default bucket values; it was trying to use these:
```yaml
STORAGE_BUCKET_ACTIVITY_PAYLOAD: airbyte-storage
STORAGE_BUCKET_LOG: airbyte-storage
STORAGE_BUCKET_STATE: airbyte-storage
STORAGE_BUCKET_WORKLOAD_OUTPUT: airbyte-storage
```
It was ignoring the values I set for the buckets, so I changed them manually in the airbyte-airbyte-env ConfigMap with my own values, and now it is working.
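For reference, the keys I overrode by hand in the airbyte-airbyte-env ConfigMap looked roughly like this (bucket name is a placeholder, and the real ConfigMap contains many more keys):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: airbyte-airbyte-env
data:
  # Set manually because the chart kept rendering the default
  # "airbyte-storage" bucket name despite my values overrides.
  STORAGE_BUCKET_ACTIVITY_PAYLOAD: <my-bucket>
  STORAGE_BUCKET_LOG: <my-bucket>
  STORAGE_BUCKET_STATE: <my-bucket>
  STORAGE_BUCKET_WORKLOAD_OUTPUT: <my-bucket>
```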
Again, thanks for the support!
@damian-aifinyo Just faced the same issue and this config section helped to update these envs:
```yaml
global:
  ####
  storage:
    type: "S3"
    bucket:
      activityPayload: <bucket_name>
      log: <bucket_name>
      state: <bucket_name>
      workloadOutput: <bucket_name>
```
These steps got me past the deploy, but in my scenario any sync attempt fails with a 500 error whose response cites `java.lang.IllegalArgumentException: region must not be blank or empty`.
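In case it helps anyone hitting the same error: depending on the chart version there is also an s3 block under global.storage where the region can be set directly, which may be what the sync path reads rather than the worker's AWS_DEFAULT_REGION. A sketch of what I mean (field names can differ between chart versions, so check the chart's own values.yaml):

```yaml
global:
  storage:
    type: "S3"
    bucket:
      activityPayload: <bucket_name>
      log: <bucket_name>
      state: <bucket_name>
      workloadOutput: <bucket_name>
    s3:
      # Region the buckets live in; leaving this blank is one way to end up
      # with "region must not be blank or empty" at runtime.
      region: <aws_region>
```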
I've used the Airbyte Helm chart on and off for months, and every install or upgrade requires a willingness to dive deep into the source itself and debug the setup; the S3 installation instructions are horrible and the default overrides are buggy. It feels like one of those setups that's just supported enough to claim it exists, while also supported so poorly that users get pushed into the Cloud product.
Both silazare's and damian-aifinyo's solutions got me moving again - thanks all!
Hi team, any suggestions for adding a bucket path in the S3 configuration for external logging? If the bucket path is added in the format below:
```yaml
global:
  storage:
    type: "S3"
    bucket:
      activityPayload:
```
it produces the error below:

```
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: The request signature we calculated does not match the signature you provided.
```
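My guess is that the signature mismatch comes from the path being treated as part of the bucket name, i.e. the bucket.* fields seem to expect a bare bucket name only. A sketch of the difference (placeholder values, not my real config):

```yaml
global:
  storage:
    type: "S3"
    bucket:
      # Bucket name plus a path prefix - this appears to trigger the signature error:
      # activityPayload: <bucket_name>/some/prefix
      # Bare bucket name only, as in the earlier examples:
      activityPayload: <bucket_name>
```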
In my case, I needed to provide it for the server as well, like this:
```yaml
global:
  storage:
    type: S3
    bucket:
      activityPayload: production-hm-airbyte
      log: production-hm-airbyte
      state: production-hm-airbyte
      workloadOutput: production-hm-airbyte
server:
  extraEnv:
    - name: AWS_DEFAULT_REGION
      valueFrom:
        secretKeyRef:
          name: hm-airbyte-secret
          key: AIRBYTE_LOG_S3_BUCKET_REGION
worker:
  extraEnv:
    - name: AWS_DEFAULT_REGION
      valueFrom:
        secretKeyRef:
          name: hm-airbyte-secret
          key: AIRBYTE_LOG_S3_BUCKET_REGION
```
Helm Chart Version
0.64.185
What step the error happened?
On deploy
Relevant information
I'm trying to deploy Airbyte on my EKS cluster (I followed the instructions from https://docs.airbyte.com/deploying-airbyte/on-kubernetes-via-helm). I configured an external S3 bucket and an external DB (hosted on RDS). Everything seems to be correct during the deploy, and I can even access the web panel. The problem is the worker: it never finishes starting, and the error mentions that the region should not be empty. I configured it as indicated, and I don't know if I need to define it somewhere else. Here is my values file and the extra resources (secrets).
All resources were deployed in the same namespace, and the connection to the DB seems to be correct (it was populated correctly).
Relevant log output