po3na4skld opened this issue 3 years ago
@sherifnada @po3na4skld my initial guess seems right that this is the same issue we have in https://github.com/airbytehq/airbyte/pull/4175/files. I think the fix can be the same.
@sherifnada updated the title and the description
Related oncall issues:
Is this still an issue?
Yes, unless someone explicitly fixed it recently. I haven't seen any related changes or tests.
Possible Solutions:
There's some good additional information (Temporal message size for one) in https://github.com/airbytehq/airbyte/issues/3943, which has been closed as a duplicate to this issue.
@evantahler you could also add support for $ref
and optimize the payload
Linking https://github.com/airbytehq/airbyte/pull/15888 which is part of the way we solve this problem
Any update on this, or is the process still to create a separate user with limited scope?
cc @malikdiarra, as the Compose team is looking into this
+1 on this. Also having the same issue trying to fetch 1200+ tables on Oracle DB.
+1. I'm also having this issue with a MySQL DB when it tries discover_schema: 502 Bad Gateway. I have 1000+ tables in the DB.
@malikdiarra Any update for this?
Same boat as @arthurbarros
Any news on this? I tried everything and no environment variable changes the fact that discovery fails after 5 minutes.
I also tried setting the worker's missing environment variables in `.env` and linking them in `docker-compose.yaml` for the worker:
```
# Worker
ACTIVITY_CHECK_TIMEOUT=15
ACTIVITY_DISCOVERY_TIMEOUT=30
```
There's also the `BASIC_AUTH_PROXY_TIMEOUT` environment variable (seen in linked issue airbytehq/airbyte#19201) for nginx timeouts, but that's already set to 900, i.e. 15 minutes.
I also tried setting `WORKLOAD_API_READ_TIMEOUT_SECONDS=1200`, with no success.
Using Airbyte v0.50.34
@malikdiarra are you all looking into this case, or is it on hold?
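As a sanity check on the timeout values quoted in the comment above (a sketch using only the numbers from this thread, not Airbyte documentation):

```python
# Arithmetic on the timeouts quoted in this thread (values from the comments,
# not from Airbyte documentation).
nginx_timeout_s = 900            # BASIC_AUTH_PROXY_TIMEOUT, per airbytehq/airbyte#19201
workload_read_timeout_s = 1200   # WORKLOAD_API_READ_TIMEOUT_SECONDS tried above
observed_failure_s = 5 * 60      # discovery reportedly fails after ~5 minutes

print(nginx_timeout_s / 60)           # 15.0 minutes
print(workload_read_timeout_s / 60)   # 20.0 minutes

# Both configured timeouts are well above the observed 5-minute failure point,
# which suggests some other limit is cutting discovery off.
print(observed_failure_s < nginx_timeout_s and observed_failure_s < workload_read_timeout_s)  # True
```

Both configured timeouts exceed the observed 5-minute failure, so these variables are unlikely to be the limit that fires here.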
The only workaround I found that works is to create multiple users on the Oracle DB and give each permission to list just a subset of tables, then create a separate Oracle DB connection for each of those users.
It's ugly, but it works.
cc @pmossman - we've made some discovery/temporal improvements lately
@evantahler Such as? I'm looking at the release changelogs, what kind of improvements are we looking for?
Part of the problem here was that until recently, the Airbyte platform could not handle discovered catalogs over ~4mb, due to a limitation in Temporal. We've changed up how we pass information between jobs recently which might help alleviate this.
Currently, for me on Cloud, the discover_schema API gave the below response (final line): Discover primary keys for tables: [.....]
We had 2500+ tables on MySQL.
But after that, the screen freezes:
For now, the only solution that works is to create multiple users on the DB with limited access, as @arthurbarros said.
Any date for when the changes will be released to stable, @evantahler?
@surajmaurya14 the change to how we pass Temporal data is live on Cloud, but there may be another bottleneck somewhere in our system when handling such a large catalog.
Could you share your Cloud workspace ID and source name where you see the frozen screen so we can investigate where the bottleneck is? (Feel free to email it to me at parker@airbyte.io, or message it to me on the Airbyte Slack, I'm @Parker Mossman there.)
Wrote an email to you @pmossman
Thanks @surajmaurya14, I was able to reproduce the issue and investigate our Temporal cluster while the discovery job was running to see where the failure originated from.
In this case, we set a hard cap of 9 minutes of execution time for Discover jobs in Airbyte Cloud. I think this catalog is so large that it takes longer than 9 minutes to generate, so Temporal terminates the job at the 9-minute mark before it finishes. I can follow up with a few folks internally to see if we can either:
(a) increase the 9-minute threshold to give cases like this more time, or
(b) investigate this particular source to see if there's an optimization we can make for large table counts (since 9+ minutes is obviously a poor user experience!)
@surajmaurya14 I passed along this feedback to our database sources team, and they recommended we try increasing the 9 minute timeout to 30 minutes to see if your use case eventually succeeds.
I made this change today, so our Temporal workers in Airbyte Cloud will now keep Discover jobs running up to 30 minutes before terminating.
Obviously this is just a stopgap and it'd be ideal to optimize this source for cases where we have thousands of tables, but I'm hoping this unblocks you and gives us some more insight into where the bottleneck may be.
Can you give things another try and let me know how it goes? If the job still freezes/times out after 30 minutes, we'll likely need to do more investigation into the particular source connector to see where things are getting stuck.
@pmossman Server temporarily unavailable error. Wrote a reply on same email to you.
Thanks @surajmaurya14, I did some more digging and here's what I found:
Your discover catalog jobs are now able to finish in Temporal, so the increase to 30 minutes on the Temporal side helped. However, our Load Balancer is configured with a maximum request time of 10 minutes, which means that even if the Discover job eventually succeeds, the server issues a 502 before it can finish processing the network request.
I also observed 502 errors before the 10 minute mark is reached, which seems to correspond with new code deploys that cause server pods to restart. So even if we could raise the maximum request time from 10 minutes to 30 minutes, we deploy code so often that there's a high likelihood the server would drop the request before it could complete.
We have an ongoing project to convert the Discover API to an async model, so that our server no longer needs to keep an active thread open for the entire duration of the Discover job. This project should address the fundamental issue at hand here, I'll make a note to tag you when it lands so we can make sure your use case is finally unblocked.
Thanks for the back and forth here, I know it must be frustrating that the app isn't working for you now but this iteration is extremely helpful for improving the platform and I really appreciate your involvement!
Same issue here, trying to sync MySQL to Snowflake.
Same here with an Oracle Database
Same here; we have tried increasing the Temporal message limit and HTTP timeouts to no effect.
Trying to sync MSSQL to Databricks.
Though we have an old version of Airbyte (0.44.2) running and don't have this issue on that version with the same MSSQL database and Databricks. Both are running on K8s.
Also experiencing this issue with large Oracle db
Hey @evantahler is there any ETA? :) thanks
cc @davinchia There's no ETA as of yet, but dealing with large catalogs is on our near-term roadmap.
2 problems here:
@marcosmarxm not sure which latest version you mean. I just tested Airbyte 0.60.1 with the Oracle connector and I still get this error message:
```
2024-05-24 16:06:21 platform > Attempt 0 to call to write discover schema result error: io.airbyte.api.client.invoker.generated.ApiException: writeDiscoverCatalogResult call failed with: 413 - {"message":"Request Entity Too Large","_links":{"self":{"href":"/api/v1/sources/write_discover_catalog_result","templated":false}},"_embedded":{"errors":[{"message":"The content length [10546869] exceeds the maximum allowed content length [10485760]","_links":{},"_embedded":{}}]}}
```
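As a quick check on the numbers in that 413 response (a sketch; the byte values come from the error message above, and the 50 MiB figure is the workaround value a later comment in this thread uses):

```python
# The 413 above reports content length 10546869 against a 10485760-byte cap.
default_cap = 10 * 1024 * 1024   # 10485760 bytes = 10 MiB, the cap in the error
catalog_size = 10546869          # discovered catalog payload size from the log line

print(catalog_size - default_cap)   # 61109: only ~60 KB over the default cap

# 52428800 (50 MiB) is the MICRONAUT_SERVER_MAX_REQUEST_SIZE value suggested
# later in this thread as a workaround; it leaves ample headroom for this catalog.
proposed_cap = 50 * 1024 * 1024
print(catalog_size < proposed_cap)  # True
```

So the catalog overshoots the default cap by a small margin, which is why raising the request-size limit unblocks discovery for catalogs of this size.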
I am currently facing this issue as well. I just tested Airbyte 0.63.2 (OSS version on WSL) with the Postgres connector, and I am still encountering the same 413 error. It seems that the size problem with the catalog has not been resolved in this version.
```
airbyte-worker | 2024-06-24 01:58:28 INFO i.a.c.t.HeartbeatUtils(withBackgroundHeartbeat):64 - Stopping temporal heartbeating...
airbyte-worker | 2024-06-24 01:58:28 INFO i.a.c.t.HeartbeatUtils(withBackgroundHeartbeat):73 - Temporal heartbeating stopped.
airbyte-worker | 2024-06-24 01:58:28 WARN i.t.i.a.ActivityTaskExecutors$BaseActivityTaskExecutor(execute):126 - Activity failure. ActivityId=cfcf798c-1ca9-35a3-af14-e14852c013fd, activityType=Run, attempt=1
airbyte-worker | java.lang.RuntimeException: io.temporal.serviceclient.CheckedExceptionWrapper: io.airbyte.workers.exception.WorkerException: Error while discovering schema
airbyte-worker | at io.airbyte.commons.temporal.HeartbeatUtils.withBackgroundHeartbeat(HeartbeatUtils.java:62) ~[io.airbyte-airbyte-commons-temporal-core-0.63.2.jar:?]
airbyte-worker | at io.airbyte.workers.temporal.discover.catalog.DiscoverCatalogActivityImpl.run(DiscoverCatalogActivityImpl.java:183) ~[io.airbyte-airbyte-workers-0.63.2.jar:?]
airbyte-worker | at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) ~[?:?]
airbyte-worker | at java.base/java.lang.reflect.Method.invoke(Method.java:580) ~[?:?]
airbyte-worker | at io.temporal.internal.activity.RootActivityInboundCallsInterceptor$POJOActivityInboundCallsInterceptor.executeActivity(RootActivityInboundCallsInterceptor.java:64) ~[temporal-sdk-1.22.3.jar:?]
airbyte-worker | at io.temporal.internal.activity.RootActivityInboundCallsInterceptor.execute(RootActivityInboundCallsInterceptor.java:43) ~[temporal-sdk-1.22.3.jar:?]
airbyte-worker | at io.temporal.internal.activity.ActivityTaskExecutors$BaseActivityTaskExecutor.execute(ActivityTaskExecutors.java:107) ~[temporal-sdk-1.22.3.jar:?]
airbyte-worker | at io.temporal.internal.activity.ActivityTaskHandlerImpl.handle(ActivityTaskHandlerImpl.java:124) ~[temporal-sdk-1.22.3.jar:?]
airbyte-worker | at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handleActivity(ActivityWorker.java:278) ~[temporal-sdk-1.22.3.jar:?]
airbyte-worker | at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handle(ActivityWorker.java:243) ~[temporal-sdk-1.22.3.jar:?]
airbyte-worker | at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handle(ActivityWorker.java:216) ~[temporal-sdk-1.22.3.jar:?]
airbyte-worker | at io.temporal.internal.worker.PollTaskExecutor.lambda$process$0(PollTaskExecutor.java:105) ~[temporal-sdk-1.22.3.jar:?]
airbyte-worker | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
airbyte-worker | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
airbyte-worker | at java.base/java.lang.Thread.run(Thread.java:1583) [?:?]
airbyte-worker | Caused by: io.temporal.serviceclient.CheckedExceptionWrapper: io.airbyte.workers.exception.WorkerException: Error while discovering schema
airbyte-worker | at io.temporal.serviceclient.CheckedExceptionWrapper.wrap(CheckedExceptionWrapper.java:57) ~[temporal-serviceclient-1.22.3.jar:?]
airbyte-worker | at io.temporal.internal.sync.WorkflowInternal.wrap(WorkflowInternal.java:539) ~[temporal-sdk-1.22.3.jar:?]
airbyte-worker | at io.temporal.activity.Activity.wrap(Activity.java:52) ~[temporal-sdk-1.22.3.jar:?]
airbyte-worker | at io.airbyte.workers.temporal.TemporalAttemptExecution.get(TemporalAttemptExecution.java:143) ~[io.airbyte-airbyte-workers-0.63.2.jar:?]
airbyte-worker | at io.airbyte.workers.temporal.discover.catalog.DiscoverCatalogActivityImpl.lambda$run$1(DiscoverCatalogActivityImpl.java:198) ~[io.airbyte-airbyte-workers-0.63.2.jar:?]
airbyte-worker | at io.airbyte.commons.temporal.HeartbeatUtils.withBackgroundHeartbeat(HeartbeatUtils.java:57) ~[io.airbyte-airbyte-commons-temporal-core-0.63.2.jar:?]
airbyte-worker | ... 14 more
airbyte-worker | Caused by: io.airbyte.workers.exception.WorkerException: Error while discovering schema
airbyte-worker | at io.airbyte.workers.general.DefaultDiscoverCatalogWorker.run(DefaultDiscoverCatalogWorker.java:136) ~[io.airbyte-airbyte-commons-worker-0.63.2.jar:?]
airbyte-worker | at io.airbyte.workers.general.DefaultDiscoverCatalogWorker.run(DefaultDiscoverCatalogWorker.java:52) ~[io.airbyte-airbyte-commons-worker-0.63.2.jar:?]
airbyte-worker | at io.airbyte.workers.temporal.TemporalAttemptExecution.get(TemporalAttemptExecution.java:138) ~[io.airbyte-airbyte-workers-0.63.2.jar:?]
airbyte-worker | at io.airbyte.workers.temporal.discover.catalog.DiscoverCatalogActivityImpl.lambda$run$1(DiscoverCatalogActivityImpl.java:198) ~[io.airbyte-airbyte-workers-0.63.2.jar:?]
airbyte-worker | at io.airbyte.commons.temporal.HeartbeatUtils.withBackgroundHeartbeat(HeartbeatUtils.java:57) ~[io.airbyte-airbyte-commons-temporal-core-0.63.2.jar:?]
airbyte-worker | ... 14 more
airbyte-worker | Caused by: org.openapitools.client.infrastructure.ClientException: Client error : 413 Request Entity Too Large
airbyte-worker | at io.airbyte.api.client.generated.SourceApi.writeDiscoverCatalogResult(SourceApi.kt:1163) ~[io.airbyte-airbyte-api-0.63.2.jar:?]
airbyte-worker | at io.airbyte.workers.general.DefaultDiscoverCatalogWorker.run(DefaultDiscoverCatalogWorker.java:125) ~[io.airbyte-airbyte-commons-worker-0.63.2.jar:?]
airbyte-worker | at io.airbyte.workers.general.DefaultDiscoverCatalogWorker.run(DefaultDiscoverCatalogWorker.java:52) ~[io.airbyte-airbyte-commons-worker-0.63.2.jar:?]
airbyte-worker | at io.airbyte.workers.temporal.TemporalAttemptExecution.get(TemporalAttemptExecution.java:138) ~[io.airbyte-airbyte-workers-0.63.2.jar:?]
airbyte-worker | at io.airbyte.workers.temporal.discover.catalog.DiscoverCatalogActivityImpl.lambda$run$1(DiscoverCatalogActivityImpl.java:198) ~[io.airbyte-airbyte-workers-0.63.2.jar:?]
airbyte-worker | at io.airbyte.commons.temporal.HeartbeatUtils.withBackgroundHeartbeat(HeartbeatUtils.java:57) ~[io.airbyte-airbyte-commons-temporal-core-0.63.2.jar:?]
airbyte-worker | ... 14 more
```
@myu65 i managed to overcome a similar error thanks to this suggestion https://github.com/airbytehq/airbyte/issues/37463#issuecomment-2073396085
Thank you for your reply.
I managed to resolve my error by following your suggestion.
I added the following values to the `.env` file and the docker-compose file, which resolved the error in fetching the schema when making a connection:

`.env`:

```
MICRONAUT_SERVER_MAX_REQUEST_SIZE=52428800
```

`docker-compose.yaml`:

```yaml
environment:
  # ADD THIS ENVIRONMENT VARIABLE TO THE APPROPRIATE CONTAINER
  # I AM UNSURE WHICH CONTAINER REQUIRES THIS SETTING
  - MICRONAUT_SERVER_MAX_REQUEST_SIZE=${MICRONAUT_SERVER_MAX_REQUEST_SIZE}
```
To reduce the number of similar issues being raised, I suggest updating the following files:
https://github.com/airbytehq/airbyte-platform/blob/main/.env
https://github.com/airbytehq/airbyte-platform/blob/main/docker-compose.yaml
Environment
Current Behavior
When fetching a large schema from the source, it fails with the following error, which I found in the `docker-compose up` logs output. It has nothing to do with the source itself; I used this source just to catch the issue.
It runs until the end of the daily request quota. Then schema fetching fails due to limits.
Expected Behavior
The size of the schema shouldn't affect the UI work.
Logs
error_log.txt
Steps to Reproduce
I also tried to do this with filtered out streams and it worked well.
Are you willing to submit a PR?
No