airbytehq / airbyte

The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.
https://airbyte.com

🐛 Airbyte Core: Large schema fetching failure #4564

Open po3na4skld opened 3 years ago

po3na4skld commented 3 years ago

Environment

Current Behavior

When fetching a large schema from the source, it fails with the following error, which I found in the docker-compose up logs output. (It has nothing to do with the source itself; I only used this source to reproduce the issue.)

org.glassfish.jersey.server.ServerRuntime$Responder writeResponse
SEVERE: An I/O error has occurred while writing a response message entity to the container output stream.

It runs until the daily request quota is exhausted, and then schema fetching fails due to the limits.

Expected Behavior

The size of the schema shouldn't affect how the UI works.

Logs

error_log.txt

Steps to Reproduce

  1. Run docker compose up in the Airbyte project
  2. Create a Salesforce source
  3. Create a Local CSV or BigQuery destination
  4. Set up a connection
  5. See the error in the docker compose up command output

I also tried this with the streams filtered down, and it worked well.

Are you willing to submit a PR?

No

keu commented 3 years ago

@sherifnada @po3na4skld my initial guess seems right: this is the same issue we have in https://github.com/airbytehq/airbyte/pull/4175/files, and I think the fix can be the same.

po3na4skld commented 3 years ago

@sherifnada updated the title and the description

grishick commented 2 years ago

Related oncall issues:

evantahler commented 2 years ago

Is this still an issue?

grishick commented 2 years ago

Is this still an issue?

Yes, unless someone explicitly fixed it recently. I haven't seen any related changes or tests.

evantahler commented 2 years ago

Possible Solutions:

There's some good additional information (Temporal message size, for one) in https://github.com/airbytehq/airbyte/issues/3943, which has been closed as a duplicate of this issue.

keu commented 2 years ago

@evantahler you could also add support for $ref and optimize the payload
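
For context, $ref here refers to JSON Schema references: repeated object definitions in a discovered catalog could be declared once and referenced from each stream instead of being inlined everywhere, which can shrink the payload considerably. A minimal sketch of the idea, with hypothetical stream and field names, shown in YAML for brevity (this is not the actual catalog wire format):

# A shared schema is declared once...
definitions:
  address:
    type: object
    properties:
      street: { type: string }
      city: { type: string }

# ...and each stream's json_schema points at it with $ref instead of repeating it.
streams:
  - name: customers
    json_schema:
      type: object
      properties:
        billing_address: { $ref: "#/definitions/address" }
  - name: orders
    json_schema:
      type: object
      properties:
        shipping_address: { $ref: "#/definitions/address" }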

evantahler commented 1 year ago

Linking https://github.com/airbytehq/airbyte/pull/15888 which is part of the way we solve this problem

cody-scott commented 11 months ago

Any update on this, or is the process still to create a separate user with limited scope?

evantahler commented 11 months ago

cc @malikdiarra, as the Compose team is looking into this

arthurbarros commented 10 months ago

Any update on this, or is the process still to create a separate user with limited scope?

+1 on this. Also having the same issue trying to fetch 1200+ tables on Oracle DB.

surajmaurya14 commented 10 months ago

+1. I am also having this issue with a MySQL DB when it runs discover_schema: 502 Bad Gateway. I have 1000+ tables in the DB.

surajmaurya14 commented 10 months ago

@malikdiarra Any update on this?

philippeboyd commented 9 months ago

Same boat as @arthurbarros

Any news on this? I tried everything, and no environment variable change fixes the fact that Discovery fails after 5 minutes.

(screenshot)

I also tried setting the worker's missing environment variables in .env and linking them in docker-compose.yaml for the worker:

# Worker
ACTIVITY_CHECK_TIMEOUT=15
ACTIVITY_DISCOVERY_TIMEOUT=30
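
For reference, a sketch of how those variables were wired through to the worker container (assuming the stock docker-compose.yaml service name of that era; as noted above, this did not change the outcome):

# docker-compose.yaml (excerpt, sketch)
worker:
  environment:
    - ACTIVITY_CHECK_TIMEOUT=${ACTIVITY_CHECK_TIMEOUT}
    - ACTIVITY_DISCOVERY_TIMEOUT=${ACTIVITY_DISCOVERY_TIMEOUT}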

(screenshot)

There's also the BASIC_AUTH_PROXY_TIMEOUT environment variable (seen in the linked issue airbytehq/airbyte#19201) for nginx timeouts, but that's already set to 900, i.e. 15 minutes.

(screenshot)

I also tried setting WORKLOAD_API_READ_TIMEOUT_SECONDS=1200 with no success.

Using Airbyte v0.50.34

surajmaurya14 commented 9 months ago

@malikdiarra are you all looking into this case, or is it on hold?

arthurbarros commented 9 months ago

The only workaround I found that works is to create multiple users on the Oracle DB, each with permission to list only a subset of tables, and then create a separate Oracle DB connection for each of those users.

It's ugly, but it works.

evantahler commented 9 months ago

cc @pmossman - we've made some discovery/temporal improvements lately

philippeboyd commented 9 months ago

@evantahler Such as? I'm looking at the release changelogs; what kind of improvements should we be looking for?

evantahler commented 9 months ago

Part of the problem here was that, until recently, the Airbyte platform could not handle discovered catalogs over ~4 MB, due to a limitation in Temporal. We've recently changed how we pass information between jobs, which might help alleviate this.
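
For self-hosted deployments that hit the Temporal payload cap directly, that cap is governed by Temporal's dynamic config (blob size limits). A minimal sketch, assuming the airbyte-temporal container mounts a dynamic config file as the docker-compose setup does; the key names follow Temporal's dynamic config conventions, the values (in bytes) are illustrative, and raising them is only a workaround rather than the platform change described above:

# dynamicconfig/development.yaml (Temporal dynamic config), sketch only
limit.blobSize.warn:
  - value: 4194304      # start warning above ~4 MB payloads
    constraints: {}
limit.blobSize.error:
  - value: 8388608      # hard-fail only above ~8 MB
    constraints: {}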

surajmaurya14 commented 9 months ago

Currently, for me on Cloud, the discover_schema API gave the response below (final line): Discover primary keys for tables: [.....]

We have 2500+ tables on MySQL.

But after that the screen freezes: (screenshot)

For now, the only solution that works is to create multiple users on the DB with limited access, as @arthurbarros said.

Any dates for when the changes will be released to stable, @evantahler?

pmossman commented 8 months ago

@surajmaurya14 the change to how we pass Temporal data is live on Cloud, but there may be another bottleneck somewhere in our system when handling such a large catalog.

Could you share your Cloud workspace ID and source name where you see the frozen screen so we can investigate where the bottleneck is? (Feel free to email it to me at parker@airbyte.io or message it to me on the Airbyte Slack, I'm @Parker Mossman there)

surajmaurya14 commented 8 months ago

Wrote an email to you @pmossman

pmossman commented 8 months ago

Thanks @surajmaurya14, I was able to reproduce the issue and investigate our Temporal cluster while the discovery job was running to see where the failure originated from.

In this case, we set a hard cap of 9 minutes of execution time for Discover jobs in Airbyte Cloud. I think this catalog is so large that it takes longer than 9 minutes to generate, so Temporal terminates the job at the 9-minute mark before it finishes. I can follow up with a few folks internally to see if we can either (a) increase the 9-minute threshold to give cases like this more time, or (b) investigate this particular source to see if there's an optimization we can make for large table counts (since 9+ minutes is obviously a poor user experience!).

pmossman commented 8 months ago

@surajmaurya14 I passed along this feedback to our database sources team, and they recommended we try increasing the 9 minute timeout to 30 minutes to see if your use case eventually succeeds.

I made this change today, so our Temporal workers in Airbyte Cloud will now keep Discover jobs running up to 30 minutes before terminating.

Obviously this is just a stopgap and it'd be ideal to optimize this source for cases where we have thousands of tables, but I'm hoping this unblocks you and gives us some more insight into where the bottleneck may be.

Can you give things another try and let me know how it goes? If the job still freezes/times out after 30 minutes, we'll likely need to do more investigation into the particular source connector to see where things are getting stuck.

surajmaurya14 commented 8 months ago

@pmossman I'm getting a "Server temporarily unavailable" error. Wrote a reply on the same email thread to you.

pmossman commented 8 months ago

Thanks @surajmaurya14, I did some more digging and here's what I found:

Your discover catalog jobs are now able to finish in Temporal, so the increase to 30 minutes on the Temporal side helped. However, our Load Balancer is configured with a maximum request time of 10 minutes, which means that even if the Discover job eventually succeeds, the server issues a 502 before it can finish processing the network request.

I also observed 502 errors before the 10-minute mark is reached, which seem to correspond to new code deploys that cause server pods to restart. So even if we could raise the maximum request time from 10 minutes to 30 minutes, we deploy code so often that there's a high likelihood the server would drop the request before it could complete.

We have an ongoing project to convert the Discover API to an async model, so that our server no longer needs to keep an active thread open for the entire duration of the Discover job. This project should address the fundamental issue at hand here, I'll make a note to tag you when it lands so we can make sure your use case is finally unblocked.

Thanks for the back and forth here, I know it must be frustrating that the app isn't working for you now but this iteration is extremely helpful for improving the platform and I really appreciate your involvement!

pedrohsroque commented 5 months ago

Same issue here, trying to sync MySQL to Snowflake.

shmf commented 4 months ago

Same here with an Oracle Database

plenti-jacob-roe commented 4 months ago

Same here, we have tried increasing the Temporal message limit and HTTP timeouts to no effect.

Trying to sync MSSQL to Databricks.

That said, we have an old version of Airbyte (0.44.2) running and don't have this issue on that version with the same MSSQL database and Databricks. Both are running on K8s.

jonodutch commented 4 months ago

Also experiencing this issue with a large Oracle DB.

shmf commented 4 months ago

Hey @evantahler is there any ETA? :) thanks

evantahler commented 4 months ago

cc @davinchia There's no ETA as of yet, but dealing with large catalogs is on our near-term roadmap.

marcosmarxm commented 3 months ago

There are 2 problems here:

  1. Catalog size: this one is almost fixed. If you update to the latest version, you shouldn't have this problem anymore.
  2. Time (how long it takes to discover your schema): today there is a 10-minute timeout to discover the catalog. A possible fix is planned to be released in the near future.

shmf commented 3 months ago

@marcosmarxm not sure which latest version you mean. I just tested Airbyte 0.60.1 and the Oracle connector, and I still get this error message:

2024-05-24 16:06:21 platform > Attempt 0 to call to write discover schema result error: io.airbyte.api.client.invoker.generated.ApiException: writeDiscoverCatalogResult call failed with: 413 - {"message":"Request Entity Too Large","_links":{"self":{"href":"/api/v1/sources/write_discover_catalog_result","templated":false}},"_embedded":{"errors":[{"message":"The content length [10546869] exceeds the maximum allowed content length [10485760]","_links":{},"_embedded":{}}]}}

myu65 commented 2 months ago

I am currently facing this issue as well. I just tested Airbyte 0.63.2 (OSS version on WSL) with the Postgres connector, and I am still encountering the same 413 error. It seems that the size problem with the catalog has not been resolved in this version.

airbyte-worker                    | 2024-06-24 01:58:28 INFO i.a.c.t.HeartbeatUtils(withBackgroundHeartbeat):64 - Stopping temporal heartbeating...
airbyte-worker                    | 2024-06-24 01:58:28 INFO i.a.c.t.HeartbeatUtils(withBackgroundHeartbeat):73 - Temporal heartbeating stopped.
airbyte-worker                    | 2024-06-24 01:58:28 WARN i.t.i.a.ActivityTaskExecutors$BaseActivityTaskExecutor(execute):126 - Activity failure. ActivityId=cfcf798c-1ca9-35a3-af14-e14852c013fd, activityType=Run, attempt=1
airbyte-worker                    | java.lang.RuntimeException: io.temporal.serviceclient.CheckedExceptionWrapper: io.airbyte.workers.exception.WorkerException: Error while discovering schema
airbyte-worker                    |     at io.airbyte.commons.temporal.HeartbeatUtils.withBackgroundHeartbeat(HeartbeatUtils.java:62) ~[io.airbyte-airbyte-commons-temporal-core-0.63.2.jar:?]
airbyte-worker                    |     at io.airbyte.workers.temporal.discover.catalog.DiscoverCatalogActivityImpl.run(DiscoverCatalogActivityImpl.java:183) ~[io.airbyte-airbyte-workers-0.63.2.jar:?]
airbyte-worker                    |     at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) ~[?:?]
airbyte-worker                    |     at java.base/java.lang.reflect.Method.invoke(Method.java:580) ~[?:?]
airbyte-worker                    |     at io.temporal.internal.activity.RootActivityInboundCallsInterceptor$POJOActivityInboundCallsInterceptor.executeActivity(RootActivityInboundCallsInterceptor.java:64) ~[temporal-sdk-1.22.3.jar:?]
airbyte-worker                    |     at io.temporal.internal.activity.RootActivityInboundCallsInterceptor.execute(RootActivityInboundCallsInterceptor.java:43) ~[temporal-sdk-1.22.3.jar:?]
airbyte-worker                    |     at io.temporal.internal.activity.ActivityTaskExecutors$BaseActivityTaskExecutor.execute(ActivityTaskExecutors.java:107) ~[temporal-sdk-1.22.3.jar:?]
airbyte-worker                    |     at io.temporal.internal.activity.ActivityTaskHandlerImpl.handle(ActivityTaskHandlerImpl.java:124) ~[temporal-sdk-1.22.3.jar:?]
airbyte-worker                    |     at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handleActivity(ActivityWorker.java:278) ~[temporal-sdk-1.22.3.jar:?]
airbyte-worker                    |     at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handle(ActivityWorker.java:243) ~[temporal-sdk-1.22.3.jar:?]
airbyte-worker                    |     at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handle(ActivityWorker.java:216) ~[temporal-sdk-1.22.3.jar:?]
airbyte-worker                    |     at io.temporal.internal.worker.PollTaskExecutor.lambda$process$0(PollTaskExecutor.java:105) ~[temporal-sdk-1.22.3.jar:?]
airbyte-worker                    |     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
airbyte-worker                    |     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
airbyte-worker                    |     at java.base/java.lang.Thread.run(Thread.java:1583) [?:?]
airbyte-worker                    | Caused by: io.temporal.serviceclient.CheckedExceptionWrapper: io.airbyte.workers.exception.WorkerException: Error while discovering schema
airbyte-worker                    |     at io.temporal.serviceclient.CheckedExceptionWrapper.wrap(CheckedExceptionWrapper.java:57) ~[temporal-serviceclient-1.22.3.jar:?]
airbyte-worker                    |     at io.temporal.internal.sync.WorkflowInternal.wrap(WorkflowInternal.java:539) ~[temporal-sdk-1.22.3.jar:?]
airbyte-worker                    |     at io.temporal.activity.Activity.wrap(Activity.java:52) ~[temporal-sdk-1.22.3.jar:?]
airbyte-worker                    |     at io.airbyte.workers.temporal.TemporalAttemptExecution.get(TemporalAttemptExecution.java:143) ~[io.airbyte-airbyte-workers-0.63.2.jar:?]
airbyte-worker                    |     at io.airbyte.workers.temporal.discover.catalog.DiscoverCatalogActivityImpl.lambda$run$1(DiscoverCatalogActivityImpl.java:198) ~[io.airbyte-airbyte-workers-0.63.2.jar:?]
airbyte-worker                    |     at io.airbyte.commons.temporal.HeartbeatUtils.withBackgroundHeartbeat(HeartbeatUtils.java:57) ~[io.airbyte-airbyte-commons-temporal-core-0.63.2.jar:?]
airbyte-worker                    |     ... 14 more
airbyte-worker                    | Caused by: io.airbyte.workers.exception.WorkerException: Error while discovering schema
airbyte-worker                    |     at io.airbyte.workers.general.DefaultDiscoverCatalogWorker.run(DefaultDiscoverCatalogWorker.java:136) ~[io.airbyte-airbyte-commons-worker-0.63.2.jar:?]
airbyte-worker                    |     at io.airbyte.workers.general.DefaultDiscoverCatalogWorker.run(DefaultDiscoverCatalogWorker.java:52) ~[io.airbyte-airbyte-commons-worker-0.63.2.jar:?]
airbyte-worker                    |     at io.airbyte.workers.temporal.TemporalAttemptExecution.get(TemporalAttemptExecution.java:138) ~[io.airbyte-airbyte-workers-0.63.2.jar:?]
airbyte-worker                    |     at io.airbyte.workers.temporal.discover.catalog.DiscoverCatalogActivityImpl.lambda$run$1(DiscoverCatalogActivityImpl.java:198) ~[io.airbyte-airbyte-workers-0.63.2.jar:?]
airbyte-worker                    |     at io.airbyte.commons.temporal.HeartbeatUtils.withBackgroundHeartbeat(HeartbeatUtils.java:57) ~[io.airbyte-airbyte-commons-temporal-core-0.63.2.jar:?]
airbyte-worker                    |     ... 14 more
airbyte-worker                    | Caused by: org.openapitools.client.infrastructure.ClientException: Client error : 413 Request Entity Too Large
airbyte-worker                    |     at io.airbyte.api.client.generated.SourceApi.writeDiscoverCatalogResult(SourceApi.kt:1163) ~[io.airbyte-airbyte-api-0.63.2.jar:?]
airbyte-worker                    |     at io.airbyte.workers.general.DefaultDiscoverCatalogWorker.run(DefaultDiscoverCatalogWorker.java:125) ~[io.airbyte-airbyte-commons-worker-0.63.2.jar:?]
airbyte-worker                    |     at io.airbyte.workers.general.DefaultDiscoverCatalogWorker.run(DefaultDiscoverCatalogWorker.java:52) ~[io.airbyte-airbyte-commons-worker-0.63.2.jar:?]
airbyte-worker                    |     at io.airbyte.workers.temporal.TemporalAttemptExecution.get(TemporalAttemptExecution.java:138) ~[io.airbyte-airbyte-workers-0.63.2.jar:?]
airbyte-worker                    |     at io.airbyte.workers.temporal.discover.catalog.DiscoverCatalogActivityImpl.lambda$run$1(DiscoverCatalogActivityImpl.java:198) ~[io.airbyte-airbyte-workers-0.63.2.jar:?]
airbyte-worker                    |     at io.airbyte.commons.temporal.HeartbeatUtils.withBackgroundHeartbeat(HeartbeatUtils.java:57) ~[io.airbyte-airbyte-commons-temporal-core-0.63.2.jar:?]
airbyte-worker                    |     ... 14 more

shmf commented 2 months ago

@myu65 I managed to overcome a similar error thanks to this suggestion: https://github.com/airbytehq/airbyte/issues/37463#issuecomment-2073396085

myu65 commented 2 months ago

@myu65 I managed to overcome a similar error thanks to this suggestion #37463 (comment)

Thank you for your reply.
I managed to resolve my error by following your suggestion.

I added the following values to the .env file and the docker-compose file, which resolved the error when fetching the schema while creating a connection:

.env

MICRONAUT_SERVER_MAX_REQUEST_SIZE=52428800

docker-compose.yaml

    environment:
      # ADD THIS ENVIRONMENT VARIABLE TO THE APPROPRIATE CONTAINER
      # I AM UNSURE WHICH CONTAINER REQUIRES THIS SETTING
      - MICRONAUT_SERVER_MAX_REQUEST_SIZE=${MICRONAUT_SERVER_MAX_REQUEST_SIZE}
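
Given that the 413 in the stack trace above is returned by the writeDiscoverCatalogResult call to the Airbyte API, the server container is the most likely place this setting needs to land. A sketch, assuming the stock docker-compose.yaml service name (unverified, so it may also be needed on other API-serving containers):

# docker-compose.yaml (excerpt, sketch)
server:
  environment:
    # 50 MB (52428800 bytes) request cap for the Micronaut HTTP server that receives the discovered catalog
    - MICRONAUT_SERVER_MAX_REQUEST_SIZE=${MICRONAUT_SERVER_MAX_REQUEST_SIZE}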

To reduce the number of similar issues being raised, I suggest updating the following files:

https://github.com/airbytehq/airbyte-platform/blob/main/.env
https://github.com/airbytehq/airbyte-platform/blob/main/docker-compose.yaml