rcaillon-Iliad opened this issue 1 year ago
What K8s cluster do you have? Is it provided by Google, AWS, ... or self-hosted?
The cluster I use is provided by Scaleway (Kubernetes Kapsule)
I am also facing an async timeout error when streaming logs from a pod; it times out after exactly 5 minutes. Did you find any workaround for this? We are using AWS-hosted K8s, and kubectl can stream logs for an hour without any issues.
I don't have any workaround unfortunately... still hoping for a fix
Just sharing in case it helps somebody: we are not using `watch.Watch().stream`. We call the raw API directly and then stream the response on to the client, setting the timeout on the API call itself. That works in our case, e.g.:
```python
resp = await client.read_namespaced_pod_log(
    pod,
    namespace,
    container=container,
    follow=True,
    _preload_content=False,  # return the raw response so we can stream it ourselves
    timestamps=True,
    tail_lines=0,
    _request_timeout=3600,   # override the default client timeout
)
```
This allows us to stream logs without any connection hiccups for an hour.
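To stream past that one-hour window, the raw call can be wrapped in a reconnect loop. A minimal sketch under the assumptions above — `open_stream` and `handle_line` are illustrative names, not kubernetes_asyncio API; in practice `open_stream` would be a factory that reopens `read_namespaced_pod_log(..., _preload_content=False)`:

```python
import asyncio

async def stream_with_reconnect(open_stream, handle_line, max_reconnects=5):
    """Reopen a log stream whenever the per-request timeout fires.

    open_stream: zero-arg factory returning a fresh async iterator of log
    lines (illustrative; wrap the read_namespaced_pod_log call here).
    Note: on reconnect you may re-read some lines; timestamps=True plus
    since_seconds can help deduplicate.
    """
    reconnects = 0
    while True:
        try:
            async for line in open_stream():
                await handle_line(line)
            return  # stream ended normally (e.g. the pod terminated)
        except asyncio.TimeoutError:
            reconnects += 1
            if reconnects > max_reconnects:
                raise  # give up after repeated timeouts
```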
Hi @rcaillon-Iliad @ajinkya-takle, I ran into the same problem (https://github.com/tomplus/kubernetes_asyncio/issues/325#issuecomment-2259998184). You can try passing `_request_timeout` to `watch.Watch().stream` like below:
```python
async def watch_endpoints():
    async with client.ApiClient() as api:
        v1 = client.CoreV1Api(api)
        async with watch.Watch().stream(
            v1.list_namespaced_endpoints,
            "XXX",
            _request_timeout=3600,
            timeout_seconds=3600,
        ) as stream:
            async for event in stream:
                evt, obj = event["type"], event["object"]
                ips = []
                if obj.subsets:
                    for ep in obj.subsets:
                        for addr in ep.addresses:
                            ips.append(addr.ip)
                print(
                    "{} {}/{} endpoints {}".format(
                        evt, obj.metadata.namespace, obj.metadata.name, ips
                    )
                )
```
Without `_request_timeout`, setting a `timeout_seconds` greater than 5 minutes still raises a TimeoutError after 5 minutes, presumably because the client's default request timeout of 5 minutes takes precedence.
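Even with a larger `_request_timeout`, a long-lived watch will eventually disconnect, so it is worth restarting it from the last seen resourceVersion, which is the standard Kubernetes watch pattern. A minimal sketch — `open_watch` is an illustrative factory, not kubernetes_asyncio API; given a resource version (or `None`) it should return a fresh async iterator of watch events, e.g. by reopening `watch.Watch().stream(..., resource_version=rv, _request_timeout=3600)`:

```python
import asyncio

async def watch_forever(open_watch, handle_event):
    """Keep a watch alive across timeouts by resuming from the last
    seen resourceVersion. A real implementation should also handle
    410 Gone by re-listing before watching again."""
    resource_version = None
    while True:
        try:
            async for event in open_watch(resource_version):
                handle_event(event)
                resource_version = event["object"].metadata.resource_version
        except asyncio.TimeoutError:
            continue  # reconnect, resuming from the last seen resourceVersion
```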