I would expect to observe longer response times (above 2 seconds), but nothing like that happens:
k -n cluster-operators logs http-getter-7c95c7cd56-8cn45 --since=45s
Thu, 31 Oct 2024 12:52:47 UTC example.com status code: 200. Duration 13.527294ms
Thu, 31 Oct 2024 12:52:47 UTC endless.horse status code: 200. Duration 335.341592ms
Thu, 31 Oct 2024 12:52:57 UTC example.com status code: 200. Duration 14.206385ms
Thu, 31 Oct 2024 12:52:57 UTC endless.horse status code: 200. Duration 101.305447ms
Thu, 31 Oct 2024 12:53:07 UTC example.com status code: 200. Duration 13.992599ms
Thu, 31 Oct 2024 12:53:07 UTC endless.horse status code: 200. Duration 164.544507ms
Thu, 31 Oct 2024 12:53:17 UTC example.com status code: 200. Duration 14.968083ms
Thu, 31 Oct 2024 12:53:17 UTC endless.horse status code: 200. Duration 181.037902ms
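For context, a minimal sketch of the kind of request loop that would produce log lines like these is below; the target URLs, the 10-second interval, and the log wording are inferred from the output above rather than copied from the actual pod code.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe issues a single GET and logs the status code and request duration,
// roughly matching the log format shown above.
func probe(client *http.Client, url string) {
	start := time.Now()
	resp, err := client.Get(url)
	if err != nil {
		fmt.Printf("%s %s error: %v\n", time.Now().UTC().Format(time.RFC1123), url, err)
		return
	}
	resp.Body.Close()
	fmt.Printf("%s %s status code: %d. Duration %s\n",
		time.Now().UTC().Format(time.RFC1123), url, resp.StatusCode, time.Since(start))
}

func main() {
	client := &http.Client{Timeout: 30 * time.Second}
	for {
		probe(client, "https://example.com")
		probe(client, "https://endless.horse")
		time.Sleep(10 * time.Second) // interval inferred from the log timestamps
	}
}
```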
Invoking curl from inside the pod to https://example.com:443 also returns the response in under 50 ms.
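(The response time was measured with curl's built-in timing output; the exact flags below are only an example of such a command.)

curl -sS -o /dev/null -w 'status %{http_code}, total %{time_total}s\n' https://example.com:443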
What you expected to happen:
Invoking curl from inside the pod to https://example.com:443 should return the response only after more than 2000 ms.
Where can this issue be corrected? (optional)
How to reproduce it (as minimally and precisely as possible):
I think I described the reproduction scenario in the first section.
Anything else we need to know?: