cpilson closed this issue 10 months ago.
Update: same output on an EC2 t2.medium runner. It connects, plans, then errors with the output above. Memory on the 4 GB instance peaked at roughly 46% utilization, so the container shouldn't have run out of memory.
Hi, @cpilson. Have you resolved the issue?
Well, it feels like a scenario where the logging may be insufficient, but I haven't checked whether the agent binary has a debug/trace flag for more verbose logging.
The issue resolved when I moved away from using local_file to copy files (which holds their contents in memory), but I'm still trying to understand how 896K of files could make things collapse on an EC2 runner with 4 GB of RAM. (See the sketch below for the rough pattern involved.)
So, no. I'm also not sure how much insight into the state of the container the binary would even have, whether that responsibility lies with the agent binary at all, or whether anything can even be done here.
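For context, here is a minimal sketch of the kind of local_file usage I mean; the paths, resource name, and for_each source are illustrative, not my actual configuration:

```hcl
# Illustrative only: copying a tree of files with the hashicorp/local
# provider. file() reads each file fully into memory as a string, and
# the content attribute is also persisted in Terraform state, so even a
# small tree passes through RAM more than once during plan/apply.
resource "local_file" "copy" {
  for_each = fileset("${path.module}/files", "**")

  content  = file("${path.module}/files/${each.value}")
  filename = "${path.root}/output/${each.value}"
}
```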
Hello.
I have a terraform plan that runs to what looks like completion, even emitting outputs, and then suddenly fails.
The k8s log follows (k8s was set up via
helm install --set agent.token=$SCALR_TOKEN --set agent.url=$SCALR_URL scalr-agent-helm/agent-k8s --generate-name
), and I'm left wondering whether this is an architecture issue (I'm on an M1 CPU, which is ARM). In the UI, here's what I see: