Open mieubrisse opened 10 months ago
Confirmed that it's not a problem of switching contexts; I just switched to my cloud context and back again, and the logs-aggregator was started along with the engine (though these silly logs-collectors are still hanging around...)
The status codes of the exited containers are 137. According to ChatGPT:
An exit status of 137 typically indicates that a process was terminated by a signal. In the context of Docker, specifically, this often happens when a container is killed due to exceeding its allocated memory limit.
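(Note: this is the standard Unix convention, not something specific to the report above.) Exit codes above 128 conventionally mean the process was terminated by signal (code − 128), so 137 corresponds to signal 9, SIGKILL, which is what the kernel's OOM killer sends. A minimal sketch of the arithmetic:

```shell
# Exit codes above 128 mean "terminated by signal (code - 128)".
exit_code=137
signal=$((exit_code - 128))
echo "killed by signal $signal"   # prints: killed by signal 9 (SIGKILL)
```

If the memory limit really was the cause, `docker container inspect` on the dead container should show `"OOMKilled": true` in its `State` block, alongside `"ExitCode": 137`.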
I wonder if this is caused by the logs collector resource leak.
Update - this likely wasn't being caused by the log collector memory leak. Going to look deeper into memory limits on the logs aggregator.
What's your CLI version?
0.85.29
Description & steps to reproduce
I tried a kurtosis run today, and got this:

I dumped the engine logs, and found:
I checked, and my logs aggregator was indeed stopped, though I'm not sure why:
I also have a logs-collector somehow hanging around, even though I don't have any enclaves running:
This is the output of docker container inspect kurtosis-logs-aggregator:

And the container logs:
Desired behavior
I never get this error; Kurtosis correctly manages the logs-aggregator.
What is the severity of this bug?
Critical; I am blocked and Kurtosis is unusable for me because of this bug.
What area of the product does this pertain to?
CLI: the Command Line Interface