hexash42 opened 3 months ago
What is the status of the logs aggregator Docker container (the image it runs is timberio/vector)? I think I ran into something like this before on Fedora, but I'm not sure it was the same issue:
Looks like it is up:
846a688ecd4f fluent/fluent-bit:1.9.7 "/fluent-bit/bin/flu…" 37 seconds ago Up 36 seconds 2020/tcp kurtosis-logs-collector--0ce18db33a5d4a28b6216331c96a684f
And here are the logs from docker logs:
Fluent Bit v1.9.7
* Copyright (C) 2015-2022 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io
[2024/07/03 14:56:10] [ info] Configuration:
[2024/07/03 14:56:10] [ info] flush time | 1.000000 seconds
[2024/07/03 14:56:10] [ info] grace | 5 seconds
[2024/07/03 14:56:10] [ info] daemon | 0
[2024/07/03 14:56:10] [ info] ___________
[2024/07/03 14:56:10] [ info] inputs:
[2024/07/03 14:56:10] [ info] forward
[2024/07/03 14:56:10] [ info] ___________
[2024/07/03 14:56:10] [ info] filters:
[2024/07/03 14:56:10] [ info] ___________
[2024/07/03 14:56:10] [ info] outputs:
[2024/07/03 14:56:10] [ info] forward.0
[2024/07/03 14:56:10] [ info] ___________
[2024/07/03 14:56:10] [ info] collectors:
[2024/07/03 14:56:10] [ info] [fluent bit] version=1.9.7, commit=265783ebe9, pid=1
[2024/07/03 14:56:10] [debug] [engine] coroutine stack size: 24576 bytes (24.0K)
[2024/07/03 14:56:10] [ info] [storage] created root path /fluent-bit/etc/storage/
[2024/07/03 14:56:10] [ info] [storage] version=1.2.0, type=memory+filesystem, sync=normal, checksum=disabled, max_chunks_up=128
[2024/07/03 14:56:10] [ info] [storage] backlog input plugin: storage_backlog.1
[2024/07/03 14:56:10] [ info] [cmetrics] version=0.3.5
[2024/07/03 14:56:10] [debug] [forward:forward.0] created event channels: read=21 write=22
[2024/07/03 14:56:10] [debug] [in_fw] Listen='0.0.0.0' TCP_Port=9713
[2024/07/03 14:56:10] [ info] [input:forward:forward.0] listening on 0.0.0.0:9713
[2024/07/03 14:56:10] [debug] [storage_backlog:storage_backlog.1] created event channels: read=24 write=25
[2024/07/03 14:56:10] [ info] [input:storage_backlog:storage_backlog.1] queue memory limit: 95.4M
[2024/07/03 14:56:10] [debug] [forward:forward.0] created event channels: read=26 write=27
[2024/07/03 14:56:10] [debug] [router] match rule forward.0:forward.0
[2024/07/03 14:56:10] [debug] [router] match rule storage_backlog.1:forward.0
[2024/07/03 14:56:10] [ info] [output:forward:forward.0] worker #0 started
[2024/07/03 14:56:10] [ info] [output:forward:forward.0] worker #1 started
[2024/07/03 14:56:10] [ info] [http_server] listen iface=0.0.0.0 tcp_port=9712
[2024/07/03 14:56:10] [ info] [sp] stream processor started
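Eyeballing a dump like the one above mostly amounts to checking for warn/error lines. A small sketch of that check (the function name and the level pattern are my assumptions; in practice you would pipe `docker logs <container-id> 2>&1` into the same filter, and the heredoc below just reuses two lines from the paste):

```shell
# Sketch: scan Fluent Bit log lines for warn/error entries.
# Fluent Bit pads the level, e.g. "[ info]", "[ warn]", "[error]",
# so the pattern allows an optional leading space.
scan_collector_logs() {
  grep -E '\[ ?(warn|error)\]' && return 1
  echo "no warnings or errors found"
}

# Real usage would be:
#   docker logs <container-id> 2>&1 | scan_collector_logs
# Here, two lines from the dump above stand in for the live logs:
scan_collector_logs <<'EOF'
[2024/07/03 14:56:10] [ info] [input:forward:forward.0] listening on 0.0.0.0:9713
[2024/07/03 14:56:10] [ info] [sp] stream processor started
EOF
```

With only info-level lines, as in this dump, it prints "no warnings or errors found"; if any warn/error line is present, it prints the offending lines and returns nonzero.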
Yeah, never mind then. This has similar symptoms to, but is not the same as, the issue I experienced on aarch64 Fedora, so it must be something else. I have an amd64 Manjaro machine I will test on later today.
What's your CLI version?
0.90.1
Description & steps to reproduce
Downloaded the amd64 binary on Arch Linux. Ran the quickstart example (or basically anything else):
kurtosis run github.com/kurtosis-tech/basic-service-package --enclave quickstart
This is the output (it hangs forever):
These are the engine logs:
Enclave log from kurtosis-dump:
The engine is completely stuck and does not respond to commands such as
kurtosis enclave ls
- it must first be restarted.
Desired behavior
I expect this to work just like the quickstart example
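Since the hung engine blocks any kurtosis command indefinitely, scripts that drive the CLI can at least detect the hang instead of blocking forever. A minimal sketch using GNU timeout (here `sleep 60` stands in for a hanging `kurtosis enclave ls`, and the commented recovery command is an assumption about the CLI, not something verified against this bug):

```shell
# Guard a possibly-hanging CLI call with a timeout.
# `sleep 60` stands in for a hanging `kurtosis enclave ls`;
# GNU timeout kills it after 2 seconds and exits with status 124.
timeout 2 sleep 60
status=$?
if [ "$status" -eq 124 ]; then
  echo "command timed out; engine appears hung"
  # kurtosis engine restart   # recovery step (assumed subcommand)
fi
```

This does not fix anything, but it turns an indefinite hang into a detectable failure that a wrapper script can react to.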
What is the severity of this bug?
Critical; I am blocked and Kurtosis is unusable for me because of this bug.
What area of the product does this pertain to?
CLI: the Command Line Interface