ShadowTraffic / requests

Issues/feature requests for ShadowTraffic

Collect full log output #6

Closed: obiseankenobi closed this 1 month ago

obiseankenobi commented 1 month ago

I am trying to redirect the startup log to a file, but the output file only captures limited information:

Docker run command:

docker run \
  --env-file license.env \
  -v /Volumes/Stardust/ShadowTraffic/cloud_kafka.json:/home/config.json \
  shadowtraffic/shadowtraffic:latest \
  --config /home/config.json > CCloud_kafka_testing.txt

Output file attached: CCloud_kafka_testing.txt

❯ cat CCloud_kafka_testing.txt
✔ Verified ShadowTraffic Developer license
✔ Running with seed 618336494. You can repeat this run by setting --seed 618336494.
✔ Configuration validated
✔ Generating 2 streams of data
✔ Now running

Expected full log output:

❯ docker run \
  --env-file license.env \
  -v /Volumes/Stardust/ShadowTraffic/cloud_kafka.json:/home/config.json \
  shadowtraffic/shadowtraffic:latest \
  --config /home/config.json > CCloud_kafka_testing.txt
[main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
    acks = -1
    auto.include.jmx.reporter = true
    batch.size = 16384
    bootstrap.servers = [pkc-#####us-west-2.aws.confluent.cloud:9092]
    buffer.memory = 33554432
    client.dns.lookup = use_all_dns_ips
    client.id = producer-1
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 120000
    enable.idempotence = true
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    linger.ms = 0
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metadata.max.idle.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.adaptive.partitioning.enable = true
    partitioner.availability.timeout.ms = 0
    partitioner.class = null
    partitioner.ignore.keys = false
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = [hidden]
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.connect.timeout.ms = null
    sasl.login.read.timeout.ms = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.login.retry.backoff.max.ms = 10000
    sasl.login.retry.backoff.ms = 100
    sasl.mechanism = PLAIN
    sasl.oauthbearer.clock.skew.seconds = 30
    sasl.oauthbearer.expected.audience = null
    sasl.oauthbearer.expected.issuer = null
    sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
    sasl.oauthbearer.jwks.endpoint.url = null
    sasl.oauthbearer.scope.claim.name = scope
    sasl.oauthbearer.sub.claim.name = sub
    sasl.oauthbearer.token.endpoint.url = null
    security.protocol = SASL_SSL
    security.providers = null
    send.buffer.bytes = 131072
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
<truncated>
[main] WARN org.apache.kafka.clients.admin.AdminClientConfig - These configurations '[value.serializer, key.serializer]' were supplied but are not used yet.
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 7.4.0-ccs
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 30969fa33c185e88
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1729787453467
[kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata - [Producer clientId=producer-1] Cluster ID: lkc-qw96qp
[kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=producer-1] ProducerId set to 7761642 with epoch 0
[kafka-producer-network-thread | producer-2] INFO org.apache.kafka.clients.Metadata - [Producer clientId=producer-2] Cluster 
<truncated>
MichaelDrogalis commented 1 month ago

Hey @obiseankenobi, for this one I think you just want &> CCloud_kafka_testing.txt instead of > CCloud_kafka_testing.txt.

Your shell's > redirection isn't capturing standard error, which is where those logs are printed.
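
For reference, &> is Bash-specific shorthand; the portable POSIX spelling redirects stdout to the file and then duplicates stderr onto it. A sketch using the same command as above:

docker run \
  --env-file license.env \
  -v /Volumes/Stardust/ShadowTraffic/cloud_kafka.json:/home/config.json \
  shadowtraffic/shadowtraffic:latest \
  --config /home/config.json > CCloud_kafka_testing.txt 2>&1

Note that the order matters: 2>&1 must come after > CCloud_kafka_testing.txt, otherwise stderr is duplicated onto the terminal before stdout is redirected and the logs still land on screen.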

MichaelDrogalis commented 1 month ago

In retrospect, this is pretty unintuitive. As of 0.11.3, ShadowTraffic now logs all output to standard out. The standard error thing was just a weird side effect of how I had the underlying Log4j in Kafka configured.

So this should now be simple to capture with > again.
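
As a sketch, assuming a version tag matching the release is published on Docker Hub, pinning the image to 0.11.3 or later means the original plain > redirect captures the full log:

# assumes a 0.11.3 image tag exists; :latest also works once it includes this change
docker run \
  --env-file license.env \
  -v /Volumes/Stardust/ShadowTraffic/cloud_kafka.json:/home/config.json \
  shadowtraffic/shadowtraffic:0.11.3 \
  --config /home/config.json > CCloud_kafka_testing.txt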