katronquillo opened 3 months ago
Hello @ytsarev @bobh66, I'm wondering if anyone would be able to share a timeline for the review of this PR? Thanks so much!
Sorry for the delay, KubeCon took some energy :) I plan to take care of the review this week
/test-examples="examples/workspace-inline-aws.yaml"
@katronquillo Could you provide an example of what this log output looks like? Is it essentially just typical `terraform apply` output?
I'm coming into this conversation late (sorry!) but I'm pretty wary of writing logs to file paths inside the container, and especially of implementing our own log rotation logic. Ideally we'd outsource log processing to the many existing tools that handle it already (systemd etc).
I'm wondering if there's a more lightweight option here.
For example, I'm wondering if it would be possible to emit Terraform logs to the provider's container stdout. I'm imagining something like this:
{"ts": "...", "level": "info", "workspace": "some-ws", "source": "tf-cli", "msg": "<terraform log line goes here>"}
If we took this approach Terraform CLI logs would be interspersed with the provider's other logs, but given we already support structured logging it would be possible for any log parsing backend to reconstruct the Terraform logs for a particular workspace.
If we take this approach we may also want to be able to control the log output at the `Workspace` level to limit the amount of data being sent to the pod logs.
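For illustration only, a per-`Workspace` knob might look like the following; the `enableTerraformCLILogging` field is purely hypothetical and does not exist in the provider's API, and the API version is an assumption:

```yaml
apiVersion: tf.upbound.io/v1beta1
kind: Workspace
metadata:
  name: some-ws
spec:
  forProvider:
    source: Inline
    module: |
      # module body elided
    # Hypothetical field: emit this Workspace's Terraform CLI output
    # to the provider pod's stdout.
    enableTerraformCLILogging: true
```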
Using an MR or ProviderConfig to control how a provider process logs to stdout does seem a little unusual to me. My inclination would be for it to be an all or nothing CLI flag.
What would be the consequences of lots of Terraform logs being written to stdout?
> My inclination would be for it to be an all or nothing CLI flag.
Meaning it gets set by a `DeploymentRuntimeConfig`? The downside of that is it requires a pod restart to enable/disable, and for large numbers of `Workspaces` a pod restart can take a long time to recover since it has to run `terraform init` on every `Workspace`. This can be mitigated by using a PVC for `/tf` so the workspace directories persist across the reboot, but that's not the default configuration and I suspect most users don't do that.
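For reference, that mitigation could be sketched as below, assuming Crossplane's default `package-runtime` container name and a pre-created PVC named `tf-workspaces` (both names are assumptions):

```yaml
apiVersion: pkg.crossplane.io/v1beta1
kind: DeploymentRuntimeConfig
metadata:
  name: provider-terraform
spec:
  deploymentTemplate:
    spec:
      selector: {}
      template:
        spec:
          containers:
            - name: package-runtime
              volumeMounts:
                - name: tf-dir
                  mountPath: /tf   # workspace dirs survive pod restarts
          volumes:
            - name: tf-dir
              persistentVolumeClaim:
                claimName: tf-workspaces  # pre-created PVC (assumed name)
```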
> What would be the consequences of lots of Terraform logs being written to stdout?
Mostly the sheer volume of data to sift through and the tendency of the pod logs to roll over just when you need them. There are ways to work around this, but typical development debugging scenarios will just be using `kubectl logs` and not relying on external log collection systems.
Hi @negz @bobh66 @ytsarev! Thank you for reviewing our PR. We agree with your feedback/concerns and we'll go ahead and implement the changes you've proposed. To be specific, we'll create a new PR where we will remove the `spec` option to enable/disable logging (`True`) and instead write the Terraform CLI logs to container stdout as structured records, similar to what @negz suggested.

hi @negz @ytsarev @bobh66 We have created this new PR based on the above discussions. Kindly review when you get a chance.
One other note on this implementation - for `Remote` `Workspaces` we delete the directory on every reconcile due to issues with `go-getter`, so the logs for the previous reconcile will be removed and the current logs will only persist until the next reconciliation happens. I'm thinking that may be another reason to go with #258 instead
This is no longer true after #276
Description of your changes
Added optional `spec.logConfig` to the `ProviderConfig` resource to enable writing of Terraform Plan and Apply output to a log file:

- `spec.logConfig.enableLogging`: Specifies whether logging is enabled (`true`) or disabled (`false`). When enabled, Terraform CLI command logs will be written to a file. Default is `false`.
- `spec.logConfig.backupLogFilesCount`: Specifies the number of archived log files to retain. When a new log file is created due to a change detected by `terraform diff`, the previous log file is archived and renamed with a timestamp. This parameter controls how many archived log files are kept before older ones are deleted. Default is `0`.
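For example, a `ProviderConfig` using these fields might look like the following; the API version and the `configuration` stub are assumptions, and only the `logConfig` fields come from this PR:

```yaml
apiVersion: tf.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  logConfig:
    enableLogging: true      # write Terraform CLI logs to terraform.log
    backupLogFilesCount: 1   # keep one archived terraform.log.<timestamp>
  configuration: |
    # backend/provider configuration elided
```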
By default, Terraform CLI stores log files in the workspace directory. The default log file name is `terraform.log`. When a backup is taken (e.g., due to a new change detected by `terraform diff`), the current log file is renamed to `terraform.log.<timestamp>`, where `<timestamp>` is a placeholder for the actual timestamp of the backup.

Fixes #163
I have:

- Run `make reviewable` to ensure this PR is ready for review.
- Run `make e2e` with `UPTEST_EXAMPLE_LIST="examples/workspace-random-generator.yaml"`
How has this code been tested

- With `make run`, applied the following `ProviderConfig`:
- To the same cluster, applied the following `Workspace`:
- The `terraform.log` file was populated with the expected Terraform Plan and Apply logs
- Updated the `Workspace` YAML and re-applied
- The `terraform.log` file was updated with the new Terraform Plan and Apply logs
- `spec.logConfig.backupLogFilesCount` was set to `1`
- A new `terraform.log` file was created with the most recent Terraform Plan and Apply logs, and the previous log was archived per `backupLogFilesCount`
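As a sketch, an inline `Workspace` for this kind of test might look like the following; the API version and module body are assumptions, loosely modeled on the `examples/workspace-random-generator.yaml` example referenced above:

```yaml
apiVersion: tf.upbound.io/v1beta1
kind: Workspace
metadata:
  name: random-generator
spec:
  forProvider:
    source: Inline
    module: |
      resource "random_id" "example" {
        byte_length = 4
      }
  providerConfigRef:
    name: default
```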