I think this would require something like a sidecar container that could attach to the process output and stream the logs "somewhere".
@bobh66 @ytsarev I hope this message finds you well. We have observed that this particular issue has garnered significant interest and upvotes.
Currently, debugging Terraform issues is challenging because provider-terraform exposes only brief error messages. It would be beneficial to have the option to write the CLI logs to a file when needed.
We tried the sidecar container approach, but since the CLI logs are not stored anywhere, it is difficult to read them from the sidecar. We could potentially use strace to intercept the terraform process output, but that requires elevated privileges and may have performance implications.
We propose exposing a field called "logPath" in the ProviderConfig or Workspace. If a value is provided for this field, all Terraform plan (when changes exist), apply, and destroy CLI logs would be directed to that file. We can probably use a DeploymentRuntimeConfig to attach a volume to the provider pod, so the file can be written to persistent storage such as EBS.
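For illustration, a rough sketch of what this could look like (the logPath field is only the proposal here and does not exist yet; the mount path, volume, claim, and container names are assumptions):

```yaml
# Hypothetical Workspace using the proposed field (logPath does not exist today):
apiVersion: tf.upbound.io/v1beta1
kind: Workspace
metadata:
  name: example
spec:
  forProvider:
    source: Inline
    module: |
      # Terraform configuration omitted
    logPath: /tf-logs/example/terraform.log   # proposed field, exact placement TBD
---
# DeploymentRuntimeConfig attaching a PVC so the log files land on persistent storage
# (volume, claim, and mount path are illustrative):
apiVersion: pkg.crossplane.io/v1beta1
kind: DeploymentRuntimeConfig
metadata:
  name: provider-terraform-logs
spec:
  deploymentTemplate:
    spec:
      selector: {}
      template:
        spec:
          containers:
            - name: package-runtime
              volumeMounts:
                - name: tf-logs
                  mountPath: /tf-logs
          volumes:
            - name: tf-logs
              persistentVolumeClaim:
                claimName: tf-logs
```

With something like this, the provider only needs to know the log path; the actual storage backend stays behind the PVC.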
We are keen to contribute to the implementation of this feature. Please inform us if we may proceed with the mentioned approach or if you prefer alternative solutions.
Thank you.
@suramasamy I think a local log file is a good start to tackle this long-standing issue 👍 Thank you so much for your willingness to contribute the solution; I am keen to review the PR!
One concern when generating files is cleanup - would you expect that the file(s) would be deleted automatically when the Workspace is deleted?
Thank you @ytsarev, we will create the PR and let you know.
@bobh66 Yes, we can add the logic to delete the files when the Workspace is deleted.
We are starting to consider how these logs can be viewed by Kubernetes users as well as from CI/CD systems once they are stored in files. If there are preferred approaches, please share them so we can include them in our brainstorming.
If we write logs to EBS, S3, or EFS, then it would become cloud-specific, right?
Could these logs not be sent to an OTEL collector? The operator could then configure any backend for logs - Grafana Loki, ELK, Datadog, etc. With OTEL we also get a standard way of passing metadata - attributes and labels such as the Workspace name, namespace, ID, etc.
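As a rough, untested sketch of that idea (the log path, attribute, and Loki endpoint below are assumptions; any OTLP-capable backend could be substituted):

```yaml
receivers:
  filelog:
    include:
      - /tf-logs/**/terraform.log   # wherever the provider ends up writing the CLI logs
    include_file_path: true         # keep the file path so the Workspace can be inferred

processors:
  resource:
    attributes:
      - key: service.name
        value: provider-terraform
        action: insert

exporters:
  otlphttp:
    endpoint: http://loki-gateway.monitoring.svc:3100/otlp   # e.g. Loki's OTLP ingest endpoint

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [resource]
      exporters: [otlphttp]
```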
> If we write logs to EBS, S3, or EFS, then it would become cloud-specific, right?
EBS and EFS are storage CSIs, so they would be invisible to the pod because all it sees is the associated PVC. S3 would require a sidecar or additional AWS-specific code, so yes, that would be cloud-specific.
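For example, the pod only ever references a claim like the one below; whether it is backed by EBS or EFS is hidden behind the StorageClass (names, namespace, and size are illustrative):

```yaml
# The claim a DeploymentRuntimeConfig would mount; the backing storage (EBS, EFS, ...)
# is determined entirely by the StorageClass, not by the provider.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tf-logs
  namespace: crossplane-system
spec:
  accessModes:
    - ReadWriteOnce          # ReadWriteMany for EFS-style shared storage
  storageClassName: gp3      # swap for an EFS StorageClass without touching the provider
  resources:
    requests:
      storage: 5Gi
```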
> EBS and EFS are storage CSIs, so they would be invisible to the pod because all it sees is the associated PVC. S3 would require a sidecar or additional AWS-specific code, so yes, that would be cloud-specific.
The implementation in this PR, which sends the CLI logs to a log file within the workspace folder, is not cloud-specific. How to export the logs out of the workDir to the platform's logging provider (via agents), or to stream them out to external file storage, is left to the user's choice. Does this address the concern about the implementation being cloud-specific? Please let me know if there are any other concerns. Thanks
@bobh66 @ytsarev Could you please take a look at the PR for this issue when you get a chance?
What problem are you facing?
It would be helpful to see the terraform apply logs to track the progress of the Workspace.
How could Official Terraform Provider help solve your problem?
Sometimes terraform apply can take a while (10-20 minutes), and during this time it is hard to tell what the status of the terraform apply command is.
It would be great to output the terraform apply logs to the user to get more visibility.