upbound / provider-terraform

A Crossplane provider for Terraform
https://marketplace.upbound.io/providers/upbound/provider-terraform/
Apache License 2.0

Terraform apply logs #163

Closed dormullor closed 2 months ago

dormullor commented 1 year ago

What problem are you facing?

It would be helpful to see the terraform apply logs to get the progress for the Workspace

How could Official Terraform Provider help solve your problem?

Sometimes terraform apply can take a while (10-20 minutes), and during this time it is hard to tell what the status of the terraform apply command is.

It would be great to output the terraform apply logs to the user to get more visibility.

bobh66 commented 1 year ago

I think this would require something like a sidecar container that could attach to the process output and stream the logs "somewhere".

suramasamy commented 9 months ago

@bobh66 @ytsarev I hope this message finds you well. We have observed that this particular issue has garnered significant interest and upvotes.

Presently, debugging Terraform issues is challenging as the provider-terraform exposes only brief errors. It would be beneficial to have the option to write CLI logs to a file if necessary.

We tried the sidecar container approach, but since the CLI logs are not stored anywhere, it is difficult to read them from the sidecar container. We could possibly use strace to intercept the terraform process, but that requires escalated privileges and may have performance implications.

We propose exposing a field called "logPath" in the provider configuration or workspace. If a value is provided for this field, all Terraform plan (if changes exist), apply, and destroy CLI logs can be directed to that file. We could use a DeploymentRuntimeConfig to attach a volume to the pod, enabling the file to be written to persistent storage such as EBS.

We are keen to contribute to the implementation of this feature. Please inform us if we may proceed with the mentioned approach or if you prefer alternative solutions.

Thank you.
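To make the proposal above concrete, a Workspace manifest with the proposed field might look like the following. This is a hypothetical sketch: the logPath field does not exist in the released Workspace schema, and its name and position in the spec are assumptions of this issue, not the provider's API.

```yaml
# Hypothetical sketch only: logPath is the field proposed in this issue
# and is not part of the released Workspace schema.
apiVersion: tf.upbound.io/v1beta1
kind: Workspace
metadata:
  name: example-workspace
spec:
  forProvider:
    source: Inline
    module: |
      resource "random_id" "example" {
        byte_length = 4
      }
    # Proposed field: direct terraform plan/apply/destroy CLI output here.
    logPath: /var/log/terraform/example-workspace.log
  providerConfigRef:
    name: default
```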

ytsarev commented 9 months ago

@suramasamy I think a local log file is a good start to tackle this long-term issue 👍 Thank you so much for your willingness to contribute the solution, I am keen to review the PR!

bobh66 commented 9 months ago

One concern when generating files is cleanup - would you expect that the file(s) would be deleted automatically when the Workspace is deleted?

suramasamy commented 9 months ago

Thank you @ytsarev, we will create the PR and let you know. @bobh66 Yes, we can add logic to delete the files when the Workspace is deleted.

ccrockatt commented 8 months ago

We are starting to consider how these logs can be viewed by Kubernetes users, as well as from CI/CD systems, once they are stored in files. If there are preferred approaches, please share them so that we can consider them in our brainstorming.

balu-ce commented 8 months ago

If we write logs to EBS, S3, or EFS, then it would become cloud-specific, right?

PavelPikat commented 8 months ago

Could these logs not be sent to an OTEL collector? Then the operator could configure any backend for logs: Grafana Loki, ELK, Datadog, etc.

With OTEL we also get a standard way of passing metadata, such as attributes and labels for the Workspace name, namespace, ID, etc.
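As a sketch of that idea: once the logs land in a file, the OpenTelemetry Collector's filelog receiver (from the collector-contrib distribution) could tail them and ship them to whatever backend the operator configures. The file glob and endpoint below are placeholders, not paths the provider actually uses:

```yaml
# Hypothetical collector config: the log path and endpoint are placeholders.
receivers:
  filelog:
    include:
      - /tf/**/terraform.log   # wherever the provider writes CLI logs
    attributes:
      source: provider-terraform
exporters:
  otlphttp:
    endpoint: https://logs-backend.example.com:4318
service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [otlphttp]
```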

bobh66 commented 8 months ago

> If we write logs to EBS, S3, or EFS, then it would become cloud-specific, right?

EBS and EFS are storage CSIs, so they would be invisible to the pod, which only sees the associated PVC. S3 would require a sidecar or additional AWS-specific code, so yes, that would be cloud-specific.
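For reference, mounting a PVC into the provider pod can be sketched with a DeploymentRuntimeConfig along these lines. The names are illustrative and the PVC itself must be created separately; only then is the backing storage (EBS, EFS, or anything else with a CSI driver) invisible to the pod:

```yaml
# Illustrative only: mounts an existing PVC so log files survive pod restarts.
apiVersion: pkg.crossplane.io/v1beta1
kind: DeploymentRuntimeConfig
metadata:
  name: provider-terraform-logs
spec:
  deploymentTemplate:
    spec:
      template:
        spec:
          containers:
            - name: package-runtime
              volumeMounts:
                - name: tf-logs
                  mountPath: /var/log/terraform
          volumes:
            - name: tf-logs
              persistentVolumeClaim:
                claimName: tf-logs-pvc  # pre-created PVC (e.g. EBS/EFS via CSI)
```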

rvnyk commented 8 months ago

> EBS and EFS are storage CSIs, so they would be invisible to the pod, which only sees the associated PVC. S3 would require a sidecar or additional AWS-specific code, so yes, that would be cloud-specific.

The implementation in this PR, which writes CLI logs to a log file within the workspace folder, is not cloud-specific. How the logs are exported out of the workDir, whether to the platform's logging provider (via agents) or streamed to external file storage, is left to the user's choice. Does this address the concern about the implementation being cloud-specific? Please let me know if there are any other concerns. Thanks

suramasamy commented 8 months ago

@bobh66 @ytsarev Could you please look at this PR for this issue when you get a chance?