akshaykarle / terraform-provider-mongodbatlas

Terraform provider for MongoDB Atlas
Mozilla Public License 2.0

Resizing a cluster forces new resource #9

Closed · kristofferahl closed this issue 6 years ago

kristofferahl commented 6 years ago

Hi, thanks for your work on a much-needed Terraform provider. It seems to work great for us so far! However...

Changing the cluster size (e.g. from M10 to M20) forces a new resource, which is not the expected behavior. Resizing is supported in the MongoDB Atlas user interface, and it would be great if the cluster could be resized through your Terraform provider as well.

Is there a reason behind forcing a new resource?

akshaykarle commented 6 years ago

Hey @kristofferahl, thank you for pointing that out. I had never really tried updating the instance size of a MongoDB Atlas cluster, and I wasn't sure whether it would recreate it. Based on your feedback, I can update the cluster resource so that it updates the existing cluster instead of recreating it.

It shouldn't take much time, so if you want to give it a shot I'm happy to accept a PR; otherwise I'll implement this when I get some free time.

kristofferahl commented 6 years ago

Thanks, I might give it a try. I'll let you know if I do! Do you think it will require any changes to the client library (go-mongodbatlas), or should it just be a change in the provider?

kristofferahl commented 6 years ago

Update: I think we figured out what is required to do this:

  1. Add support for updating the cluster size
  2. Ensure cluster size property does not force a new resource
  3. Add support for specifying timeouts

The last step is needed because changing the cluster size or resizing the disk may take some time (in our case, about 30 minutes).

If this sounds reasonable to you, I can provide a pull request!

akshaykarle commented 6 years ago

Great! For 1 & 2, the size attribute has a ForceNew parameter that you could set to false or simply remove (the default is false), and Terraform will then update the cluster in place instead of recreating it.

For 3, I would recommend raising the default update timeout to 30 minutes. Have a look at resource_timeout.go if you need help.
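
Roughly, something like this (just a sketch; the attribute type, the surrounding wiring, and the function name below are assumptions rather than the actual provider code):

import (
    "time"

    "github.com/hashicorp/terraform/helper/schema"
)

// Sketch only: illustrates dropping ForceNew on "size" and raising the
// update timeout. The attribute type and the rest of the schema are
// assumed, not copied from the provider.
func resourceCluster() *schema.Resource {
    return &schema.Resource{
        // Create/Read/Update/Delete functions omitted from this sketch.

        Schema: map[string]*schema.Schema{
            "size": {
                Type:     schema.TypeString,
                Required: true,
                // ForceNew removed (it defaults to false), so changing
                // M10 -> M20 becomes an in-place update instead of a
                // destroy-and-recreate.
            },
        },

        Timeouts: &schema.ResourceTimeout{
            // Resizes can take ~30 minutes, so give the update room to finish.
            Update: schema.DefaultTimeout(30 * time.Minute),
        },
    }
}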

kristofferahl commented 6 years ago

Great! I'll have a look at it and get back to you with a pull request, hopefully sometime next week...

kristofferahl commented 6 years ago

Is there a reason you'd rather not enable specific timeouts for each operation (create, update, delete)? It won't add much overhead to the cluster resource...

Timeouts: &schema.ResourceTimeout{
    Create: schema.DefaultTimeout(40 * time.Minute),
    Update: schema.DefaultTimeout(40 * time.Minute),
    Delete: schema.DefaultTimeout(40 * time.Minute),
},
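
For reference, here's roughly how the update could consume those timeouts while waiting for a resize to finish; the state names and the inline refresh function below are placeholders, not actual provider or Atlas client code:

import (
    "fmt"
    "time"

    "github.com/hashicorp/terraform/helper/resource"
    "github.com/hashicorp/terraform/helper/schema"
)

func resourceClusterUpdate(d *schema.ResourceData, meta interface{}) error {
    // ... send the resize request to the Atlas API here ...

    stateConf := &resource.StateChangeConf{
        // State names are assumed from the Atlas API's stateName field.
        Pending: []string{"UPDATING", "REPAIRING"},
        Target:  []string{"IDLE"},
        Refresh: func() (interface{}, string, error) {
            // Placeholder: poll the Atlas API for the cluster's current
            // stateName here and return it.
            return nil, "IDLE", nil
        },
        // Picks up the operation-specific timeout defined on the resource.
        Timeout: d.Timeout(schema.TimeoutUpdate),
        Delay:   30 * time.Second,
    }

    if _, err := stateConf.WaitForState(); err != nil {
        return fmt.Errorf("error waiting for cluster to finish resizing: %s", err)
    }

    // Re-read the cluster state into d here (omitted from this sketch).
    return nil
}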
akshaykarle commented 6 years ago

Sure, looks good to me. Let's add operation-specific timeouts :)