mitchellh / vagrant-google

Vagrant provider for GCE.

Creating additional disks #102

Open porviss opened 9 years ago

porviss commented 9 years ago

Is it possible to create instances with multiple disks created / attached?

Temikus commented 9 years ago

Hi @porviss! Thanks for pointing this out!

This is not currently possible. However, looking at the code in the run_instance action, it could be implemented without changing the current logic, since a disks array is already passed to the fog config:

defaults = {
    ...
    :disks               => [disk.get_as_boot_disk(true, autodelete_disk)],
    ...
}
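
For illustration, a rough sketch of what passing a second entry could look like (data_disk is hypothetical and would have to be created or looked up separately; get_object is the fog disk method for building a non-boot attached-disk entry):

disks = [
  disk.get_as_boot_disk(true, autodelete_disk),             # boot disk, same as today
  data_disk.get_object(true, false, nil, autodelete_disk)   # hypothetical second, non-boot disk
]
# ...which would then be passed as :disks => disks in the defaults hash above.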

I :heart: feedback, so can you describe your use-case a bit? What do you want to use multiple disks for and how? That may help the implementation, since I haven't run into the need myself before.

Cheers, A.

porviss commented 9 years ago

Hi @Temikus

Thank you for your response :-)

In most cases I only need a single drive, but in this case I am trying to replicate an existing Hadoop cluster setup I have, which has the OS on one drive and storage on another. So basically it's just to keep the data separate from the OS drive, which makes it easier to export and attach to another instance, or download for local use.

Temikus commented 9 years ago

@porviss I see. Just to clarify: Do you plan to use local_ssd resources or just attach a second standard disk?

porviss commented 9 years ago

In this case it would be just a second standard disk. Later on it may be beneficial to use local SSD for the sake of performance, but I haven't really thought that far ahead.


utdrmac commented 7 years ago

Old feature request but +1 for it. This helps make GCE provisioning more like AWS. Any database system should be using a separate drive for data storage.

forrayz commented 6 years ago

Hi all, I am having the same issue. I am trying to migrate from vagrant/aws to vagrant/gcp but need multiple disks provisioned for data nodes. Zoltan

seanmalloy commented 6 years ago

I'm considering implementing this new feature. Just want to get some feedback from people on how they envision this working.

Here are some thoughts ...

  1. allow creating one or more new persistent disks
  2. persistent disks can be type standard or SSD (no local SSD support)

Here are some questions ...

  1. do we want to support attaching existing disks to the GCE instance?
  2. if we need support for attaching existing disks what should be the default behaviour when destroy is run? (delete the disks or keep them)
  3. do we want to have a config option for controlling the deletion behaviour? similar to the autodelete_disk config option for the root disk.
  4. do we need a config option for selecting the mode for the disk (read/write vs read-only)?

I think the new config option would be an Array of Hashes to allow easily adding support for some of these features in the future.

Maybe something like below.

google.additional_persistent_disks = [{:type => 'ssd', :size => 200, :mode => 'rw', :delete => false, :name => 'my_awesome_new_disk'}]
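
To make the intent concrete, here is a rough sketch (not an implementation) of how the provider could normalize that Array of Hashes; the default values below are hypothetical:

disk_defaults = { :type => 'standard', :size => 10, :mode => 'rw', :delete => true }

additional_persistent_disks = [
  { :type => 'ssd', :size => 200, :mode => 'rw', :delete => false, :name => 'my_awesome_new_disk' }
]

# Merge user-supplied values over the defaults, yielding one settings Hash per extra disk.
normalized_disks = additional_persistent_disks.map { |d| disk_defaults.merge(d) }
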
Temikus commented 6 years ago

@seanmalloy on your questions

  1. It depends. Do you have a good use-case? I was thinking about this but I failed to come up with one. People did request adding snapshots though #176, so if we can reuse the logic - why not?
  2. If we're creating new ones - delete (follow the default behavior). If attaching an existing one - keep.
  3. Probably yes.
  4. Depends if there is a use-case. If it's only for creating new disks, unless there's an image/snapshot involved read-only is probably not going to be very useful. Or am I missing something?

Let me know your thoughts.

Metric-nz commented 6 years ago

My use case is quite simple, I want to use an ansible playbook that installs Zenoss on a second HD (https://github.com/sergevs/ansible-serviced-zenoss) and fails without it. I went looking for how to add the second disk and found this enhancement.

Immediately, my requirement is exactly as you describe @seanmalloy.

And while obvious in this context and essentially mirroring @Temikus' answers, mine are:

  1. Not yet
  2. Delete new, keep existing.
  3. Yes.
  4. Not at this stage.

Now to re-work the playbook so I don't need the second disk, or maybe I'll just manually add a disk to the instance to get moving.

whynick1 commented 5 years ago

For a similar use case, where we want a secondary disk for our storage node machine, I am trying to implement this feature.

Based on @Temikus's comment, this implementation should be pretty straightforward: add a second, well-configured disk to the disks list as follows:

disks = [disk.get_as_boot_disk(true, autodelete_disk), second_disk.get_object(true, false, nil, autodelete_disk)]

Below is the source code of get_object() and get_as_boot_disk() from disk.rb:

def get_object(writable = true, boot = false, device_name = nil, auto_delete = false)
    ...
end
def get_as_boot_disk(writable = true, auto_delete = false)
  get_object(writable, true, nil, auto_delete)
end

Unfortunately, get_object() was removed from fog-google in version 1.0.0. Thus, there is no way to pass in a secondary "non-boot" disk.

If two boot disks are passed (which makes no sense), it throws an error: Invalid value for field 'resource.disks[1]': ''. Boot disk must be the first disk attached to the instance. (Google::Apis::ClientError)

I don't see a workaround unless we update disk.rb.

Temikus commented 5 years ago

@whynick1 I maintain fog-google as well, so feel free to CC me on the patch; I can quickly release it and push the dependency updates through.

Temikus commented 5 years ago

@whynick1 OK, looking closer, this is already implemented in the lib. The method seems to have just been moved to the collection method attached_disk_obj.

Example:

[6] pry(#<TestDisks>)> disk.get_as_boot_disk
=> {:auto_delete=>false,
 :boot=>true,
 :source=>"https://www.googleapis.com/compute/v1/projects/graphite-sandbox/zones/us-central1-f/disks/fog-test-1-testdisks-test-get-as-configs",
 :mode=>"READ_WRITE",
 :type=>"PERSISTENT"}
[7] pry(#<TestDisks>)> disk.attached_disk_obj(boot:true,writable:true)
=> {:auto_delete=>false,
 :boot=>true,
 :mode=>"READ_WRITE",
 :source=>"https://www.googleapis.com/compute/v1/projects/graphite-sandbox/zones/us-central1-f/disks/fog-test-1-testdisks-test-get-as-configs",
 :type=>"PERSISTENT"}
[8] pry(#<TestDisks>)> assert_equal(disk.get_as_boot_disk, disk.attached_disk_obj(boot:true,writable:true))
=> true

This is probably what you're looking for, right?

Temikus commented 5 years ago

Actually, looking into the instance handling logic:

https://github.com/fog/fog-google/blob/3741053202f7c856fbe9fa5ed715bd213f48e236/lib/fog/compute/google/requests/insert_server.rb#L20-L23

You should probably be able to just pass the disk object as it is, i.e.:

:disks               => [disk1, disk2]

(First disk in the array should be automatically marked as boot)
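
A quick, untested sketch of what that might look like with fog-google, assuming both persistent disks already exist (project, key path, disk names and zone below are all made up):

require "fog/google"

client = Fog::Compute.new(
  :provider                 => "Google",
  :google_project           => "my-project",
  :google_json_key_location => "~/.config/my-key.json"
)

boot_disk = client.disks.get("vagrant-boot-disk", "us-central1-f")
data_disk = client.disks.get("vagrant-data-disk", "us-central1-f")

# insert_server should treat the first entry in the array as the boot disk.
server_options = { :disks => [boot_disk, data_disk] }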

Sorry for missing this :| I'll be in my shame cube if you need me.

whynick1 commented 5 years ago

@Temikus You are right! Based on attached_disk_obj, I tried to add an additional_disks option to allow attaching additional disks to a GCE instance. Here is an example.

  google.additional_disks = [{
   :image_family => "google-image-family",
   :image => nil,
   :image_project_id => "google-project-id",
   :disk_size => 20,
   :disk_name => "google-additional-disk-0",
   :disk_type => "pd-standard",
   :autodelete_disk => true
  }]

Please refer to this PR: https://github.com/mitchellh/vagrant-google/pull/210/files and let me know what you think! 😃

konstantin-recurly commented 4 years ago

Hi, here is a full example of what I have right now:

  config.vm.define "centos7-gcp" do |gcp|
    gcp.vm.box = "google/gce"

    gcp.vm.provider :google do |google, override|
      google.google_project_id = GOOGLE_PROJECT_ID
      google.google_json_key_location = GOOGLE_JSON_KEY_LOCATION

      google.image_family = "centos-7"
      google.name = "ansible"
      google.disk_size = 20
      google.zone = "us-central1-a"
      google.network = "shared"
      google.network_project_id = "12c34z2d3"
      google.subnetwork = "subnet"
      google.use_private_ip = true
      google.external_ip = false
      google.tags = ["allow-ssh"]
      google.additional_disks =  [{
        :image_project_id => GOOGLE_PROJECT_ID,
        :disk_size => 20,
        :disk_name => "google-additional-disk-1",
        :disk_type => "pd-standard",
        :autodelete_disk => true
      }]

      override.ssh.username = "user_name"
      override.ssh.private_key_path = "~/.ssh/id_rsa"
    end
  end
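
For completeness: the additional disk is attached raw, so it still has to be formatted and mounted inside the guest before it can be used. A minimal, untested shell-provisioner sketch (placed inside the same config.vm.define block), assuming the second disk shows up as /dev/sdb (on GCE images the /dev/disk/by-id/google-* symlinks are a more robust way to address it):

    gcp.vm.provision "shell", inline: <<-SHELL
      # Format and mount the additional disk; the device path is an assumption.
      mkfs.ext4 -F /dev/sdb
      mkdir -p /data
      mount /dev/sdb /data
    SHELL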