porviss opened this issue 9 years ago
Hi @porviss! Thanks for pointing this out!
This is not currently possible. However, looking at the code in the `run_instance` action, it could be implemented without changing the current logic, since a `disks` array is already passed to the fog config:

```ruby
defaults = {
  ...
  :disks => [disk.get_as_boot_disk(true, autodelete_disk)],
  ...
}
```
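In principle, a second non-boot disk hash could simply be appended to that array. A minimal sketch of the idea (the hash keys mirror the GCE attached-disk format; the variable names and values are illustrative, not the plugin's actual code):

```ruby
# Hypothetical sketch: appending a second, non-boot disk to the :disks
# array that run_instance hands to fog. Names and values are illustrative.
boot_disk = { :boot => true,  :auto_delete => true,  :mode => "READ_WRITE", :type => "PERSISTENT" }
data_disk = { :boot => false, :auto_delete => false, :mode => "READ_WRITE", :type => "PERSISTENT" }

defaults = { :disks => [boot_disk, data_disk] }

# GCE requires the boot disk to be the first disk attached to the instance.
raise "boot disk must come first" unless defaults[:disks].first[:boot]
```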
I :heart: feedback, so can you describe your use-case a bit? What do you want to use multiple disks for and how? That may help the implementation, since I haven't run into the need myself before.
Cheers, A.
Hi @Temikus
Thank you for your response :-)
In most cases I only need a single drive, but in this case I am trying to replicate an existing Hadoop cluster setup I have, which has the OS on one drive and storage on another. So basically it's just to keep the data separate from the OS drive, which makes it easier to export and attach to another instance, or to download for local use.
@porviss I see. Just to clarify: Do you plan to use local_ssd resources or just attach a second standard disk?
In this case it would be just a second standard disk. Later on, though, it may be beneficial to use local SSD for performance, but I haven't really thought that far ahead.
Old feature request but +1 for it. This helps make GCE provisioning more like AWS. Any database system should be using a separate drive for data storage.
Hi all, I am having the same issue. I am trying to migrate vagrant/aws to vagrant/gcp but need multiple disks provisioned for data nodes. Zoltan
I'm considering implementing this new feature. Just want to get some feedback from people on how they envision this working.
Here are some thoughts ...

Here are some questions ...

- What should happen to the additional disks when `destroy` is run? (delete the disks or keep them) There is already an `autodelete_disk` config option for the root disk.

I think the new config option would be an Array of Hashes to allow easily adding support for some of these features in the future. Maybe something like below.
```ruby
google.additional_persistent_disks = [
  { :type => 'ssd', :size => 200, :mode => 'rw', :delete => false, :name => 'my_awesome_new_disk' }
]
```
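An Array-of-Hashes option like that could be normalized inside the plugin roughly as follows. This is only a sketch: the key names, defaults, and `disk_specs` variable are assumptions for illustration, not the implemented API.

```ruby
# Sketch: normalizing an additional_persistent_disks-style Array of Hashes,
# filling in assumed defaults for any keys the user omits.
additional_persistent_disks = [
  { :type => 'ssd', :size => 200, :mode => 'rw', :delete => false, :name => 'my_awesome_new_disk' }
]

disk_specs = additional_persistent_disks.map do |d|
  {
    :name        => d.fetch(:name),              # required
    :size_gb     => d.fetch(:size, 10),          # default size is an assumption
    :type        => d.fetch(:type, 'standard'),  # default type is an assumption
    :mode        => d.fetch(:mode, 'rw'),
    :auto_delete => d.fetch(:delete, false)
  }
end
```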
@seanmalloy, on your questions ...

Let me know your thoughts.
My use case is quite simple: I want to use an ansible playbook that installs Zenoss on a second HD (https://github.com/sergevs/ansible-serviced-zenoss) and fails without it. I went looking for how to add the second disk and found this enhancement.
Immediately, my requirement is exactly as you describe @seanmalloy.
And while obvious in this context and essentially mirroring @Temikus' answers, mine are:
Now to re-work the playbook so I don't need the second disk, or maybe I'll just manually add a disk to the instance to get moving.
For a similar use case, where we want a secondary disk for our storage node machine, I am trying to implement this feature.

Based on @Temikus's snippet, this implementation should be pretty straightforward: add a second, well-configured disk to the `disks` list, as follows:

```ruby
disks = [disk.get_as_boot_disk(true, autodelete_disk), second_disk.get_object(true, false, nil, autodelete_disk)]
```
Below is the source code of `get_object()` and `get_as_boot_disk()` from `disk.rb`:

```ruby
def get_object(writable = true, boot = false, device_name = nil, auto_delete = false)
  ...
end

def get_as_boot_disk(writable = true, auto_delete = false)
  get_object(writable, true, nil, auto_delete)
end
```
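The relationship between the two methods can be shown with minimal stand-ins mirroring the signatures quoted above (the method bodies here are assumptions for illustration; the real `disk.rb` also fills in fields like `:source`):

```ruby
# Stand-ins for the disk.rb methods, to show that get_as_boot_disk is just
# get_object with boot = true. Return values are simplified for illustration.
def get_object(writable = true, boot = false, device_name = nil, auto_delete = false)
  {
    :mode        => writable ? "READ_WRITE" : "READ_ONLY",
    :boot        => boot,
    :device_name => device_name,
    :auto_delete => auto_delete
  }
end

def get_as_boot_disk(writable = true, auto_delete = false)
  get_object(writable, true, nil, auto_delete)
end

boot = get_as_boot_disk(true, true)        # boot disk, auto-deleted
data = get_object(true, false, nil, false) # secondary, non-boot disk
```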
Unfortunately, `get_object()` was removed from fog-google as of version 1.0.0. Thus, there is no way to pass in a secondary "non-boot disk".

Passing two boot disks (which makes no sense) throws an error: `Invalid value for field 'resource.disks[1]': ''. Boot disk must be the first disk attached to the instance. (Google::Apis::ClientError)`

I don't see a workaround unless we update `disk.rb`.
@whynick1 I maintain fog-google as well, so feel free to CC me on the patch; I can quickly release and patch dependencies through.
@whynick1 Ok, looking closer, this is already implemented in the lib. The method seems to have just moved to the collection method `attached_disk_obj`.
Example:

```ruby
[6] pry(#<TestDisks>)> disk.get_as_boot_disk
=> {:auto_delete=>false,
 :boot=>true,
 :source=>"https://www.googleapis.com/compute/v1/projects/graphite-sandbox/zones/us-central1-f/disks/fog-test-1-testdisks-test-get-as-configs",
 :mode=>"READ_WRITE",
 :type=>"PERSISTENT"}

[7] pry(#<TestDisks>)> disk.attached_disk_obj(boot: true, writable: true)
=> {:auto_delete=>false,
 :boot=>true,
 :mode=>"READ_WRITE",
 :source=>"https://www.googleapis.com/compute/v1/projects/graphite-sandbox/zones/us-central1-f/disks/fog-test-1-testdisks-test-get-as-configs",
 :type=>"PERSISTENT"}

[8] pry(#<TestDisks>)> assert_equal(disk.get_as_boot_disk, disk.attached_disk_obj(boot: true, writable: true))
=> true
```
This is probably what you're looking for, right?
Actually, looking into the instance handling logic, you should probably be able to just pass the disk objects as they are, i.e.:

```ruby
:disks => [disk1, disk2]
```

(The first disk in the array should be automatically marked as boot.)
Sorry for missing this :| I'll be in my shame cube if you need me.
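To illustrate the two-entry `:disks` array, here is a self-contained sketch. `FakeDisk` is a stand-in that mimics the `attached_disk_obj` return format from the pry session above; it is not the real fog-google class, and the disk names are made up.

```ruby
# Stand-in mimicking the attached_disk_obj interface shown above.
class FakeDisk
  def initialize(source)
    @source = source
  end

  def attached_disk_obj(boot: false, writable: true)
    {
      :auto_delete => false,
      :boot        => boot,
      :mode        => writable ? "READ_WRITE" : "READ_ONLY",
      :source      => @source,
      :type        => "PERSISTENT"
    }
  end
end

disk1 = FakeDisk.new("projects/p/zones/z/disks/boot-disk")
disk2 = FakeDisk.new("projects/p/zones/z/disks/data-disk")

# The boot disk must be first in the array, matching the GCE API requirement.
disks = [disk1.attached_disk_obj(boot: true), disk2.attached_disk_obj(boot: false)]
```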
@Temikus You are right! Based on `attached_disk_obj`, I tried to add an `additional_disks` option to allow attaching additional disks to a GCE instance. Here is an example.
```ruby
attached_disk_obj = [{
  :image_family     => "google-image-family",
  :image            => nil,
  :image_project_id => "google-project-id",
  :disk_size        => 20,
  :disk_name        => "google-additional-disk-0",
  :disk_type        => "pd-standard",
  :autodelete_disk  => true
}]
```
Please refer to this PR, https://github.com/mitchellh/vagrant-google/pull/210/files Please let me know what you think! 😃
Hi, full example. This is what I have right now:
```ruby
config.vm.define "centos7-gcp" do |gcp|
  gcp.vm.box = "google/gce"
  gcp.vm.provider :google do |google, override|
    google.google_project_id = GOOGLE_PROJECT_ID
    google.google_json_key_location = GOOGLE_JSON_KEY_LOCATION
    google.image_family = "centos-7"
    google.name = "ansible"
    google.disk_size = 20
    google.zone = "us-central1-a"
    google.network = "shared"
    google.network_project_id = "12c34z2d3"
    google.subnetwork = "subnet"
    google.use_private_ip = true
    google.external_ip = false
    google.tags = ["allow-ssh"]
    google.additional_disks = [{
      :image_project_id => GOOGLE_PROJECT_ID,
      :disk_size => 20,
      :disk_name => "google-additional-disk-1",
      :disk_type => "pd-standard",
      :autodelete_disk => true
    }]
    override.ssh.username = "user_name"
    override.ssh.private_key_path = "~/.ssh/id_rsa"
  end
end
```
Is it possible to create instances with multiple disks created / attached?