donovanmuller opened this issue 8 years ago
node.set['gluster']['server']['volumes'][volume_name]['bricks'] does not seem to be set.
It is set here: https://github.com/shortdudey123/chef-gluster/blob/master/recipes/server_setup.rb#L52
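For context, that part of server_setup.rb roughly boils down to the following (a paraphrased sketch, not a verbatim copy of the recipe; the brick path shown is just an example):

# After creating the brick directories for a volume, the recipe records them on the node
bricks = ["#{node['gluster']['server']['brick_mount_path']}/#{volume_name}/brick"] # example path
node.set['gluster']['server']['volumes'][volume_name]['bricks'] = bricks

So the attribute should exist on any node where that recipe completed successfully.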
Can you verify node['gluster']['server']['volumes']['ose3-vol']['peers']
contains the FQDN or hostname of the node?
It does, I left it unexpanded for the screenshot but it was definitely populated.
Can you post the context of the failed run? (not just the exception)
Hi @donovanmuller!
Sorry to hear you are having issues. It looks like your node is trying to load attributes from another Chef node that doesn't have that attribute set. Could you please confirm that the same cookbook was run on all nodes in your peer list, and that each Chef node name matches the peer name that Gluster is using? (When the Chef node name is an FQDN but the peer name is a bare hostname, or vice versa, it can cause a problem like this.)
What would really help is your node['gluster']['server']['volumes'] entry from your cookbook attributes file, and the value of node['gluster']['server']['volumes']['ose3-vol'] from each of your peers.
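For example, something like this run against each peer should show whether the attribute has been saved on the Chef server (the node name is a placeholder):

knife node show <peer name> -a gluster.server.volumes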
Thanks in advance!
Andy
@Seth-Karlo Below are my complete gluster attributes:
default['gluster']['version'] = '3.7'
default['gluster']['server']['brick_mount_path'] = '/data'
default['gluster']['server']['disks'] = []
default['gluster']['server']['volumes'] = {
'ose3' => {
'peers' => ['master01.bison.pi.b','node01.bison.pi.b'],
'replica_count' => 2,
'volume_type' => 'replicated',
'disks' => ['/dev/sda4'],
'size' => '10G'
}
}
Is there anything else you need?
Thank you for your report, I apologise for taking so long to respond. I'll see if I can reproduce at this end and get back to you.
Any news about this? I'm experiencing the same problem on OpsWorks.
@alez007 can you verify the cookbook version you are using so that we make sure we are looking at the same thing?
I'm pretty confident this is caused by chef_node not being set. I've been a bit distracted lately, but I'll try and look into this.
I am using OpsWorks and experiencing this problem. I am wondering if it could be OpsWorks' fault: the way it updates the "custom cookbooks" on each node might be wiping the node's attributes every time they are updated?
@LorenzoPetite possibly? I don't use OpsWorks and am not too familiar with it. @Seth-Karlo do you use OpsWorks at all and might be able to shed light here?
@shortdudey123 @LorenzoPetite Sorry no, I've never used OpsWorks before. We could possibly test this by adding some echo statements into the cookbook to print out those attributes at compile time. If they report as empty, we can then start looking into whether or not they are being set properly.
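A minimal sketch of what that could look like in server_setup.rb (the attribute path is the one from this thread; the log line itself is just an illustration):

# Print the volume attributes at compile time so they show up in the chef-client output
Chef::Log.info("gluster volumes: #{node['gluster']['server']['volumes'].inspect}")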
Following @Seth-Karlo's suggestion, I tested with some echo statements. In the server_setup recipe, I found that
node['gluster']['server']['volumes'][volume_name]['bricks']
produces ["/gluster/servu/brick"].
However, in the server_extend recipe, the reason
chef_node['gluster']['server']['volumes'][volume_name]['bricks']
causes the error undefined method '[]' for nil:NilClass is that chef_node['gluster'] is somehow nil. Strangely, echoing chef_node produces node[gluster1].
I realise now this is actually a slightly different error than @donovanmuller's, but in both cases there seems to be a problem with attribute persistence.
How could this be possible?
chef_node iterates over all nodes in the cluster, so it breaks when bricks is empty on any node. I have the same problem in one of my test environments. I'm not sure, but I think it may be connected to a Chef error during cluster setup, when the bricks attribute isn't propagated to the Chef server.
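For reference, the sort of loop being described looks roughly like this, with a guard for peers whose attributes were never saved (a paraphrased sketch, not the cookbook's exact code; peer and volume_name come from the surrounding recipe):

# Iterate over the peer nodes known to the Chef server and read their saved brick attributes.
# A peer that never completed a successful run may have no 'gluster' attributes saved at all.
search(:node, "name:#{peer}").each do |chef_node|
  next if chef_node['gluster'].nil? # skip peers with no saved gluster attributes
  bricks = chef_node['gluster']['server']['volumes'][volume_name]['bricks']
  # ... use bricks ...
end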
I stumbled upon this today too.
On the initial run of the chef-client, the cookbook failed due to an error in the configuration on my side. The chef-client was able to create the volume on the first run though. Executing knife node show <NODE NAME> -a gluster
confirmed that ['gluster']['server']['volumes']['myvolume']['bricks']
was empty.
Subsequent runs of chef-client failed with the error stated in the first comment of this issue.
As far as I know, a chef-client persists its attributes on the Chef server only after a successful run. Since no run of the chef-client completed successfully, the bricks attribute could never be saved.
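As an aside, one way to sidestep that behaviour would be to save the node object explicitly partway through the run. This is just a hypothetical sketch, not something I have tested with this cookbook:

# Persist node attributes to the Chef server immediately instead of waiting
# for the whole run to finish successfully
ruby_block 'save gluster attributes early' do
  block do
    node.save unless Chef::Config[:solo] # skip under chef-solo, which has no server to save to
  end
end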
My workaround was to set ['gluster']['server']['server_extend_enabled'] to false, trigger a run of the chef-client (which succeeded), and then set ['gluster']['server']['server_extend_enabled'] back to true.
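In attribute form the toggle looks roughly like this, assuming it is set from a wrapper cookbook's attributes file like the volumes above:

# Temporarily disable the extend recipe so a run can complete and the bricks attribute gets saved
default['gluster']['server']['server_extend_enabled'] = false
# After a successful run, flip it back:
# default['gluster']['server']['server_extend_enabled'] = true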
Initial run of gluster::server is successful. Volume created and started. When gluster::server runs again, the following gets vomited out: