marcheschi opened this issue 3 years ago
Did you deploy the first compute node (HN), or did you join the node to an existing DC?
You can explicitly call VM harvest via an API command:
./es post /node/<nodename>/vm-harvest
Jan
Hi, I have joined the node to an existing DC. I don't know how to call ./es. I tried:

[root@node01 (pacs) /opt/erigones/bin]# ./es post /node/m2c/vm-harvest
ERROR: HTTPConnectionPool(host='127.0.0.1', port=8000): Max retries exceeded with url: /api/node/m2c/vm-harvest/ (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0xfffffc7fee6a5b10>: Failed to establish a new connection: [Errno 146] Connection refused',))
Can you tell me how to authenticate? I tried ./es login -username user -password password but I got: ERROR: Login problem
Paolo
cp /opt/erigones/bin/es /root/es
vim /root/es
Replace:
API_URL = 'https://localhost/api' # or URL to your DC installation if accessing remotely
API_KEY = '......' # You can find `API_KEY` under your profile in DC GUI, section 'API Keys'.
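For remote use, the same endpoint can also be called with any HTTP client instead of the es tool. A minimal sketch, assuming the API key is passed in an ES-API-KEY request header (verify the header name against the DC API docs) and a hypothetical DC URL; a self-signed DC certificate would additionally need an SSL context:

```python
import json
import urllib.request

API_URL = "https://dc.example.com/api"  # hypothetical; use your DC installation's URL
API_KEY = "secret"                      # from the DC GUI, profile section 'API Keys'

def harvest_request(node_hostname):
    """Build a POST request for /node/<node>/vm-harvest (assumed ES-API-KEY auth)."""
    url = "%s/node/%s/vm-harvest/" % (API_URL, node_hostname)
    return urllib.request.Request(url, method="POST",
                                  headers={"ES-API-KEY": API_KEY})

def harvest_node(node_hostname):
    """Fire the harvest call and decode the JSON task result."""
    with urllib.request.urlopen(harvest_request(node_hostname)) as resp:
        return json.loads(resp.read().decode())
```

The es tool does the same thing under the hood once API_URL and API_KEY are filled in, which is why copying and editing it works.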
The most probable reason for not finding the VMs is that DC doesn't know the images. You can verify this by looking into /opt/erigones/log/mgmt.log for DoesNotExist: Image matching query does not exist.
If that's the case, you need to import the images into DC before calling vm-harvest (Datacenter -> Images -> Import disk image from repository).
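As a toy illustration of why the query can still fail after an image is imported: the lookup is keyed on both the image uuid and the datacenter the image is attached to. This is a plain-Python sketch of that semantic, not the actual Django model; the datacenter names are illustrative:

```python
# Toy model of the failing lookup: an image is only visible to the harvest
# task if its uuid is registered AND it is attached to the node's datacenter.
images = {
    # uuid -> set of datacenters the image is attached to
    "163cd9fe-0c90-11e6-bd05-afd50e5961b6": {"main"},
}

class ImageDoesNotExist(Exception):
    pass

def get_image(uuid, dc):
    """Mimics Image.objects.get(uuid=..., dc=dc): both conditions must hold."""
    if uuid not in images or dc not in images[uuid]:
        raise ImageDoesNotExist("Image matching query does not exist.")
    return uuid

get_image("163cd9fe-0c90-11e6-bd05-afd50e5961b6", "main")  # ok: imported and attached
# get_image(..., "pacs") would raise ImageDoesNotExist even though the image
# is imported, because it is not attached to the 'pacs' datacenter.
```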
Ok, I found the error:

[2020-10-08 10:48:57,926: ERROR/MainProcess] api.node.vm.tasks.harvest_vm_cb[9e1d2-c16a2870-a315-4811-95e0]: Image matching query does not exist.
Traceback (most recent call last):
  File "/opt/erigones/api/node/vm/tasks.py", line 67, in harvest_vm_cb
    update_ips=True, update_dns=True)
  File "/opt/erigones/api/node/vm/utils.py", line 126, in vm_from_json
    Image.objects.get(uuid=img_uuid, dc=dc)  # May raise Image.DoesNotExist
  File "/opt/erigones/envs/lib/python2.7/site-packages/django/db/models/manager.py", line 127, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
  File "/opt/erigones/envs/lib/python2.7/site-packages/django/db/models/query.py", line 334, in get
    self.model._meta.object_name
DoesNotExist: Image matching query does not exist.
[2020-10-08 10:48:57,927: ERROR/MainProcess] api.node.vm.tasks.harvest_vm_cb[9e1d2-c16a2870-a315-4811-95e0]: Could not load VM from json: """{u'customer_metadata': {}, u'hvm': False, u'zfs_io_priority': 100, u'pid': 13071, u'dns_domain': u'srv.pi.fgm', u'max_physical_memory': 2048, u'create_timestamp': u'2016-05-16T11:21:11.311Z', u'server_uuid': u'ff200008-ffff-ffff-ffff-bc99a64f1400', u'image_uuid': u'163cd9fe-0c90-11e6-bd05-afd50e5961b6', u'boot_timestamp': u'2020-10-08T09:47:37.000Z', u'firewall_enabled': False, u'tmpfs': 2048, u'datacenter_name': u'pacs', u'uuid': u'd9fffba3-a4cd-c18b-9bd4-ff121940f85f', u'nics': [{u'ip': u'10.96.24.44', u'nic_tag': u'clinica', u'netmask': u'255.255.255.0', u'primary': True, u'ips': [u'10.96.24.44/24'], u'mac': u'b2:fb:ea:42:b1:ea', u'gateways': [u'10.96.24.1'], u'interface': u'net0', u'gateway': u'10.96.24.1', u'vlan_id': 24}], u'hostname': u'hybrid', u'max_sem_ids': 4096, u'init_restarts': 0, u'state': u'running', u'max_shm_ids': 4096, u'zonepath': u'/zones/d9fffba3-a4cd-c18b-9bd4-ff121940f85f', u'zonename': u'd9fffba3-a4cd-c18b-9bd4-ff121940f85f', u'max_swap': 2048, u'zfs_root_recsize': 131072, u'tags': {}, u'brand': u'joyent', u'quota': 10, u'zone_state': u'running', u'max_shm_memory': 2048, u'autoboot': True, u'owner_uuid': u'00000000-0000-0000-0000-000000000000', u'snapshots': [], u'billing_id': u'00000000-0000-0000-0000-000000000000', u'zoneid': 2, u'resolvers': [u'10.96.0.96', u'10.96.0.69'], u'max_lwps': 2000, u'datasets': [u'zones/d9fffba3-a4cd-c18b-9bd4-ff121940f85f/data'], u'limit_priv': u'default', u'zfs_filesystem': u'zones/d9fffba3-a4cd-c18b-9bd4-ff121940f85f', u'max_locked_memory': 2048, u'zonedid': 2, u'alias': u'hybrid', u'zfs_data_recsize': 131072, u'last_modified': u'2020-10-08T09:42:30.000Z', u'internal_metadata': {}, u'v': 1, u'routes': {}, u'zpool': u'zones', u'max_msg_ids': 4096, u'platform_buildstamp': u'20200715T230801Z', u'cpu_shares': 100}"""
[2020-10-08 10:48:57,939: ERROR/MainProcess] api.node.vm.tasks.harvest_vm_cb[9e1d2-c16a2870-a315-4811-95e0]: {u'meta': {u'node_uuid': u'ff200008-ffff-ffff-ffff-bc99a64f1400', u'finish_time': u'2020-10-08T10:48:54.406851', 'caller': u'9e1d2-f5c4b486-6a9b-42c0-9c17', u'apiview': {u'hostname': u'm2c.hypervisor.pi.fgm', u'view': u'harvest_vm', u'method': u'POST', u'vm': None}, u'nolog': False, u'exec_time': u'2020-10-08T10:48:52.329841', u'msg': u'Harvest servers'}, 'detail': 'Could not find or load any server'}
Traceback (most recent call last):
  File "/opt/erigones/api/task/utils.py", line 250, in inner
    return fun(result, task_id, *args, **kwargs)
  File "/opt/erigones/api/node/vm/tasks.py", line 94, in harvest_vm_cb
    raise TaskException(result, 'Could not find or load any server')
TaskException: {u'meta': {u'node_uuid': u'ff200008-ffff-ffff-ffff-bc99a64f1400', u'finish_time': u'2020-10-08T10:48:54.406851', 'caller': u'9e1d2-f5c4b486-6a9b-42c0-9c17', u'apiview': {u'hostname': u'm2c.hypervisor.pi.fgm', u'view': u'harvest_vm', u'method': u'POST', u'vm': None}, u'nolog': False, u'exec_time': u'2020-10-08T10:48:52.329841', u'msg': u'Harvest servers'}, 'detail': 'Could not find or load any server'}
[2020-10-08 10:48:57,939: ERROR/MainProcess] api.node.vm.tasks.harvest_vm_cb[9e1d2-c16a2870-a315-4811-95e0]: Task 9e1d2-f5c4b486-6a9b-42c0-
I tried to import them but nothing changed.
Just to be sure: you have imported exactly the same images that are used by the servers on the node (the exact image uuids), they are attached to the main datacenter, and you have switched to the main datacenter in the GUI, right?
In that case you shouldn't get the image not found error.
Yes, I tried to do that. This is the list of images, and these are the image uuids I have on m2c:

[root@m2c (pacs) ~]# vmadm get d9fffba3-a4cd-c18b-9bd4-ff121940f85f | grep image_uuid
"image_uuid": "163cd9fe-0c90-11e6-bd05-afd50e5961b6",
[root@m2c (pacs) ~]# vmadm get 3cc0ccca-6884-42d6-94c6-901aa666b7c4 | grep image_uuid
"image_uuid": "32de63f8-8b6f-11e6-beb6-b3e46c186cc2",
[root@m2c (pacs) ~]# vmadm get 6375c23d-4875-ccf1-df8a-8f817c7e7c7b | grep image_uuid
"image_uuid": "088b97b0-e1a1-11e5-b895-9baa2086eb33",

All of them are present in the list.
Now it does not find any server, but the log message has changed slightly:

[2020-10-08 15:53:50,663: INFO/MainProcess] Received task: api.vm.replica.tasks.vm_replica_sync_cb[7i7d1-7fcf9a1c-53eb-40f1-85dd] expires:[2020-10-08 15:55:50.349964+00:00]
[2020-10-08 15:53:50,686: INFO/MainProcess] Task api.vm.replica.tasks.vm_replica_sync_cb[7i7d1-7fcf9a1c-53eb-40f1-85dd] succeeded in 0.0199657580815s: {u'synced_disks': [[u'zones/7279d3b1-1de9-4a41-a805-e612865685cc-disk0', u'zones/5c497530-742d-437c-811b-668500927af6-disk0']],...
[2020-10-08 15:53:53,017: INFO/MainProcess] Received task: api.node.vm.tasks.harvest_vm_cb[9e1d1-4f147906-ec7e-4975-8b2e]
[2020-10-08 15:53:53,030: INFO/MainProcess] api.node.vm.tasks.harvest_vm_cb[9e1d1-4f147906-ec7e-4975-8b2e]: Parent task 9e1d1-096114f9-d6e4-414b-bf62 has finished with status=SUCCESS. Running harvest_vm_cb
[2020-10-08 15:53:53,045: WARNING/MainProcess] Alias for new VM race2 could not be auto-detected. Fallback to alias=race2
[2020-10-08 15:53:53,045: WARNING/MainProcess] OS type for new VM race2 could not be auto-detected. Fallback to ostype=6
[2020-10-08 15:53:53,048: WARNING/MainProcess] Owner for new VM race2 could not be auto-detected. Fallback to owner=admin
[2020-10-08 15:53:53,053: ERROR/MainProcess] api.node.vm.tasks.harvest_vm_cb[9e1d1-4f147906-ec7e-4975-8b2e]: Image matching query does not exist.
Traceback (most recent call last):
  File "/opt/erigones/api/node/vm/tasks.py", line 67, in harvest_vm_cb
    update_ips=True, update_dns=True)
  File "/opt/erigones/api/node/vm/utils.py", line 126, in vm_from_json
    Image.objects.get(uuid=img_uuid, dc=dc)  # May raise Image.DoesNotExist
  File "/opt/erigones/envs/lib/python2.7/site-packages/django/db/models/manager.py", line 127, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
  File "/opt/erigones/envs/lib/python2.7/site-packages/django/db/models/query.py", line 334, in get
    self.model._meta.object_name
DoesNotExist: Image matching query does not exist.
[2020-10-08 15:53:53,054: ERROR/MainProcess] api.node.vm.tasks.harvest_vm_cb[9e1d1-4f147906-ec7e-4975-8b2e]: Could not load VM from json: """{u'customer_metadata': {u'user-script': u'/usr/sbin/mdata-get root_authorized_keys > ~root/.ssh/authorized_keys ; /usr/sbin/mdata-get root_authorized_keys > ~admin/.ssh/authorized_keys', u10-25T14:15:20.433Z', u'server_uuid': u'ff200008-ffff-ffff-ffff-bc99a64f1400', u'image_uuid': u'32de63f8-8b6f-11e6-beb6-b3e46c186cc2', u'boot_timestamp': u'2020-10-08T09:47:36.000Z', u'firewall_enabled': False, u'tmpfs': 4096, u'datacenter_name': u'pacs', u'uuid': u'3cc0ccca-6884-42d6-94c6-901aa666b7c4', u'alias': u'race2', u'nics': [{u'ip': u'10.99.88.12', u'nic_tag': u'clinica', u'netmask': u'255.255.255.0', u'primary': True, u'ips': [u'10.99.88.12/24'], u'mac': u'42:9f:82:d1:ac:88', u'gateways': [u'10.99.88.1'], u'interface': u'eth0', u'gateway': u'10.99.88.1', u'vlan_id': 988}], u'hostname': u'race2', u'max_sem_ids': 4096, u'init_restarts': 0, u'state': u'running', u'max_shm_ids': 4096, u'zonepath': u'/zones/3cc0ccca-6884-42d6-94c6-901aa666b7c4', u'zonename': u'3cc0ccca-6884-42d6-94c6-901aa666b7c4', u'max_swap': 4096, u'zfs_root_recsize': 131072, u'tags': {}, u'brand': u'lx', u'quota': 20, u'zone_state': u'running', u'max_shm_memory': 2048, u'autoboot': True, u'owner_uuid': u'00000000-0000-0000-0000-000000000000', u'snapshots': [], u'billing_id': u'00000000-0000-0000-0000-000000000000', u'zoneid': 1, u'resolvers': [u'10.96.0.96', u'10.96.0.69'], u'max_lwps': 2000, u'kernel_version': u'3.10.0', u'zfs_filesystem': u'zones/3cc0ccca-6884-42d6-94c6-901aa666b7c4', u'max_locked_memory': 4096, u'zonedid': 30, u'limit_priv': u'default', u'last_modified': u'2020-10-08T09:42:30.000Z', u'internal_metadata': {}, u'v': 1, u'routes': {}, u'zpool': u'zones', u'max_msg_ids': 4096, u'platform_buildstamp': u'20200715T230801Z', u'cpu_shares': 100}"""
[2020-10-08 15:53:53,069: INFO/MainProcess] Task 9e1d1-096114f9-d6e4-414b-bf62 removed from esdc:tasks-1
[2020-10-08 15:53:53,071: WARNING/MainProcess] Alias for new VM mirth34test could not be auto-detected. Fallback to alias=mirth34test
[2020-10-08 15:53:53,072: WARNING/MainProcess] OS type for new VM mirth34test could not be auto-detected. Fallback to ostype=5
[2020-10-08 15:53:53,073: WARNING/MainProcess] Owner for new VM mirth34test could not be auto-detected. Fallback to owner=admin
Traceback (most recent call last):
  File "/opt/erigones/api/node/vm/tasks.py", line 67, in harvest_vm_cb
    update_ips=True, update_dns=True)
  File "/opt/erigones/api/node/vm/utils.py", line 129, in vm_from_json
    for net_uuid in vm.get_network_uuids():
  File "/opt/erigones/vms/models/vm.py", line 2008, in get_network_uuids
    return {nic['network_uuid'] for nic in self.get_vm_nics()}
  File "/opt/erigones/vms/models/vm.py", line 2008, in
I see. There's a missing network_uuid parameter in the VM json's nics.* section. I forgot that this one isn't automatically added by plain SmartOS. You have to add this parameter for every vnic in the VM json.
Example json:
cat update-nics.json
{
  "update_nics": [
    {
      "mac": "72:1d:2c:f3:ca:b5",
      "network_uuid": "d42bc4c3-ba17-43ee-a02a-74e667bd41fa"
    }
  ]
}
cat update-nics.json | vmadm update 5746bdfe-481f-4b11-b77c-f34ef10f1c61
If you want to add the VMs into the admin network, use the above network_uuid. Otherwise, look at the network_uuid in the json of other VMs within the wanted network that were deployed by DC.
Yes! It works this way; now I have 1 server in the m2b node:

"status": "SUCCESS",
"result": {
    "message": "Successfully harvested 1 server(s) (couch2)"
},
"task_id": "9e1d1-9bde9ef6-cd56-406b-b90f"

I have to repeat the procedure for 16 VMs, which is a bit tricky to do for every VM. Maybe in the future we have to find a better solution.
Sure. We can add code to the next DC version to look into the existing networks and automatically assign the first subnet match. Automatic image import might be a bit trickier, though.
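In the meantime, the repetitive part can be scripted. A sketch of that subnet-matching idea: map each vnic's IP to a network_uuid via a hand-maintained subnet table. The nic_patches helper and the input shape (VM dicts as printed by vmadm lookup -j) are assumptions, not DC code; the first uuid in NETWORKS is the admin network uuid from the example above, the second is purely hypothetical:

```python
import ipaddress
import json

# Operator-maintained map: subnet -> DC network_uuid. The first uuid is the
# admin network uuid from the example json above; the second is hypothetical.
NETWORKS = {
    "10.96.24.0/24": "d42bc4c3-ba17-43ee-a02a-74e667bd41fa",
    "10.99.88.0/24": "11111111-2222-3333-4444-555555555555",
}

def nic_patches(vm):
    """Build an update_nics payload for one VM dict (vmadm lookup -j shape)."""
    patches = []
    for nic in vm.get("nics", []):
        if "network_uuid" in nic:
            continue  # this vnic already has the parameter
        ip = ipaddress.ip_address(nic["ip"])
        for subnet, net_uuid in NETWORKS.items():
            if ip in ipaddress.ip_network(subnet):
                patches.append({"mac": nic["mac"], "network_uuid": net_uuid})
                break
    return {"update_nics": patches} if patches else None

# On the node one would feed this from `vmadm lookup -j` and pipe each payload
# to `vmadm update <uuid>`; here a single hand-written VM dict stands in:
vm = {"uuid": "3cc0ccca-6884-42d6-94c6-901aa666b7c4",
      "nics": [{"ip": "10.99.88.12", "mac": "42:9f:82:d1:ac:88"}]}
payload = nic_patches(vm)
if payload:
    print(vm["uuid"], json.dumps(payload))
```

Each printed payload is exactly the update-nics.json shape shown earlier, so it can be piped straight into vmadm update for the matching VM uuid.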
Ok, in the meantime I added all the nics to the VMs. The image import is the most difficult part; for example, I didn't find an old centos-6 image in the Joyent repository and had to download it from "datasets.at". Paolo
There is also another operation that needs to be done:
zpool upgrade zones
in order to update the ZFS features. Paolo
Yes. However, this one is not done automatically because it would prevent an OS downgrade.
I noticed that some disks do not show the real disk size because the quota is 0.
I tried vmadm update 88edec34-da67-4f81-a4e3-15e2d3c3ee84 quota=15, but it is not recognized by the system.
Try reloading the real VM config from the node:
es put /vm/<vmname.tld>/status/current -force
The command worked:
es put /vm/vmmia/status/current -f
Waiting for pending task 9e1d1-dcc3b888-7ec9-40d5-8029 ....
{
"method": "PUT",
"url": "https://10.xx.xx.xx/api/vm/vmmia/status/current/",
"status": 200,
"text": {
"status": "SUCCESS",
"result": {
"status": "running",
"status_changed": false,
"message": "",
"hostname": "vmmia"
},
"task_id": "9e1d1-dcc3b888-7ec9-40d5-8029"
}
}
But it had no effect. This is a SunOS zone. Paolo
Does this happen only on the nodes converted from plain SmartOS?
I think so, but how can I try on a different node? Shall I try to migrate the VM?
Hi, the VMs are present, as the vmadm list command shows, but they are not added to the DC administration console. Is there a way to make this happen? Thank you, Paolo