Open IlTasso opened 11 years ago
Update: The migration works perfectly when the VM is off.
Offline migration follows a totally different code path. Do you have more log output, please?
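For context, the distinction matters: a live migration streams the guest's memory to the peer, while an offline migration only moves the domain definition. Through the libvirt Python bindings the two paths look roughly like this (a sketch, not Archipel's actual code; the UUID and destination URI are taken from this issue, and the flag set is an assumption):

```python
# Live vs. offline migration through libvirt-python.
# Sketch only: Archipel's real call sites differ; the flags are assumed.
import libvirt

UUID = "21284d02-e659-c10b-df40-3e255967b974"    # VM from the logs
DEST_URI = "qemu+ssh://CT33-CT12-nodo02/system"  # destination from the logs

src = libvirt.open("qemu:///system")
dom = src.lookupByUUIDString(UUID)

if dom.isActive():
    # Live path: push the running domain (and its memory) to the peer.
    flags = (libvirt.VIR_MIGRATE_LIVE
             | libvirt.VIR_MIGRATE_PEER2PEER
             | libvirt.VIR_MIGRATE_PERSIST_DEST
             | libvirt.VIR_MIGRATE_UNDEFINE_SOURCE)
    dom.migrateToURI(DEST_URI, flags, None, 0)
else:
    # Offline path: no qemu process exists, so only the XML
    # definition is copied to the destination and removed here.
    dst = libvirt.open(DEST_URI)
    dst.defineXML(dom.XMLDesc(0))
    dom.undefine()
```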
logs: migration from nodo01 to nodo02 VM=21284d02-e659-c10b-df40-3e255967b974 (WE12)
LOG NODO01:
INFO ::2013-10-14 10:04:35::utils.py:71::TNArchipelVirtualMachine.check_acp (21284d02-e659-c10b-df40-3e255967b974@archipel-srv01.iper.it/ct33-ct12-nodo01)::acp received: from: admin@archipel-srv01.iper.it/ArchipelController, type: get, namespace: archipel:vm:control, action: info
INFO ::2013-10-14 10:04:35::utils.py:71::TNArchipelVirtualMachine.check_perm (21284d02-e659-c10b-df40-3e255967b974@archipel-srv01.iper.it/ct33-ct12-nodo01)::Checking permission for action info asked by admin@archipel-srv01.iper.it/ArchipelController
INFO ::2013-10-14 10:04:35::utils.py:71::TNArchipelVirtualMachine.check_acp (21284d02-e659-c10b-df40-3e255967b974@archipel-srv01.iper.it/ct33-ct12-nodo01)::acp received: from: admin@archipel-srv01.iper.it/ArchipelController, type: get, namespace: archipel:vm:control, action: screenshot
INFO ::2013-10-14 10:04:35::utils.py:71::TNArchipelVirtualMachine.check_perm (21284d02-e659-c10b-df40-3e255967b974@archipel-srv01.iper.it/ct33-ct12-nodo01)::Checking permission for action screenshot asked by admin@archipel-srv01.iper.it/ArchipelController
INFO ::2013-10-14 10:04:35::utils.py:71::TNArchipelVirtualMachine.iq_screenshot (21284d02-e659-c10b-df40-3e255967b974@archipel-srv01.iper.it/ct33-ct12-nodo01)::Screenshot sent
INFO ::2013-10-14 10:04:36::utils.py:71::TNArchipelVirtualMachine.check_acp (21284d02-e659-c10b-df40-3e255967b974@archipel-srv01.iper.it/ct33-ct12-nodo01)::acp received: from: admin@archipel-srv01.iper.it/ArchipelController, type: set, namespace: archipel:vm:control, action: migrate
INFO ::2013-10-14 10:04:36::utils.py:71::TNArchipelVirtualMachine.check_perm (21284d02-e659-c10b-df40-3e255967b974@archipel-srv01.iper.it/ct33-ct12-nodo01)::Checking permission for action migrate asked by admin@archipel-srv01.iper.it/ArchipelController
INFO ::2013-10-14 10:04:36::utils.py:71::TNArchipelVirtualMachine.migrate_running_step2 (21284d02-e659-c10b-df40-3e255967b974@archipel-srv01.iper.it/ct33-ct12-nodo01)::MIGRATION: remote info: libvirt URI is qemu+ssh://CT33-CT12-nodo02/system
INFO ::2013-10-14 10:04:36::utils.py:71::TNArchipelVirtualMachine.migrate_running_step2 (21284d02-e659-c10b-df40-3e255967b974@archipel-srv01.iper.it/ct33-ct12-nodo01)::MIGRATION: remote info: shared folder is /vm//drives/21284d02-e659-c10b-df40-3e255967b974
INFO ::2013-10-14 10:04:36::utils.py:71::TNArchipelVirtualMachine.change_presence (21284d02-e659-c10b-df40-3e255967b974@archipel-srv01.iper.it/ct33-ct12-nodo01)::status change: Migrating - 0% show:
INFO ::2013-10-14 10:04:36::utils.py:71::TNArchipelVirtualMachine.migrate_running_step3 (21284d02-e659-c10b-df40-3e255967b974@archipel-srv01.iper.it/ct33-ct12-nodo01)::MIGRATION: starting to migrate domain qemu+ssh://CT33-CT12-nodo02/system
DEBUG ::2013-10-14 10:04:36::utils.py:69::TNArchipelVirtualMachine.presence_callback (21284d02-e659-c10b-df40-3e255967b974@archipel-srv01.iper.it/ct33-ct12-nodo01)::PRESENCE : I just set change presence. The result is
DEBUG ::2013-10-14 10:04:46::utils.py:165::None
INFO ::2013-10-14 10:04:48::utils.py:71::TNArchipelVirtualMachine.change_presence (21284d02-e659-c10b-df40-3e255967b974@archipel-srv01.iper.it/ct33-ct12-nodo01)::status change: Migrating - 35% show:
DEBUG ::2013-10-14 10:04:48::utils.py:69::TNArchipelVirtualMachine.presence_callback (21284d02-e659-c10b-df40-3e255967b974@archipel-srv01.iper.it/ct33-ct12-nodo01)::PRESENCE : I just set change presence. The result is
LOG NODO02:
INFO ::2013-10-14 10:04:36::utils.py:68::TNArchipelHypervisor.check_acp (CT33-CT12-nodo02@archipel-srv01.iper.it/CT33-CT12-nodo02)::acp received: from: 21284d02-e659-c10b-df40-3e255967b974@archipel-srv01.iper.it/ct33-ct12-nodo01, type: get, namespace: archipel:hypervisor:control, action: migrationinfo
DEBUG ::2013-10-14 10:04:53::utils.py:66::TNArchipelHypervisor.parse_own_repo (CT33-CT12-nodo02@archipel-srv01.iper.it/CT33-CT12-nodo02)::TNHypervisorRepoManager: begin to refresh own vmcast feed
DEBUG ::2013-10-14 10:04:53::utils.py:66::TNArchipelHypervisor.parse_own_repo (CT33-CT12-nodo02@archipel-srv01.iper.it/CT33-CT12-nodo02)::TNHypervisorRepoManager: finish to refresh own vmcast feed
WARNING ::2013-10-14 10:05:07::utils.py:70::TNArchipelHypervisor.hypervisor_on_domain_event (CT33-CT12-nodo02@archipel-srv01.iper.it/CT33-CT12-nodo02)::EVENTMIGRATION: Can't alloc softly this virtual machine. Maybe it is not an archipel VM: 'NoneType' object has no attribute 'getCDATA'
DEBUG ::2013-10-14 10:05:53::utils.py:66::TNArchipelHypervisor.parse_own_repo (CT33-CT12-nodo02@archipel-srv01.iper.it/CT33-CT12-nodo02)::TNHypervisorRepoManager: begin to refresh own vmcast feed
DEBUG ::2013-10-14 10:05:53::utils.py:66::TNArchipelHypervisor.parse_own_repo (CT33-CT12-nodo02@archipel-srv01.iper.it/CT33-CT12-nodo02)::TNHypervisorRepoManager: finish to refresh own vmcast feed
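The WARNING is the key line: on the destination, hypervisor_on_domain_event tries to "softly" allocate the incoming domain by reading Archipel's own metadata out of the domain XML, and a VM that was not created by Archipel has none. A minimal sketch of that failure mode, assuming the agent parses the XML with xmpppy's simplexml and looks for a <description> element carrying the VM identity (the exact tag Archipel reads is an assumption here):

```python
# Why "'NoneType' object has no attribute 'getCDATA'" fires for a
# foreign VM. Assumption: Archipel stores the VM's XMPP identity in
# the libvirt <description> element; a plain KVM domain has no such tag.
import xmpp.simplexml

domain_xml = "<domain type='kvm'><name>WE12</name></domain>"  # no <description>
node = xmpp.simplexml.XML2Node(domain_xml)

desc = node.getTag("description")  # returns None for a non-Archipel VM
identity = desc.getCDATA()         # AttributeError: 'NoneType' object
                                   # has no attribute 'getCDATA'
```

The agent catches the exception and logs the WARNING instead of crashing, which is consistent with the domain then being listed under "Other Virtual Machines" on the destination node (see the original report below).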
LOG EJABBERD:
=INFO REPORT==== 2013-10-14 09:54:04 ===
I(<0.3796.0>:ejabberd_c2s:1513) : ({socket_state,gen_tcp,#Port<0.39388>,<0.3795.0>}) Close session for 21284d02-e659-c10b-df40-3e255967b974@archipel-srv01.iper.it/ct33-ct12-nodo01
up
up
Hello, when I migrate a KVM VM that was not created with Archipel, the VM itself migrates properly but then shows as offline in Archipel. In the log I find:
WARNING ::2013-10-09 16:00:11::utils.py:70::TNArchipelHypervisor.hypervisor_on_domain_event (CT33-CT12-nodo02@archipel-srv01.iper.it/CT33-CT12-nodo02)::EVENTMIGRATION: Can't alloc softly this virtual machine. Maybe it is not an archipel VM: list index out of range
On the other node, the machine appears among the Other Virtual Machines.
The only solution is to delete the VM's user from the ejabberd server and re-add the VM.
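For anyone scripting the same workaround: removing the VM's XMPP account can be done with ejabberd's standard unregister command (a sketch; the JID's node part is the VM UUID from the logs, the vhost likewise; re-adding the VM is then done from the Archipel client as described above):

```python
# Reporter's workaround, scripted: unregister the VM's XMPP account
# so the VM can be re-added cleanly. Run on the ejabberd host.
import subprocess

VM_UUID = "21284d02-e659-c10b-df40-3e255967b974"  # JID node = libvirt UUID
VHOST = "archipel-srv01.iper.it"                  # vhost from the logs

# "ejabberdctl unregister <user> <host>" is a standard ejabberd command.
subprocess.run(["ejabberdctl", "unregister", VM_UUID, VHOST], check=True)
```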
The client, ejabberd, and the agent are all at the latest nightly version.
Help me, thanks!