rcbops / ansible-lxc-rpc

Ansible Playbooks to deploy openstack
https://rcbops.github.io/ansible-lxc-rpc/
Apache License 2.0

[Cinder] iSCSI doesn't require running nova-compute outside of a container after all! #20

Closed mancdaz closed 9 years ago

mancdaz commented 10 years ago

Issue by Apsu Wednesday Aug 06, 2014 at 16:13 GMT Originally opened as https://github.com/rcbops/ansible-lxc-rpc-orig/issues/314


After lots of digging around and reading kernel code, trying to figure out how to fix iscsitarget's crackheadedness, I discovered that there's another iSCSI targeting system built into recent kernels such as 3.13. If we load the scsi_transport_iscsi module, tgtd can talk to it from inside a container with no problem! The initiator still only needs the iscsi_tcp module.

I made a crappy asciinema recording to demonstrate here: https://asciinema.org/a/11317

We should be able to just add the scsi_transport_iscsi module to cinder's module list and turn is_metal back off by default.
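For anyone who wants to reproduce the recording without watching it, the rough sequence is sketched below. The IQN iqn.2010-10.org.openstack:volume-test and the backing-store path are made up for illustration; the portal IP is the one used later in this thread.

# Load the transport module on the host (containers share the host kernel)
modprobe scsi_transport_iscsi

# Inside the cinder-volumes container: create a target and attach a LUN via tgtadm
tgtadm --lld iscsi --op new --mode target --tid 1 --targetname iqn.2010-10.org.openstack:volume-test
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --backing-store /dev/cinder-volumes/volume-test
tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL

# On the initiator: only iscsi_tcp is needed to discover and log in
modprobe iscsi_tcp
iscsiadm -m discovery -t sendtargets -p 10.241.0.227:3260
iscsiadm -m node -T iqn.2010-10.org.openstack:volume-test -p 10.241.0.227:3260 --login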

mancdaz commented 10 years ago

Comment by andymcc Thursday Aug 07, 2014 at 16:58 GMT


I think the issue exists on the initiator (so nova-compute) and not the tgt-admin (cinder-volumes) service. The asciinema link suggests you're adding modules to what would be the cinder-volumes (tgtd) container and then showing you can attach to it from a host, which works fine. The current fix just moves nova-compute to metal (leaving cinder-volumes in a container), which also works. The issue is that the initiator can't connect when it is in a container:

root@compute1_nova_compute_container-25bf59ec:~# iscsiadm -m node -T iqn.2010-10.org.openstack:volume-e80bcc97-6288-499d-a11d-dd95cf8143d8 -p 10.241.0.227:3260 --login
Logging in to iface: default, target: iqn.2010-10.org.openstack:volume-e80bcc97-6288-499d-a11d-dd95cf8143d8, portal: 10.241.0.227,3260
iscsiadm: got read error (0/111), daemon died?
iscsiadm: Could not login to [iface: default, target: iqn.2010-10.org.openstack:volume-e80bcc97-6288-499d-a11d-dd95cf8143d8, portal: 10.241.0.227,3260].
iscsiadm: initiator reported error (18 - could not communicate to iscsid)
iscsiadm: Could not log into all portals

If we run this exact same thing off the host itself we get the following:

root@533816-node20:~# iscsiadm -m node -T iqn.2010-10.org.openstack:volume-e80bcc97-6288-499d-a11d-dd95cf8143d8 -p 10.241.0.227:3260 --login
Logging in to iface: default, target: iqn.2010-10.org.openstack:volume-e80bcc97-6288-499d-a11d-dd95cf8143d8, portal: 10.241.0.227,3260
Login to [iface: default, target: iqn.2010-10.org.openstack:volume-e80bcc97-6288-499d-a11d-dd95cf8143d8, portal: 10.241.0.227,3260] successful.

The bug on Launchpad agrees with this, in that the iSCSI issue relates to the initiator.
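A quick way to narrow down the "could not communicate to iscsid" error from inside the nova-compute container is to check whether iscsid is running at all and, if it isn't, start it in the foreground with debugging. This is generic open-iscsi debugging, nothing specific to these playbooks:

# Inside the nova-compute container: is iscsid running at all?
pgrep -l iscsid

# If not, run it in the foreground with debug output and watch why it dies
iscsid -f -d 8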

If you only have the scsi_transport_iscsi module loaded, then you get the following (from the host):

root@533816-node20:~# iscsiadm -m node -T iqn.2010-10.org.openstack:volume-e80bcc97-6288-499d-a11d-dd95cf8143d8 -p 10.241.0.227:3260 --login
Logging in to iface: default, target: iqn.2010-10.org.openstack:volume-e80bcc97-6288-499d-a11d-dd95cf8143d8, portal: 10.241.0.227,3260
iscsiadm: Could not login to [iface: default, target: iqn.2010-10.org.openstack:volume-e80bcc97-6288-499d-a11d-dd95cf8143d8, portal: 10.241.0.227,3260].
iscsiadm: initiator reported error (9 - internal error)
iscsiadm: Could not log into all portals

That error relates to the missing module; maybe that will help figure out the issue.
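To confirm which modules are behind the difference between the two errors, the loaded modules can be listed and iscsi_tcp loaded on the host (generic tooling, not repo-specific; modprobe normally has to happen on the host, since the kernel is shared with the containers):

# List the iSCSI-related modules currently loaded
lsmod | grep -E 'iscsi_tcp|libiscsi|scsi_transport_iscsi'

# Load the initiator transport module and retry the login
modprobe iscsi_tcp
iscsiadm -m node -T iqn.2010-10.org.openstack:volume-e80bcc97-6288-499d-a11d-dd95cf8143d8 -p 10.241.0.227:3260 --login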

mancdaz commented 10 years ago

Comment by andymcc Thursday Aug 07, 2014 at 17:16 GMT


Additional note: you can see the targets from both the host and the container; it's the --login that only works from the host.
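For clarity, the two commands being compared look roughly like this, using the same target and portal as in the output above:

# Discovery: per the note above, this shows the targets from both host and container
iscsiadm -m discovery -t sendtargets -p 10.241.0.227:3260

# Login: this only succeeds from the host; inside the container it fails as shown earlier
iscsiadm -m node -T iqn.2010-10.org.openstack:volume-e80bcc97-6288-499d-a11d-dd95cf8143d8 -p 10.241.0.227:3260 --login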

mancdaz commented 10 years ago

Comment by Apsu Thursday Aug 07, 2014 at 23:14 GMT


Damn. Right you are. I'm hitting the same issue. I think there's another potential solution mentioned in the LP bug report that we might be able to use without significant ugliness, but for now I guess we're stuck with is_metal: True on nova-compute. I'll do what I can as soon as I can.

mancdaz commented 9 years ago

Closing this as an 'issue', to be addressed separately.