Making Host and OpenStack iSCSI devices play nice together

OpenStack services assume that they are the sole owners of the iSCSI connections to the target-portals generated by the Cinder driver. That is fine 98% of the time, but what happens when we also want other, non-OpenStack iSCSI volumes from that same storage system to be present on boot?

In OpenStack, the OS-Brick library handles block volume connections for Nova-compute, Cinder-volume, Cinder-backup and, depending on the configuration, Glance as well.

At a very high level a Nova-compute iSCSI volume attach is initiated by the user, and then Nova asks Cinder to create an attachment for the host where it is running. Then Cinder asks its driver code to make the volume available for that host. At this point the driver will export and map the volume via iSCSI and return the connection information which goes all the way back to Nova. Now that Nova has the connection information it can ask OS-Brick to do the local attachment and get a local path where the volume has been attached.

One interesting detail of the Cinder iSCSI drivers is the fact that there are different types: some support multipathing and some don’t, some use target discovery while others return all the target-portals to use, some share the same target for all volumes attached to the same host while others create a specific target for each volume…

Regardless of how the driver behaves, OS-Brick assumes that the iSCSI Linux initiator node entities, identified by a target-portal tuple, will not be used by anybody else and are its own to do as it pleases. Under this assumption OS-Brick configures the iSCSI nodes in the most appropriate way for OpenStack usage, which means that it disables the automatic scans happening on startup, on login, and on reception of AEN/AER iSCSI messages to prevent leaving leftover devices in the system.
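Translated into iscsiadm terms, that configuration corresponds to node settings along these lines. This is only an illustrative sketch of what OS-Brick effectively does (it goes through its own code, not the CLI), and the target and portal values here are made-up placeholders:

```shell
# Illustrative only: the kind of node settings OS-Brick applies.
# The target and portal below are placeholders, not real names.
sudo iscsiadm -m node -T iqn.2004-04.com.example:tgt1 -p 192.168.1.1:3260 \
    --op update -n node.startup -v manual
sudo iscsiadm -m node -T iqn.2004-04.com.example:tgt1 -p 192.168.1.1:3260 \
    --op update -n node.session.scan -v manual
```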

Disabling automatic scans means that OS-Brick has to do manual scans for each LUN so that the corresponding devices get populated in the Linux system. After a system restart, without automatic scans, the devices for the iSCSI-attached volumes will not appear automatically. This is fine for OpenStack, as Nova has to ask OS-Brick to reconnect them anyway, since encrypted volumes need to be decrypted.
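For reference, a manual, targeted scan boils down to a write to sysfs; a sketch, where the SCSI host number and the channel/target/LUN triplet are placeholders for illustration:

```shell
# Sketch: manually scan one LUN on SCSI host 3.
# The format is "channel target lun"; all values here are placeholders.
echo "0 0 1" | sudo tee /sys/class/scsi_host/host3/scan
```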

Now that we have some context we can go back to the main topic of this post: having iSCSI volumes on the host that are present at system startup and can coexist with the ones that OpenStack manages.

The way we normally add these volumes is by attaching them to our host with the iSCSI initiator (using iscsiadm), setting the node.startup configuration option of the relevant iSCSI initiator nodes to automatic, and leaving node.session.scan set to auto. That way the iSCSI initiator logs in to them automatically on the next boot and the targets are scanned, making the volumes present in the system. The issue arises from the fact that we are using a single session for each node: if we added those volumes using the same target as OS-Brick, node.session.scan will be changed to manual when OS-Brick attaches a volume, and on the next boot the volumes we wanted for the host will not be scanned even though we log in to the target. End result: we don’t see the volumes we wanted.
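We can check what those two options are currently set to for a given node; a quick sketch, again with placeholder target and portal values:

```shell
# Print the node record and keep only the two options we care about
# (target/portal values are placeholders).
sudo iscsiadm -m node -T iqn.2004-04.com.example:tgt1 -p 192.168.1.1:3260 \
    | grep -E 'node\.(startup|session\.scan)'
```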

Looking at the different combinations of iSCSI Cinder drivers types and storage systems we find 4 scenarios that are of interest to us:

  1. Cinder driver uses a unique target for each volume
  2. Cinder driver uses a shared target for all volumes attached to a single host
    1. Storage system supports multiple targets per initiator
      1. Cinder driver uses its own target based on a fixed name or a template
      2. Cinder driver chooses the target based on the initiators that are connected
    2. Storage system supports only 1 target per initiator

Scenarios 1 and 2.1.1 are simple enough. We just have to create a new target on the storage system specifically for the volumes that we want available on the host at boot, and ensure that the target name we create does not collide with the names used by the Cinder driver.

For the other scenarios (2.1.2 and 2.2) we find ourselves in a bind, because either we cannot create a new target for our initiator (scenario 2.2) or the Cinder driver is going to steal the target we specifically created for our host volumes, even though it could create its own (scenario 2.1.2).

The solution is surprisingly simple: we just need to use a different iSCSI initiator name to connect the volumes that we want available at boot on the host. With a different initiator name we can either create a new target on the storage system for it (scenario 2.2) or simply keep the Cinder driver from stealing our target, since it doesn’t recognize the initiator name as its own (scenario 2.1.2).

But how do we do this with open-iSCSI, given that /etc/iscsi/initiatorname.iscsi doesn’t allow us to have multiple initiator names? The answer is… creating a new iSCSI interface!

To create a new iSCSI interface we create a new file in /var/lib/iscsi/ifaces/, for example iface0 and define the interface parameters with a new initiator name. For example:

iface.iscsi_ifacename = iface0
iface.initiatorname = <new-initiator-name>
iface.transport_name = tcp

We can get a valid value for the iface.initiatorname option by running the iscsi-iname command, and there is no need to restart the iscsid daemon for it to pick up this new interface, so we can do this while the OpenStack services are up and running.
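Putting the two steps together, a minimal sketch that generates the iface file could look like this. It writes to a temporary directory for illustration (the real file belongs in /var/lib/iscsi/ifaces/), and it falls back to a made-up example IQN if iscsi-iname is not installed:

```shell
# Generate a new iface definition with its own initiator name.
# Writes to a temp dir for illustration; the real destination is
# /var/lib/iscsi/ifaces/iface0. The fallback IQN is a made-up example.
IFACE_DIR=$(mktemp -d)
INITIATOR=$(iscsi-iname 2>/dev/null || echo "iqn.2016-04.com.example:host-boot-volumes")
cat > "$IFACE_DIR/iface0" <<EOF
iface.iscsi_ifacename = iface0
iface.initiatorname = $INITIATOR
iface.transport_name = tcp
EOF
cat "$IFACE_DIR/iface0"
```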

Now that we have defined this new interface we can check that the file is correct and that the iSCSI initiator can work with it like this:

$ sudo iscsiadm -m iface
default tcp,<empty>,<empty>,<empty>,<empty>
iser iser,<empty>,<empty>,<empty>,<empty>
iface0 tcp,<empty>,<empty>,<empty>,<new-initiator-name>

Having this new iSCSI interface with its own initiator name, we can proceed to create a new target on the storage system for this initiator name and connect it like we normally do, except that we now need to indicate that we want the newly created interface instead of the default one, using the --interface parameter when creating the node, like this:

$ sudo iscsiadm -m node -T <target-name> -p <portal-ip> --interface iface0 --op new
New iSCSI node [tcp:[hw=,ip=,net_if=,iscsi_if=iface0],3260,-1] added

$ sudo iscsiadm -m node -T <target-name> -p <portal-ip> --login
Logging in to [iface: iface0, target:, portal:,3260] (multiple)
Login to [iface: iface0, target:, portal:,3260] successful.

$ sudo iscsiadm -m node -T <target-name> -p <portal-ip> --op update -n node.startup -v automatic
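Once logged in, we can verify that the host volumes and the OpenStack-managed ones are using separate sessions, each tied to its own interface and initiator name:

```shell
# Show current sessions with their interface/initiator details.
sudo iscsiadm -m session -P 1
```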