Ansible Storage Role: automating your storage solutions


Were you in the middle of writing your Ansible playbooks to automate your software provisioning, configuration, and application deployment when you realized you had to manage your storage as well? And it turns out that each of your storage solutions has a completely different Ansible module. Now you have to figure out how each module works to create ad-hoc tasks for each one. What a pain!

If this has happened to you, or if you are interested in automating your storage solutions, you may be interested in the new Ansible Storage Role.

Introduction

By now, almost every storage vendor has its own Ansible module, and they are great. They give you fine-grained control over each and every feature of your storage. But each one is so different that you can't reuse your knowledge from one to another: every additional module requires you to start from scratch, both in your learning and in your Ansible tasks. Not even volume creation is the same between them.

That’s where the Ansible Storage Role comes in. The role provides a storage agnostic abstraction, making it possible to reuse the same playbooks regardless of the storage backend being used. Only the storage configuration provided to the role needs to be changed when switching storage solutions. How great is that?

Objectives

The objectives of the ansible-role-storage are straightforward:

  • Provide a vendor-agnostic storage interface for Ansible.
  • Manage storage:
    • Block volumes.
    • Shared filesystems.
    • Object storage.
  • Handle storage connections on consumer nodes.
  • Support a wide range of storage solutions with the default provider.
  • Include additional providers to serve as examples.

Given the rather ambitious scope of the role, the current proof of concept restricts, for the time being, the types of storage it manages: only block volumes are supported.

Storage role

The Storage role is available on GitHub as ansible-role-storage, and on Ansible Galaxy as Akrog.storage.

The following operations are supported:

  • Retrieve backend stats.
  • Create volumes.
  • Delete volumes.
  • Attach volumes.
  • Detach volumes.
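
For illustration, retrieving backend stats might look like the following task. It follows the resource/state pattern used by the volume tasks shown later in this post, but treat the exact parameter names as an assumption and check the role's documentation for the authoritative syntax:

- name: Retrieve backend stats
  storage:
      resource: backend
      state: stats
  register: stats

- debug:
      msg: "{{ stats }}"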

The current feature set is quite limited, but it is enough to illustrate the benefits that a common abstraction can bring to Ansible. The default block storage provider is called cinderlib, and it comes with 80 different drivers, such as:

  • RBD/Ceph.
  • Dell-EMC: PS Series, Unity, VNX, Storage Center.
  • HPE: LeftHand, 3PAR, MSA, Nimble.
  • Lenovo.
  • NetApp C-mode and E-Series.
  • Pure.
  • ScaleIO.
  • …and many more…

These drivers come from the Cinder driver ecosystem, and the Storage role uses the driver code directly. Thanks to the cinderlib Python library, there is no need to run any of the Cinder services to use the drivers. This is not a simple Cinder client interfacing with a standalone or full-blown Cinder service via its REST API; it is Python code that dynamically loads and sets up Cinder drivers in the running process to manage storage solutions.

If there is an existing driver for your storage in Cinder, then the cinderlib provider from the Storage role can manage your storage.
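
For example, managing a Ceph cluster through Cinder's RBD driver would only require a backend configuration like the sketch below, using the storage_backends format from the playbook later in this post. The driver options are the same ones you would set in cinder.conf; the pool, user, and ceph.conf values are illustrative assumptions for a typical deployment:

storage_backends:
    ceph:                # user-chosen backend name
        volume_driver: 'cinder.volume.drivers.rbd.RBDDriver'
        rbd_pool: 'volumes'
        rbd_user: 'cinder'
        rbd_ceph_conf: '/etc/ceph/ceph.conf'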

The Storage role includes another block storage provider called cinderclient. This block volume provider uses an already deployed Cinder service to create volumes and attach them to non-OpenStack nodes. It uses the cinderclient library, which makes the code brief and clear, perfect as an implementation example.

Architecture

The Storage role is a little different from other roles: familiar concepts like node types, providers, backends, resources, states, and resource identification may have different meanings and usage from what you'd expect. Let's go over these concepts to make sure we are all on the same page.

The Storage role has two types of nodes: controllers and consumers. This distinction reflects the difference in the actions each type of node can perform, as well as the requirements for performing them.

Controller nodes perform all management operations, like creating, deleting, extending, mapping, and unmapping volumes. To do these operations they require access to the storage management network, and they may additionally require the installation of vendor-specific tools or libraries.

On the other hand, consumer nodes only need to connect and disconnect resources. This is accomplished with the OS-Brick Python library, with the help of some standard packages such as iscsi-initiator-tools. There are cases where the connection type of the volume requires specific tools, for example ScaleIO connections.

A provider is the Ansible module responsible for carrying out operations on the storage hardware. A provider must support at least one specific piece of hardware from one vendor, but it may support multiple vendors and a wide range of storage devices.

A backend refers to a specific storage pool, defined by the combination of a storage provider and a specific configuration. Each backend must be identified by a unique, user-chosen name.
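
For instance, a configuration could define an LVM backend and a Ceph backend side by side; this is a minimal sketch, and the names lvm and ceph below are arbitrary user-chosen identifiers:

storage_backends:
    lvm:
        volume_driver: 'cinder.volume.drivers.lvm.LVMVolumeDriver'
        volume_group: 'cinder-volumes'
    ceph:
        volume_driver: 'cinder.volume.drivers.rbd.RBDDriver'
        rbd_pool: 'volumes'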

Resources are the abstract representation, within the Storage role, of physical resources, such as backends and volumes (snapshots and shares will follow). Resource states represent actions we want performed on these resources. Nothing special about these two.

The Storage role tries, as far as possible, to be smart about locating resources. In most cases, only a small amount of information is required to reference a resource. For example, there is no need to provide a backend name, volume name, or id when deleting, connecting, or disconnecting a volume if we only have one backend and one existing volume associated with the host.
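
With multiple backends or volumes, the resource needs to be identified explicitly. As a sketch, assuming parameter names that follow the role's resource/state pattern (check the role documentation for the exact ones), a targeted deletion might look like this:

- name: Delete a specific volume
  storage:
      resource: volume
      state: absent
      backend: 'lvm'        # backend name; assumed parameter
      id: "{{ vol.id }}"    # id registered when the volume was created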

It stands to reason that controller nodes must be set up before any consumer node can use their backends. Where this role differs from others is in the guidelines for writing playbooks: controller nodes are not meant to execute tasks (except for stats gathering). Resource tasks should be requested on the target consumer nodes, and the Storage role will distribute the execution flow between the controller and the consumer nodes to reach the requested state.

Example

Going over a simple playbook will illustrate the differences from other roles mentioned above. The example uses an LVM backend with the default cinderlib provider to create, attach, detach, and delete a single volume.

---
- hosts: storage_controller
  vars:
    storage_backends:
        lvm:
            volume_driver: 'cinder.volume.drivers.lvm.LVMVolumeDriver'
            volume_group: 'cinder-volumes'
            iscsi_protocol: 'iscsi'
            iscsi_helper: 'lioadm'
  roles:
      - { role: storage, node_type: controller }

- hosts: storage_consumers
  roles:
      - { role: storage, node_type: consumer }
  tasks:
      - name: Create volume
        storage:
            resource: volume
            state: present
            size: 1
        register: vol

      - name: Connect volume
        storage:
            resource: volume
            state: connected
        register: conn

      - debug:
          msg: "Volume {{ vol.id }} attached to {{ conn.path }}"

      - name: Disconnect volume
        storage:
            resource: volume
            state: disconnected

      - name: Delete volume
        storage:
            resource: volume
            state: absent
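
The playbook above references two inventory groups, storage_controller and storage_consumers. A minimal inventory for it could look like the following sketch, where the hostnames are placeholders:

all:
  children:
    storage_controller:
      hosts:
        controller.example.com:
    storage_consumers:
      hosts:
        consumer1.example.com:
        consumer2.example.com:

With the inventory in place, the playbook runs with the usual ansible-playbook -i <inventory> <playbook> invocation.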

This is a simple and straightforward playbook, but a step-by-step explanation will showcase the Storage role's execution distribution mechanism. This is what happens when we run the playbook:

  • The controller node is initialized: the required libraries are installed on the controller and the configuration is validated.
  • For each consumer node:
    • Required libraries are installed on the consumer node.
    • Create the volume: the controller creates the volume and associates it with the consumer node.
    • Attach the volume:
      • The controller node maps the volume to the consumer node.
      • The consumer node uses the iSCSI initiator to attach the volume, with the help of the OS-Brick library.
    • Display where the volume has been attached.
    • Detach the volume:
      • The consumer node detaches the volume using OS-Brick.
      • The controller node unmaps the volume.
    • Delete the volume: the controller node deletes the volume.

Examples for other storage solutions (Kaminario, RBD, XtremIO) and more advanced examples, such as multi-backend or bulk volume creation, are available in the repository’s example directory.

What’s next

Planned improvements to the Ansible Storage role include some additional block storage features such as:

  • Create snapshot.
  • Delete snapshot.
  • Create volume from snapshot.
  • Extend volume.
  • Clone volume.
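
Although these features are not implemented yet, they will presumably follow the same resource/state pattern; a purely speculative sketch of a snapshot task, not current syntax, could look like this:

- name: Create snapshot
  storage:
      resource: snapshot
      state: present
      volume_id: "{{ vol.id }}"   # hypothetical parameter linking to the source volume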

After completing these features, we will start adding the share resource type. The default provider for the share type will leverage the manilaclient library to add support for Manila-provided filesystems.

If you have a storage solution supported by Cinder and you want to test whether it works with the Ansible Storage role, I recommend starting with one of the existing examples. They can be used as templates: just change the configuration to match the one you provide to the Cinder service in OpenStack via the cinder.conf file.

Please feel free to contact me if you are interested in contributing to this effort. And if you don't have time for coding, remember: providing feedback and suggestions is a great way to contribute.