Cinderlib: Every storage driver on a single Python library

Wouldn’t it be great if we could manage any storage array using a single Python library that provided the right storage management abstraction?

Imagine writing your own Python storage management software without caring about the backend or the connection. What if your code were backend- and connection-agnostic, so the same method worked for RBD/Ceph, NetApp, SolidFire, XtremIO, Kaminario, 3PAR, or any other vendor's storage, over iSCSI, FC, RBD, or any other connection? Your code would call the exact same method to create a volume regardless of the backend. To use different storage, you would only have to change the configuration passed at initialization. That's it.

Well, this is no longer a beautiful dream; it has become a reality! Keep reading to find out how.



For the past 3 years, I’ve been working on the OpenStack Cinder project. You may already be familiar with it, but even so I think it’s worth giving a brief introduction.

Cinder is the Python block storage service within OpenStack that provides volumes for Nova, the compute service. Cinder can create empty volumes, clone volumes, create volumes from images, take snapshots and backups, migrate volumes between backends, and much more. All these features are exposed by the service through a common REST API that abstracts away storage differences, yet still allows particular storage features, like deduplication and compression, to be exposed and used.

The number of supported features is extensive, but they are unlikely to surprise you, since they are expected of any mature storage system. What's really impressive is the ecosystem of storage drivers supported by Cinder: no fewer than 80 drivers are listed on the project's drivers pages.

Yes, you read that right: there are at least 80 supported volume drivers within Cinder. In this context, "supported" means drivers with active developers and their own CI system to validate, against real hardware, every single patch submitted to both the Cinder project and the driver itself.

Now that’s impressive!

The real value of such an ecosystem has not gone unnoticed, as one of the two most commonly asked questions I’ve heard has been: “How can I use the Cinder storage drivers in my own project?” Unfortunately, the answer has always been the same: “Sorry, you can’t use them outside of Cinder.” We did the next best thing and provided support for Standalone Cinder, which strips the service down to the bare minimum. The standalone offering includes only the core Cinder services (API, Scheduler, Volume), RabbitMQ, and a DBMS like MariaDB. Cinder runs without the identity service (Keystone), the compute service (Nova), or the image service (Glance).

Is it really not possible?

Some people have tried using the Cinder drivers in their own projects directly, as they are in the Cinder repository, and couldn’t make them work. Others have taken a couple of drivers from Cinder and modified them to be usable. This is no long-term solution, since fixes submitted to Cinder have to be ported to the custom drivers, and you would need access to all the different storage hardware solutions.

The main reason Cinder drivers cannot be used directly from our own Python programs is that there is a contract between the Cinder-Volume service and the drivers that must be fulfilled for them to work properly. If this contract is not met by our Python application, drivers may fail to start or produce unexpected results.

Some of the expectations resulting from this contract that prevent driver reuse are:

  • Eventlet library must be initialized.
  • Oslo config is used for loading the driver’s configuration.
  • Dynamic configuration loading from file.
  • The privileged helper service is used to run CLI commands and privileged Python code.
  • DLM needs to be configured.
  • Some drivers access the metadata DB directly.
  • Some drivers access metadata via Oslo Versioned Objects.
  • Almost all drivers use Cinder’s metadata persistence mechanism via returned dictionaries.
  • There can only be one volume driver loaded at a time.

But let’s not forget this is Python code we are talking about, where everything is possible (except maybe resolving the GIL nightmare), so I decided to have a go at this particular problem and started working on what would end up being called cinderlib.


During the last OpenStack Project Team Gathering, held in Dublin, I announced to the Cinder community the development of the cinderlib project and explained how this library allows any Python application to use Cinder drivers without running any services.

We are no longer talking about running a reduced set of services like in the standalone case. It really means not having a DB, a message broker, or even running any of the Cinder services. We are talking about your program being able to use the Cinder drivers directly just as if it were the Cinder API, Scheduler, and Volume services all rolled into one.

Before we continue, let me show you what a very simple program using cinderlib with an LVM backend looks like:

import cinderlib as cl

# Initialize the LVM driver (the 'cinder-volumes' volume group must already
# exist on the host; additional target options may be needed depending on
# the deployment)
lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
                 volume_group='cinder-volumes')

# Create a 1GB volume
vol = lvm.create_volume(1)

# Export, initialize, and do a local attach of the volume
attach = vol.attach()
print('Volume %s attached to %s' % (vol.id, attach.path))

# Take a snapshot of the volume
snap = vol.create_snapshot()

Doesn’t look bad, right?

The way cinderlib resolves the issues that prevented drivers from being reused is by monkey-patching the Cinder code. Calls to the drivers are made in such a way that the contract we mentioned earlier is honored, while providing an easy-to-use, object-oriented abstraction.
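To picture what monkey patching means here, consider this generic, self-contained sketch (the module and function names are made up for illustration; these are not cinderlib's actual patches). A function in another module is replaced at runtime, so existing callers transparently get the new behavior:

```python
import types

# A stand-in for a module whose function normally needs a running service.
fake_service = types.ModuleType('fake_service')

def _original_lookup(key):
    raise RuntimeError('this call needs a running service')

fake_service.lookup = _original_lookup

def _patched_lookup(key):
    # Serve the call from local state instead of contacting a service.
    return {'key': key, 'source': 'local'}

# The monkey patch: replace the module attribute in place. Any code that
# calls fake_service.lookup() now gets the patched behavior.
fake_service.lookup = _patched_lookup

result = fake_service.lookup('vol-1')
```

cinderlib applies the same idea to the Cinder internals listed earlier (configuration loading, DB access, and so on), so the drivers believe they are running inside the Cinder-Volume service.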

Features currently provided in the library’s master code are:

  • Using Cinder drivers without running a DBMS, a message broker, or any Cinder services.
  • Using multiple simultaneous drivers in the same application.
  • Statelessness: full serialization of objects and context to JSON or string, so the state can be restored.
  • Basic storage operations.

The library supports the following basic operations:

  • Create volume
  • Clone volume
  • Create volume from snapshot
  • Extend volume
  • Create snapshot
  • Connect volume
  • Disconnect volume
  • Local attach (with multipath support)
  • Local detach
  • Delete volume
  • Delete snapshot
  • Validate connector

The code should also support multiple concurrent connections to a volume, though this has not yet been tested.

Supported backends

Some of the drivers that have been tested with cinderlib are:

  • RBD/Ceph
  • LVM
  • XtremIO
  • Kaminario
  • SolidFire

In theory, all Cinder backends should work with cinderlib, but the truth is that not all of them have been tested. This is still a proof of concept, so we encourage vendors to test it and report any issues they may find so we can improve cinderlib. This is exactly what John Griffith did for SolidFire: he pointed out that the project_id and user_id fields were None, which was preventing the driver from working, so we quickly got it fixed in cinderlib.

To facilitate driver validation, cinderlib has a simple mechanism to run the functional tests against any backend, using a simple YAML file to provide the backend’s configuration (more information in the corresponding documentation section).
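As an illustration, such a YAML file could look roughly like the following; the exact keys and schema are an assumption here (check the cinderlib documentation for the real format), with the backend section mirroring the same driver options used when initializing a Backend in code:

```
backends:
  - volume_backend_name: lvm
    volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group: cinder-volumes
```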

NFS backends are unsupported at the moment and will require a follow-up patch to be supported.

Metadata persistence

One of the issues that had to be resolved was the database side of things, because almost all drivers require the metadata persistence provided by Cinder.

The way this mechanism works is by allowing drivers to return a dictionary of metadata to be stored with the resource (i.e., volume, snapshot, backup), which will then be passed to any subsequent call related to that resource. If the driver’s create volume method wants to store some metadata, it returns it, and Cinder stores it in the database. When Cinder later calls the attach method for that volume, or any other method, that information is passed along.
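The contract can be sketched in a few lines of plain Python. `FakeDriver`, the dict-based "database", and the field name `provider_location` are illustrative stand-ins, not Cinder's actual classes or schema:

```python
# Minimal sketch of the model-update contract between a storage service and
# its drivers (illustrative only, not Cinder code).

class FakeDriver:
    def create_volume(self, volume):
        # The driver returns a dict of metadata (a "model update") that the
        # service is expected to persist alongside the resource.
        return {'provider_location': 'iqn.fake:%s' % volume['id']}

    def attach_volume(self, volume):
        # On later calls, the stored metadata comes back with the volume.
        return 'attached at %s' % volume['provider_location']


db = {}  # Stands in for the service's metadata database.
driver = FakeDriver()

volume = {'id': 'vol-1'}
model_update = driver.create_volume(volume)
volume.update(model_update)   # The service merges and persists the update...
db[volume['id']] = volume

# ...and passes the stored record along on any subsequent driver call.
message = driver.attach_volume(db['vol-1'])
```

Replicating this flow without a real database is exactly the problem cinderlib has to solve.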

The way cinderlib resolved this was by keeping all the metadata in memory, providing a JSON serialization mechanism, and passing the responsibility of persisting this information to the user of the library. This way the library separates the concerns of managing the drivers and persisting the metadata, allowing it to work statelessly. Even though the library can work stateless, by default the state is maintained in memory to allow functionality like listing volumes.
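The stateless pattern looks roughly like this; the `Resource` class and its fields are an illustrative sketch, not cinderlib's actual API. All resource metadata lives in a plain dict that round-trips through JSON, so the library itself needs no mandatory persistent state:

```python
import json

class Resource:
    """Sketch of a resource whose full state can be dumped and restored."""

    def __init__(self, **fields):
        self._data = dict(fields)

    def to_json(self):
        # Serialize the full state; the caller decides where to persist it
        # (a file, a database, a key-value store...).
        return json.dumps(self._data)

    @classmethod
    def from_json(cls, serialized):
        # Restore the resource from previously persisted state.
        return cls(**json.loads(serialized))


vol = Resource(id='vol-1', size=1, status='available')
saved = vol.to_json()              # the library user stores this string
restored = Resource.from_json(saved)  # ...and recreates the object later
```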

This resolved the metadata persistence situation for cinderlib itself, but it then became clear that every user of the library would have to solve the same persistence issue, and we would end up with a lot of code duplication between different projects. Thus, I started working on a metadata persistence plugin in a specific feature branch.

This persistence branch still requires a little bit of work, but it will soon be merged into master. In essence, the idea is to provide a plugin mechanism via Python entry points, such that different persistence mechanisms can be provided independently of the cinderlib project, as Python modules with their own requirements.

By default, cinderlib provides three metadata persistence mechanisms:

  • Memory: the library user is responsible for storing the metadata.
  • DB: any SQLAlchemy-compatible DBMS, such as MySQL, PostgreSQL, or SQLite.
  • In-memory DB: similar to the memory plugin, but implemented via an in-memory SQLite DB, providing higher driver compatibility than the memory plugin at the expense of some performance.
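The in-memory DB idea can be illustrated with the standard library's sqlite3 module (cinderlib itself goes through SQLAlchemy, so this is only a sketch of the concept, and the table is made up): the database has full SQL semantics for the drivers, but nothing touches disk, so the user still owns persistence.

```python
import sqlite3

# ':memory:' gives a fully functional SQLite database that lives only in RAM.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE volumes (id TEXT PRIMARY KEY, size INTEGER)')
conn.execute('INSERT INTO volumes VALUES (?, ?)', ('vol-1', 1))

# Queries behave exactly as they would against an on-disk database.
row = conn.execute('SELECT size FROM volumes WHERE id = ?',
                   ('vol-1',)).fetchone()
```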

What’s next

For the cinderlib project itself, there is a long list of improvements on the to-do list; the most notable planned ones are:

  • Completing the persistence feature
  • Support for NFS backends
  • Improving parameter validations/checks
  • Support for advanced features using volume types

One thing that would greatly benefit this project would be integrating its testing into the third-party Cinder CIs, not as independent gate jobs, but as an additional test run after Tempest. All the hard work of deploying OpenStack has already been done there, and the functional tests take almost no time to run.

What interests me most is what you are planning to do with this library and the crazy ideas you come up with. I would love to hear them, so please let me know if you start using this library in other projects.

To be fair, I’ll also tell you what I’ve been working on using this library:

  • Ansible: A generic storage role abstraction with a default cinderlib volume provider, but an abstraction that should also cover filesystem and object storage systems. You can read a little bit more about it in my follow-up post on the Ansible storage role.
  • Containers: A container orchestration system agnostic storage driver implementing the latest CSI specs. You can read a little bit more about it in my follow-up post on the cinderlib CSI driver.

Don’t forget to visit the cinderlib GitHub page and the documentation page.