Don’t rewrite your driver. 80 storage drivers for containers rolled into one!

Do you work with containers but your storage doesn’t support your Container Orchestration system? Have you or your company already developed an OpenStack/Cinder storage driver and now you have to do it again for containers? Are you having trouble deciding how to balance your engineering force between storage driver development in OpenStack, Containers, Ansible, etc.? Then read on, as your life may be about to get better. Introduction: For a long time, each Container Orchestration solution had its own storage […]


Ansible Storage Role: automating your storage solutions

Were you in the middle of writing your Ansible playbooks to automate your software provisioning, configuration, and application deployment when you realized you had to manage your storage as well? And it turns out that each of your storage solutions has a completely different Ansible module. Now you have to figure out how each module works to create ad-hoc tasks for each one. What a pain! If this has happened to you, or if you are interested in automating your […]


Cinderlib: Every storage driver in a single Python library

Wouldn’t it be great if we could manage any storage array using a single Python library that provided the right storage management abstraction? Imagine writing your own Python storage management software and not caring about the backend or the connection. What if your code was backend and connection agnostic, so the same method worked for RBD/Ceph, NetApp, Solidfire, XtremIO, Kaminario, 3PAR, or any other vendor’s storage, and over iSCSI, FC, RBD, or any other connection type? Your code would call the […]
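To make that idea concrete, here is a minimal sketch of the kind of backend-agnostic code the post is about, using cinderlib with Cinder’s LVM driver. The driver options shown (volume group, iSCSI target helper, backend name) are illustrative assumptions; pointing the same calls at a NetApp, 3PAR, or Ceph backend would only require changing these options, not the management code itself.

```python
import cinderlib

# Configure one backend; any other Cinder driver could be set up the same
# way by swapping these options (values below are illustrative and assume
# a local 'cinder-volumes' VG exported over iSCSI with LIO).
lvm = cinderlib.Backend(
    volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
    volume_group='cinder-volumes',
    target_protocol='iscsi',
    target_helper='lioadm',
    volume_backend_name='lvm_iscsi',
)

# The management calls are the same regardless of backend or connection.
vol = lvm.create_volume(size=1)      # 1 GB volume
attach = vol.attach()                # attach it to this host
print('Volume available at', attach.path)
vol.detach()
vol.delete()
```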


Revamping iSCSI connections in OpenStack

So you may have read my previous post about iSCSI multipathing in OpenStack and decided to try the new code, and everything seemed to be working fine, but then you started pushing it harder and harder and found yourself back at a point where things don’t go as expected. So what’s the deal there? Were the issues fixed or not? The short answer is yes and no, but let’s see what I mean by this. Issues: At Red Hat […]


iSCSI multipath issues in OpenStack

Multipathing is a technique frequently used in enterprise deployments to increase throughput and reliability on external storage connections, and it’s been a little bit of a pain in the neck for OpenStack users. If you’ve nodded while reading the previous statement, then this post will probably be of interest to you, as we’ll be going over some iSCSI multipath issues found in OpenStack and how they can be solved. Pain in the neck: Multipath is a great feature that we […]


Standalone Cinder: The definitive SDS

Are you looking for the best Software Defined Storage in the market? Look no further, Standalone Cinder is here! Let’s have an overview of the Standalone Cinder service, see some specific configurations, and find out how to make requests when no other OpenStack service is deployed. Cinder: Until not so long ago Cinder was always mentioned in an OpenStack context, but for some time now you could hear conversations where Cinder was standing on its own and was discussed as […]


# of DB connections in OpenStack services

The other day someone asked me if the SQLAlchemy connections to the DB were per worker or shared among all workers, and what was the number of connections that should be expected from an OpenStack service. Maybe you have also wondered about this at some point; wonder no more, here’s a quick write-up summarizing it. OpenStack services use the Oslo DB library for database access, which in turn uses the SQLAlchemy library as the ORM, so we define our connection […]
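Since the question in that excerpt is ultimately arithmetic, here is a rough sketch of the estimate, under the assumption that each worker process ends up with its own SQLAlchemy connection pool rather than sharing one. The option names mirror oslo.db’s [database] settings, but the numbers used are only illustrative examples, not defaults to rely on.

```python
# Back-of-the-envelope estimate of the DB connections an OpenStack service
# may open, assuming every worker process keeps its own SQLAlchemy pool
# (i.e. pools are not shared between workers).
def max_db_connections(workers, max_pool_size, max_overflow):
    """Worst-case simultaneous connections for a single service."""
    return workers * (max_pool_size + max_overflow)

# Example: an API service with 8 workers, a pool of 5 connections per
# worker and an overflow of 10 (illustrative values, check your config).
print(max_db_connections(workers=8, max_pool_size=5, max_overflow=10))  # 120
```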


Cinder’s Ceph Replication Sneak peek

Have you been dying to try out the Volume Replication functionality in OpenStack but you didn’t have some enterprise-level storage with replication features lying around for you to play with? Then you are in luck, because thanks to Ceph’s new RBD mirroring functionality and Jon Bernard’s work on Cinder, you can now have the full replication experience using your commodity hardware, and I’m going to tell you how you can have a preview of what’s about to come to […]


Manual validation of Cinder A/A patches

In the Cinder Midcycle I agreed to create some sort of document explaining the manual tests I’ve been doing to validate the work on Cinder’s Active-Active High Availability (as a starting point for other testers and for the automation of the tests), and writing a blog post was the most convenient way for me to do so, so here it is. Scope: The Active-Active High Availability work in Cinder is made up of a good number of specs and patches, and […]


Cinder Active-Active HA – Newton mid-cycle

Last week the OpenStack Cinder mid-cycle sprint took place in Fort Collins, and on the first day we discussed the Active-Active HA effort that’s been going on for a while now and the plans for the future. This is a summary of that session. Just like in previous mid-cycles, the Cinder community did its best to accommodate remote attendees and make them feel included in the sessions with hangouts, live video streaming, IRC pings as reminders, and even moving the […]