In-use Volume Backups in Cinder

Prior to the Liberty release of OpenStack, Cinder backup functionality was limited to available volumes. In the latest L release, the possibility to create backups of in-use volumes was added, so let's have a look at how this is done inside Cinder.

Before Liberty

If you have worked with backups in Cinder before, you certainly know that you have to make sure your volumes are in the available state before you can back them up. For in-use volumes, the only option left, if you didn't want to detach the volume, was to create a temporary clone of that volume and then back up the clone.
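The pre-Liberty workaround can be sketched with a python-cinderclient-style client. This is an illustrative outline, not code from Cinder: the function name is hypothetical, the client is assumed to be already authenticated, and a real script would also need to poll each resource until it reaches the expected state.

```python
# Sketch of the pre-Liberty workaround: clone the in-use volume,
# back up the clone, then delete it. The client argument is assumed
# to behave like a python-cinderclient object; names are illustrative.

def backup_in_use_volume(client, volume_id, size_gb):
    # 1. Clone the attached volume into a temporary, available volume.
    clone = client.volumes.create(size_gb, source_volid=volume_id,
                                  name='temp-backup-clone')
    # (A real script would wait here until the clone is 'available'.)

    # 2. Back up the clone. Note the backup references the clone,
    #    not the original volume, so the source information is lost
    #    and incremental backups of the original are impossible.
    backup = client.backups.create(clone.id)

    # 3. Drop the temporary clone once the backup completes.
    client.volumes.delete(clone.id)
    return backup
```

This is exactly the information loss described above: the backup record points at the throwaway clone rather than the real source volume.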

Using this procedure you could back up your in-use volumes, but you would lose information such as the source volume of the backup, and you wouldn't be able to benefit from incremental backups.

A more in-depth explanation of these issues can be found in previous posts about the status of volume backups and volume backup automation.

In Liberty

Backup of in-use volumes was identified as a potential improvement for the backup service in the Kilo release, and so the non-disruptive backup feature was added in Liberty. This new feature not only brings the convenience of creating backups of attached volumes with a single API call, but also preserves the original volume ID reference in the backup record and supports incremental backups as well. For a more detailed explanation of the incremental backup feature, read this other post.

The bird's-eye view of the backup flow is very logical and therefore easy to follow, so let's get to it.

The backup operation begins in the volume backend driver, where Cinder first checks that the volume status will allow the backup operation to succeed, raising an error otherwise. It is important to set the force flag to true on the backup request for in-use volumes; otherwise Cinder will quickly reject the operation.
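The status check can be sketched as follows. This is a simplified illustration of the rule described above, not Cinder's actual validation code:

```python
# Simplified sketch of the status check (not Cinder's real code):
# without force=True only 'available' volumes may be backed up;
# with force=True, 'in-use' volumes are accepted as well.

class InvalidVolume(Exception):
    pass

def check_backup_allowed(volume_status, force=False):
    allowed = {'available'}
    if force:
        allowed.add('in-use')
    if volume_status not in allowed:
        raise InvalidVolume(
            'Volume status must be one of %s, got %s'
            % (sorted(allowed), volume_status))
```

So a backup request for an in-use volume without the force flag fails this check immediately, before any work is done on the backend.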

Once we know that the volume has the right status, we check the volume driver's capabilities to determine the best way to perform the backup. Most volume drivers can only attach volumes, but some have the advanced feature of attaching snapshots. Since cloning a volume is usually more expensive than creating a snapshot of that volume, Cinder will make use of this feature, when available, to optimize the operation.

If the volume driver only supports attaching volumes, the in-use source volume is first cloned into a temporary volume. Depending on the storage backend the volume lives in, this could be a simple snapshot-and-volume creation or could use an optimized feature of the backend. For safety and consistency reasons, the temporary volume's ID is stored in the backup DB record. The newly created volume is then attached and the backup proceeds as usual. Finally, the temporary volume is deleted on completion.
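The attach-only path described above can be sketched like this. The driver and db objects and all method names here are illustrative stand-ins for Cinder's real interfaces:

```python
# Hypothetical sketch of the attach-only backup path; the driver,
# db, and method names are illustrative, not Cinder's real API.

def backup_via_temp_volume(driver, db, backup, volume):
    # Clone the in-use source volume into a temporary volume.
    temp_vol = driver.create_cloned_volume(volume)

    # Record the temporary volume's ID on the backup row so it can
    # be cleaned up even if the backup is interrupted midway.
    backup['temp_volume_id'] = temp_vol['id']
    db.backup_update(backup)

    try:
        attachment = driver.attach_volume(temp_vol)  # attach the clone
        driver.backup_volume(attachment, backup)     # back it up as usual
    finally:
        driver.detach_volume(temp_vol)
        driver.delete_volume(temp_vol)               # drop the temp volume
        backup['temp_volume_id'] = None
        db.backup_update(backup)
```

Persisting the temporary volume ID before attaching is what makes the cleanup safe: a recovery path can always find and delete the leftover clone.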

The flow for volume drivers that support attaching snapshots is symmetrical: create a temporary snapshot, store the snapshot ID in the backup DB record, attach the snapshot, do the backup, and delete the temporary snapshot.

Drivers that implement snapshot attachment return True from the backup_use_temp_snapshot method and implement the initialize_connection_snapshot and terminate_connection_snapshot methods.
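A driver advertising this capability might look like the sketch below. The three method names come from the text above; the class names, bodies, and return values are illustrative, since the real connection info is backend-specific:

```python
# Sketch of the snapshot-attachment capability. Only the three
# method names are from Cinder; the rest is illustrative.

class BaseDriver:
    def backup_use_temp_snapshot(self):
        # Default: the driver can only attach volumes, so the backup
        # flow falls back to the temporary-clone path.
        return False

class SnapshotAttachDriver(BaseDriver):
    def backup_use_temp_snapshot(self):
        # Tell the backup flow to take the cheaper snapshot path.
        return True

    def initialize_connection_snapshot(self, snapshot, connector):
        # Return connection info for attaching the snapshot directly
        # (backend-specific in a real driver).
        return {'snapshot_id': snapshot['id'], 'connector': connector}

    def terminate_connection_snapshot(self, snapshot, connector):
        # Tear down the snapshot attachment.
        pass
```

The backup flow only has to call backup_use_temp_snapshot to decide between the two paths; drivers that return the default False are never asked to attach a snapshot.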

More detailed information on non-disruptive backups can be found in the original specs and in the code itself.

As a side note, in older Cinder releases the preferred method for backing up in-use volumes was the longer route of creating a snapshot and then a volume from it, instead of cloning the volume directly, because volume cloning was not available in all backend drivers.


As explained in the documentation, creating an incremental backup of an in-use volume is as easy as executing:

user@localhost:$ cinder backup-create --incremental --force VOLUME
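The same request can be made from Python with python-cinderclient. In this sketch the client object is assumed to be already authenticated (construction omitted), and the function name is our own:

```python
# Python equivalent of the CLI call above, via python-cinderclient.
# 'cinder' is assumed to be an authenticated client instance.

def create_incremental_backup(cinder, volume_id):
    # force=True permits backing up an in-use volume;
    # incremental=True builds on the most recent existing backup.
    return cinder.backups.create(volume_id, incremental=True, force=True)
```

Both flags map directly to the --incremental and --force options of the CLI command.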