How to migrate your storage to another availability zone

This tutorial guides you through the steps required to migrate an existing Persistent Volume Claim (PVC) and its data to another availability zone (AZ).

You will need:

  • A node pool in the target availability zone. For further information, have a look at the node pool documentation.
  • Access to the Kubernetes cluster in which you want to migrate the volumes.

To migrate a volume from one availability zone to another, you can use the tool pv-migrate. This tutorial will show you how to use it. Please install it first by following the installation instructions.
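For reference, pv-migrate is commonly distributed via Homebrew and as a kubectl plugin through krew. The commands below are a sketch and may differ from the project's current installation instructions, so check those first:

```shell
# Option 1: install via Homebrew (macOS/Linux); the formula name may
# require the project's tap, see the installation instructions
brew install pv-migrate

# Option 2: install as a kubectl plugin via krew
kubectl krew install pv-migrate

# Verify the binary is available
pv-migrate --version
```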

To migrate a volume, you need to create a new volume with the same parameters as the old one. Therefore, fetch the definition of the old PVC with the following command.

Terminal window
kubectl get pvc <pvc-name> -n <namespace> -o yaml > pvc.yaml
vim pvc.yaml

The result should look like below.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"old-volume","namespace":"docs"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"volumeMode":"Filesystem"}}
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: cinder.csi.openstack.org
  creationTimestamp: "2022-02-04T13:21:03Z"
  finalizers:
    - kubernetes.io/pvc-protection
  name: old-volume
  namespace: docs
  resourceVersion: "49199"
  uid: 4ad285b1-3be2-44c3-afd6-098631a6a6ec
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: premium-perf1-stackit
  volumeMode: Filesystem
  volumeName: pv-shoot--garden--mig-test-4ad285b1-3be2-44c3-afd6-098631a6a6ec
status:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  phase: Bound

The exported manifest still contains fields that are specific to the old volume. First, delete

  • kubectl.kubernetes.io/last-applied-configuration
  • pv.kubernetes.io/bind-completed
  • pv.kubernetes.io/bound-by-controller
  • volume.beta.kubernetes.io/storage-provisioner

from metadata.annotations.

Also remove

  • the creationTimestamp,
  • the finalizers,
  • the uid,
  • and the resourceVersion.

In spec you need to delete

  • the volumeName.

The status section can be dropped entirely.

Lastly, you will have to give the PVC a new name. In this tutorial, new-volume will be used:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: new-volume
  namespace: docs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: premium-perf1-stackit
  volumeMode: Filesystem

After editing the file, save it, and create the new PVC using the following command:

Terminal window
kubectl apply -f pvc.yaml

Check if the PVC has been created successfully. The result should look similar to the one below. Be aware that the new volume should be in Pending status.

Terminal window
kubectl get pvc -n <namespace>
NAME         STATUS    VOLUME                                                            CAPACITY   ACCESS MODES   STORAGECLASS                 AGE
new-volume   Pending                                                                                               premium-perf1-stackit-wait   15m
old-volume   Bound     pv-shoot--garden--mig-test-4ad285b1-3be2-44c3-afd6-098631a6a6ec   1Gi        RWO            premium-perf1-stackit        15m

Next, create a file called values.yaml with the following content:

rsync:
  nodeSelector:
    topology.kubernetes.io/zone: "eu01-3"

To migrate your storage into another availability zone, change the value of topology.kubernetes.io/zone to your desired Availability Zone.
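If you are unsure which zones your nodes run in, you can list them together with the standard topology label (assuming your nodes carry topology.kubernetes.io/zone, which cloud providers normally set):

```shell
# Show each node and its availability zone
kubectl get nodes -L topology.kubernetes.io/zone
```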

Now you should shut down the application mounting the volume you want to migrate. After the pod that mounted the volume has been shut down successfully, execute the following command:

Terminal window
pv-migrate migrate -n <namespace> -N <namespace> -s svc <old-pvc-name> <new-pvc-name> -f values.yaml
🚀 Starting migration
💭 Will attempt 1 strategies: svc
🚁 Attempting strategy: svc
🔑 Generating SSH key pair
📂 Copying data... 100% |████████████████████████████████████████████████████████████| ()
🧹 Cleaning up
✨ Cleanup done
✅ Migration succeeded

If the migration was successful, the data should now be available in the new Availability Zone. This can be double-checked by looking at the nodeAffinity of the newly created persistent volume.

Terminal window
kubectl get pv -oyaml | grep <new-volume-name> -B5 -A20
capacity:
  storage: 1Gi
claimRef:
  apiVersion: v1
  kind: PersistentVolumeClaim
  name: new-volume
  namespace: docs
  resourceVersion: "57660"
  uid: b60918ab-4bc5-4896-aab3-151e0d42d3c4
...
nodeAffinity:
  required:
    nodeSelectorTerms:
      - matchExpressions:
          - key: topology.cinder.csi.openstack.org/zone
            operator: In
            values:
              - eu01-3
...

As the data is now in the correct availability zone, you need to change the reference your application uses so that it points to the new volume. This can be done either by binding the new PVC in your deployment or by renaming the new PVC to the old PVC’s name. Be careful: changing the PVC reference in your deployment requires changes to your Kubernetes resources and might not work for StatefulSets.
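If you choose to bind the new PVC directly, the change is limited to the volume definition in the Pod template. A sketch of the relevant excerpt (the volume name app-data and the surrounding workload are placeholders):

```yaml
# Excerpt from the Pod template of a Deployment
volumes:
  - name: app-data # placeholder volume name
    persistentVolumeClaim:
      claimName: new-volume # was: old-volume
```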

To rename the PVC, you first need to remove or rename the old one. For safety reasons, it will only be renamed in this tutorial. Afterwards, the new PVC can take over the original name. To do so:

  • Follow the installation instructions to install rename-pvc
  • Run rename-pvc --help to check if it is installed correctly.
  • Execute the following to rename the old volume to pvc-old:
Terminal window
rename-pvc -n <namespace> <old-pvc-name> pvc-old

Then change the name of the new PVC to the original name.

Terminal window
rename-pvc -n <namespace> <new-pvc-name> <old-pvc-name>

Now, you can safely restart your application, and it will use the volume in the correct Availability Zone, without the need to change the deployment’s specification.
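If the application was scaled down for the migration, restarting it is a matter of scaling it back up; the deployment name my-app is a placeholder:

```shell
# Bring the application back up
kubectl scale deployment my-app -n <namespace> --replicas=1

# Optionally confirm the renamed PVC is bound again
kubectl get pvc -n <namespace>
```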