How to use STACKIT File Storage with Kubernetes Engine

This page demonstrates how to connect a STACKIT File Storage (SFS) instance to a STACKIT Kubernetes Engine (SKE) cluster. This allows you to use ReadWriteMany (RWX) volumes, which can be attached to several Pods at once.

  1. Register the upstream Helm chart repository so Helm knows where to fetch the NFS Container Storage Interface (CSI) driver chart from. If you already have the repo configured, Helm will keep the existing entry and you can move on to the next step.

    helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
  2. Create a file called nfs-csi.yaml with the following content. It configures the StorageClass provided by the CSI driver. Replace the server and share parameters with the actual values from your STACKIT File Storage setup, and experiment with different mountOptions if necessary:

    storageClasses:
      - name: nfs-client # replace with your desired StorageClass name
        parameters:
          server: 10.10.0.1 # replace with your STACKIT File Storage server IP
          share: /rp_Ew3Oak0/foo # replace with your STACKIT File Storage share path
        reclaimPolicy: Retain
        volumeBindingMode: Immediate
        mountOptions:
          - nfsvers=4.1 # feel free to experiment with different options
  3. Deploy the configured CSI driver into your cluster. Using helm upgrade --install keeps the command idempotent: it installs the chart on the first run and applies updates on subsequent runs. The --namespace kube-system flag installs it into the system namespace. The -f nfs-csi.yaml option points to the configuration file you created in the previous step.

    helm upgrade --install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system -f nfs-csi.yaml

After completing these steps, your cluster has the NFS CSI driver installed and is configured with a StorageClass that points to your STACKIT File Storage share. This means Kubernetes can now provision and mount RWX persistent volumes backed by that share, enabling multiple Pods to read from and write to the same storage.
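Before creating any volumes, you can run a quick sanity check that the driver Pods are running and the StorageClass exists. This is a sketch based on the example configuration above: the csi-nfs Pod name prefix is what current versions of the chart use, and nfs-client is the StorageClass name from step 2; adjust both if your setup differs:

    kubectl get pods --namespace kube-system | grep csi-nfs
    kubectl get storageclass nfs-client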

To use the newly configured NFS share, reference the StorageClass you created earlier, for example in a PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-example
spec:
  storageClassName: nfs-client # replace with the name used for your StorageClass
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

The PersistentVolumeClaim itself does not mount the volume into any Pod; it only reserves the storage.
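As a minimal sketch of that final step, the following Pod mounts the claim from the example above. The Pod name, container name, mount path, and image are illustrative; any workload that references the claim via persistentVolumeClaim works the same way:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-example-pod # illustrative name
spec:
  containers:
    - name: app
      image: nginx # any image works; nginx is just an example
      volumeMounts:
        - name: nfs-volume
          mountPath: /mnt/nfs # where the share appears inside the container
  volumes:
    - name: nfs-volume
      persistentVolumeClaim:
        claimName: nfs-pvc-example # the PVC created above

Because the claim is ReadWriteMany, additional Pods can reference the same claimName and access the share concurrently.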

This example also does not configure access controls, file permissions, or application-level data consistency for the mounted share. Depending on your use case, you may need to implement additional measures to ensure data integrity and security when multiple Pods are accessing the same NFS share.