The official Container Storage Interface driver for Gluesys FlexA storage (FlexStor, ExaStor).
Driver Name: csi.flexa.com
| Driver Version | Image | Supported K8s Version |
|---|---|---|
| 1.0.0 | ghcr.io/gluesys/flexa-csi:1.0.0 | 1.21+ |
The Gluesys FlexA CSI driver supports:
- Access modes (CSI): SINGLE_NODE_WRITER, MULTI_NODE_MULTI_WRITER (typical Kubernetes mapping: RWO, and RWX-style multi-writer NFS where applicable).
- Controller: Create/Delete volume, List volumes, Get capacity.
- Node: Stage/Unstage (no-op for the current NFS flow), Publish/Unpublish, Get volume stats.
Not supported at this time: snapshots, cloning. Volume expansion is supported (expand-only; shrink is not supported).
- Kubernetes 1.21 or above
- Gluesys FlexA storage (FlexStor, ExaStor) 1.4.2 or above, with at least one storage pool for ZFS-backed volumes, or Lustre cluster configuration as required by your deployment
- Go 1.21+ recommended when building from source
- For ZFS workflows, create and initialize at least one storage pool on Gluesys FlexA storage.
- Deploy the driver using Helm (recommended) or raw manifests (deploy/kubernetes) as described in Installation.
Run the following from the repository root (the directory that contains charts/ and config/).
Chart: charts/flexa-csi.
Minimal install (uses charts/flexa-csi/values.yaml for clientInfo.content and other defaults):
```shell
helm upgrade --install flexa-csi ./charts/flexa-csi \
  --namespace flexa-csi
```

Create the namespace first if it does not exist (`kubectl create namespace flexa-csi`), or add `--create-namespace` to the helm line so Helm creates it. The chart itself does not install a Namespace resource.
- Proxy / VIP config: Edit `clientInfo.content` in `values.yaml`, or use `-f my-values.yaml`, or `--set-file clientInfo.content=/absolute/path/to/client-info.yml` (paths are relative to your shell cwd). You do not need a separate `config/client-info.yml` unless you choose the file-based workflow.
- Secret managed by chart: With `clientInfo.create: true` (default), the chart creates the Secret from `clientInfo.content`. To use a Secret you created yourself: `kubectl create secret ...`, then install with `--set clientInfo.create=false` (the Secret name must match `clientInfo.secretName`, default `client-info-secret`).
- Optional: Tune `image`, `sidecars`, host paths, and `storageClass` entries under `values.yaml` (`templates/storageclass.yaml` renders enabled entries). Run `helm lint ./charts/flexa-csi` before applying.
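For illustration, an override file passed with `-f my-values.yaml` might look like the sketch below. The exact shape of each `storageClass` entry is chart-specific (the `enabled` flag here is an assumption), so verify the key names against charts/flexa-csi/values.yaml before use.

```yaml
# Hypothetical my-values.yaml -- key names assumed from the chart's
# documented values (clientInfo.*, storageClass.*); confirm locally.
clientInfo:
  create: true
  secretName: client-info-secret
  content: |
    profiles:
      zfs:
        proxyIP: 10.0.0.11
        proxyPort: 9001
storageClass:
  zfs:
    enabled: true        # assumption: chart-specific enable flag
    poolName: kubernetes
```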
- `git clone https://github.com/gluesys/flexa-csi.git` and `cd flexa-csi`
- `cp config/client-info-template.yml config/client-info.yml` and edit `config/client-info.yml` (proxy endpoints and optional `mountIP`; see client-info / Secret and StorageClass parameters).
- Create the `client-info` Secret in the driver namespace (see Creating the Secret manually).
- Install:

  ```shell
  cd deploy/kubernetes
  sh install.sh
  ```

- Verify:

  ```shell
  kubectl get pods -n flexa-csi
  ```
You need a Secret (client-info.yml) and StorageClasses. Volume snapshots are not implemented; do not rely on VolumeSnapshotClass until support exists.
Contents: client-info / Secret · StorageClasses · PVC annotations · Pod annotations · VolumeContext
Helm: Configure proxy endpoints in charts/flexa-csi/values.yaml (clientInfo.content) or use -f / --set-file as described in Installation.
Raw manifests: Prepare config/client-info.yml with profiles (proxyIP, proxyPort, optional mountIP for VIP resolve). See the template at config/client-info-template.yml.
Use when not using Helm, or when you pre-create the Secret for Helm (clientInfo.create=false):
- Example `config/client-info.yml`:

  ```yaml
  profiles:
    zfs:
      proxyIP: 10.0.0.11
      proxyPort: 9001
      mountIP: "192.168.0.0/18"
    lustre:
      proxyIP: 10.0.0.12
      proxyPort: 9001
      mountIP: "192.168.0.0/18"
  ```

- Create the Secret so the data key is `client-info.yml` (required by the driver):

  ```shell
  kubectl create secret generic client-info-secret -n flexa-csi \
    --from-file=client-info.yml=./config/client-info.yml
  ```

  Replace the namespace or file path as needed. If you rename the Secret, update the same name in `deploy/kubernetes/controller.yml` and `node.yml` (or Helm `clientInfo.secretName`).
The driver watches client-info-secret in the driver namespace and applies updates at runtime.
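Because the driver picks up Secret changes at runtime, you can roll out new proxy endpoints without redeploying. One common way (a sketch, using the same Secret name and file path as above) is the standard dry-run/apply idiom, since `kubectl create secret` alone refuses to overwrite an existing Secret:

```shell
kubectl create secret generic client-info-secret -n flexa-csi \
  --from-file=client-info.yml=./config/client-info.yml \
  --dry-run=client -o yaml | kubectl apply -f -
```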
Parameter names are case-sensitive (poolName, not poolname).
Helm: Define entries under storageClass in charts/flexa-csi/values.yaml. Each key can enable a class; ZFS entries set poolName, Lustre entries set clusterName. The chart template is charts/flexa-csi/templates/storageclass.yaml.
Raw YAML examples: deploy/kubernetes/storage-class-zfs.yml, deploy/kubernetes/storage-class-lustre.yml
ZFS example:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: flexa-sc-zfs
provisioner: csi.flexa.com
parameters:
  fs: "zfs"
  poolName: "kubernetes"
  protocol: "nfs"
  proxyProfile: "zfs"
reclaimPolicy: Delete
allowVolumeExpansion: true
```

Lustre example:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: flexa-sc-lustre
provisioner: csi.flexa.com
parameters:
  fs: "lustre"
  clusterName: "cvol"
  protocol: "nfs"
  proxyProfile: "lustre"
reclaimPolicy: Delete
allowVolumeExpansion: true
```

Parameters
| Name | Required | Description |
|---|---|---|
| fs | Yes | zfs or lustre. |
| poolName | For fs=zfs | ZFS pool name on Gluesys FlexA storage. |
| clusterName | For fs=lustre | Lustre cluster name. |
| protocol | Typically set | e.g. nfs. |
| proxyProfile | Optional | Profile name under profiles in client-info. When set, the driver uses that profile’s proxyIP, proxyPort, and mountIP. |
Volume expansion is expand-only. Shrink (reducing requested size) is not supported.
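Expansion is requested by raising the PVC's storage request (with a StorageClass that sets allowVolumeExpansion: true). As a sketch, assuming a hypothetical claim named data-pvc in the default namespace:

```shell
kubectl patch pvc data-pvc -n default --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```

Lowering the request is rejected, consistent with the expand-only behavior above.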
Apply raw YAML:
```shell
kubectl apply -f <storageclass_yaml>
```

The controller reads PersistentVolumeClaim annotations with the flexa.io/ prefix at volume creation. Examples: deploy/kubernetes/pvc_zfs.yaml, deploy/kubernetes/pvc_lustre.yaml.
| Annotation | Purpose (summary) |
|---|---|
| flexa.io/optionISS | Instant secure sync (ZFS). |
| flexa.io/optionSVS | Secure volume service (ZFS). |
| flexa.io/optionComp | Compression (ZFS). |
| flexa.io/optionDedup | Deduplication (ZFS). |
| flexa.io/secureAddress | Secure network address for export. |
| flexa.io/secureSubnet | Secure network mask. |
| flexa.io/nfsAccess | NFS access mode (e.g. RW). |
| flexa.io/nfsNoRootSquashing | NFS root squashing. |
| flexa.io/nfsInsecure | NFS insecure port. |
Use values such as "on" / "off" or addresses as in the samples; follow Gluesys FlexA documentation for your environment.
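As an illustrative sketch of how these annotations attach to a claim (names and values here are placeholders; the shipped samples are deploy/kubernetes/pvc_zfs.yaml and pvc_lustre.yaml):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: flexa-pvc-zfs          # hypothetical name
  annotations:
    flexa.io/optionComp: "on"  # example value; confirm per FlexA docs
    flexa.io/nfsAccess: "RW"
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: flexa-sc-zfs
  resources:
    requests:
      storage: 10Gi
```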
Used at NodePublishVolume (NFS mount):
| Annotation | Purpose |
|---|---|
| flexa.io/mountOptions | Comma-separated NFS options (e.g. vers=4,ro,async,timeo=666). See deploy/kubernetes/pod_zfs.yaml. |
| flexa.io/serviceVIP | If set, the NFS server address for the mount; otherwise the controller-provisioned vip from VolumeContext is used. One of these must be available for publish to succeed. |
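A sketch of a Pod carrying these annotations (pod, image, and claim names are illustrative; see deploy/kubernetes/pod_zfs.yaml for the shipped sample):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-flexa               # hypothetical
  annotations:
    flexa.io/mountOptions: "vers=4,timeo=666"
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: flexa-pvc-zfs     # hypothetical claim name
```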
On provision, the PV carries metadata such as vip, baseDir, poolName, fs, clusterName, protocol, pvcName, pvcNS, proxyProfile, proxyIP, proxyPort, mountIP. Do not edit these under normal operation; they support delete and node publish. For troubleshooting, check vip, baseDir, and pod flexa.io/serviceVIP.
- Build binary: `make flexa-csi-driver`
- Build image: `make docker-build`
- Publish to GHCR: `docker login ghcr.io` then `make docker-publish` (publishes ghcr.io/gluesys/flexa-csi:1.0.0 and :latest)
By default the cluster pulls ghcr.io/gluesys/flexa-csi from GHCR. For a locally built image, set imagePullPolicy: IfNotPresent and the image reference in Helm values.yaml or in deploy/kubernetes/controller.yml / node.yml.
Installing a build is covered in Installation; override image.repository / image.tag in Helm or edit the manifests before kubectl apply.
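For a locally built image, a Helm override might look like the sketch below. The image.* keys are assumed to follow common chart conventions; confirm the exact names in charts/flexa-csi/values.yaml.

```yaml
image:
  repository: ghcr.io/gluesys/flexa-csi
  tag: 1.0.0-dev           # hypothetical local tag
  pullPolicy: IfNotPresent
```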
- Helm: `helm uninstall flexa-csi -n flexa-csi` (use `helm list -n flexa-csi` if the release name differs).
- Raw manifests: From deploy/kubernetes, run `sh cleanup.sh` (review the script if you customized StorageClass names).
Ensure no workloads still depend on Gluesys FlexA volumes. PVCs, PVs, and backend volumes may remain depending on reclaimPolicy and storage behavior; delete or retain them according to your operations policy.
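To spot leftover PVs provisioned by this driver before final cleanup, one option is to filter on the CSI driver name:

```shell
kubectl get pv -o custom-columns=NAME:.metadata.name,DRIVER:.spec.csi.driver \
  | grep csi.flexa.com
```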