Setup NFS driver on a Kubernetes TKGm cluster provided by Container Service Extension

NFS CSI driver setup

In this article, we’re going to install an additional storage driver for persistent container data on a Tanzu TKGm cluster provided by Cloud Director and Container Service Extension (CSE).

Why use another storage driver?

Clusters deployed by CSE come with the cloud-director-named-disk-csi-driver installed by default.

However, this driver has a number of limitations, particularly with regard to access modes.

CSI Cloud Director Feature matrix

Feature      | Support Scope
------------ | -------------
Storage Type | Independent Shareable Named Disks of VCD
Provisioning | Static Provisioning, Dynamic Provisioning
Access Modes | ReadOnlyMany, ReadWriteOnce
Volume       | Block
VolumeMode   | FileSystem
Topology     | Static Provisioning: reuses VCD topology capabilities; Dynamic Provisioning: places the disk in the OVDC of the ClusterAdminUser based on the StorageProfile specified

As the table above (taken from the default driver’s documentation) shows, it is not possible to create ReadWriteMany (RWX) PVCs. This is a problem whenever several Pods need to write to the same PVC.
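
For illustration, a claim like the one below, requesting ReadWriteMany from the default StorageClass, cannot be satisfied by this driver (illustrative manifest, not taken from the official documentation):

# rwx-not-supported.yaml (illustrative)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-example
spec:
  accessModes:
    - ReadWriteMany # not among the driver's supported access modes
  resources:
    requests:
      storage: 5Gi
  storageClassName: default-storage-class-1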

Nor is it possible to assign more than 15 volumes per worker node with this driver.

// https://github.com/vmware/cloud-director-named-disk-csi-driver/blob/main/pkg/csi/node.go
const (
	// The maximum number of volumes that a node can have attached.
	// Since we're using bus 1 only, it allows up-to 16 disks of which one (#7)
	// is pre-allocated for the HBA. Hence we have only 15 disks.
	maxVolumesPerNode = 15

	DevDiskPath          = "/dev/disk/by-path"
	ScsiHostPath         = "/sys/class/scsi_host"
	HostNameRegexPattern = "^host[0-9]+"
)

CSI NFS driver installation

Requirements

  • A functional Kubernetes cluster
  • Some Kubernetes knowledge
  • Helm

NFS server and PVC

In this example we have a Kubernetes cluster deployed by CSE 4.0.3, with one master node and one worker node, running Kubernetes v1.21.

k get node
NAME                                                   STATUS   ROLES                  AGE     VERSION
k8slorislombardi-control-plane-node-pool-njdvh         Ready    control-plane,master   6h19m   v1.21.11+vmware.1
k8slorislombardi-worker-node-pool-1-69f68cc6b9-kfhz9   Ready    <none>                 6h16m   v1.21.11+vmware.1

We install the driver with Helm:

helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system

We need to modify the chart’s default configuration. You can either edit the Deployment and DaemonSet objects directly, or set the corresponding values in the chart, as shown below.
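
If you prefer to keep the change declarative, recent versions of the chart expose dnsPolicy for both the controller and the node components. A sketch, assuming your chart version has the controller.dnsPolicy and node.dnsPolicy keys (check its values.yaml):

helm upgrade csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system \
  --set controller.dnsPolicy=ClusterFirstWithHostNet \
  --set node.dnsPolicy=ClusterFirstWithHostNet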

Modify dnsPolicy

k edit deployments.apps csi-nfs-controller -n kube-system
k edit daemonsets.apps csi-nfs-node -n kube-system

Replace

dnsPolicy: Default 

With

dnsPolicy: ClusterFirstWithHostNet
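
This is needed because the driver pods run on the host network: with dnsPolicy: Default they inherit the node’s DNS configuration and cannot resolve in-cluster Service names, whereas ClusterFirstWithHostNet lets them resolve the NFS server’s cluster FQDN that we will use in the StorageClass below.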

Check

k get pod -n kube-system | grep csi-nfs
csi-nfs-controller-64cc5764b7-bd4p5                                      3/3     Running   0          6m37s
csi-nfs-node-fp45z                                                       3/3     Running   0          6m37s
csi-nfs-node-vwf44                                                       3/3     Running   0          6m37s

We’re going to create a PVC that will be used by our NFS server. This PVC is created via the default driver: cloud-director-named-disk-csi-driver.

Be sure to size this PVC according to your current and future needs, as it cannot be resized afterwards.

In this example, the cloud-director-named-disk-csi-driver is exposed through the StorageClass default-storage-class-1.

k get storageclasses
NAME                                PROVISIONER                                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
default-storage-class-1 (default)   named-disk.csi.cloud-director.vmware.com   Delete          Immediate           false

Example of a PVC for the NFS server

# pvc-nfs-server.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: named-disk.csi.cloud-director.vmware.com
  name: pvc-nfs-csi-server
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1024Gi
  storageClassName: default-storage-class-1 # use your StorageClass name
  volumeMode: Filesystem

Next, we’ll create an NFS server in a dedicated namespace. This template creates an NFS server Pod and a ClusterIP Service, which a new StorageClass will use later for our NFS PVCs. Note that the container runs privileged, as required by the itsthenetwork/nfs-server-alpine image.

Example of an NFS server

---
# nfs-server.yaml
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
  labels:
    app: nfs-server
spec:
  type: ClusterIP  # use "LoadBalancer" to get a public ip
  selector:
    app: nfs-server
  ports:
    - name: tcp-2049
      port: 2049
      protocol: TCP
    - name: udp-111
      port: 111
      protocol: UDP
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-server
  template:
    metadata:
      name: nfs-server
      labels:
        app: nfs-server
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: nfs-server
          image: itsthenetwork/nfs-server-alpine:latest
          env:
            - name: SHARED_DIRECTORY
              value: "/exports"
          volumeMounts:
            - mountPath: /exports
              name: pvc-nfs-csi-server
          securityContext:
            privileged: true
          ports:
            - name: tcp-2049
              containerPort: 2049
              protocol: TCP
            - name: udp-111
              containerPort: 111
              protocol: UDP
      volumes:
        - name: pvc-nfs-csi-server
          persistentVolumeClaim:
            claimName: pvc-nfs-csi-server

Setup NFS Server

k create namespace nfs-csi
kubens nfs-csi
k apply -f pvc-nfs-server.yaml
k apply -f nfs-server.yaml

Verification

k get pvc
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS              AGE
pvc-nfs-csi-server   Bound    pvc-d61aa727-313d-4466-b923-6121a1ce93f7   1Ti        RWO            default-storage-class-1   3m17s

k get deployments.apps
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
nfs-server   1/1     1            1           76s

k get pod
NAME                          READY   STATUS    RESTARTS   AGE
nfs-server-56dfcc48c8-w759j   1/1     Running   0          2m24s

k get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)            AGE
nfs-server   ClusterIP   100.64.116.170   <none>        2049/TCP,111/UDP   3m12s

Since we created the Service in the nfs-csi namespace, it answers at the FQDN nfs-server.nfs-csi.svc.cluster.local.
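
A quick way to confirm that this name resolves from inside the cluster is a throwaway pod (assuming the busybox image can be pulled by your nodes):

k run dns-test --rm -it --image=busybox:1.36 --restart=Never -- nslookup nfs-server.nfs-csi.svc.cluster.local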

NFS StorageClass

Finally, we create a new StorageClass backed by the NFS server we just deployed. You can also point it at another NFS server in your environment (an NFS VM, a NetApp filer…); see the variant after the check below.

# StorageClass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server:  nfs-server.nfs-csi.svc.cluster.local # FQDN or IP of NFS server 
  share: /
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1
allowVolumeExpansion: true

We apply the StorageClass and check:

k apply -f StorageClass.yaml
k get storageclasses
NAME                                PROVISIONER                                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
default-storage-class-1 (default)   named-disk.csi.cloud-director.vmware.com   Delete          Immediate           false                  7h8m
nfs-csi                             nfs.csi.k8s.io                             Delete          Immediate           true                   18s
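
The same provisioner can point at any NFS export in your environment. A variant, assuming a hypothetical external server at 192.0.2.10 exporting /srv/nfs; the subDir parameter (supported by csi-driver-nfs) places each volume in its own sub-directory:

# StorageClass-external.yaml (hypothetical external NFS server)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-external
provisioner: nfs.csi.k8s.io
parameters:
  server: 192.0.2.10 # hypothetical NFS VM or appliance
  share: /srv/nfs    # hypothetical export path
  subDir: ${pvc.metadata.namespace}-${pvc.metadata.name} # one sub-directory per PVC
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1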

NFS driver test

Setup NFS PVC and Pod

# test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: nfs-csi

And the test Pod that mounts it:

---
# nginx-nfs-example.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-nfs-example
spec:
  containers:
    - image: nginx
      name: nginx
      ports:
        - containerPort: 80
          protocol: TCP
      volumeMounts:
        - mountPath: /var/www
          name: test-pvc
  volumes:
    - name: test-pvc
      persistentVolumeClaim:
        claimName: test-pvc

We apply both manifests:

k apply -f test-pvc.yaml
k apply -f nginx-nfs-example.yaml

Global check

k get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    pvc-6e8e670d-73d8-4110-b6ed-8c6442b0e2c3   5Gi        RWX            nfs-csi        7m41s

k get pod
NAME                READY   STATUS    RESTARTS   AGE
nginx-nfs-example   1/1     Running   0          16s
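
Since the claim is ReadWriteMany, a second Pod can mount it at the same time, which is exactly what the default driver could not do. A minimal sketch (nginx-nfs-example-2 is just an illustrative name); once both Pods are Running, a file written under /var/www in one is visible from the other:

# nginx-nfs-example-2.yaml (illustrative second consumer)
apiVersion: v1
kind: Pod
metadata:
  name: nginx-nfs-example-2
spec:
  containers:
    - image: nginx
      name: nginx
      volumeMounts:
        - mountPath: /var/www
          name: test-pvc
  volumes:
    - name: test-pvc
      persistentVolumeClaim:
        claimName: test-pvc # same RWX claim as the first Pod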

NFS Server

kubens nfs-csi
k get pod
NAME                          READY   STATUS    RESTARTS   AGE
nfs-server-7cccc9cc84-r2l7l   1/1     Running   0          65m

k exec -it nfs-server-7cccc9cc84-r2l7l -- sh
/ # ls
Dockerfile  bin         etc         home        media       opt         root        sbin        sys         var
README.md   dev         exports     lib         mnt         proc        run         srv         usr
/ # cd exports
/exports # ls
lost+found                                pvc-6e8e670d-73d8-4110-b6ed-8c6442b0e2c3
/exports #
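
Finally, since the nfs-csi StorageClass was created with allowVolumeExpansion: true, the test claim can be grown in place, which the default driver does not allow. A sketch:

k patch pvc test-pvc --type merge -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'
k get pvc test-pvc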

Source: csi-driver-nfs
