Reclaim PVs¶
NebulaGraph Operator uses PVs (Persistent Volumes) and PVCs (Persistent Volume Claims) to store persistent data. If you accidentally delete a NebulaGraph cluster, the PV and PVC objects and the data they hold are retained by default to ensure data security.
You can also have PVCs deleted automatically, releasing the data, by setting the parameter spec.enablePVReclaim to true in the configuration file of the cluster instance. Whether the PV itself is then deleted automatically after its PVC is removed depends on the PV reclaim policy, which you need to configure yourself. See reclaimPolicy in StorageClass and PV Reclaiming for details.
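For reference, the reclaim policy is usually set on the StorageClass that provisions the volumes (or directly on a statically provisioned PV). The snippet below is a minimal sketch, assuming a StorageClass named fast-disks like the one referenced in the cluster configuration later in this topic; your provisioner and parameters may differ. With reclaimPolicy: Delete, a PV and its data are removed once its PVC is deleted, while Retain keeps them for manual cleanup.

```yaml
# Illustrative StorageClass only; the name fast-disks matches the storageClassName used
# in the cluster configuration below, but the provisioner and binding mode are assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-disks
provisioner: kubernetes.io/no-provisioner   # assumption: statically provisioned local disks
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete   # PVs (and their data) are deleted with their PVCs; use Retain to keep them
```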
Prerequisites¶
You have created a cluster. For how to create a cluster with Kubectl, see Create a cluster with Kubectl.
Steps¶
The following example uses a cluster named nebula and the cluster's configuration file named nebula_cluster.yaml to show how to set enablePVReclaim:
- Run the following command to open the configuration of the nebula cluster for editing.

  ```bash
  kubectl edit nebulaclusters.apps.nebula-graph.io nebula
  ```

- Add enablePVReclaim and set its value to true under spec.

  ```yaml
  apiVersion: apps.nebula-graph.io/v1alpha1
  kind: NebulaCluster
  metadata:
    name: nebula
  spec:
    enablePVReclaim: true  # Set its value to true.
    graphd:
      image: vesoft/nebula-graphd
      logVolumeClaim:
        resources:
          requests:
            storage: 2Gi
        storageClassName: fast-disks
      replicas: 1
      resources:
        limits:
          cpu: "1"
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 500Mi
      version: v3.4.3
    imagePullPolicy: IfNotPresent
    metad:
      dataVolumeClaim:
        resources:
          requests:
            storage: 2Gi
        storageClassName: fast-disks
      image: vesoft/nebula-metad
      logVolumeClaim:
        resources:
          requests:
            storage: 2Gi
        storageClassName: fast-disks
      replicas: 1
      resources:
        limits:
          cpu: "1"
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 500Mi
      version: v3.4.3
    nodeSelector:
      nebula: cloud
    reference:
      name: statefulsets.apps
      version: v1
    schedulerName: default-scheduler
    storaged:
      dataVolumeClaims:
      - resources:
          requests:
            storage: 2Gi
        storageClassName: fast-disks
      - resources:
          requests:
            storage: 2Gi
        storageClassName: fast-disks
      image: vesoft/nebula-storaged
      logVolumeClaim:
        resources:
          requests:
            storage: 2Gi
        storageClassName: fast-disks
      replicas: 3
      resources:
        limits:
          cpu: "1"
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 500Mi
      version: v3.4.3
  ...
  ```

- Run kubectl apply -f nebula_cluster.yaml to push your configuration changes to the cluster (see the verification sketch after this list).
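After the change is applied, you can spot-check that the flag took effect and, once you delete the cluster, that its PVCs are gone. The commands below are a minimal sketch: the jsonpath expression reads back the field set above, while the label selector app.kubernetes.io/cluster=nebula is an assumption about how the Operator labels the PVCs it creates and may need to be adapted to your environment.

```bash
# Confirm that enablePVReclaim is now true on the running cluster object.
kubectl get nebulaclusters.apps.nebula-graph.io nebula \
  -o jsonpath='{.spec.enablePVReclaim}{"\n"}'

# After deleting the cluster, list any PVCs that remain.
# Assumption: the Operator labels its PVCs with app.kubernetes.io/cluster=<cluster name>;
# if your deployment uses different labels, list all PVCs in the namespace instead.
kubectl get pvc -l app.kubernetes.io/cluster=nebula
```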