
Reclaim PVs

NebulaGraph Operator uses PVs (Persistent Volumes) and PVCs (Persistent Volume Claims) to store persistent data. If you accidentally delete a NebulaGraph cluster, the PV and PVC objects and the data on them are retained by default to ensure data security.

You can also have PVCs deleted automatically, releasing the data, by setting spec.enablePVReclaim to true in the configuration file of the cluster instance. Whether a PV is deleted automatically after its PVC is deleted depends on the PV reclaim policy, which you need to configure yourself. See reclaimPolicy in StorageClass and PV Reclaiming for details.
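
For reference, the sketch below shows a StorageClass named fast-disks (matching the name referenced in the cluster manifest later in this topic) with reclaimPolicy set to Delete, so that dynamically provisioned PVs are removed together with their data once the bound PVCs are deleted. The provisioner value is only an assumption; use the CSI driver available in your environment.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: fast-disks
    # Assumption: replace with the provisioner/CSI driver used in your cluster.
    provisioner: ebs.csi.aws.com
    # Delete: dynamically provisioned PVs (and their data) are removed once the bound PVC is deleted.
    # Retain: PVs and data are kept after the PVC is deleted and must be cleaned up manually.
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer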

Prerequisites

You have created a cluster. For how to create a cluster with Kubectl, see Create a cluster with Kubectl.

Steps

The following example uses a cluster named nebula and its configuration file nebula_cluster.yaml to show how to set enablePVReclaim:

  1. Run the following command to open the configuration of the nebula cluster for editing.

    kubectl edit nebulaclusters.apps.nebula-graph.io nebula
    
  2. Add enablePVReclaim and set its value to true under spec.

    apiVersion: apps.nebula-graph.io/v1alpha1
    kind: NebulaCluster
    metadata:
      name: nebula
    spec:
      enablePVReclaim: true  # Set the value to true.
      graphd:
        image: vesoft/nebula-graphd
        logVolumeClaim:
          resources:
            requests:
              storage: 2Gi
          storageClassName: fast-disks
        replicas: 1
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 500Mi
        version: v3.4.0
      imagePullPolicy: IfNotPresent
      metad:
        dataVolumeClaim:
          resources:
            requests:
              storage: 2Gi
          storageClassName: fast-disks
        image: vesoft/nebula-metad
        logVolumeClaim:
          resources:
            requests:
              storage: 2Gi
          storageClassName: fast-disks
        replicas: 1
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 500Mi
        version: v3.4.0
      nodeSelector:
        nebula: cloud
      reference:
        name: statefulsets.apps
        version: v1
      schedulerName: default-scheduler
      storaged:
        dataVolumeClaims:
        - resources:
            requests:
              storage: 2Gi
          storageClassName: fast-disks
        - resources:
            requests:
              storage: 2Gi
          storageClassName: fast-disks
        image: vesoft/nebula-storaged
        logVolumeClaim:
          resources:
            requests:
              storage: 2Gi
          storageClassName: fast-disks
        replicas: 3
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 500Mi
        version: v3.4.0
    ...    
    
  3. Run kubectl apply -f nebula_cluster.yaml to push your configuration changes to the cluster.
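
Once the change takes effect, you can verify the reclaim behavior the next time you delete the cluster. The commands below are a minimal check and assume the cluster runs in the current namespace:

    # Delete the nebula cluster; with enablePVReclaim set to true, its PVCs are deleted as well.
    kubectl delete nebulaclusters.apps.nebula-graph.io nebula

    # The cluster's PVCs should no longer be listed.
    kubectl get pvc

    # Whether the bound PVs are also removed depends on the StorageClass reclaimPolicy.
    kubectl get pv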

