
Reclaim PVs

NebulaGraph Operator uses PVs (Persistent Volumes) and PVCs (Persistent Volume Claims) to store persistent data. If you accidentally delete a NebulaGraph cluster, the PV and PVC objects and the data they hold are retained by default to ensure data security.

You can specify whether to reclaim PVs with the parameter enablePVReclaim in the configuration file of the cluster's CR instance.

If you want the PVs to be released automatically, and the storage they occupy reclaimed, when the cluster is deleted, update your NebulaGraph cluster by setting the parameter enablePVReclaim to true.
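Before you change the setting, you can list the PV and PVC objects that back the cluster to see what would be retained or reclaimed. The commands below are a minimal sketch; the exact PVC names depend on your cluster and component names (for a cluster named nebula they typically contain graphd, metad, or storaged).

    # List the PVCs in the cluster's namespace.
    kubectl get pvc
    # List the cluster-scoped PVs and their reclaim policies.
    kubectl get pv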

Prerequisites

You have created a NebulaGraph cluster. For details, see Create a cluster with Kubectl.
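Optionally, you can confirm that the cluster exists before editing it. The command below assumes the cluster runs in the current namespace:

    kubectl get nebulaclusters.apps.nebula-graph.io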

Steps

The following example uses a cluster named nebula and the cluster's configuration file named nebula_cluster.yaml to show how to set enablePVReclaim:

  1. Run the following command to open the configuration of the nebula cluster for editing.

    kubectl edit nebulaclusters.apps.nebula-graph.io nebula
    
  2. Add enablePVReclaim and set its value to true under spec.

    apiVersion: apps.nebula-graph.io/v1alpha1
    kind: NebulaCluster
    metadata:
      name: nebula
    spec:
      enablePVReclaim: true  # Set the value to true.
      graphd:
        image: vesoft/nebula-graphd
        logVolumeClaim:
          resources:
            requests:
              storage: 2Gi
          storageClassName: fast-disks
        replicas: 1
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 500Mi
        version: v3.3.0
      imagePullPolicy: IfNotPresent
      metad:
        dataVolumeClaim:
          resources:
            requests:
              storage: 2Gi
          storageClassName: fast-disks
        image: vesoft/nebula-metad
        logVolumeClaim:
          resources:
            requests:
              storage: 2Gi
          storageClassName: fast-disks
        replicas: 1
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 500Mi
        version: v3.3.0
      nodeSelector:
        nebula: cloud
      reference:
        name: statefulsets.apps
        version: v1
      schedulerName: default-scheduler
      storaged:
        dataVolumeClaims:
        - resources:
            requests:
              storage: 2Gi
          storageClassName: fast-disks
        - resources:
            requests:
              storage: 2Gi
          storageClassName: fast-disks
        image: vesoft/nebula-storaged
        logVolumeClaim:
          resources:
            requests:
              storage: 2Gi
          storageClassName: fast-disks
        replicas: 3
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 500Mi
        version: v3.3.0
    ...    
    
  3. Run kubectl apply -f nebula_cluster.yaml to push your configuration changes to the cluster if you edited the local file instead of using kubectl edit, which applies your changes as soon as you save and exit the editor. You can then verify the setting as shown below.
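After the change is applied, you can read the field back from the cluster object to confirm it took effect. The command below is a sketch that assumes the cluster is named nebula; with enablePVReclaim set to true, the PVCs created for the cluster are deleted when the cluster itself is deleted, and the underlying PVs are then released according to their reclaim policy.

    # Should print "true" once the change has been applied.
    kubectl get nebulaclusters.apps.nebula-graph.io nebula -o jsonpath='{.spec.enablePVReclaim}'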

