# Install a NebulaGraph cluster using NebulaGraph Operator

Using NebulaGraph Operator to install NebulaGraph clusters enables automated cluster management with automatic error recovery. This topic covers two methods, `kubectl apply` and `helm`, for installing clusters using NebulaGraph Operator.
**Historical version compatibility**

NebulaGraph Operator versions 1.x are not compatible with NebulaGraph versions below 3.x.
## Prerequisites
## Use `kubectl apply`
- Create a namespace for storing NebulaGraph cluster-related resources. For example, create the `nebula` namespace.

```bash
kubectl create namespace nebula
```
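Optionally, you can confirm that the namespace was created before continuing. This is a standard `kubectl` check and is not required by the installation steps:

```bash
# List the namespace created above; it should be reported with status "Active".
kubectl get namespace nebula
```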
- Create a Secret for pulling NebulaGraph images from a private registry.

```bash
kubectl -n <nebula> create secret docker-registry <image-pull-secret> \
--docker-server=DOCKER_REGISTRY_SERVER \
--docker-username=DOCKER_USER \
--docker-password=DOCKER_PASSWORD
```

- `<nebula>`: Namespace to store the Secret.
- `<image-pull-secret>`: Name of the Secret.
- `DOCKER_REGISTRY_SERVER`: Private registry server address for pulling images, for example, `reg.example-inc.com`.
- `DOCKER_USER`: Username for the image registry.
- `DOCKER_PASSWORD`: Password for the image registry.
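If you want to double-check that the Secret exists in the namespace, you can inspect it (optional):

```bash
# Show the Secret created above; its TYPE should be kubernetes.io/dockerconfigjson.
kubectl -n <nebula> get secret <image-pull-secret>
```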
- Create a YAML configuration file for the cluster. For example, create a cluster named `nebula`.

Example configuration for the `nebula` cluster:

```yaml
apiVersion: apps.nebula-graph.io/v1alpha1
kind: NebulaCluster
metadata:
  name: nebula
  namespace: default
spec:
  # Control the Pod scheduling strategy.
  topologySpreadConstraints:
  - topologyKey: "kubernetes.io/hostname"
    whenUnsatisfiable: "ScheduleAnyway"
  # Enable PV recycling.
  enablePVReclaim: false
  # Enable the backup and restore feature.
  enableBR: false
  # Enable monitoring.
  exporter:
    image: vesoft/nebula-stats-exporter
    version: v3.3.0
    replicas: 1
    maxRequests: 20
  # Customize Agent image for cluster backup and restore.
  agent:
    # Agent image address. The default value is vesoft/nebula-agent.
    image: vesoft/nebula-agent
    # Agent image version. The default value is latest.
    version: latest
    resources:
      requests:
        cpu: "100m"
        memory: "128Mi"
      limits:
        cpu: "1"
        memory: "1Gi"
    # Limit the speed of file upload and download, in Mbps. The default value is 0, indicating no limit.
    # rateLimit: 0
    # The connection timeout between the Agent and metad, in seconds. The default value is 60.
    # heartbeatInterval: 60
  # Secret for pulling images from a private registry.
  imagePullSecrets:
  - name: secret-name
  # Configure the image pull policy.
  imagePullPolicy: Always
  # Select the nodes for Pod scheduling.
  nodeSelector:
    nebula: cloud
  # Dependent controller name.
  reference:
    name: statefulsets.apps
    version: v1
  # Scheduler name.
  schedulerName: default-scheduler
  # Start NebulaGraph Console service for connecting to the Graph service.
  console:
    image: vesoft/nebula-console
    version: nightly
    username: "demo"
    password: "test"
  # Graph service configuration.
  graphd:
    # Used to check if the Graph service is running normally.
    # readinessProbe:
    #   failureThreshold: 3
    #   httpGet:
    #     path: /status
    #     port: 19669
    #     scheme: HTTP
    #   initialDelaySeconds: 40
    #   periodSeconds: 10
    #   successThreshold: 1
    #   timeoutSeconds: 10
    # Container image for the Graph service.
    image: reg.example-inc.com/xxx/xxx
    logVolumeClaim:
      resources:
        requests:
          storage: 2Gi
      # Storage class name for storing Graph service logs.
      storageClassName: local-sc
    # Number of replicas for the Graph service Pod.
    replicas: 1
    # Resource configuration for the Graph service.
    resources:
      limits:
        cpu: "1"
        memory: 1Gi
      requests:
        cpu: 500m
        memory: 500Mi
    # Version of the Graph service.
    version: v3.5.0
    # Custom flags configuration for the Graph service.
    config: {}
  # Meta service configuration.
  metad:
    # LM access address and port, used to obtain License information.
    licenseManagerURL: 192.168.x.xxx:9119
    # readinessProbe:
    #   failureThreshold: 3
    #   httpGet:
    #     path: /status
    #     port: 19559
    #     scheme: HTTP
    #   initialDelaySeconds: 5
    #   periodSeconds: 5
    #   successThreshold: 1
    #   timeoutSeconds: 5
    # Container image for the Meta service.
    image: reg.example-inc.com/xxx/xxx
    logVolumeClaim:
      resources:
        requests:
          storage: 2Gi
      storageClassName: local-sc
    dataVolumeClaim:
      resources:
        requests:
          storage: 2Gi
      storageClassName: local-sc
    replicas: 1
    resources:
      limits:
        cpu: "1"
        memory: 1Gi
      requests:
        cpu: 500m
        memory: 500Mi
    version: v3.5.0
    # Custom flags configuration for the Meta service.
    config: {}
  # Storage service configuration.
  storaged:
    # readinessProbe:
    #   failureThreshold: 3
    #   httpGet:
    #     path: /status
    #     port: 19779
    #     scheme: HTTP
    #   initialDelaySeconds: 40
    #   periodSeconds: 10
    #   successThreshold: 1
    #   timeoutSeconds: 5
    # Container image for the Storage service.
    image: reg.example-inc.com/xxx/xxx
    logVolumeClaim:
      resources:
        requests:
          storage: 2Gi
      storageClassName: local-sc
    dataVolumeClaims:
    - resources:
        requests:
          storage: 2Gi
      storageClassName: local-sc
    replicas: 1
    resources:
      limits:
        cpu: "1"
        memory: 1Gi
      requests:
        cpu: 500m
        memory: 500Mi
    version: v3.5.0
    # Custom flags configuration for the Storage service.
    config: {}
```
When creating the YAML configuration file for the cluster, you must customize the following parameters. For more detailed information about these parameters, see the table of configurable parameters below.

- `spec.metad.licenseManagerURL`
- `spec.<graphd|metad|storaged>.image`
- `spec.imagePullSecrets`
- `spec.<graphd|metad|storaged>.<logVolumeClaim|dataVolumeClaim>.storageClassName`
All configurable parameters and their descriptions:

| Parameter | Default Value | Description |
| --- | --- | --- |
| `metadata.name` | - | The name of the created NebulaGraph cluster. |
| `spec.console` | - | Launches a Console container for connecting to the Graph service. For configuration details, see nebula-console. |
| `spec.topologySpreadConstraints` | - | Controls the scheduling strategy for Pods. For more details, see Topology Spread Constraints. When the value of `topologyKey` is `kubernetes.io/zone`, the value of `whenUnsatisfiable` must be set to `DoNotSchedule`, and the value of `spec.schedulerName` should be `nebula-scheduler`. |
| `spec.graphd.replicas` | `1` | The number of replicas for the Graphd service. |
| `spec.graphd.image` | `vesoft/nebula-graphd` | The container image for the Graphd service. |
| `spec.graphd.version` | `v3.5.0` | The version of the Graphd service. |
| `spec.graphd.service` | - | Configuration for accessing the Graphd service via a Service. |
| `spec.graphd.logVolumeClaim.storageClassName` | - | The storage class name for the log volume claim of the Graphd service. When using the sample configuration, replace it with the name of the pre-created storage class. See Storage Classes for creating a storage class. |
| `spec.metad.replicas` | `1` | The number of replicas for the Metad service. |
| `spec.metad.image` | `vesoft/nebula-metad` | The container image for the Metad service. |
| `spec.metad.version` | `v3.5.0` | The version of the Metad service. |
| `spec.metad.dataVolumeClaim.storageClassName` | - | Storage configuration for the data disk of the Metad service. When using the sample configuration, replace it with the name of the pre-created storage class. See Storage Classes for creating a storage class. |
| `spec.metad.logVolumeClaim.storageClassName` | - | Storage configuration for the log disk of the Metad service. When using the sample configuration, replace it with the name of the pre-created storage class. See Storage Classes for creating a storage class. |
| `spec.storaged.replicas` | `3` | The number of replicas for the Storaged service. |
| `spec.storaged.image` | `vesoft/nebula-storaged` | The container image for the Storaged service. |
| `spec.storaged.version` | `v3.5.0` | The version of the Storaged service. |
| `spec.storaged.dataVolumeClaims.resources.requests.storage` | - | The storage size for the data disk of the Storaged service. You can specify multiple data disks. When specifying multiple data disks, the paths are like `/usr/local/nebula/data1`, `/usr/local/nebula/data2`, and so on. |
| `spec.storaged.dataVolumeClaims.storageClassName` | - | Storage configuration for the data disks of the Storaged service. When using the sample configuration, replace it with the name of the pre-created storage class. See Storage Classes for creating a storage class. |
| `spec.storaged.logVolumeClaim.storageClassName` | - | Storage configuration for the log disk of the Storaged service. When using the sample configuration, replace it with the name of the pre-created storage class. See Storage Classes for creating a storage class. |
| `spec.<metad\|storaged\|graphd>.securityContext` | `{}` | Defines the permission and access control for the cluster containers to control access and execution of container operations. For details, see SecurityContext. |
| `spec.agent` | `{}` | Configuration for the Agent service used for backup and recovery, and log cleaning functions. If you don't customize this configuration, the default configuration is used. |
| `spec.reference.name` | `{}` | The name of the controller it depends on. |
| `spec.schedulerName` | `default-scheduler` | The name of the scheduler. |
| `spec.imagePullPolicy` | `Always` | The image pull policy for NebulaGraph images. For more details on pull policies, see Image pull policy. |
| `spec.logRotate` | `{}` | Log rotation configuration. For details, see Managing Cluster Logs. |
| `spec.enablePVReclaim` | `false` | Defines whether to automatically delete PVCs after deleting the cluster to release data. For details, see Reclaim PV. |
| `spec.metad.licenseManagerURL` | - | Configures the URL pointing to the License Manager (LM), consisting of the access address and port (default port `9119`). For example, `192.168.8.xxx:9119`. You must configure this parameter to obtain the License information; otherwise, the NebulaGraph cluster will not function. |
| `spec.storaged.enableAutoBalance` | `false` | Whether to enable automatic balancing. For details, see Balancing Storage Data After Scaling Out. |
| `spec.enableBR` | `false` | Defines whether to enable the BR tool. For details, see Backup and Restore. |
| `spec.imagePullSecrets` | `[]` | Defines the Secret required to pull images from a private repository. |
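Before creating the cluster, you can optionally run a client-side dry run to catch YAML mistakes early. The file name below assumes the configuration was saved as `apps_v1alpha1_nebulacluster.yaml`, the name used in the next step:

```bash
# Validate the manifest locally without sending it to the Kubernetes API server.
kubectl create -f apps_v1alpha1_nebulacluster.yaml -n nebula --dry-run=client
```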
- Create the NebulaGraph cluster.

```bash
kubectl create -f apps_v1alpha1_nebulacluster.yaml -n nebula
```

Output:

```bash
nebulacluster.apps.nebula-graph.io/nebula created
```

If you don't specify the namespace using `-n`, it defaults to the `default` namespace.
- Check the status of the NebulaGraph cluster.

```bash
kubectl get nebulaclusters nebula -n nebula
```

Output:

```bash
NAME     READY   GRAPHD-DESIRED   GRAPHD-READY   METAD-DESIRED   METAD-READY   STORAGED-DESIRED   STORAGED-READY   AGE
nebula   True    1                1              1               1             1                  1                86s
```
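You can also list the individual Pods of the cluster while they start. The label selector below is the one used later in this topic for Helm-based installations and is assumed to apply to clusters created with `kubectl apply` as well:

```bash
# List the Pods that belong to the nebula cluster in the nebula namespace.
kubectl -n nebula get pods -l "app.kubernetes.io/cluster=nebula"
```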
## Use `helm`
- Add the NebulaGraph Operator Helm repository (if it's already added, run the next step directly).

```bash
helm repo add nebula-operator https://vesoft-inc.github.io/nebula-operator/charts
```
- Update the Helm repository to fetch the latest resources.

```bash
helm repo update nebula-operator
```
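To confirm that the cluster chart is now available locally, you can search the repository; this reuses the `helm search repo` command mentioned in the install step further below:

```bash
# List all available versions of the nebula-cluster chart.
helm search repo -l nebula-operator/nebula-cluster
```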
- Set environment variables for the configuration parameters required for installing the cluster.

```bash
export NEBULA_CLUSTER_NAME=nebula        # Name of the NebulaGraph cluster.
export NEBULA_CLUSTER_NAMESPACE=nebula   # Namespace for the NebulaGraph cluster.
export STORAGE_CLASS_NAME=local-sc       # StorageClass for the NebulaGraph cluster.
```
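Optionally, verify that the StorageClass referenced by `STORAGE_CLASS_NAME` has already been created, since the cluster's volume claims depend on it:

```bash
# The command fails if the storage class does not exist in the Kubernetes cluster.
kubectl get storageclass "${STORAGE_CLASS_NAME}"
```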
- Create a namespace for the NebulaGraph cluster if it hasn't been created already.

```bash
kubectl create namespace "${NEBULA_CLUSTER_NAMESPACE}"
```
- Create a Secret for pulling images from a private repository.

```bash
kubectl -n "${NEBULA_CLUSTER_NAMESPACE}" create secret docker-registry <image-pull-secret> \
--docker-server=DOCKER_REGISTRY_SERVER \
--docker-username=DOCKER_USER \
--docker-password=DOCKER_PASSWORD
```

- `<image-pull-secret>`: Specify the name of the Secret.
- `DOCKER_REGISTRY_SERVER`: Specify the address of the private image repository (for example, `reg.example-inc.com`).
- `DOCKER_USER`: Username for the image repository.
- `DOCKER_PASSWORD`: Password for the image repository.
- Check the customizable configuration parameters for the `nebula-cluster` Helm chart of the `nebula-operator` when creating the cluster.

  - Visit nebula-cluster/values.yaml to see all the configuration parameters for the NebulaGraph cluster.

  - Run the following command to view all the configurable parameters.

```bash
helm show values nebula-operator/nebula-cluster
```
Example output of all configurable parameters:

```yaml
nebula:
  version: v3.5.0
  imagePullPolicy: Always
  storageClassName: ""
  enablePVReclaim: false
  enableBR: false
  enableForceUpdate: false
  schedulerName: default-scheduler
  topologySpreadConstraints:
  - topologyKey: "kubernetes.io/hostname"
    whenUnsatisfiable: "ScheduleAnyway"
  logRotate: {}
  reference:
    name: statefulsets.apps
    version: v1
  graphd:
    image: vesoft/nebula-graphd
    replicas: 2
    serviceType: NodePort
    env: []
    config: {}
    resources:
      requests:
        cpu: "500m"
        memory: "500Mi"
      limits:
        cpu: "1"
        memory: "500Mi"
    logVolume:
      enable: true
      storage: "500Mi"
    podLabels: {}
    podAnnotations: {}
    securityContext: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    readinessProbe: {}
    livenessProbe: {}
    initContainers: []
    sidecarContainers: []
    volumes: []
    volumeMounts: []
  metad:
    image: vesoft/nebula-metad
    replicas: 3
    env: []
    config: {}
    resources:
      requests:
        cpu: "500m"
        memory: "500Mi"
      limits:
        cpu: "1"
        memory: "1Gi"
    logVolume:
      enable: true
      storage: "500Mi"
    dataVolume:
      storage: "2Gi"
    licenseManagerURL: ""
    license: {}
    podLabels: {}
    podAnnotations: {}
    securityContext: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    readinessProbe: {}
    livenessProbe: {}
    initContainers: []
    sidecarContainers: []
    volumes: []
    volumeMounts: []
  storaged:
    image: vesoft/nebula-storaged
    replicas: 3
    env: []
    config: {}
    resources:
      requests:
        cpu: "500m"
        memory: "500Mi"
      limits:
        cpu: "1"
        memory: "1Gi"
    logVolume:
      enable: true
      storage: "500Mi"
    dataVolumes:
    - storage: "10Gi"
    enableAutoBalance: false
    podLabels: {}
    podAnnotations: {}
    securityContext: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    readinessProbe: {}
    livenessProbe: {}
    initContainers: []
    sidecarContainers: []
    volumes: []
    volumeMounts: []
  exporter:
    image: vesoft/nebula-stats-exporter
    version: v3.3.0
    replicas: 1
    env: []
    resources:
      requests:
        cpu: "100m"
        memory: "128Mi"
      limits:
        cpu: "200m"
        memory: "256Mi"
    podLabels: {}
    podAnnotations: {}
    securityContext: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    readinessProbe: {}
    livenessProbe: {}
    initContainers: []
    sidecarContainers: []
    volumes: []
    volumeMounts: []
    maxRequests: 20
  agent:
    image: vesoft/nebula-agent
    version: latest
    resources:
      requests:
        cpu: "100m"
        memory: "128Mi"
      limits:
        cpu: "200m"
        memory: "256Mi"
  console:
    username: root
    password: nebula
    image: vesoft/nebula-console
    version: latest
    nodeSelector: {}
  alpineImage: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
```
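If you prefer to edit these defaults in a file rather than passing many `--set` flags, the standard Helm workflow of exporting and customizing a values file also works here (the file name `nebula-values.yaml` is just an example):

```bash
# Save the chart's default values to a local file that you can edit and reuse.
helm show values nebula-operator/nebula-cluster > nebula-values.yaml
```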
Parameter descriptions:

| Parameter | Default Value | Description |
| --- | --- | --- |
| `nebula.version` | `v3.5.0` | Version of the cluster. |
| `nebula.imagePullPolicy` | `Always` | Container image pull policy. `Always` means always attempting to pull the latest image from the remote. |
| `nebula.storageClassName` | `""` | Name of the Kubernetes storage class for dynamic provisioning of persistent volumes. |
| `nebula.enablePVReclaim` | `false` | Enable persistent volume reclaim. See Reclaim PV for details. |
| `nebula.enableBR` | `false` | Enable the backup and restore feature. See Backup and Restore with NebulaGraph Operator for details. |
| `nebula.enableForceUpdate` | `false` | Force update the Storage service without transferring the leader partition replicas. See Optimize leader transfer in rolling updates for details. |
| `nebula.schedulerName` | `default-scheduler` | Name of the Kubernetes scheduler. Must be configured as `nebula-scheduler` when using the Zone feature. |
| `nebula.topologySpreadConstraints` | `[]` | Control the distribution of pods in the cluster. |
| `nebula.logRotate` | `{}` | Log rotation configuration. See Manage cluster logs for details. |
| `nebula.reference` | `{"name": "statefulsets.apps", "version": "v1"}` | The workload referenced for a NebulaGraph cluster. |
| `nebula.graphd.image` | `vesoft/nebula-graphd` | Container image for the Graph service. |
| `nebula.graphd.replicas` | `2` | Number of replicas for the Graph service. |
| `nebula.graphd.serviceType` | `NodePort` | Service type for the Graph service, defining how the Graph service is accessed. See Connect to the Cluster for details. |
| `nebula.graphd.env` | `[]` | Container environment variables for the Graph service. |
| `nebula.graphd.config` | `{}` | Configuration for the Graph service. See Customize the configuration of the NebulaGraph cluster for details. |
| `nebula.graphd.resources` | `{"resources":{"requests":{"cpu":"500m","memory":"500Mi"},"limits":{"cpu":"1","memory":"500Mi"}}}` | Resource limits and requests for the Graph service. |
| `nebula.graphd.logVolume` | `{"logVolume": {"enable": true,"storage": "500Mi"}}` | Log storage configuration for the Graph service. When `enable` is `false`, log volume is not used. |
| `nebula.metad.image` | `vesoft/nebula-metad` | Container image for the Meta service. |
| `nebula.metad.replicas` | `3` | Number of replicas for the Meta service. |
| `nebula.metad.env` | `[]` | Container environment variables for the Meta service. |
| `nebula.metad.config` | `{}` | Configuration for the Meta service. See Customize the configuration of the NebulaGraph cluster for details. |
| `nebula.metad.resources` | `{"resources":{"requests":{"cpu":"500m","memory":"500Mi"},"limits":{"cpu":"1","memory":"1Gi"}}}` | Resource limits and requests for the Meta service. |
| `nebula.metad.logVolume` | `{"logVolume": {"enable": true,"storage": "500Mi"}}` | Log storage configuration for the Meta service. When `enable` is `false`, log volume is not used. |
| `nebula.metad.dataVolume` | `{"dataVolume": {"storage": "2Gi"}}` | Data storage configuration for the Meta service. |
| `nebula.metad.licenseManagerURL` | `""` | URL for the license manager (LM) to obtain license information. |
| `nebula.storaged.image` | `vesoft/nebula-storaged` | Container image for the Storage service. |
| `nebula.storaged.replicas` | `3` | Number of replicas for the Storage service. |
| `nebula.storaged.env` | `[]` | Container environment variables for the Storage service. |
| `nebula.storaged.config` | `{}` | Configuration for the Storage service. See Customize the configuration of the NebulaGraph cluster for details. |
| `nebula.storaged.resources` | `{"resources":{"requests":{"cpu":"500m","memory":"500Mi"},"limits":{"cpu":"1","memory":"1Gi"}}}` | Resource limits and requests for the Storage service. |
| `nebula.storaged.logVolume` | `{"logVolume": {"enable": true,"storage": "500Mi"}}` | Log storage configuration for the Storage service. When `enable` is `false`, log volume is not used. |
| `nebula.storaged.dataVolumes` | `{"dataVolumes": [{"storage": "10Gi"}]}` | Data storage configuration for the Storage service. Supports specifying multiple data volumes. |
| `nebula.storaged.enableAutoBalance` | `false` | Enable automatic balancing. See Balance storage data after scaling out for details. |
| `nebula.exporter.image` | `vesoft/nebula-stats-exporter` | Container image for the Exporter service. |
| `nebula.exporter.version` | `v3.3.0` | Version of the Exporter service. |
| `nebula.exporter.replicas` | `1` | Number of replicas for the Exporter service. |
| `nebula.exporter.env` | `[]` | Environment variables for the Exporter service. |
| `nebula.exporter.resources` | `{"resources":{"requests":{"cpu":"100m","memory":"128Mi"},"limits":{"cpu":"200m","memory":"256Mi"}}}` | Resource limits and requests for the Exporter service. |
| `nebula.agent.image` | `vesoft/nebula-agent` | Container image for the Agent service. |
| `nebula.agent.version` | `latest` | Version of the Agent service. |
| `nebula.agent.resources` | `{"resources":{"requests":{"cpu":"100m","memory":"128Mi"},"limits":{"cpu":"200m","memory":"256Mi"}}}` | Resource limits and requests for the Agent service. |
| `nebula.console.username` | `root` | Username for accessing the NebulaGraph Console client. See Connect to the cluster for details. |
| `nebula.console.password` | `nebula` | Password for accessing the NebulaGraph Console client. |
| `nebula.console.image` | `vesoft/nebula-console` | Container image for the NebulaGraph Console client. |
| `nebula.console.version` | `latest` | Version of the NebulaGraph Console client. |
| `nebula.alpineImage` | `""` | Alpine Linux container image used to obtain zone information for nodes. |
| `imagePullSecrets` | `[]` | Names of Secrets to pull private images. |
| `nameOverride` | `""` | Cluster name. |
| `fullnameOverride` | `""` | Name of the released chart instance. |
| `nebula.<graphd\|metad\|storaged\|exporter>.podLabels` | `{}` | Additional labels to be added to the pod. |
| `nebula.<graphd\|metad\|storaged\|exporter>.podAnnotations` | `{}` | Additional annotations to be added to the pod. |
| `nebula.<graphd\|metad\|storaged\|exporter>.securityContext` | `{}` | Security context for setting pod-level security attributes, including user ID, group ID, Linux Capabilities, etc. |
| `nebula.<graphd\|metad\|storaged\|exporter>.nodeSelector` | `{}` | Label selectors for determining which nodes to run the pod on. |
| `nebula.<graphd\|metad\|storaged\|exporter>.tolerations` | `[]` | Tolerations allow a pod to be scheduled to nodes with specific taints. |
| `nebula.<graphd\|metad\|storaged\|exporter>.affinity` | `{}` | Affinity rules for the pod, including node affinity, pod affinity, and pod anti-affinity. |
| `nebula.<graphd\|metad\|storaged\|exporter>.readinessProbe` | `{}` | Probe to check if a container is ready to accept service requests. When the probe returns success, traffic can be routed to the container. |
| `nebula.<graphd\|metad\|storaged\|exporter>.livenessProbe` | `{}` | Probe to check if a container is still running. If the probe fails, Kubernetes kills and restarts the container. |
| `nebula.<graphd\|metad\|storaged\|exporter>.initContainers` | `[]` | Special containers that run before the main application container starts, typically used for setting up the environment or initializing data. |
| `nebula.<graphd\|metad\|storaged\|exporter>.sidecarContainers` | `[]` | Containers that run alongside the main application container, typically used for auxiliary tasks such as log processing, monitoring, etc. |
| `nebula.<graphd\|metad\|storaged\|exporter>.volumes` | `[]` | Storage volumes to be attached to the service pod. |
| `nebula.<graphd\|metad\|storaged\|exporter>.volumeMounts` | `[]` | Specifies where to mount the storage volumes inside the container. |
- Create the NebulaGraph cluster.

You can use the `--set` flag to customize the default values of the NebulaGraph cluster configuration. For example, `--set nebula.storaged.replicas=3` sets the number of replicas for the Storage service to 3.

```bash
# --version: Specify the version of the cluster chart. If not specified, the latest version is installed by default.
#            You can check all chart versions by running: helm search repo -l nebula-operator/nebula-cluster
# --namespace: Specify the namespace for the NebulaGraph cluster.
# imagePullSecrets[0].name: Configure the Secret for pulling images from the private repository.
# nameOverride: Customize the cluster name.
# nebula.metad.licenseManagerURL: Configure the LM (License Manager) access address and port, with the default port being '9119'.
#                                 You must configure this parameter to obtain the License information.
# nebula.<graphd|metad|storaged>.image: Configure the image addresses for various services in the cluster.
# nebula.version: Specify the version for the NebulaGraph cluster.
helm install "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster \
    --version=1.8.0 \
    --namespace="${NEBULA_CLUSTER_NAMESPACE}" \
    --set imagePullSecrets[0].name="{<image-pull-secret>}" \
    --set nameOverride="${NEBULA_CLUSTER_NAME}" \
    --set nebula.metad.licenseManagerURL="192.168.8.XXX:9119" \
    --set nebula.graphd.image="<reg.example-inc.com/test/graphd-ent>" \
    --set nebula.metad.image="<reg.example-inc.com/test/metad-ent>" \
    --set nebula.storaged.image="<reg.example-inc.com/test/storaged-ent>" \
    --set nebula.storageClassName="${STORAGE_CLASS_NAME}" \
    --set nebula.version=v3.5.0
```
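As an alternative to repeating many `--set` flags, you can put the same overrides into a values file and pass it with `-f`. This is a sketch that assumes you saved and edited the defaults as `nebula-values.yaml` (shown earlier); the override keys are the same ones used above:

```bash
# Install the chart using a customized values file instead of individual --set flags.
helm install "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster \
    --version=1.8.0 \
    --namespace="${NEBULA_CLUSTER_NAMESPACE}" \
    -f nebula-values.yaml
```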
- Check the status of NebulaGraph cluster pods.

```bash
kubectl -n "${NEBULA_CLUSTER_NAMESPACE}" get pod -l "app.kubernetes.io/cluster=${NEBULA_CLUSTER_NAME}"
```

Output:

```bash
NAME                               READY   STATUS    RESTARTS   AGE
nebula-exporter-854c76989c-mp725   1/1     Running   0          14h
nebula-graphd-0                    1/1     Running   0          14h
nebula-graphd-1                    1/1     Running   0          14h
nebula-metad-0                     1/1     Running   0          14h
nebula-metad-1                     1/1     Running   0          14h
nebula-metad-2                     1/1     Running   0          14h
nebula-storaged-0                  1/1     Running   0          14h
nebula-storaged-1                  1/1     Running   0          14h
nebula-storaged-2                  1/1     Running   0          14h
```
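You can also check the overall cluster status through the NebulaCluster resource itself, in the same way as in the `kubectl apply` section above:

```bash
# The READY column turns to True once all services are up.
kubectl -n "${NEBULA_CLUSTER_NAMESPACE}" get nebulaclusters "${NEBULA_CLUSTER_NAME}"
```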