Back up data with BR (Enterprise Edition)¶
You can use BR (Enterprise Edition) to back up NebulaGraph data. You can perform full backups and incremental backups. You can also back up data to your local disks and to cloud storage services compatible with the Amazon S3 interface. This topic introduces how to back up NebulaGraph data.
Background¶
- A full backup is a backup of the entire data of your NebulaGraph database.
- An incremental backup covers all files that have changed or been modified since the last backup was made. The last backup can be either a full backup or an incremental backup.
- For the data directory structure of NebulaGraph, see the default path `/usr/local/nebula-ent/data`.
Notes¶
- During data backup, DDL and DML statements in all specified graph spaces are blocked. We recommend that you perform the backup during off-peak hours.
- If you back up data to a local disk, the backup files are saved in the local storage path of each server in the NebulaGraph cluster. You can also mount an NFS on your servers so that the backup data can be restored to a different server.
- A full backup may take a long time if the amount of data to be backed up is large.
- An incremental backup must be performed on the same cluster as the previous backup, and its storage path must be the same as that of the previous backup.
- Make sure that the time between each incremental backup and the last backup is less than the `wal_ttl` duration. A quick way to check this setting is shown in the sketch after this list.
- Make sure that the Agent has read and write permissions for the corresponding NebulaGraph cluster installation directory and backup directory.
- Backing up the data of the Listener is not supported.
- Backing up full-text indexes is not supported.
- Backing up the data of a specified graph space is not supported.
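Because an incremental backup relies on WAL files that have not yet expired, you may want to check the current `wal_ttl` value before deciding the backup interval. The following is a minimal sketch that assumes a default Enterprise Edition installation path; adjust the configuration file path to your deployment.

```bash
# Print the wal_ttl setting (in seconds) from the Storage service configuration.
# The path below assumes a default Enterprise Edition installation.
grep -E "^--wal_ttl" /usr/local/nebula-ent/etc/nebula-storaged.conf
```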
Prerequisites¶
- The NebulaGraph services are running.
- You have installed BR Enterprise Edition and Agent, and have started the Agent on each machine where it is installed.
- You have created a directory with the same absolute path on the machines where the meta service, storage service, and the BR Enterprise Edition tool are located. Make sure your account has write privileges for this directory.
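Before running BR, you can quickly verify these prerequisites on each machine. The following is a minimal sketch; the service script path, the backup directory `/backup`, and the Agent process name are assumptions and should be adapted to your environment.

```bash
# Check that the NebulaGraph services are running
# (path assumed for a default Enterprise Edition installation).
sudo /usr/local/nebula-ent/scripts/nebula.service status all

# Check that the Agent process is running on this machine (process name assumed).
pgrep -a agent

# Create the backup directory with the same absolute path on every machine
# (/backup is only an example) and confirm that your account can write to it.
sudo mkdir -p /backup && sudo chown "$(whoami)" /backup
test -w /backup && echo "/backup is writable"
```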
Full backups¶
To a cloud storage service¶
Note
You can back up data to the cloud storage services that are compatible with the Amazon S3 interface.
In the installation directory of the BR Enterprise Edition tool, run the following command to back up all the data of your NebulaGraph database to a cloud storage service:
./br backup full --meta <ip_address:port> --s3.access_key <access_key> --s3.secret_key <secret_key> --s3.region <region_name> --storage s3://<storage_path> --s3.endpoint <endpoint_url>
For example, perform a full backup of the entire cluster where one of the meta service addresses is `192.168.8.129:9559`, and back up data to the `/` directory in the `nebula-br-test` bucket of Amazon S3:
./br backup full --meta 192.168.8.129:9559 --s3.access_key QImbbGDjfQExxx --s3.secret_key dVSJZfl7tnoFq7Z5zt6sfxxxx --s3.region us-east-1 --storage s3://nebula-br-test/ --s3.endpoint http://192.168.8.xxx:9000/
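To confirm that the backup files have reached the bucket, you can list it with any S3-compatible client. The following sketch uses the AWS CLI against the same endpoint and bucket as the example above, and assumes the AWS CLI is installed and configured with the same credentials.

```bash
# List the objects that BR uploaded to the nebula-br-test bucket.
aws s3 ls s3://nebula-br-test/ --recursive --endpoint-url http://192.168.8.xxx:9000/
```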
To a local disk¶
Caution
In a production environment, we recommend that you mount NFS (Network File System) on the machines where the Meta service, the Storage service, and the BR Enterprise Edition tool are located for local backups, or use cloud storage such as Alibaba Cloud OSS or Amazon S3 for backups. Otherwise, you must manually move the backup files to the specified directories before restoring the data.
For local backups, only the data of the leader metad and the leader partitions is backed up. When a shared storage service is not used and the cluster has multiple metads, or the partition replica number is greater than 1, you need to manually copy the directory of the backed-up leader metad (`<storage_path>/meta`) to overwrite the corresponding directories of the other follower metads, and copy the corresponding partition data from the directories of the leader partitions (`<storage_path>/<partition_id>`) to the corresponding directories of the other follower partitions. Performing this copy operation manually is not recommended.
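If you do need to perform the copy described above, the following is only a hypothetical illustration: the storage path `/backup`, the follower host names `metad2` and `storaged2`, and the graph space and partition IDs are assumptions, and the actual directories depend on your backup layout.

```bash
# Copy the backed-up leader metad directory to a follower metad host,
# overwriting its corresponding directory (host and path are assumptions).
rsync -av /backup/meta/ metad2:/backup/meta/

# Copy the backup of a leader partition (graph space 1, partition 1 as an example)
# to the corresponding directory on a follower storaged host.
rsync -av /backup/data/1/1/ storaged2:/backup/data/1/1/
```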
In the installation directory of the BR Enterprise Edition tool, run the following command to back up all the data of your NebulaGraph database to a local disk:
Note
Make sure that you have created the directory where the data will be backed up.
./br backup full --meta <ip_address:port> --storage local://<storage_path>
For example, perform a full backup of the entire cluster where one of the meta server addresses is `192.168.8.129:9559`, and back up data to the `/backup/` directory on the local disk:
./br backup full --meta "192.168.8.129:9559" --storage "local:///backup/"
Directory structure¶
Full backups back up all the data of the leader metad and leader partitions. The structure of a backed-up directory is as follows:
├── BACKUP_2022_08_12_08_41_19.meta(Backup of metadata, such as host information and the storage path of partitions)
├── data(Backup of the storage data)
│ ├── 1(Graph space ID)
│ │ ├── 1(Partition ID)
│ │ │ ├── data
│ │ │ │ ├── 000033.sst
│ │ │ │ ├── 000035.sst
│ │ │ │ ├── 000037.sst
│ │ │ │ ├── 000039.sst
│ │ │ │ ├── CURRENT
│ │ │ │ ├── MANIFEST-000004
│ │ │ │ └── OPTIONS-000007
│ │ │ └── wal
│ │ │ ├── 0000000000000000001.wal
│ │ │ ├── 0000000000000000267.wal
│ │ │ ├── 0000000000000000324.wal
│ │ │ └── commitlog.id
│ │ ├── 10(Partition ID)
...
└── meta(Backup of the meta data)
...
Incremental backups¶
Caution
Before performing an incremental backup, make sure that the data of the previous full backup or incremental backup is not changed. Otherwise, the incremental backup may fail.
To a cloud storage service¶
Note
You can back up data to the cloud storage services that are compatible with the Amazon S3 interface.
In the installation directory of the BR Enterprise Edition tool, run the following command to back up the incremental data of your NebulaGraph database to a cloud storage service:
./br backup incr --meta <ip_address:port> --s3.access_key <access_key> --s3.secret_key <secret_key> --s3.region <region_name> --storage s3://<storage_path> --s3.endpoint <endpoint_url> --base <backup_file_name>
For example, based on the file `BACKUP_2022_08_11_09_11_07`, perform an incremental backup for the cluster where one of the meta server addresses is `192.168.8.129:9559`, and back up data to the `/` directory in the `nebula-br-test` bucket of Amazon S3:
./br backup incr --meta 192.168.8.129:9559 --s3.access_key QImbbGDjfQExxx --s3.secret_key dVSJZfl7tnoFq7Z5zt6sfxxxx --s3.region us-east-1 --storage s3://nebula-br-test/ --s3.endpoint http://192.168.8.xxx:9000/ --base BACKUP_2022_08_11_09_11_07
To a local disk¶
In the installation directory of the BR Enterprise Edition tool, run the following command to back up the incremental data of your NebulaGraph database to a local disk:
Note
Make sure that the local path where the backup file is stored exists.
./br backup incr --meta <ip_address:port> --storage local://<storage_path> --base <backup_file_name>
For example, based on the file `BACKUP_2022_08_11_09_11_07`, perform an incremental backup for the cluster where one of the meta server addresses is `192.168.8.129:9559`, and back up data to the `/backup/` directory on the local disk:
./br backup incr --meta "192.168.8.129:9559" --storage "local:///backup/" --base BACKUP_2022_08_11_09_11_07
Directory structure¶
In addition to the leader meta data, an incremental backup backs up only the `wal` directory of the leader partitions of an existing graph space (graph space ID `1` in the following code block); for the leader partitions of a newly added graph space (graph space ID `4` in the following code block), both the `data` and `wal` directories are backed up in full. Therefore, the directory structure of an incremental backup may differ from that of a full backup.
The structure of an incremental backup is as follows:
├── BACKUP_2022_08_12_08_58_23.meta(Backup of metadata, such as host information and the storage path of partitions)
├── data(Backup of the storage data)
│ ├── 1 (Graph space ID)
│ │ ├── 1 (Partition ID)
│ │ │ └── wal (wal directory)
│ │ │ ├── 0000000000000000671.wal
│ │ │ ├── 0000000000000000700.wal
│ │ │ └── commitlog.id
...
│ └── 4 (Graph space ID)
│ ├── 1
│ │ ├── data(data directory)
│ │ │ ├── 000009.sst
│ │ │ ├── CURRENT
│ │ │ ├── MANIFEST-000004
│ │ │ └── OPTIONS-000007
│ │ └── wal (wal directory)
│ │ ├── 0000000000000000001.wal
│ │ └── commitlog.id
└── meta(Backup of the meta data)
...
Options¶
| Option | Data type | Required | Default value | Description |
| --- | --- | --- | --- | --- |
| `-h`, `--help` | - | No | None | View more information about BR commands. |
| `--debug` | - | No | None | View more information for BR logs. |
| `--log` | string | No | `"br.log"` | The path of BR logs. |
| `--concurrency` | int | No | None | Used to control the number of concurrent file uploads during data backup. The default value is `5`. |
| `--meta` | string | Yes | None | The IP address and port number of any Meta service in the cluster. |
| `--storage` | string | Yes | None | The storage path of the data to be backed up, in the form `<schema>://<storage_path>`.<br>`schema`: Two options, `s3` and `local`.<br>When `schema` is `s3`, you also need to set the `s3.access_key`, `s3.endpoint`, `s3.region`, and `s3.secret_key` options.<br>`<storage_path>`: The storage path. |
| `--s3.access_key` | string | No | None | The Access Key ID that is used to identify a user. |
| `--s3.endpoint` | string | No | None | The S3 endpoint URL. You need to specify the HTTP or HTTPS scheme explicitly. |
| `--s3.region` | string | No | None | The physical location of a data center. |
| `--s3.secret_key` | string | No | None | The Access Key Secret that is used to verify the identity of the user. |
| `--base` | string | Yes | None | The name of the directory of any previous backup. The incremental backup is performed based on this backup. |
View backup progress¶
You can view the BR log file `br.log` in the installation directory. The log file records the backup progress and looks like the following:
{"level":"info","msg":"full upload storaged partition finished, progress: 1/20","time":"2023-03-15T02:13:20.946Z"}
{"level":"info","msg":"full upload storaged partition finished, progress: 2/20","time":"2023-03-15T02:13:21.154Z"}
{"level":"info","msg":"full upload storaged partition finished, progress: 3/20","time":"2023-03-15T02:13:21.537Z"}
What is next?¶
After your NebulaGraph data is backed up, you can restore the data to NebulaGraph. For details, see Restore data.
Caution
DO NOT change the name and path of the backed-up files. Otherwise, data restoration will fail.