Runtime logs¶
Runtime logs help DBAs and developers locate faults when the system fails.
NebulaGraph uses glog to print runtime logs, uses gflags to control the severity level of the log, and provides an HTTP interface to dynamically change the log level at runtime to facilitate tracking.
Log directory¶
The default runtime log directory is /usr/local/nebula/logs/.
If the log directory is deleted while NebulaGraph is running, logs will no longer be written. However, this does not affect the services. To recover logging, restart the services.
Parameter descriptions¶
- `minloglevel`: Specifies the minimum log level, that is, no logs below this level will be printed. Optional values are `0` (INFO), `1` (WARNING), `2` (ERROR), and `3` (FATAL). It is recommended to set it to `0` during debugging and `1` in a production environment. If it is set to `4`, NebulaGraph will not print any logs.
- `v`: Specifies the verbosity level of the log. The larger the value, the more detailed the log. Optional values are `0`, `1`, `2`, and `3`.
The default severity level for the metad, graphd, and storaged logs can be found in their respective configuration files. The default path is /usr/local/nebula/etc/.
Check the severity level¶
Check the current values of all flags (including the log-related flags) with the following command.
$ curl <ws_ip>:<ws_port>/flags
| Parameter | Description |
|---|---|
| `ws_ip` | The IP address for the HTTP service, which can be found in the configuration files above. The default value is `127.0.0.1`. |
| `ws_port` | The port for the HTTP service, which can be found in the configuration files above. The default values are `19559` (Meta), `19669` (Graph), and `19779` (Storage). |
Examples are as follows:
- Check the current `minloglevel` in the Meta service:

  ```shell
  $ curl 127.0.0.1:19559/flags | grep 'minloglevel'
  ```

- Check the current `v` in the Storage service:

  ```shell
  $ curl 127.0.0.1:19779/flags | grep -w 'v'
  ```
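As a side note, the `-w` (whole-word) flag matters here because other flag names contain the letter `v`. A minimal offline sketch, using a hypothetical saved copy of a `/flags` response so no running service is needed:

```shell
# Hypothetical snapshot of a /flags response saved to a file.
cat > /tmp/flags_sample.txt <<'EOF'
minloglevel=0
v=0
max_log_size=1800
EOF

# Without -w, the pattern 'v' would also match any flag name that
# contains the letter v, such as 'minloglevel'; -w restricts the
# match to the standalone word.
grep -w 'v' /tmp/flags_sample.txt
```

This prints only the `v=0` line.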
Change the severity level¶
Change the severity level of the log with the following command.
$ curl -X PUT -H "Content-Type: application/json" -d '{"<key>":<value>[,"<key>":<value>]}' "<ws_ip>:<ws_port>/flags"
| Parameter | Description |
|---|---|
| `key` | The name of the flag to be changed. For optional values, see Parameter descriptions. |
| `value` | The level of the log. For optional values, see Parameter descriptions. |
| `ws_ip` | The IP address for the HTTP service, which can be found in the configuration files above. The default value is `127.0.0.1`. |
| `ws_port` | The port for the HTTP service, which can be found in the configuration files above. The default values are `19559` (Meta), `19669` (Graph), and `19779` (Storage). |
Examples are as follows:
$ curl -X PUT -H "Content-Type: application/json" -d '{"minloglevel":0,"v":3}' "127.0.0.1:19779/flags" # storaged
$ curl -X PUT -H "Content-Type: application/json" -d '{"minloglevel":0,"v":3}' "127.0.0.1:19669/flags" # graphd
$ curl -X PUT -H "Content-Type: application/json" -d '{"minloglevel":0,"v":3}' "127.0.0.1:19559/flags" # metad
If the log level is changed while NebulaGraph is running, it will be restored to the level set in the configuration file after restarting the service. To permanently modify it, see Configuration files.
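A minimal sketch of persisting the level by editing the flag in the configuration file, run here against a throwaway copy (the two-line content below is hypothetical; real configuration files contain many more flags):

```shell
# Create a throwaway sample containing the two log-related flags
# (hypothetical content for demonstration only).
cat > /tmp/nebula-graphd.conf.demo <<'EOF'
--minloglevel=0
--v=0
EOF

# Persist a new minimum log level by rewriting the flag in place.
sed -i 's/^--minloglevel=.*/--minloglevel=1/' /tmp/nebula-graphd.conf.demo
grep -- '--minloglevel' /tmp/nebula-graphd.conf.demo
```

The same edit against the real file (for example under `/usr/local/nebula/etc/`) takes effect after a service restart.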
RocksDB runtime logs¶
RocksDB runtime logs are usually used to debug RocksDB parameters and are stored in /usr/local/nebula/data/storage/nebula/$id/data/LOG, where $id is the ID of the instance.
Log recycling¶
Glog does not inherently support log recycling. To implement this feature, you can either use cron jobs in Linux to regularly remove old log files or use the log management tool, logrotate, to rotate logs for regular archiving and deletion.
Log recycling using cron jobs¶
This section provides an example of how to use cron jobs to regularly delete old log files from the Graph service's runtime logs.
-   In the Graph service configuration file, apply the following settings and restart the service:

    ```
    timestamp_in_logfile_name = true
    max_log_size = 500
    ```

    - By setting `timestamp_in_logfile_name` to `true`, the log file name includes a timestamp, allowing regular deletion of old log files.
    - The `max_log_size` parameter sets the maximum size of a single log file in MB, such as `500`. Once this size is exceeded, a new log file is automatically created. The default value is `1800`.
-   Use the following command to open the cron job editor.

    ```shell
    crontab -e
    ```
-   Add a cron job command to the editor to regularly delete old log files.

    ```shell
    * * * * * find <log_path> -name "<YourProjectName>" -mtime +7 -delete
    ```

    Caution

    The `find` command above should be executed by the root user or a user with sudo privileges.

    - `* * * * *`: This cron time field signifies that the task is executed every minute. For other settings, see Cron Expression.
    - `<log_path>`: The path of the service runtime log file, such as `/usr/local/nebula/logs`.
    - `<YourProjectName>`: The log file name, such as `nebula-graphd.*`.
    - `-mtime +7`: Deletes log files older than 7 days. Alternatively, use `-mmin +n` to delete log files older than n minutes. For details, see the find command.
    - `-delete`: Deletes the log files that meet the conditions.
    For example, to automatically delete the Graph service runtime log files older than 7 days at 3 o'clock every morning, use:

    ```shell
    0 3 * * * find /usr/local/nebula/logs -name "nebula-graphd.*" -mtime +7 -delete
    ```

-   Save the cron job and exit the editor.
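Before putting the `find` expression into a cron job, you can check that it matches what you expect by running it against dummy files in a scratch directory (the paths below are hypothetical):

```shell
mkdir -p /tmp/nebula-logs-demo
# A fresh dummy log file and one backdated by 8 days.
touch /tmp/nebula-logs-demo/nebula-graphd.INFO.new
touch -d '8 days ago' /tmp/nebula-logs-demo/nebula-graphd.INFO.old

# Only the file older than 7 days matches -mtime +7 and is deleted.
find /tmp/nebula-logs-demo -name "nebula-graphd.*" -mtime +7 -delete
ls /tmp/nebula-logs-demo
```

After the run, only `nebula-graphd.INFO.new` remains.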
Log recycling using logrotate¶
Logrotate is a tool that can rotate specified log files for archiving and recycling.
Note
You must be the root user or a user with sudo privileges to install or run logrotate.
This section provides an example of how to use logrotate to manage the Graph service's INFO level log file (/usr/local/nebula/logs/nebula-graphd.INFO.impl).
-   In the Graph service configuration file, set `timestamp_in_logfile_name` to `false` so that the logrotate tool can recognize the log file name. Then, restart the service.

    ```
    timestamp_in_logfile_name = false
    ```
-   Install logrotate.

    - For Debian/Ubuntu:

      ```shell
      sudo apt-get install logrotate
      ```

    - For CentOS/RHEL:

      ```shell
      sudo yum install logrotate
      ```
-   Create a logrotate configuration file, add log rotation rules, and save the configuration file.

    In the `/etc/logrotate.d` directory, create a new logrotate configuration file `nebula-graphd.INFO`.

    ```shell
    sudo vim /etc/logrotate.d/nebula-graphd.INFO
    ```

    Then, add the following content:
    ```
    # The absolute path of the log file needs to be configured
    # And the file name cannot be a symbolic link file, such as `nebula-graphd.INFO`
    /usr/local/nebula/logs/nebula-graphd.INFO.impl {
        daily
        rotate 2
        copytruncate
        nocompress
        missingok
        notifempty
        create 644 root root
        dateext
        dateformat .%Y-%m-%d-%s
        maxsize 1k
    }
    ```

    | Parameter | Description |
    |---|---|
    | `daily` | Rotate the log daily. Other available time units include `hourly`, `daily`, `weekly`, `monthly`, and `yearly`. |
    | `rotate 2` | Keep the most recent 2 log files before deleting the older one. |
    | `copytruncate` | Copy the current log file and then truncate it, ensuring no disruption to the logging process. |
    | `nocompress` | Do not compress the old log files. |
    | `missingok` | Do not report errors if the log file is missing. |
    | `notifempty` | Do not rotate the log file if it is empty. |
    | `create 644 root root` | Create a new log file with the specified permissions and ownership. |
    | `dateext` | Add a date extension to the log file name. The default is the current date in the format `-%Y%m%d`. You can extend this using the `dateformat` option. |
    | `dateformat .%Y-%m-%d-%s` | This must follow immediately after `dateext` and defines the file name after log rotation. Before V3.9.0, only the `%Y`, `%m`, `%d`, and `%s` parameters were supported. Starting from V3.9.0, the `%H` parameter is also supported. |
    | `maxsize 1k` | Rotate the log when it exceeds 1 kilobyte (`1024` bytes) in size or when the specified time unit (e.g., `daily`) passes. You can use size units like `k` and `M`, with the default unit being bytes. |

    Modify the parameters in the configuration file according to actual needs. For more information about parameter configuration, see logrotate.
-   Test the logrotate configuration.

    To verify whether the logrotate configuration is correct, use the following command for testing.

    ```shell
    sudo logrotate --debug /etc/logrotate.d/nebula-graphd.INFO
    ```
-   Execute logrotate.

    Although `logrotate` is typically executed automatically by cron jobs, you can manually execute the following command to perform log rotation immediately.

    ```shell
    sudo logrotate -fv /etc/logrotate.d/nebula-graphd.INFO
    ```

    - `-fv`: `f` stands for forced execution and `v` stands for verbose output.
-   Verify the log rotation results.

    After log rotation, new log files are found in the `/usr/local/nebula/logs` directory, such as `nebula-graphd.INFO.impl.2024-01-04-1704338204`. The original log content is cleared, but the file is retained for new log entries. When the number of log files exceeds the value set by `rotate`, the oldest log file is deleted.

    For example, `rotate 2` means keeping the 2 most recently generated log files. When the number of log files exceeds 2, the oldest log file is deleted.

    ```shell
    [test@test logs]$ ll
    -rw-r--r-- 1 root root    0 Jan  4 11:18 nebula-graphd.INFO.impl
    -rw-r--r-- 1 root root 6894 Jan  4 11:16 nebula-graphd.INFO.impl.2024-01-04-1704338204  # This file is deleted when a new log file is generated
    -rw-r--r-- 1 root root  222 Jan  4 11:18 nebula-graphd.INFO.impl.2024-01-04-1704338287
    [test@test logs]$ ll
    -rw-r--r-- 1 root root    0 Jan  4 11:18 nebula-graphd.INFO.impl
    -rw-r--r-- 1 root root  222 Jan  4 11:18 nebula-graphd.INFO.impl.2024-01-04-1704338287
    -rw-r--r-- 1 root root  222 Jan  4 11:18 nebula-graphd.INFO.impl.2024-01-04-1704338339  # The new log file is generated
    ```
If you need to rotate multiple log files, create multiple configuration files in the /etc/logrotate.d directory, with each configuration file corresponding to a log file. For example, to rotate the INFO level log file and the WARNING level log file of the Meta service, create two configuration files nebula-metad.INFO and nebula-metad.WARNING, and add log rotation rules in them respectively.
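For instance, a hypothetical `/etc/logrotate.d/nebula-metad.INFO` could reuse the rules from the Graph service example above; the file path and the `maxsize` value below are placeholders to adjust to your deployment:

```
/usr/local/nebula/logs/nebula-metad.INFO.impl {
    daily
    rotate 2
    copytruncate
    nocompress
    missingok
    notifempty
    create 644 root root
    dateext
    dateformat .%Y-%m-%d-%s
    maxsize 100M
}
```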