Frequently Asked Questions

Common questions about Nebula Graph and more. If you do not find the information you need in this documentation, please try searching the Users tab in the Nebula Graph official forum.

General Information

The General Information section lists conceptual questions about Nebula Graph.

Explanation of the Time Returned in Queries

nebula> GO FROM 101 OVER follow
===============
| follow._dst |
===============
| 100         |
---------------
| 102         |
---------------
| 125         |
---------------
Got 3 rows (Time spent: 7431/10406 us)

Take the above query as an example. The first number in Time spent, 7431, is the time spent by the database itself, that is, the time it takes the query engine to receive the query from the console, fetch the data from the storage service, and perform a series of calculations. The second number, 10406, is the time spent from the client's perspective, that is, the time from when the console sends the request until it receives the response and displays the result on the screen.
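The difference between the two numbers is the overhead outside the database, mainly the network round trip and the console's own processing. A quick check with shell arithmetic, using the numbers from the example above:

```shell
# Database-side and client-side times from the example, in microseconds
db_us=7431
client_us=10406
# The gap is network transfer plus console-side processing
echo $((client_us - db_us))   # prints 2975
```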

Troubleshooting

The Troubleshooting section lists common operation errors in Nebula Graph.

Server Parameter Configuration

In Nebula console, run

nebula> SHOW CONFIGS;

For configuration details, please see here.

Where the Configuration Files Are Stored

Configuration files are stored under /usr/local/nebula/etc/ by default.

Unbalanced Partitions

See Storage Balance.

Log and Changing Log Levels

Logs are stored under /usr/local/nebula/logs/ by default.

See graphd Logs and storaged Logs.

Using Multiple Hard Disks

Modify /usr/local/nebula/etc/nebula-storage.conf. For example:

--data_path=/disk1/storage/,/disk2/storage/,/disk3/storage/

When multiple hard disks are used, separate the directories with commas; each directory corresponds to a RocksDB instance, which improves concurrency. See here for details.

Process Crash

  1. Check disk space with df -h.

    If there is not enough disk space, the service fails to write files and crashes. Use the above command to check the current disk usage, and check whether the directory configured by --data_path is full.

  2. Check memory usage with free -h.

    If the service uses too much memory, it is killed by the system. Use dmesg to check whether there is an OOM record and whether the keyword nebula appears in it.

  3. Check logs.
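The three checks above can be combined into a quick diagnostic sketch. The data path /usr/local/nebula/data below is an assumption; substitute the directory from your --data_path setting:

```shell
# 1. Disk space of the data path (falls back to the root filesystem)
df -h /usr/local/nebula/data 2>/dev/null || df -h /
# 2. Current memory usage
free -h
# 3. Look for OOM kills mentioning nebula in the kernel log
dmesg 2>/dev/null | grep -iE 'out of memory|nebula' || echo "no OOM records found"
```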

Errors Thrown When Executing Command in Docker

This is likely caused by an inconsistency between the Docker container IP and the default listening address (172.17.0.2). Thus we need to change the latter.

  1. First run ifconfig in the container to check your container IP. Here we assume your IP is 172.17.0.3.
  2. In the directory /usr/local/nebula/etc, find all the config files containing the default IP address with the command grep "172.17.0.2" . -r.
  3. Change all the IPs you find in step 2 to your container IP 172.17.0.3.
  4. Restart all the services.
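The four steps above can be scripted as follows. This is a sketch only: 172.17.0.3 stands in for whatever IP ifconfig reported in your container, and the paths assume the default installation directory:

```shell
OLD_IP=172.17.0.2   # the default listening address
NEW_IP=172.17.0.3   # replace with your container IP from ifconfig
CONF_DIR=/usr/local/nebula/etc
# Find every config file containing the old IP and rewrite it in place
grep -rl "$OLD_IP" "$CONF_DIR" 2>/dev/null | xargs -r sed -i "s/$OLD_IP/$NEW_IP/g"
# Restart all services so the new address takes effect
if [ -x /usr/local/nebula/scripts/nebula.service ]; then
  /usr/local/nebula/scripts/nebula.service stop all
  /usr/local/nebula/scripts/nebula.service start all
fi
```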

Adding Two Clusters on a Single Host

When the same host is used for both a single-host test and a cluster test, the storaged service cannot start normally, and the listening port of the storaged service is shown in red in the console.

Check the logs of the storaged service (/usr/local/nebula/nebula-storaged.ERROR). If you find the "wrong cluster" error message, the possible cause is that the cluster IDs generated by Nebula Graph during the single-host test and the cluster test are inconsistent. You need to delete the cluster.id file under the installation directory (/usr/local/nebula) and the data directory, then restart the service.
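The fix can be sketched as follows. The data directory below is an assumption; substitute the directory from your --data_path setting:

```shell
# Remove the stale cluster IDs from the installation and data directories
rm -f /usr/local/nebula/cluster.id
rm -f /usr/local/nebula/data/cluster.id   # data directory path is an assumption
# Then restart the services, e.g.:
#   /usr/local/nebula/scripts/nebula.service stop all
#   /usr/local/nebula/scripts/nebula.service start all
```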

Connection Refused

E1121 04:49:34.563858   256 GraphClient.cpp:54] Thrift rpc call failed: AsyncSocketException: connect failed, type = Socket not open, errno = 111 (Connection refused): Connection refused

Check the service status with:

$ /usr/local/nebula/scripts/nebula.service status all

Could not create logging file:... Too many open files

  1. Check your disk space with df -h.
  2. Check the log directory /usr/local/nebula/logs/.
  3. Reset your maximum number of open files with ulimit -n 65536.
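Note that ulimit -n only affects the current shell session. A minimal sketch of checking and raising the limit; making the change permanent via /etc/security/limits.conf is a general Linux convention, not Nebula-specific:

```shell
# Show the current per-process open-file limit
ulimit -n
# Raise it for this session; this fails if the hard limit is lower
ulimit -n 65536 2>/dev/null || echo "raise the hard limit first (e.g. in /etc/security/limits.conf)"
ulimit -n
```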

How to Check Nebula Graph Version

Use the command curl http://ip:port/status to obtain git_info_sha, i.e., the commit ID of the binary package.

Modifying the Configuration File Does not Take Effect

Nebula Graph uses the following two methods to obtain configurations:

  1. From the configuration files (you need to modify the files and then restart the services);
  2. From the Meta service. Configurations are set via the CLI and persisted in the Meta service. Please refer to the Configs Syntax for details.

Modifying the configuration file does not take effect because, by default, Nebula Graph gets configurations via the second method (from the Meta service). If you want to use the first method, add the --local_config=true option to the flag files metad.conf, storaged.conf, and graphd.conf (the flag files directory is /home/user/nebula/build/install/etc) respectively.
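A sketch of adding the flag to all three files at once. The directory /usr/local/nebula/etc is an assumption; substitute your actual flag files directory:

```shell
CONF_DIR=/usr/local/nebula/etc
for f in metad.conf storaged.conf graphd.conf; do
  path="$CONF_DIR/$f"
  [ -f "$path" ] || continue
  # Append the flag only if it is not already present
  grep -q -- '--local_config' "$path" || echo '--local_config=true' >> "$path"
done
# Restart the services afterwards for the flag to take effect
```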

Modifying the RocksDB Block Cache

Modify the storage layer's configuration file storaged.conf (the default directory is /usr/local/nebula/etc/; yours may be different) and restart the service. For example:

# Change rocksdb_block_cache to 1024 MB
--rocksdb_block_cache=1024
# Stop storaged and restart
/usr/local/nebula/scripts/nebula.service stop storaged
/usr/local/nebula/scripts/nebula.service start storaged

See here for details.

Nebula fails on CentOS 6.5

Nebula Graph fails to start on CentOS 6.5. The error message is as follows:

# storage log
Heartbeat failed, status:RPC failure in MetaClient: N6apache6thrift9transport19TTransportExceptionE: AsyncSocketException: connect failed, type = Socket not open, errno = 111 (Connection refused): Connection refused

# meta log
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0415 22:32:38.944437 15532 AsyncServerSocket.cpp:762] failed to set SO_REUSEPORT on async server socket Protocol not available
E0415 22:32:38.945001 15510 ThriftServer.cpp:440] Got an exception while setting up the server: 92failed to bind to async server socket: [::]:0: Protocol not available
E0415 22:32:38.945057 15510 RaftexService.cpp:90] Setup the Raftex Service failed, error: 92failed to bind to async server socket: [::]:0: Protocol not available
E0415 22:32:38.949586 15463 NebulaStore.cpp:47] Start the raft service failed
E0415 22:32:38.949597 15463 MetaDaemon.cpp:88] Nebula store init failed
E0415 22:32:38.949796 15463 MetaDaemon.cpp:215] Init kv failed!

Nebula service status is as follows:

[root@redhat6 scripts]# ./nebula.service status  all
[WARN] The maximum files allowed to open might be too few: 1024
[INFO] nebula-metad: Exited
[INFO] nebula-graphd: Exited
[INFO] nebula-storaged: Running as 15547, Listening on 44500

Reason for the error: the CentOS 6.5 kernel version is 2.6.32, which is lower than 3.9, while SO_REUSEPORT is only supported on Linux kernel 3.9 and above.

Upgrading the system to CentOS 7.5 solves the problem.

The Precedence Between max_edge_returned_per_vertex and WHERE

If max_edge_returned_per_vertex is set to 10 and the query also filters with WHERE, how many edges are returned if the actual number of edges is greater than 10?

Nebula Graph applies the WHERE condition filter first. If the number of edges after filtering is less than 10, the actual number of edges is returned; otherwise, 10 edges are returned, as capped by the max_edge_returned_per_vertex value.
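The filter-then-truncate order can be modeled with plain shell, using numbers to stand in for edges that pass the WHERE filter (seq and head only illustrate the behavior; they are not Nebula internals):

```shell
# 15 edges survive the WHERE filter, but max_edge_returned_per_vertex is 10:
seq 1 15 | head -n 10 | wc -l   # prints 10
# Only 6 edges survive the filter, so all 6 come back:
seq 1 6 | head -n 10 | wc -l    # prints 6
```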

FETCH Doesn't Work Sometimes

When using FETCH to return data, sometimes the corresponding data is returned, but sometimes it is not.

If you encounter this situation, check whether you are running two storage services on the same node with the same port. If so, modify one of the storage ports and import the data again.

Do sst Files Support Migration

The sst files used by storaged are bound to the graph space in the cluster. Thus, you cannot copy sst files directly to a new graph space or a new cluster.