Import data from Parquet files¶
This topic provides an example of how to use Exchange to import data stored in HDFS or local Parquet files into NebulaGraph.
To import a local Parquet file to NebulaGraph, see NebulaGraph Importer.
Data set¶
This topic takes the basketballplayer dataset as an example.
Environment¶
This example is done on macOS. Here is the environment configuration information:
- Hardware specifications:
- CPU: 1.7 GHz Quad-Core Intel Core i7
- Memory: 16 GB
- Spark: 2.4.7, stand-alone
- Hadoop: 2.9.2, pseudo-distributed deployment
- NebulaGraph: 3.2.1. Deploy NebulaGraph with Docker Compose.
Prerequisites¶
Before importing data, you need to confirm the following information:
- NebulaGraph has been installed and deployed with the following information:
  - IP addresses and ports of the Graph and Meta services.
  - The user name and password with write permission to NebulaGraph.
- Spark has been installed.
- Learn about the Schema created in NebulaGraph, including names and properties of Tags and Edge types, and more.
- If files are stored in HDFS, ensure that the Hadoop service is running properly.
- If files are stored locally and NebulaGraph is a cluster architecture, you need to place the files in the same directory locally on each machine in the cluster.
Steps¶
Step 1: Create the Schema in NebulaGraph¶
Analyze the data to create a Schema in NebulaGraph by following these steps:
- Identify the Schema elements. The Schema elements in NebulaGraph are shown in the following table.

  | Element | Name | Property |
  | :--- | :--- | :--- |
  | Tag | player | name string, age int |
  | Tag | team | name string |
  | Edge type | follow | degree int |
  | Edge type | serve | start_year int, end_year int |
- Create a graph space basketballplayer in NebulaGraph and create a Schema as shown below.

  ## Create a graph space.
  nebula> CREATE SPACE basketballplayer \
      (partition_num = 10, \
      replica_factor = 1, \
      vid_type = FIXED_STRING(30));

  ## Use the graph space basketballplayer.
  nebula> USE basketballplayer;

  ## Create the Tag player.
  nebula> CREATE TAG player(name string, age int);

  ## Create the Tag team.
  nebula> CREATE TAG team(name string);

  ## Create the Edge type follow.
  nebula> CREATE EDGE follow(degree int);

  ## Create the Edge type serve.
  nebula> CREATE EDGE serve(start_year int, end_year int);
For more information, see Quick start workflow.
Step 2: Process Parquet files¶
Confirm the following information:
- Process Parquet files to meet Schema requirements, as shown in the sketch after this list.
- Obtain the Parquet file storage path.
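How the files are processed depends on your source data. As a minimal, hypothetical sketch, the following PySpark snippet writes vertex and edge Parquet files with the column names (id, name, age, src, dst, degree) that the configuration in Step 3 expects; the sample rows and HDFS paths are illustrative only and should be replaced with your own data.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("prepare-basketballplayer-parquet").getOrCreate()

# Vertex data for the player Tag: the id column becomes the VID,
# name and age map to the Tag properties.
player_df = spark.createDataFrame(
    [("player100", "Tim Duncan", 42), ("player101", "Tony Parker", 36)],
    ["id", "name", "age"],
)
player_df.write.mode("overwrite").parquet("hdfs://192.168.11.13:9000/data/vertex_player.parquet")

# Edge data for the follow Edge type: src and dst are the source and
# destination VIDs, degree maps to the Edge type property.
follow_df = spark.createDataFrame(
    [("player101", "player100", 95)],
    ["src", "dst", "degree"],
)
follow_df.write.mode("overwrite").parquet("hdfs://192.168.11.13:9000/data/edge_follow.parquet")

spark.stop()

The vertex_team.parquet and edge_serve.parquet files can be prepared in the same way, using the columns referenced in the configuration in Step 3.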
Step 3: Modify configuration files¶
After Exchange is compiled, copy the conf file target/classes/application.conf to set the Parquet data source configuration. In this example, the copied file is called parquet_application.conf. For details on each configuration item, see Parameters in the configuration file.
{
# Spark configuration
spark: {
app: {
name: Nebula Exchange 3.0.0
}
driver: {
cores: 1
maxResultSize: 1G
}
executor: {
memory:1G
}
cores: {
max: 16
}
}
# NebulaGraph configuration
nebula: {
address:{
# Specify the IP addresses and ports for Graph and all Meta services.
# If there are multiple addresses, the format is "ip1:port","ip2:port","ip3:port".
# Addresses are separated by commas.
graph:["127.0.0.1:9669"]
meta:["127.0.0.1:9559"]
}
# The account entered must have write permission for the NebulaGraph space.
user: root
pswd: nebula
# Fill in the name of the graph space you want to write data to in the NebulaGraph.
space: basketballplayer
connection: {
timeout: 3000
retry: 3
}
execution: {
retry: 3
}
error: {
max: 32
output: /tmp/errors
}
rate: {
limit: 1024
timeout: 1000
}
}
# Processing vertexes
tags: [
# Set the information about the Tag player.
{
# Specify the Tag name defined in NebulaGraph.
name: player
type: {
# Specify the data source file format to Parquet.
source: parquet
# Specifies how to import the data into NebulaGraph: Client or SST.
sink: client
}
# Specify the path to the Parquet file.
# If the file is stored in HDFS, use double quotation marks to enclose the file path, starting with hdfs://. For example, "hdfs://ip:port/xx/xx".
# If the file is stored locally, use double quotation marks to enclose the file path, starting with file://. For example, "file:///tmp/xx.parquet".
path: "hdfs://192.168.*.13:9000/data/vertex_player.parquet"
# Specify the key name in the Parquet file in fields, and its corresponding value will serve as the data source for the properties specified in the NebulaGraph.
# If multiple values need to be specified, separate them with commas.
fields: [age,name]
# Specify the property name defined in NebulaGraph.
# The sequence of fields and nebula.fields must correspond to each other.
nebula.fields: [age, name]
# Specify a column of data in the table as the source of VIDs in the NebulaGraph.
# The value of vertex must be consistent with the field in the Parquet file.
# Currently, NebulaGraph 3.2.1 supports only strings or integers of VID.
vertex: {
field:id
}
# The number of rows written to NebulaGraph in a single batch.
batch: 256
# The number of Spark partitions.
partition: 32
}
# Set the information about the Tag team.
{
# Specify the Tag name defined in NebulaGraph.
name: team
type: {
# Specify the data source file format to Parquet.
source: parquet
# Specifies how to import the data into NebulaGraph: Client or SST.
sink: client
}
# Specify the path to the Parquet file.
# If the file is stored in HDFS, use double quotation marks to enclose the file path, starting with hdfs://. For example, "hdfs://ip:port/xx/xx".
# If the file is stored locally, use double quotation marks to enclose the file path, starting with file://. For example, "file:///tmp/xx.parquet".
path: "hdfs://192.168.11.13:9000/data/vertex_team.parquet"
# Specify the key name in the Parquet file in fields, and its corresponding value will serve as the data source for the properties specified in the NebulaGraph.
# If multiple values need to be specified, separate them with commas.
fields: [name]
# Specify the property name defined in NebulaGraph.
# The sequence of fields and nebula.fields must correspond to each other.
nebula.fields: [name]
# Specify a column of data in the table as the source of VIDs in the NebulaGraph.
# The value of vertex must be consistent with the field in the Parquet file.
# Currently, NebulaGraph 3.2.1 supports only strings or integers of VID.
vertex: {
field:id
}
# The number of rows written to NebulaGraph in a single batch.
batch: 256
# The number of Spark partitions.
partition: 32
}
# If more vertexes need to be added, refer to the previous configuration to add them.
]
# Processing edges
edges: [
# Set the information about the Edge Type follow.
{
# Specify the Edge Type name defined in NebulaGraph.
name: follow
type: {
# Specify the data source file format to Parquet.
source: parquet
# Specifies how to import the data into NebulaGraph: Client or SST.
sink: client
}
# Specify the path to the Parquet file.
# If the file is stored in HDFS, use double quotation marks to enclose the file path, starting with hdfs://. For example, "hdfs://ip:port/xx/xx".
# If the file is stored locally, use double quotation marks to enclose the file path, starting with file://. For example, "file:///tmp/xx.parquet".
path: "hdfs://192.168.11.13:9000/data/edge_follow.parquet"
# Specify the key name in the Parquet file in fields, and its corresponding value will serve as the data source for the properties specified in the NebulaGraph.
# If multiple values need to be specified, separate them with commas.
fields: [degree]
# Specify the property name defined in NebulaGraph.
# The sequence of fields and nebula.fields must correspond to each other.
nebula.fields: [degree]
# Specify a column as the source for the source and destination vertexes.
# The values of source and target must be consistent with the fields in the Parquet file.
# Currently, NebulaGraph 3.2.1 supports only strings or integers of VID.
source: {
field: src
}
target: {
field: dst
}
# (Optional) Specify a column as the source of the rank.
#ranking: rank
# The number of rows written to NebulaGraph in a single batch.
batch: 256
# The number of Spark partitions.
partition: 32
}
# Set the information about the Edge type serve.
{
# Specify the Edge type name defined in NebulaGraph.
name: serve
type: {
# Specify the data source file format to Parquet.
source: parquet
# Specifies how to import the data into NebulaGraph: Client or SST.
sink: client
}
# Specify the path to the Parquet file.
# If the file is stored in HDFS, use double quotation marks to enclose the file path, starting with hdfs://. For example, "hdfs://ip:port/xx/xx".
# If the file is stored locally, use double quotation marks to enclose the file path, starting with file://. For example, "file:///tmp/xx.parquet".
path: "hdfs://192.168.11.13:9000/data/edge_serve.parquet"
# Specify the key name in the Parquet file in fields, and its corresponding value will serve as the data source for the properties specified in the NebulaGraph.
# If multiple values need to be specified, separate them with commas.
fields: [start_year,end_year]
# Specify the property name defined in NebulaGraph.
# The sequence of fields and nebula.fields must correspond to each other.
nebula.fields: [start_year, end_year]
# Specify a column as the source for the source and destination vertexes.
# The values of source and target must be consistent with the fields in the Parquet file.
# Currently, NebulaGraph 3.2.1 supports only strings or integers of VID.
source: {
field: src
}
target: {
field: dst
}
# (Optional) Specify a column as the source of the rank.
#ranking: _c5
# The number of rows written to NebulaGraph in a single batch.
batch: 256
# The number of Spark partitions.
partition: 32
}
]
# If more edges need to be added, refer to the previous configuration to add them.
}
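Before running the import, it can help to confirm that the column names referenced above in fields, vertex.field, source.field, and target.field actually exist in the Parquet files. Below is a minimal sketch, assuming the same Spark installation and the HDFS path used in this example.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("inspect-parquet").getOrCreate()

# Print the column names and types of a vertex file so they can be
# checked against fields and vertex.field in parquet_application.conf.
df = spark.read.parquet("hdfs://192.168.11.13:9000/data/vertex_player.parquet")
df.printSchema()
df.show(5)

spark.stop()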
Step 4: Import data into NebulaGraph¶
Run the following command to import Parquet data into NebulaGraph. For a description of the parameters, see Options for import.
${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange <nebula-exchange-3.0.0.jar_path> -c <parquet_application.conf_path>
Note
JAR packages are available in two ways: compile them yourself, or download the compiled .jar file directly.
For example:
${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange /root/nebula-exchange/nebula-exchange/target/nebula-exchange-3.0.0.jar -c /root/nebula-exchange/nebula-exchange/target/classes/parquet_application.conf
You can search for batchSuccess.<tag_name/edge_name> in the command output to check the number of successes. For example, batchSuccess.follow: 300.
Step 5: (optional) Validate data¶
Users can verify that data has been imported by executing a query in the NebulaGraph client (for example, NebulaGraph Studio). For example:
GO FROM "player100" OVER follow;
Users can also run the SHOW STATS command to view statistics.
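These checks can also be scripted. Below is a minimal sketch using the nebula3-python client (an assumption of this example, not something Exchange requires), with the Graph service address and account from the configuration above.

from nebula3.gclient.net import ConnectionPool
from nebula3.Config import Config

config = Config()
pool = ConnectionPool()
# Use the Graph service address configured in parquet_application.conf.
pool.init([("127.0.0.1", 9669)], config)

session = pool.get_session("root", "nebula")
session.execute("USE basketballplayer;")

# Query a few imported edges.
print(session.execute('GO FROM "player100" OVER follow;'))

# SUBMIT JOB STATS runs asynchronously; rerun SHOW STATS after the job finishes.
session.execute("SUBMIT JOB STATS;")
print(session.execute("SHOW STATS;"))

session.release()
pool.close()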
Step 6: (optional) Rebuild indexes in NebulaGraph¶
With the data imported, users can recreate and rebuild indexes in NebulaGraph. For details, see Index overview.