Import data from Hive¶
This topic provides an example of how to use Exchange to import data stored in Hive into Nebula Graph.
Data set¶
This topic takes the basketballplayer dataset as an example.
In this example, the data set has been stored in Hive. All vertices and edges are stored in the player, team, follow, and serve tables. The following are some of the data for each table.
scala> spark.sql("describe basketball.player").show
+--------+---------+-------+
|col_name|data_type|comment|
+--------+---------+-------+
|playerid| string| null|
| age| bigint| null|
| name| string| null|
+--------+---------+-------+
scala> spark.sql("describe basketball.team").show
+----------+---------+-------+
| col_name|data_type|comment|
+----------+---------+-------+
| teamid| string| null|
| name| string| null|
+----------+---------+-------+
scala> spark.sql("describe basketball.follow").show
+----------+---------+-------+
| col_name|data_type|comment|
+----------+---------+-------+
|src_player| string| null|
|dst_player| string| null|
| degree| bigint| null|
+----------+---------+-------+
scala> spark.sql("describe basketball.serve").show
+----------+---------+-------+
| col_name|data_type|comment|
+----------+---------+-------+
| playerid| string| null|
| teamid| string| null|
|start_year| bigint| null|
| end_year| bigint| null|
+----------+---------+-------+
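To reproduce this data set, tables with matching columns can be created in Hive as sketched below. This is a minimal sketch: the storage format and the way the sample rows are loaded are assumptions, not taken from the original environment.
-- Illustrative sketch: column types match the DESCRIBE output above;
-- the table format and data-loading steps are assumptions.
CREATE DATABASE IF NOT EXISTS basketball;
CREATE TABLE IF NOT EXISTS basketball.player(playerid string, age bigint, name string);
CREATE TABLE IF NOT EXISTS basketball.team(teamid string, name string);
CREATE TABLE IF NOT EXISTS basketball.follow(src_player string, dst_player string, degree bigint);
CREATE TABLE IF NOT EXISTS basketball.serve(playerid string, teamid string, start_year bigint, end_year bigint);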
Note

The Hive data type bigint corresponds to the Nebula Graph data type int.
Environment¶
This example is done on macOS. Here is the environment configuration information:
- Hardware specifications:
  - CPU: 1.7 GHz Quad-Core Intel Core i7
  - Memory: 16 GB
- Spark: 2.4.7, standalone
- Hadoop: 2.9.2, pseudo-distributed deployment
- Hive: 2.3.7, Hive Metastore database is MySQL 8.0.22
- Nebula Graph: 2.5.1 (Deploy Nebula Graph with Docker Compose)
Prerequisites¶
Before importing data, you need to confirm the following information:
- Nebula Graph has been installed and deployed with the following information:
  - IP address and port of Graph and Meta services.
  - User name and password with Nebula Graph write permission.
- Spark has been installed.
- Learn about the Schema created in Nebula Graph, including Tag and Edge type names, properties, and more.
- Hadoop has been installed and started, and the Hive Metastore database (MySQL in this example) has been started.
Steps¶
Step 1: Create the Schema in Nebula Graph¶
Analyze the data to create a Schema in Nebula Graph by following these steps:
- Identify the Schema elements. The Schema elements in the Nebula Graph are shown in the following table.

  | Element | Name | Property |
  | :--- | :--- | :--- |
  | Tag | player | name string, age int |
  | Tag | team | name string |
  | Edge Type | follow | degree int |
  | Edge Type | serve | start_year int, end_year int |
- Create a graph space basketballplayer in the Nebula Graph and create a Schema as shown below.

  ## create graph space
  nebula> CREATE SPACE basketballplayer \
      (partition_num = 10, \
      replica_factor = 1, \
      vid_type = FIXED_STRING(30));

  ## use the graph space basketballplayer
  nebula> USE basketballplayer;

  ## create Tag player
  nebula> CREATE TAG player(name string, age int);

  ## create Tag team
  nebula> CREATE TAG team(name string);

  ## create Edge type follow
  nebula> CREATE EDGE follow(degree int);

  ## create Edge type serve
  nebula> CREATE EDGE serve(start_year int, end_year int);
For more information, see Quick start workflow.
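Optionally, you can confirm that the Schema is in place before importing. This quick check is not part of the original walkthrough:
## list the Tags and Edge types in the current graph space
nebula> SHOW TAGS;
nebula> SHOW EDGES;
## inspect the properties of the player Tag
nebula> DESCRIBE TAG player;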
Step 2: Use Spark SQL to confirm Hive SQL statements¶
After the Spark-shell environment is started, run the following statements to ensure that Spark can read data in Hive.
scala> sql("select playerid, age, name from basketball.player").show
scala> sql("select teamid, name from basketball.team").show
scala> sql("select src_player, dst_player, degree from basketball.follow").show
scala> sql("select playerid, teamid, start_year, end_year from basketball.serve").show
The following is the result read from the table basketball.player.
+---------+----+-----------------+
| playerid| age| name|
+---------+----+-----------------+
|player100| 42| Tim Duncan|
|player101| 36| Tony Parker|
|player102| 33|LaMarcus Aldridge|
|player103| 32| Rudy Gay|
|player104| 32| Marco Belinelli|
+---------+----+-----------------+
...
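If these queries fail because the database or table cannot be found, the Spark session probably does not have Hive support enabled. A minimal sketch of a fix, assuming hive-site.xml is available in $SPARK_HOME/conf:
# Start spark-shell with the Hive catalog enabled.
$SPARK_HOME/bin/spark-shell --conf spark.sql.catalogImplementation=hive
Then re-run one of the checks:
scala> spark.sql("show databases").show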
Step 3: Modify configuration file¶
After Exchange is compiled, copy the configuration file target/classes/application.conf and set the Hive data source configuration in the copy. In this example, the copied file is called hive_application.conf. For details on each configuration item, see Parameters in the configuration file.
{
# Spark configuration
spark: {
app: {
name: Nebula Exchange 2.5.1
}
driver: {
cores: 1
maxResultSize: 1G
}
cores {
max: 16
}
}
# If Spark and Hive are deployed in different clusters, you need to configure the parameters for connecting to Hive. Otherwise, skip these configurations.
#hive: {
# waredir: "hdfs://NAMENODE_IP:9000/apps/svr/hive-xxx/warehouse/"
# connectionURL: "jdbc:mysql://your_ip:3306/hive_spark?characterEncoding=UTF-8"
# connectionDriverName: "com.mysql.jdbc.Driver"
# connectionUserName: "user"
# connectionPassword: "password"
#}
# Nebula Graph configuration
nebula: {
address:{
# Specify the IP addresses and ports for Graph and all Meta services.
# If there are multiple addresses, the format is "ip1:port","ip2:port","ip3:port".
# Addresses are separated by commas.
graph:["127.0.0.1:9669"]
meta:["127.0.0.1:9559"]
}
# The account entered must have write permission for the Nebula Graph space.
user: root
pswd: nebula
# Fill in the name of the graph space you want to write data to in the Nebula Graph.
space: basketballplayer
connection {
timeout: 3000
retry: 3
}
execution {
retry: 3
}
error: {
max: 32
output: /tmp/errors
}
rate: {
limit: 1024
timeout: 1000
}
}
# Processing vertices
tags: [
# Set information about Tag player.
{
# The Tag name in Nebula Graph.
name: player
type: {
# Specify the data source file format, set to Hive.
source: hive
# Specifies how to import the data into Nebula Graph: Client or SST.
sink: client
}
# Set the SQL statement to read the data of player table in basketball database.
exec: "select playerid, age, name from basketball.player"
# Specify the column names in the player table in fields, and their corresponding values are specified as properties in the Nebula Graph.
# The sequence of fields and nebula.fields must correspond to each other.
# If multiple column names need to be specified, separate them by commas.
fields: [age,name]
nebula.fields: [age,name]
# Specify a column of data in the table as the source of vertex VID in the Nebula Graph.
vertex:{
field:playerid
}
# Number of pieces of data written to Nebula Graph in a single batch.
batch: 256
# Number of Spark partitions
partition: 32
}
# Set information about Tag team.
{
name: team
type: {
source: hive
sink: client
}
exec: "select teamid, name from basketball.team"
fields: [name]
nebula.fields: [name]
vertex: {
field: teamid
}
batch: 256
partition: 32
}
]
# Processing edges
edges: [
# Set information about Edge Type follow
{
# The corresponding Edge Type name in Nebula Graph.
name: follow
type: {
# Specify the data source file format, set to Hive.
source: hive
# Specifies how to import the data into Nebula Graph: Client or SST.
sink: client
}
# Set the SQL statement to read the data of follow table in basketball database.
exec: "select src_player, dst_player, degree from basketball.follow"
# Specify the column names in the follow table in fields, and their corresponding values are specified as properties in the Nebula Graph.
# The sequence of fields and nebula.fields must correspond to each other.
# If multiple column names need to be specified, separate them by commas.
fields: [degree]
nebula.fields: [degree]
# In source, use a column in the follow table as the source of the edge's starting vertex.
source: {
field: src_player
}
# In target, use a column in the follow table as the source of the edge's destination vertex.
target: {
field: dst_player
}
# Number of pieces of data written to Nebula Graph in a single batch.
batch: 256
# Number of Spark partitions
partition: 32
}
# Set information about Edge Type serve
{
name: serve
type: {
source: hive
sink: client
}
exec: "select playerid, teamid, start_year, end_year from basketball.serve"
fields: [start_year,end_year]
nebula.fields: [start_year,end_year]
source: {
field: playerid
}
target: {
field: teamid
}
batch: 256
partition: 32
}
]
}
Step 4: Import data into Nebula Graph¶
Run the following command to import Hive data into Nebula Graph. For a description of the parameters, see Options for import.
${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange <nebula-exchange-2.5.1.jar_path> -c <hive_application.conf_path> -h
Note

JAR packages are available in two ways: compile them yourself, or download the compiled .jar file directly.
Example:
${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange /root/nebula-spark-utils/nebula-exchange/target/nebula-exchange-2.5.1.jar -c /root/nebula-spark-utils/nebula-exchange/target/classes/hive_application.conf -h
You can search for batchSuccess.<tag_name/edge_name> in the command output to check the number of successes. For example, batchSuccess.follow: 300.
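If the output is long, redirecting it to a file makes these counters easier to find. The following is a small sketch; the file name exchange.log is a hypothetical choice, not part of the original walkthrough:
# Redirect the import output to a file (exchange.log is an illustrative name), then search it.
${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange \
    /root/nebula-spark-utils/nebula-exchange/target/nebula-exchange-2.5.1.jar \
    -c /root/nebula-spark-utils/nebula-exchange/target/classes/hive_application.conf -h > exchange.log 2>&1
grep "batchSuccess" exchange.log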
Step 5: (optional) Validate data¶
Users can verify that data has been imported by executing a query in the Nebula Graph client (for example, Nebula Graph Studio). For example:
GO FROM "player100" OVER follow;
Users can also run the SHOW STATS command to view statistics.
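Note that SHOW STATS only returns results after the statistics job has run; a minimal sketch, assuming the basketballplayer space is in use:
## generate the statistics, then view them
nebula> SUBMIT JOB STATS;
nebula> SHOW STATS;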
Step 6: (optional) Rebuild indexes in Nebula Graph¶
With the data imported, users can recreate and rebuild indexes in Nebula Graph. For details, see Index overview.
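As an illustration, an index on the name property of the player Tag could be created and rebuilt as follows; the index name player_index and the prefix length 20 are arbitrary choices, not part of the original walkthrough:
## create an index (player_index is an illustrative name; 20 is an assumed prefix length)
nebula> CREATE TAG INDEX IF NOT EXISTS player_index ON player(name(20));
## rebuild the index so that the imported data is indexed
nebula> REBUILD TAG INDEX player_index;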