
Installing Derby
Hive works by leveraging the MapReduce framework: it uses table definitions and schemas to generate the mappers and reducers for the MapReduce jobs that run behind the scenes. To maintain this metadata about the data, Hive uses Derby, an easy-to-use database that acts as its metastore. In this section, we will look at installing Derby, which can be downloaded from https://db.apache.org/derby/derby_downloads.html, for use with our Hive installation.
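Before going through the steps, it is worth seeing what "using Derby for the metadata" looks like from Hive's side. The following is only an illustrative sketch, not one of the installation steps: it writes a hive-site.xml (under $HIVE_HOME/conf, assuming HIVE_HOME is exported as in the steps below) that points Hive's metastore at a Derby network server. The host, port, database name (metastore_db), and the default APP/mine credentials are assumptions for a local setup and may differ in your environment:

cat > $HIVE_HOME/conf/hive-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <!-- JDBC URL of the Derby network server started later in this section;
       metastore_db is an assumed database name -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby://localhost:1527/metastore_db;create=true</value>
  </property>
  <!-- Derby network client driver, provided by derbyclient.jar -->
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.apache.derby.jdbc.ClientDriver</value>
  </property>
  <!-- Derby's default schema/user; adjust if you enable authentication -->
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>APP</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>mine</value>
  </property>
</configuration>
EOF

With that context in mind, the installation steps are as follows: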

- Extract Derby using a command, as shown in the following code:
tar -xvzf db-derby-10.14.1.0-bin.tar.gz
- Then, set the environment variables for Hadoop, Hive, and Derby, create a directory named data inside the Derby directory, and copy the Derby client JARs into Hive's lib directory. There are several commands to run, so all of them are listed in the following code:
export HIVE_HOME=<YOURDIRECTORY>/apache-hive-2.3.3-bin
export HADOOP_HOME=<YOURDIRECTORY>/hadoop-3.1.0
export DERBY_HOME=<YOURDIRECTORY>/db-derby-10.14.1.0-bin
export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin:$DERBY_HOME/bin
mkdir $DERBY_HOME/data
cp $DERBY_HOME/lib/derbyclient.jar $HIVE_HOME/lib
cp $DERBY_HOME/lib/derbytools.jar $HIVE_HOME/lib
- Now, start up the Derby server using a simple command, as shown in the following code:
nohup startNetworkServer -h 0.0.0.0 &
- Once this is done, you have to initialize the Hive metastore schema in Derby (optional commands to verify the server and schema follow these steps):
schematool -dbType derby -initSchema --verbose
- Now, you are ready to open the Hive console (a quick smoke test of the installation appears after these steps):
hive
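As an optional check, and not part of the original steps, you can confirm that the Derby network server is listening and that the metastore schema was created. The NetworkServerControl script ships with Derby and schematool with Hive (both are already on the PATH set earlier); localhost and port 1527 are the defaults assumed here:

# Ping the Derby network server started with startNetworkServer
NetworkServerControl ping -h localhost -p 1527
# Print the metastore schema version recorded by schematool -initSchema
schematool -dbType derby -info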
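Finally, as a quick smoke test that the console can reach the metastore, you can run a few statements non-interactively with hive -e; the table name test_table is just an example and is dropped immediately afterwards:

# Create a throwaway table, list tables, then clean up
hive -e "CREATE TABLE IF NOT EXISTS test_table (id INT, name STRING);"
hive -e "SHOW TABLES;"
hive -e "DROP TABLE IF EXISTS test_table;"

If these commands succeed, the Derby-backed metastore is working and Hive is ready to use.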
