Installing Hive 3.1.2 on a Hadoop 3.1.2 Cluster (with quite a few pitfalls)


  1. Prerequisite: a Hadoop cluster with HDFS, MapReduce, and YARN already installed and running

    Link: ubuntu18.04.2 hadoop3.1.2 + zookeeper3.5.5 high-availability fully distributed cluster setup

  2. Upload the tar package and extract it to the target directory:

    tar -zxvf apache-hive-3.1.2-bin.tar.gz -C /opt/ronnie
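
    So that later commands such as schematool and hive can be run directly, it is presumably also worth putting Hive on the PATH; a minimal sketch, assuming root's ~/.bashrc and the directory layout used in this post:

    # Append HIVE_HOME and the bin directory to the shell profile (paths follow this post)
    echo 'export HIVE_HOME=/opt/ronnie/hive-3.1.2' >> ~/.bashrc
    echo 'export PATH=$PATH:$HIVE_HOME/bin' >> ~/.bashrc
    # Reload the profile so the current session picks up the new PATH
    source ~/.bashrc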
    
  3. Modify the Hive configuration files:

    • Create the directories

      mkdir /opt/ronnie/hive-3.1.2/warehouse
      hadoop fs -mkdir -p /opt/ronnie/hive-3.1.2/warehouse
      hadoop fs -chmod 777 /opt/ronnie/hive-3.1.2/warehouse
      hadoop fs -ls /opt/ronnie/hive-3.1.2/
      
    • Copy the configuration file templates

      cd /opt/ronnie/hive-3.1.2/conf
      cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties
      cp hive-log4j2.properties.template hive-log4j2.properties
      cp hive-default.xml.template hive-default.xml
      cp hive-default.xml.template hive-site.xml
      cp hive-env.sh.template hive-env.sh
      
    • Edit the environment configuration file

    vim hive-env.sh
    
    HADOOP_HOME=/opt/ronnie/hadoop-3.1.2
    export HIVE_CONF_DIR=/opt/ronnie/hive-3.1.2/conf
    export HIVE_AUX_JARS_PATH=/opt/ronnie/hive-3.1.2/lib
    
    • Edit hive-site.xml with vim

      • First, a quick review of a few vim operations (since this file is very long...):

        • gg: jump to the top of the file

        • G: jump to the end of the file

        • :22,6918d (the line-deletion performed here, clearing out the copied default property definitions)
          
        • Set the configuration parameters:

          <configuration>
            <property>
              <name>javax.jdo.option.ConnectionUserName</name>
              <value>root</value>
            </property>
            <property>
              <name>javax.jdo.option.ConnectionPassword</name>
              <!-- your MySQL database password -->
              <value>xxxxxxx</value>
            </property>
            <property>
              <name>javax.jdo.option.ConnectionURL</name>
              <value>jdbc:mysql://192.168.180.130:3306/hive?allowMultiQueries=true&amp;useSSL=false&amp;verifyServerCertificate=false</value>
            </property>
            <property>
              <name>javax.jdo.option.ConnectionDriverName</name>
              <value>com.mysql.jdbc.Driver</value>
            </property>
            <property>
              <name>datanucleus.readOnlyDatastore</name>
              <value>false</value>
            </property>
            <property>
              <name>datanucleus.fixedDatastore</name>
              <value>false</value>
            </property>
            <property>
              <name>datanucleus.autoCreateSchema</name>
              <value>true</value>
            </property>
            <property>
              <name>datanucleus.autoCreateTables</name>
              <value>true</value>
            </property>
            <property>
              <name>datanucleus.autoCreateColumns</name>
              <value>true</value>
            </property>
          </configuration>
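
        • Since hive-site.xml is now hand-edited, a quick well-formedness check can save a confusing startup error later; a small sketch, assuming xmllint (from libxml2-utils) is installed:

          # Verify the edited file is still well-formed XML
          xmllint --noout /opt/ronnie/hive-3.1.2/conf/hive-site.xml && echo "hive-site.xml OK"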
          
          
  4. Download the JDBC driver

    cd /home/ronnie/soft
    wget http://mirrors.163.com/mysql/Downloads/Connector-J/mysql-connector-java-5.1.48.tar.gz
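
    The download only fetches the archive; presumably the connector jar still has to be unpacked and dropped onto Hive's classpath so com.mysql.jdbc.Driver can be loaded. A sketch using the paths from this post (the jar name inside the archive may differ slightly):

    cd /home/ronnie/soft
    tar -zxvf mysql-connector-java-5.1.48.tar.gz
    # Copy the connector jar into Hive's lib directory (also the HIVE_AUX_JARS_PATH set above)
    cp mysql-connector-java-5.1.48/mysql-connector-java-5.1.48.jar /opt/ronnie/hive-3.1.2/lib/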
    
  5. MySQL setup

    • Install:

      sudo apt-get install mysql-server
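
    • Verify the service came up (just a sanity check; on Ubuntu 18.04 the service unit is named mysql):

      # Confirm MySQL is active after installation
      sudo systemctl status mysql --no-pager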
      
    • Run mysql -uroot to enter the MySQL shell (on Ubuntu, MySQL starts automatically on boot after installation; on CentOS you also need to run service mysqld start first)

    • Change the password:

      • Check the users and passwords:

        • Older versions:

          use mysql;
          select host,user,password from mysql.user;
          
        • I'm using version 5.7:

          use mysql;
          select user, host, authentication_string from user;
          
      • Set a new password

        update mysql.user set authentication_string=PASSWORD('your-new-password') where user='root';
        flush privileges;
        
    • There is a huge pitfall here, reported when initializing the metastore database:

      org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
      Underlying cause: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException : Communications link failure
      
      • It's a connection problem, but running grant all on hive.* to root@'%' identified by 'xxxxxx'; several times didn't help.

        mysql> select user, authentication_string, host from user;
        +------------------+-------------------------------------------+-----------+
        | user             | authentication_string                     | host      |
        +------------------+-------------------------------------------+-----------+
        | root             |                                           | localhost |
        | mysql.session    | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | localhost |
        | mysql.sys        | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | localhost |
        | debian-sys-maint | *19A653DDEEC19D326E8DFA1A3D00E26C16438DD8 | localhost |
        | root             | *A63376A449EDC1A66FEFBC77E645D70EF6941893 | %         |
        +------------------+-------------------------------------------+-----------+
        
        
        • There is a duplicate root user; delete it, then change the remaining root's host directly to %:

          delete from user where host = '%';
          mysql> select user, authentication_string, host from user;
          +------------------+-------------------------------------------+-----------+
          | user             | authentication_string                     | host      |
          +------------------+-------------------------------------------+-----------+
          | root             |                                           | localhost |
          | mysql.session    | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | localhost |
          | mysql.sys        | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | localhost |
          | debian-sys-maint | *19A653DDEEC19D326E8DFA1A3D00E26C16438DD8 | localhost |
          +------------------+-------------------------------------------+-----------+
          
          mysql> update user set host='%' where user = 'root';
          Query OK, 1 row affected (0.00 sec)
          Rows matched: 1  Changed: 1  Warnings: 0
          
          mysql> flush privileges;
          Query OK, 0 rows affected (0.00 sec)
          
          mysql> select user, authentication_string, host from user;
          +------------------+-------------------------------------------+-----------+
          | user             | authentication_string                     | host      |
          +------------------+-------------------------------------------+-----------+
          | root             |                                           | %         |
          | mysql.session    | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | localhost |
          | mysql.sys        | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | localhost |
          | debian-sys-maint | *19A653DDEEC19D326E8DFA1A3D00E26C16438DD8 | localhost |
          +------------------+-------------------------------------------+-----------+
          
          
          
        • Restart the service:

          service mysql restart
          
        • Still getting the error... A remote Navicat connection also failed, reporting error 1251.

        • vim /etc/mysql/mysql.conf.d/mysqld.cnf and change bind-address to 0.0.0.0

        • Still failing. The final solution:

          ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'your-password';
          # remember to flush afterwards
          FLUSH PRIVILEGES;
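
        • It can also help to confirm remote access from the Hive node before rerunning schematool; a sketch, using the MySQL host configured in hive-site.xml:

          # Remote login test from the Hive node against the MySQL host used in hive-site.xml
          mysql -h 192.168.180.130 -uroot -p -e "SELECT 1;"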
          
      • After that, Navicat connected, and the schema initialization succeeded:

        root@node02:~# schematool  -initSchema -dbType mysql
        SLF4J: Class path contains multiple SLF4J bindings.
        SLF4J: Found binding in [jar:file:/opt/ronnie/hive-3.1.2/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
        SLF4J: Found binding in [jar:file:/opt/ronnie/hadoop-3.1.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
        SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
        SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
        Metastore connection URL:	 jdbc:mysql://192.168.180.131:3306/hive?allowMultiQueries=true&useSSL=false&verifyServerCertificate=false
        Metastore Connection Driver :	 com.mysql.jdbc.Driver
        Metastore connection User:	 root
        Starting metastore schema initialization to 3.1.0
        Initialization script hive-schema-3.1.0.mysql.sql
        
        
      • Checking MySQL, the metastore tables were created successfully.

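      • A quick way to double-check this from the shell (the database name comes from the JDBC URL above, credentials are the ones set earlier):

        # List a few of the freshly created metastore tables in the hive database
        mysql -uroot -p -e "USE hive; SHOW TABLES;" | head -n 20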

  6. Start Hive

      root@node02:~# hive
      SLF4J: Class path contains multiple SLF4J bindings.
      SLF4J: Found binding in [jar:file:/opt/ronnie/hive-3.1.2/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
      SLF4J: Found binding in [jar:file:/opt/ronnie/hadoop-3.1.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
      SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
      SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
      SLF4J: Class path contains multiple SLF4J bindings.
      SLF4J: Found binding in [jar:file:/opt/ronnie/hbase-2.0.6/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
      SLF4J: Found binding in [jar:file:/opt/ronnie/hive-3.1.2/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
      SLF4J: Found binding in [jar:file:/opt/ronnie/hadoop-3.1.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
      SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
      SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
      2019-12-02 12:34:18,689 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
      SLF4J: Class path contains multiple SLF4J bindings.
      SLF4J: Found binding in [jar:file:/opt/ronnie/hive-3.1.2/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
      SLF4J: Found binding in [jar:file:/opt/ronnie/hadoop-3.1.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
      SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
      SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
      Hive Session ID = b34ea22b-d5d7-4c0a-b8de-4ff47f241e34
      
      Logging initialized using configuration in file:/opt/ronnie/hive-3.1.2/conf/hive-log4j2.properties Async: true
      Hive Session ID = 368bd863-0a45-4c46-94d6-df196a3b4d9b
      Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
      hive> 
      
      • Hive on MR has been deprecated since Hive 2 and may be removed in future versions; nowadays Hive is more often combined with Spark or Tez as the execution engine.
      • Most companies still run Hive 1.x in production, with some gradually moving toward 2.3; 3.1.2 really is still quite new.
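      • Before moving on, a quick non-interactive smoke test can confirm that the CLI and the metastore are wired together; a sketch (the database name is arbitrary):

        # Run a couple of statements without entering the interactive shell
        hive -e "show databases; create database if not exists test_db; show databases;"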
  7. Start HiveServer2 (the HiveServer2 service port defaults to 10000, and the Web UI port defaults to 10002)

      $HIVE_HOME/bin/hive --service hiveserver2
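
      Once HiveServer2 is up, a client can connect over JDBC on port 10000; a sketch with Beeline, where the hostname and user are assumptions based on this setup:

      # Connect from another terminal; node02 and root are placeholders for this cluster
      $HIVE_HOME/bin/beeline -u jdbc:hive2://node02:10000 -n root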
      


