These notes are for the author's own review only. If you spot a mistake, please leave a comment pointing it out; the author will be very grateful. Please credit the source when reposting.

  • Start ZooKeeper (on every quorum node), then HDFS and YARN
    $ /usr/local/src/zookeeper-3.4.5-cdh5.3.6/bin/zkServer.sh start

    $ /usr/local/src/hadoop-2.5.0-cdh5.3.6/sbin/start-dfs.sh

    $ /usr/local/src/hadoop-2.5.0-cdh5.3.6/sbin/start-yarn.sh

  • Unpack HBase into /usr/local/src
    $ tar -zxf /opt/cdh/hbase-0.98.6-cdh5.3.6.tar.gz -C /usr/local/src/
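
    Before installing HBase it is worth confirming that the daemons actually came up. A quick informal check with jps (the process names below are the standard ones; which of them appear depends on the node and on how HDFS HA is configured):

    $ jps

    Expect QuorumPeerMain (ZooKeeper), NameNode/DataNode and, in an HA setup, JournalNode/DFSZKFailoverController (HDFS), plus ResourceManager/NodeManager (YARN).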

  • Edit the hbase-env.sh file (JAVA_HOME points HBase at the JDK; HBASE_MANAGES_ZK=false tells HBase to use the external ZooKeeper ensemble instead of starting its own)

    export JAVA_HOME=/usr/local/src/jdk1.8.0_121/

    export HBASE_MANAGES_ZK=false
  • Edit the hbase-site.xml file
    <!-- HBase root directory on HDFS; mycluster is the HA HDFS nameservice -->
    <property>
      <name>hbase.rootdir</name>
      <value>hdfs://mycluster/hbase</value>
    </property>
    <!-- run in fully distributed mode -->
    <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
    </property>
    <!-- in an HA cluster only the port needs to be configured; on a single node, give the hostname as well -->
    <property>
      <name>hbase.master</name>
      <value>60000</value>
    </property>
    <property>
      <name>hbase.zookeeper.quorum</name>
      <value>master:2181,slave1:2181,slave2:2181</value>
    </property>
    <property>
      <name>hbase.zookeeper.property.dataDir</name>
      <value>/usr/local/src/zookeeper-3.4.5-cdh5.3.6/dataDir</value>
    </property>
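
    The value hdfs://mycluster/hbase only works if mycluster matches the HA nameservice configured for HDFS. For reference, the corresponding entry in Hadoop's hdfs-site.xml looks roughly like this (a sketch; use whatever nameservice name your HDFS HA setup defines):

    <property>
      <name>dfs.nameservices</name>
      <value>mycluster</value>
    </property>
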
  • Edit the regionservers file
    master
    slave1
    slave2
  • Delete the bundled Hadoop jars and the ZooKeeper jar
    $ rm -rf /usr/local/src/hbase-0.98.6-cdh5.3.6/lib/hadoop-*

    $ rm -rf /usr/local/src/hbase-0.98.6-cdh5.3.6/lib/zookeeper-3.4.6.jar
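
    An informal way to confirm the old jars are gone (and, after the next step, that only the CDH versions are present):

    $ ls /usr/local/src/hbase-0.98.6-cdh5.3.6/lib/ | grep -E '^(hadoop|zookeeper)'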

  • Copy in the matching CDH jars
    The jars involved are roughly:
    hadoop-annotations-2.5.0.jar
    hadoop-auth-2.5.0-cdh5.3.6.jar
    hadoop-client-2.5.0-cdh5.3.6.jar
    hadoop-common-2.5.0-cdh5.3.6.jar
    hadoop-hdfs-2.5.0-cdh5.3.6.jar
    hadoop-mapreduce-client-app-2.5.0-cdh5.3.6.jar
    hadoop-mapreduce-client-common-2.5.0-cdh5.3.6.jar
    hadoop-mapreduce-client-core-2.5.0-cdh5.3.6.jar
    hadoop-mapreduce-client-hs-2.5.0-cdh5.3.6.jar
    hadoop-mapreduce-client-hs-plugins-2.5.0-cdh5.3.6.jar
    hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.6.jar
    hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.6-tests.jar
    hadoop-mapreduce-client-shuffle-2.5.0-cdh5.3.6.jar
    hadoop-yarn-api-2.5.0-cdh5.3.6.jar
    hadoop-yarn-applications-distributedshell-2.5.0-cdh5.3.6.jar
    hadoop-yarn-applications-unmanaged-am-launcher-2.5.0-cdh5.3.6.jar
    hadoop-yarn-client-2.5.0-cdh5.3.6.jar
    hadoop-yarn-common-2.5.0-cdh5.3.6.jar
    hadoop-yarn-server-applicationhistoryservice-2.5.0-cdh5.3.6.jar
    hadoop-yarn-server-common-2.5.0-cdh5.3.6.jar
    hadoop-yarn-server-nodemanager-2.5.0-cdh5.3.6.jar
    hadoop-yarn-server-resourcemanager-2.5.0-cdh5.3.6.jar
    hadoop-yarn-server-tests-2.5.0-cdh5.3.6.jar
    hadoop-yarn-server-web-proxy-2.5.0-cdh5.3.6.jar
    zookeeper-3.4.5-cdh5.3.6.jar
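
    One possible way to collect these jars, sketched under the assumption that the Hadoop jars sit under share/hadoop in the Hadoop install and that the ZooKeeper jar sits in the root of the ZooKeeper install (adjust the paths and pattern to your layout):

    $ cd /usr/local/src/hbase-0.98.6-cdh5.3.6
    $ find /usr/local/src/hadoop-2.5.0-cdh5.3.6/share/hadoop -name 'hadoop-*2.5.0*.jar' -exec cp {} lib/ \;
    $ cp /usr/local/src/zookeeper-3.4.5-cdh5.3.6/zookeeper-3.4.5-cdh5.3.6.jar lib/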

  • Link the Hadoop configuration files into HBase's conf directory, so HBase can resolve the HA nameservice mycluster
    $ ln -s /usr/local/src/hadoop-2.5.0-cdh5.3.6/etc/hadoop/core-site.xml /usr/local/src/hbase-0.98.6-cdh5.3.6/conf/core-site.xml

    $ ln -s /usr/local/src/hadoop-2.5.0-cdh5.3.6/etc/hadoop/hdfs-site.xml /usr/local/src/hbase-0.98.6-cdh5.3.6/conf/hdfs-site.xml

  • Start HBase, either daemon by daemon:
    $ bin/hbase-daemon.sh start master

    $ bin/hbase-daemon.sh start regionserver

    or all at once:
    $ bin/start-hbase.sh

    The corresponding stop command:
    $ bin/stop-hbase.sh
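
    To confirm the processes are up, check with jps on each node (HMaster and HRegionServer are the standard daemon names); given the regionservers file above, master should show both HMaster and HRegionServer, while slave1 and slave2 show HRegionServer:

    $ jps | grep -E 'HMaster|HRegionServer'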

  • Open the HMaster web UI (60010 is the default master info port in HBase 0.98):
    http://192.168.159.30:60010

  • Make sure the HBase cluster has been stopped cleanly
    $ bin/stop-hbase.sh

  • Create a backup-masters file in the conf directory
    $ touch conf/backup-masters

  • List the standby HMaster node in backup-masters
    $ echo slave1 > conf/backup-masters

  • Copy it to the other nodes
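
    A sketch, assuming HBase is installed under the same path on every node (only backup-masters changed in this step, so copying that single file is enough):

    $ scp conf/backup-masters slave1:/usr/local/src/hbase-0.98.6-cdh5.3.6/conf/
    $ scp conf/backup-masters slave2:/usr/local/src/hbase-0.98.6-cdh5.3.6/conf/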

  • Open the web UI to verify
    http://master:60010

  • Try stopping the HMaster on the first machine
    $ bin/hbase-daemon.sh stop master
    Then check whether the HMaster on the second machine takes over as the active master
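
    One way to see which node currently holds the active master is to read the master znode in ZooKeeper (a sketch, assuming the default znode parent /hbase; the znode data is partly binary but includes the active master's hostname):

    $ /usr/local/src/zookeeper-3.4.5-cdh5.3.6/bin/zkCli.sh -server master:2181
    [zk: master:2181(CONNECTED) 0] get /hbase/master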
