一、ZooKeeper

1、Edit /etc/profile and add:

HADOOP_PREFIX=/opt/hadoop

JAVA_HOME=/opt/jdk18

ZOOKEEPER_HOME=/opt/zookeeper

HBASE_HOME=/opt/hbase

PATH=$PATH:$JAVA_HOME/bin:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin:$ZOOKEEPER_HOME/bin:$HBASE_HOME/bin

export HADOOP_PREFIX PATH JAVA_HOME ZOOKEEPER_HOME HBASE_HOME USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL
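Once the profile has been sourced on a node (source /etc/profile, as in step 6 below), a quick sanity check of the environment might look like this (a minimal sketch; adjust if your paths differ):

echo $JAVA_HOME $ZOOKEEPER_HOME $HBASE_HOME   # should print the three /opt paths above
java -version                                 # should report the JDK installed under /opt/jdk18
which zkServer.sh                             # should resolve to /opt/zookeeper/bin/zkServer.sh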

2、Create the data and log directories:

mkdir -p /opt/zookeeper/data

mkdir -p /opt/zookeeper/logs

3、Edit the zoo.cfg configuration file and append the following at the end of the file:

  

server.1=NameNode34:2888:3888

server.2=DataNode35:2888:3888

server.3=DataNode37:2888:3888

server.4=DataNode38:2888:3888

dataDir=/opt/zookeeper/data

dataLogDir=/opt/zookeeper/logs
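For reference, the complete zoo.cfg typically ends up looking like the sketch below; the first four settings are the defaults shipped in conf/zoo_sample.cfg (copy it to zoo.cfg first if zoo.cfg does not exist yet), so adjust them if your copy differs:

tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/opt/zookeeper/data
dataLogDir=/opt/zookeeper/logs
server.1=NameNode34:2888:3888
server.2=DataNode35:2888:3888
server.3=DataNode37:2888:3888
server.4=DataNode38:2888:3888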

4、Create the myid file

Create a myid file under the dataDir directory. Its content depends on the node's server entry: for server.1 the file contains 1, for server.2 it contains 2, and so on.

Node 1 (NameNode34):

echo 1 > /opt/zookeeper/data/myid

Node 2 (DataNode35):

echo 2 > /opt/zookeeper/data/myid

Node 3 (DataNode37):

echo 3 > /opt/zookeeper/data/myid

Node 4 (DataNode38):

echo 4 > /opt/zookeeper/data/myid

5、Copy to the other nodes

scp /etc/profile root@DataNode35:/etc/profile

scp /etc/profile root@DataNode37:/etc/profile

scp /etc/profile root@DataNode38:/etc/profile

scp -r /opt/zookeeper root@DataNode35:/opt

scp -r /opt/zookeeper root@DataNode37:/opt

scp -r /opt/zookeeper root@DataNode38:/opt
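Note that if the myid files from step 4 were created before this copy, the scp overwrites them on the other nodes with NameNode34's copy (content 1), so each node's myid should be set or re-checked after the copy. A minimal sketch that does this from NameNode34, assuming passwordless ssh as root:

id=1
for host in NameNode34 DataNode35 DataNode37 DataNode38; do
    ssh root@$host "echo $id > /opt/zookeeper/data/myid"
    id=$((id + 1))
done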

6、On every node, reload the environment:

source /etc/profile

7、Run zkServer.sh start on every node. Expected output:

JMX enabled by default

Using config: /opt/zookeeper/bin/../conf/zoo.cfg

Starting zookeeper ... STARTED
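Rather than logging in to each machine in turn, the whole ensemble can also be started from one terminal; a minimal sketch, assuming passwordless ssh as root (the explicit source is needed because non-interactive ssh sessions do not read /etc/profile):

for host in NameNode34 DataNode35 DataNode37 DataNode38; do
    ssh root@$host "source /etc/profile && zkServer.sh start"
done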

 

8、Verify: run zkServer.sh status on every node; exactly one node should report Mode: leader and the other three Mode: follower. Sample output:

[hadoop1@node4 bin]$ ./zkServer.sh status

JMX enabled by default

Using config: /opt/zookeeper/bin/../conf/zoo.cfg

Mode: leader

[hadoop1@node1 bin]$ ./zkServer.sh status

JMX enabled by default

Using config: /opt/zookeeper/bin/../conf/zoo.cfg

Mode: follower

[hadoop1@node2 bin]$ ./zkServer.sh status

JMX enabled by default

Using config: /opt/zookeeper/bin/../conf/zoo.cfg

Mode: follower

[hadoop1@node3 bin]$ ./zkServer.sh status

JMX enabled by default

Using config: /opt/zookeeper/bin/../conf/zoo.cfg

Mode: follower
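As an additional health check, the ensemble can be probed with ZooKeeper's four-letter commands from any node; a sketch assuming nc (netcat) is installed (on newer ZooKeeper releases these commands may need to be whitelisted via 4lw.commands.whitelist in zoo.cfg):

echo ruok | nc NameNode34 2181    # a healthy server answers "imok"
echo stat | nc NameNode34 2181    # prints the server's mode (leader/follower) and client connections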


二、HBase

1、Download HBase and extract it to /opt/hbase.

2、/etc/profile: identical to step 1 of Part 一 above, which already exports HBASE_HOME=/opt/hbase and adds $HBASE_HOME/bin to PATH; no further change is needed if that step was completed.


3、Edit the configuration file /opt/hbase/conf/hbase-env.sh:

export JAVA_HOME=/opt/jdk18/

export HBASE_CLASSPATH=/opt/hbase/conf

export HBASE_MANAGES_ZK=false

(Set to false because this cluster uses the standalone ZooKeeper ensemble from Part 一; true would tell HBase to start and manage its own ZooKeeper instead.)

4、Create a tmp directory under the HBase installation directory:

mkdir -p /opt/hbase/tmp

5、Edit /opt/hbase/conf/hbase-site.xml:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://NameNode34:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/opt/hbase/tmp</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>NameNode34,DataNode35,DataNode37,DataNode38</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/opt/hbase/tmp/zookeeper</value>
  </property>
</configuration>
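Two notes on this configuration: hbase.rootdir should use the same NameNode URI as fs.defaultFS in Hadoop's core-site.xml (hdfs://NameNode34:9000 here), and hbase.zookeeper.property.dataDir only takes effect when HBase manages its own ZooKeeper, so it is harmless with the external ensemble from Part 一. A quick sanity check before and after starting HBase might be:

hdfs dfs -ls /         # HDFS must be up and reachable before HBase is started
hdfs dfs -ls /hbase    # after step 9, HBase's root directory should appear here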

6、Edit /opt/hbase/conf/regionservers and add every node that should run a RegionServer to this file:

NameNode34

DataNode35

DataNode37

DataNode38

7、Copy HBase to the other nodes and propagate the environment variables:

scp /etc/profile root@DataNode35:/etc/profile

scp /etc/profile root@DataNode37:/etc/profile

scp /etc/profile root@DataNode38:/etc/profile

scp -r /opt/hbase root@DataNode35:/opt

scp -r /opt/hbase root@DataNode37:/opt

scp -r /opt/hbase root@DataNode38:/opt

8、On every node, reload the environment:

source /etc/profile

9、With the ZooKeeper ensemble from Part 一 running, start HBase from the master node only (start-hbase.sh brings up the region servers listed in regionservers):

start-hbase.sh

10、Verify HBase by running jps on each node:

jps
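With this layout, jps on NameNode34 should typically show an HMaster process (alongside QuorumPeerMain for ZooKeeper and the Hadoop daemons), and every node listed in regionservers, including NameNode34 itself, should show an HRegionServer. If a daemon is missing, check the logs, which by default end up under /opt/hbase/logs.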

11、Test the HBase shell:

hbase shell

create 'test', 'data'

disable 'test'

drop 'test'
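Before disabling and dropping the table, a row can be written and read back to confirm the region servers are actually serving data; a small sketch ('row1' and the 'data:x' column are just example names):

put 'test', 'row1', 'data:x', 'value1'    # write one cell into column family 'data'
scan 'test'                               # should list row1
get 'test', 'row1'                        # should return the cell just written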
