Server Cluster Technology (Hadoop 3.2.0 Cluster Setup)

 
 
For single-node (standalone) mode, see: https://blog.csdn.net/pengjunlee/article/details/104290537
1. Prepare 4 machines and the installation packages
1) Machines: 1 master, 3 slaves/workers

hadoop00 (master)
hadoop01 (slave, worker)
hadoop02 (slave, worker)
hadoop03 (slave, worker)
2) Latest installation packages

Download Hadoop
wget http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-3.2.0/hadoop-3.2.0.tar.gz
Download the JDK
jdk-8u151-linux-x64.rpm
2. Upgrade the operating system to the latest version
I upgraded from CentOS 4.3 to 6.3; most current systems are already newer than this, so this step is probably unnecessary.

lsb_release -a
LSB Version: :core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
Distributor ID: CentOS
Description: CentOS release 6.3 (Final)
Release: 6.3
Codename: Final
Configure passwordless (key-based) SSH login among the four machines

cd ~/.ssh
ssh-keygen -t rsa -P ""
Append the contents of id_rsa.pub from all 4 machines to authorized_keys on every machine. Don't forget the permission settings: chmod 600 authorized_keys, home directory set to 755, .ssh directory set to 700. If passwordless login doesn't work, it is most likely a permissions problem; see the sketch below.
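A minimal sketch of distributing the keys, assuming ssh-copy-id is available and a "work" account exists on every host (hostnames as listed above):

# run on each of the 4 machines after ssh-keygen; ssh-copy-id appends the local
# public key to the remote authorized_keys and sets sane permissions on it
for host in hadoop00 hadoop01 hadoop02 hadoop03; do
  ssh-copy-id work@$host
done
# double-check the permissions mentioned above
chmod 755 ~ && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys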
3. Install Java

yum localinstall jdk-8u151-linux-x64.rpm

Set the Java environment variables
vi ~/.bash_profile
export JAVA_HOME=/usr/java/jdk1.8.0_151/
export PATH=$JAVA_HOME/bin:$PATH
source ~/.bash_profile
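A quick check that the JDK and variables took effect (the version reported should be 1.8.0_151):

java -version
echo $JAVA_HOME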
4. Install Hadoop
1) Extract. Hadoop has no installer; just extract it to the desired installation location.

tar zxf hadoop-3.2.0.tar.gz
mv hadoop-3.2.0 /home/work/hadoop
2) Update the environment variables

vi ~/.bash_profile

export HADOOP_HOME=/home/work/hadoop
export PATH=$JAVA_HOME/bin:$PATH:$HOME/bin:$HADOOP_HOME/bin

source ~/.bash_profile
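A quick sanity check that Hadoop is on the PATH (should report version 3.2.0 and the install path configured above):

hadoop version
echo $HADOOP_HOME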
3) Modify the configuration files
(For an explanation of the configuration files, see https://blog.csdn.net/senvil/article/details/48915815)
a. Edit etc/hadoop/core-site.xml

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop00:9000</value>
</property>

<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/work/.hadoop-data/tmp</value>
</property>
</configuration>
b. Edit etc/hadoop/hdfs-site.xml

<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/work/.hadoop-data/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/work/.hadoop-data/hdfs/data</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop00:50090</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
c. Edit etc/hadoop/yarn-site.xml

<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>hadoop00:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>hadoop00:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>hadoop00:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>hadoop00:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>hadoop00:8088</value>
</property>

<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
<description>Whether virtual memory limits will be enforced for containers</description>
</property>
<property>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>2</value>
<description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
</property>
</configuration>

d. Edit etc/hadoop/hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_151/
export HADOOP_PREFIX=/home/work/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR="/home/work/hadoop/lib/native/"
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/home/work/hadoop/lib/native"
e. Edit etc/hadoop/mapred-site.xml

<configuration>
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop00:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop00:19888</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=/home/work/hadoop</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=/home/work/hadoop</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=/home/work/hadoop</value>
</property>

</configuration>

f. Edit etc/hadoop/yarn-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_151/
g. Edit etc/hadoop/slaves

hadoop01
hadoop02
hadoop03
h. Edit etc/hadoop/masters

hadoop00
i. Edit etc/hadoop/workers
(This step is important. Much of the material online doesn't mention this file; it may only be needed by newer versions, and I spent a lot of time troubleshooting here.)
This file specifies which nodes act as DataNodes in HDFS.

hadoop01
hadoop02
hadoop03
4) Package the directory and copy it to the other three machines
With the above configuration done, copy the hadoop directory to the other 3 machines (there are a lot of files, so pack it first), then extract it on each target machine:

tar zcf hadoop.tar hadoop
scp hadoop.tar work@hadoop01:/home/work/
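To cover all three workers in one go, a sketch assuming the same "work" account and /home/work path on every machine:

for host in hadoop01 hadoop02 hadoop03; do
  scp hadoop.tar.gz work@$host:/home/work/
  ssh work@$host "cd /home/work && tar zxf hadoop.tar.gz"
done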
5. Hadoop environment initialization
1) Firewall settings: all 4 machines need these (as root, or a sudo user)

vi /etc/sysconfig/iptables

# edit the contents as follows

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8088 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9000 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9001 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8030 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8031 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8032 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8033 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 50010 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 50070 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 10020 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 19888 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

/etc/init.d/iptables restart
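On newer systems (CentOS 7+) the iptables service is replaced by firewalld; an equivalent sketch, assuming firewalld is running and mirroring the port list above:

for port in 22 8088 9000 9001 8030 8031 8032 8033 50010 50070 10020 19888; do
  firewall-cmd --permanent --add-port=${port}/tcp
done
firewall-cmd --reload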

2) HDFS initialization
Run this command on the master:

hdfs namenode -format
Note: if you need to re-format the NameNode, first delete all files under the original NameNode and DataNode directories, otherwise it will fail. These directories are configured by hadoop.tmp.dir in core-site.xml and by dfs.namenode.name.dir / dfs.datanode.data.dir in hdfs-site.xml.
Each format creates a new cluster ID by default and writes it to the VERSION files of the NameNode and DataNodes (located under hdfs/name/current and hdfs/data/current). If you re-format without deleting the old directories, the NameNode's VERSION file holds the new cluster ID while the DataNodes keep the old one, and the mismatch causes errors. A cleanup sketch follows.
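A cleanup sketch before re-formatting, using the directories configured earlier (run on the NameNode and on every DataNode; this wipes all HDFS data, so only do it on a cluster you intend to reset):

rm -rf /home/work/.hadoop-data/tmp/*
rm -rf /home/work/.hadoop-data/hdfs/name/*
rm -rf /home/work/.hadoop-data/hdfs/data/*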
Note:
SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException
If you see this, the machine's hostname must match a mapping in /etc/hosts before the command will run:

vi /etc/hosts
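For example (the IP addresses below are placeholders; use your machines' real addresses):

192.168.1.100 hadoop00
192.168.1.101 hadoop01
192.168.1.102 hadoop02
192.168.1.103 hadoop03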
3) Start everything

# start
./hadoop/sbin/start-all.sh
# to stop: ./hadoop/sbin/stop-all.sh
# leave safe mode
hdfs dfsadmin -safemode leave
Running jps shows the corresponding Java processes:

On the master:
jps
26432 ResourceManager
24330 SecondaryNameNode
8362 Jps
21327 NameNode

On the slaves/workers:
jps
28403 NodeManager
9624 Jps
22379 DataNode

Running hdfs dfsadmin -report
also shows the status of HDFS.
The above only confirms that the basic installation succeeded; any error logs along the way can be viewed under hadoop/logs.
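For example, to inspect the NameNode log on the master (the exact file name includes the user and hostname, so it will differ on your cluster):

tail -n 100 /home/work/hadoop/logs/hadoop-*-namenode-*.log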
6. Availability testing
1) HDFS test

hdfs dfs -put NOTICE.txt /NOTICE.txt
hdfs dfs -ls /
Output like the following indicates success:
-rw-r--r-- 2 map supergroup 22125 2019-06-02 15:58 /NOTICE.txt
And the file can be downloaded:

hdfs dfs -get /NOTICE.txt aa.txt
2) MapReduce test

hadoop jar ~/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar wordcount /NOTICE.txt /output
hdfs dfs -cat /output/*
Output similar to the following indicates success:

utility 3
v14, 1
v2 1
v2.0 3
v21 1
v3.0.0 1
vectorfree.com 1
vectorportal.com. 1
version 4
visit 1
voluntary 1
was 2
way 1
we 3
web 1
were 2
when 2
which 43
with 8
without 1
work 1
works 1
writing, 1
written 7
you 2
zlib 1
© 6
References

https://blog.csdn.net/wangkai_123456/article/details/87185339

QA
Q: 2019-06-01 22:48:50,782 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop00/10.46.100.18:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
A:
1) The firewall is not configured properly.
2) When re-running hadoop namenode -format, note: when asked whether to re-format the filesystem (Y/N), the answer must be an uppercase Y!
3) The port is already in use; netstat -tunlp | grep <port> shows which program is occupying it.
Q: Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot delete /tmp/hadoop-yarn/staging/map/.staging/job_1559466109730_0001. Name node is in safe mode.
A: Leave safe mode: hdfs dfsadmin -safemode leave
Q: Please check whether your etc/hadoop/mapred-site.xml contains the below configuration: <property> <name>yarn.app.mapreduce.am.env</name> <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value> </property> <property> <name>mapreduce.map.env</name> <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value> </property> <property> <name>mapreduce.reduce.env</name> <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value> </property>
A: Add the following configuration:

<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=/home/work/hadoop</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=/home/work/hadoop</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=/home/work/hadoop</value>
</property>
</configuration>
