Big Data: Installing a Hadoop Environment

Published: 2024-12-18

0. Hadoop Package Download

http://mirror.bit.edu.cn/apache/hadoop/common/

1. Cluster Environment

This walkthrough assumes three CentOS 6-style nodes (the network and firewall commands below rely on /etc/sysconfig files and the iptables service).

Master 172.16.11.97

Slave1 172.16.11.98

Slave2 172.16.11.99

2. Disable the System Firewall and SELinux

#Master, Slave1, Slave2

#Flush all iptables rules

iptables -F

#Save the (now empty) firewall configuration

service iptables save

#Turn off SELinux enforcement for the current session

setenforce 0

#Disable SELinux permanently (takes effect after a reboot)

vim /etc/selinux/config

SELINUX=disabled
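The config edit above can also be done non-interactively with `sed`. A minimal sketch, run here on a temporary copy so it is safe to try anywhere; on a real node, point `CFG` at /etc/selinux/config instead:

```shell
# Demo on a temporary copy of the SELinux config; on a real node use
# CFG=/etc/selinux/config instead of the mktemp file.
CFG=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CFG"
# Rewrite the SELINUX= line to disabled (idempotent; SELINUXTYPE is untouched)
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$CFG"
grep '^SELINUX=' "$CFG"
```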

3. Set the Hostnames

#Master

vim /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=master

#Slave1

vim /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=slave1

#Slave2

vim /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=slave2

4. Configure Static IP Addresses

#Master, Slave1, Slave2 (the example below is Master's; set IPADDR per node, and keep each machine's own HWADDR and UUID)

vim /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0

HWADDR=00:50:56:89:25:3E

TYPE=Ethernet

UUID=de38a19e-4771-4124-9792-9f4aabf27ec4

ONBOOT=yes

NM_CONTROLLED=yes

BOOTPROTO=static

IPADDR=172.16.11.97

NETMASK=255.255.254.0

GATEWAY=172.16.10.1

DNS1=119.29.29.29

5. Edit the Hosts File

#Master, Slave1, Slave2

vim /etc/hosts

172.16.11.97 master

172.16.11.98 slave1

172.16.11.99 slave2
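The three entries can be appended idempotently so repeated runs do not duplicate lines. A sketch, using a temp file for the demo; on the real nodes, set `HOSTS=/etc/hosts`:

```shell
# Append each cluster entry only if it is not already present.
# HOSTS is a temp file here for safety; use /etc/hosts on the real nodes.
HOSTS=$(mktemp)
for entry in "172.16.11.97 master" "172.16.11.98 slave1" "172.16.11.99 slave2"; do
    grep -qF "$entry" "$HOSTS" || echo "$entry" >> "$HOSTS"
done
cat "$HOSTS"
```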

6. Configure Passwordless SSH Between Nodes

#Master, Slave1, Slave2

#Generate an RSA key pair (public and private)

ssh-keygen -t rsa

#Press Enter three times to accept the defaults

cat /root/.ssh/id_rsa.pub > /root/.ssh/authorized_keys

chmod 600 /root/.ssh/authorized_keys

#Exchange keys between the nodes

#Master (the >> redirection runs locally, so each slave's public key is appended to Master's own authorized_keys)

ssh slave1 cat /root/.ssh/authorized_keys >> /root/.ssh/authorized_keys

ssh slave2 cat /root/.ssh/authorized_keys >> /root/.ssh/authorized_keys

#Slave1 (overwrite the local file with Master's combined key file)

ssh master cat /root/.ssh/authorized_keys > /root/.ssh/authorized_keys

#Slave2

ssh master cat /root/.ssh/authorized_keys > /root/.ssh/authorized_keys
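The end result of the exchange is one authorized_keys file holding every node's public key exactly once. The merge-and-deduplicate step can be sketched locally; the key strings below are stand-ins, since on the cluster they come from each node's /root/.ssh/id_rsa.pub:

```shell
# Merge several public-key files into one de-duplicated authorized_keys.
# The key contents here are placeholder strings, not real keys.
DIR=$(mktemp -d)
echo "ssh-rsa AAAA...master root@master" > "$DIR/master.pub"
echo "ssh-rsa AAAA...slave1 root@slave1" > "$DIR/slave1.pub"
echo "ssh-rsa AAAA...slave1 root@slave1" > "$DIR/slave1_again.pub"  # duplicate on purpose
cat "$DIR"/*.pub | sort -u > "$DIR/authorized_keys"
chmod 600 "$DIR/authorized_keys"   # sshd requires restrictive permissions
wc -l < "$DIR/authorized_keys"
```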

7. Install the JDK

Download from the Oracle Java Downloads page.

#Master

cd /usr/local/src

wget <the JDK download URL from the page above>

#The 8u152 Linux x64 archive is named jdk-8u152-linux-x64.tar.gz and extracts to a jdk1.8.0_152 directory

tar zxvf jdk-8u152-linux-x64.tar.gz

8. Configure JDK Environment Variables

#Master, Slave1, Slave2

vim ~/.bashrc

export JAVA_HOME=/usr/local/src/jdk1.8.0_152

export JAVA_BIN=$JAVA_HOME/bin

export JRE_HOME=$JAVA_HOME/jre

export CLASSPATH=$JRE_HOME/lib:$JAVA_HOME/lib:$JRE_HOME/lib/charsets.jar

export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

#Reload the environment

source ~/.bashrc

9. Copy the JDK to the Slave Hosts

#Master

scp -r /usr/local/src/jdk1.8.0_152 root@slave1:/usr/local/src/jdk1.8.0_152

scp -r /usr/local/src/jdk1.8.0_152 root@slave2:/usr/local/src/jdk1.8.0_152

10. Download Hadoop

#Master

wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz

tar zxvf hadoop-1.2.1.tar.gz

cd hadoop-1.2.1

mkdir tmp

11. Edit the Hadoop Configuration Files

#Master

cd conf

vim masters

master

vim slaves

slave1

slave2

vim core-site.xml

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/src/hadoop-1.2.1/tmp</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://172.16.11.97:9000</value>
    </property>
</configuration>

vim mapred-site.xml

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>172.16.11.97:9001</value>
    </property>
</configuration>

vim hdfs-site.xml

#Only two DataNodes (slave1 and slave2) exist, so a replication factor above 2 would leave every block under-replicated

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>

vim hadoop-env.sh

export JAVA_HOME=/usr/local/src/jdk1.8.0_152

12. Configure Hadoop Environment Variables

#Master, Slave1, Slave2

vim ~/.bashrc

export HADOOP_HOME=/usr/local/src/hadoop-1.2.1

export PATH=$PATH:$HADOOP_HOME/bin

#Reload the environment

source ~/.bashrc
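To keep repeated edits from duplicating these lines, the block can be appended only when it is missing. A sketch on a temp file; on the real nodes, use `RC=~/.bashrc`:

```shell
# Append the Hadoop environment block to the rc file only once (idempotent).
# RC is a temp file for the demo; use ~/.bashrc on the real nodes.
RC=$(mktemp)
if ! grep -q 'HADOOP_HOME' "$RC"; then
    cat >> "$RC" <<'EOF'
export HADOOP_HOME=/usr/local/src/hadoop-1.2.1
export PATH=$PATH:$HADOOP_HOME/bin
EOF
fi
grep -c 'HADOOP_HOME' "$RC"
```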

13. Copy the Hadoop Directory to the Slave Hosts

#Master

scp -r /usr/local/src/hadoop-1.2.1 root@slave1:/usr/local/src/hadoop-1.2.1

scp -r /usr/local/src/hadoop-1.2.1 root@slave2:/usr/local/src/hadoop-1.2.1
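With more slaves, the per-host scp commands generalize to a loop. The demo below copies locally with `cp` so it runs anywhere; on the cluster, the cp line would be replaced by `scp -r "$SRC" root@"$node":/usr/local/src/`:

```shell
# Push one directory to every slave with a loop. cp stands in for scp here
# so the sketch runs without a live cluster.
SRC=$(mktemp -d)/hadoop-1.2.1
mkdir -p "$SRC/conf"
echo "slave1" > "$SRC/conf/slaves"
for node in slave1 slave2; do
    DEST=$(mktemp -d)               # stand-in for the remote /usr/local/src
    cp -r "$SRC" "$DEST"/           # on the cluster: scp -r "$SRC" root@"$node":/usr/local/src/
    echo "$node: $(ls "$DEST")"
done
```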

14. Start the Cluster

#Master

#Format the NameNode (first start only; reformatting destroys HDFS data)

hadoop namenode -format

#Start all Hadoop daemons

start-all.sh

15. Check Cluster Status

#Run jps on each node; Master should list NameNode, SecondaryNameNode, and JobTracker, while each slave should list DataNode and TaskTracker

jps

#HDFS capacity and DataNode status can also be checked with: hadoop dfsadmin -report

16. Web UIs

NameNode 

http://master:50070/dfshealth.jsp

SecondaryNameNode

http://master:50090/status.jsp

DataNode

http://slave1:50075/

http://slave2:50075/

JobTracker

http://master:50030/jobtracker.jsp

TaskTracker

http://slave1:50060/tasktracker.jsp

http://slave2:50060/tasktracker.jsp

17. Stop the Cluster

stop-all.sh

