Ubuntu Hadoop Pseudo-Distributed Setup


1. Ubuntu Hadoop Pseudo-Distributed Setup: Environment Configuration

1.1 Create a new user (to keep the environment clean)

  • sudo useradd -m hduser -s /bin/bash (create the new user)
  • sudo passwd hduser (set a password for the new user; required)
  • sudo adduser hduser sudo (grant the new user sudo privileges)
  • sudo apt update (refresh the package lists)
  • sudo apt upgrade (install the packages from the refreshed lists)
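A quick sanity check (an illustrative sketch, not part of the original steps) confirms the new account works and has sudo rights:

su - hduser          # switch to the new user
sudo whoami          # should print "root" after the password prompt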

1.2 JDK

  • sudo tar -zxvf jdk-18_linux-x64_bin.tar.gz -C /usr/local (note: run this from the directory containing the archives)
  • sudo tar -zxvf hadoop-3.3.4.tar.gz -C /usr/local (Hadoop is needed later, so extract it now as well)
  • sudo gedit /etc/profile (edit the configuration file)
# java environment
# Note: JDK 9+ no longer ships a jre/ subdirectory, so JRE_HOME is omitted.
export JAVA_HOME=/usr/local/jdk-18.0.2.1
export CLASSPATH=.:${JAVA_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
  • source /etc/profile (reload the configuration file)
  • gedit ~/.bashrc (so /etc/profile takes effect in every new shell; sudo is unnecessary for your own file)
if [ -f /etc/profile ]; then
        . /etc/profile
fi
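With the profile in place, a minimal check (illustrative, not from the original steps) confirms the JDK is picked up:

source /etc/profile
java -version        # should report java version "18.0.2.1"
which java           # should point into /usr/local/jdk-18.0.2.1/bin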

1.3 Hadoop configuration

Passwordless SSH (Hadoop controls its nodes over SSH; interactive password prompts would break that)

  • sudo apt install openssh-server
  • ssh localhost (required; the first login creates ~/.ssh — type exit afterwards)
  • cd /home/hduser/.ssh
  • ssh-keygen -t rsa
  • cat ./id_rsa.pub >> ./authorized_keys
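To verify the key-based login, try logging in again; the permission fixes below are a sketch of the usual remedy if a password prompt still appears (not part of the original steps):

ssh localhost                     # should now log in without a password
chmod 700 ~/.ssh                  # common fix if a prompt still appears
chmod 600 ~/.ssh/authorized_keys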

Hadoop environment variables

  • sudo gedit /etc/profile (edit the configuration file again; a quick sanity check follows this list)
# hadoop
export HADOOP_HOME=/usr/local/hadoop-3.3.4
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
  • Standalone-mode verification (optional; just a test)
- cd /usr/local/hadoop-3.3.4
- mkdir ./input
- cp ./etc/hadoop/*.xml ./input
- ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep ./input ./output 'dfs[a-z.]+'
- cat ./output/*
- rm -r ./output
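Before moving on, this minimal check (an illustrative sketch) confirms the new environment variables are active:

source /etc/profile
hadoop version       # first line should read "Hadoop 3.3.4"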

Pseudo-distributed mode

  • sudo chown -R hduser /usr/local/hadoop-3.3.4 (make hduser the owner of hadoop-3.3.4)

  • cd /usr/local/hadoop-3.3.4/etc/hadoop

  • gedit hadoop-env.sh

export JAVA_HOME=/usr/local/jdk-18.0.2.1
  • gedit core-site.xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop-3.3.4/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
  • gedit hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop-3.3.4/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop-3.3.4/tmp/dfs/data</value>
    </property>
</configuration>
  • cd /usr/local/hadoop-3.3.4
  • ./bin/hdfs namenode -format (format the NameNode; do not run this twice)
  • ./sbin/start-dfs.sh

Verification

  • run jps to check the HDFS daemons (see the sketch after this list)
  • http://localhost:9870
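A sketch of typical jps output once start-dfs.sh has run (the PIDs are illustrative; the daemon names are the standard HDFS set):

jps
# 12345 NameNode
# 12456 DataNode
# 12567 SecondaryNameNode
# 12678 Jps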

Stopping/starting Hadoop

  • /usr/local/hadoop-3.3.4/sbin/stop-dfs.sh
  • /usr/local/hadoop-3.3.4/sbin/start-dfs.sh

2. Pseudo-Distributed Examples

2.1 Estimating pi

  • cd /usr/local/hadoop-3.3.4/share/hadoop/mapreduce
  • hadoop jar hadoop-mapreduce-examples-3.3.4.jar pi 1000 50000
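Here 1000 is the number of map tasks and 50000 the number of samples per map: the job estimates pi by counting how many sampled points in the unit square fall inside the quarter circle. A minimal local sketch of the same Monte Carlo idea in awk (the Hadoop example itself uses a quasi-random point sequence rather than rand()):

awk 'BEGIN {
    srand()
    n = 100000; inside = 0
    for (i = 0; i < n; i++) {
        x = rand(); y = rand()
        if (x*x + y*y <= 1.0) inside++   # point lies inside the quarter circle
    }
    printf "pi is approximately %f\n", 4.0 * inside / n
}'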

2.2 Word count

  • Create a local file (gedit /home/hduser/hadoop/my-local-file.txt; create the /home/hduser/hadoop directory first if it does not exist) with the following content:
I love you
you love me
I love you and you love me
  1. Upload the local file to HDFS (the input directory must exist first):
hdfs dfs -mkdir -p /user/hadoop/input
hdfs dfs -put /home/hduser/hadoop/my-local-file.txt /user/hadoop/input/my-hdfs-file.txt
  2. Run wordcount:
hadoop jar /usr/local/hadoop-3.3.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.4.jar wordcount /user/hadoop/input/my-hdfs-file.txt /user/hadoop/output/wct
  3. View the result:
hdfs dfs -cat /user/hadoop/output/wct/part-r-00000
  • The Hadoop examples never overwrite existing output; delete the output directory before each rerun to avoid errors:
hdfs dfs -rm -r /user/hadoop/output
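As a quick cross-check of the job's output, the same counts can be computed locally with standard Unix tools (a sketch run against the local copy of the file):

tr -s ' ' '\n' < /home/hduser/hadoop/my-local-file.txt | sort | uniq -c
# Expected for the sample text: "and" 1, "I" 2, "me" 2, "love" 4, "you" 4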