First, the final access ports. In my setup the IPs are 172.20.94.37, 172.20.94.38 and 172.20.94.39, with hostnames hadoop37, hadoop38 and hadoop39.
Final access points (default ports):
hadoop webui 172.20.94.37:9870
hdfs 端口 8020
yarn 172.20.94.37:8088
historyserver 172.20.94.37:19888
spark-master-port: 7077
spark-webui-port: 172.20.94.37:8080
spark-worker-webui-port: 172.20.94.37:8081
spark-historyserver: 172.20.94.37:18081
**
Note: when switching between cluster modes, be sure to change SPARK_HOME in /etc/profile to point at the corresponding installation directory.
**
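For example, switching to the standalone build installed later in this post would mean editing that line to something like the following (the directory names match the ones used in each section of this post):
nano /etc/profile
export SPARK_HOME=/app/spark3-standalone    # or /app/spark3-yarn, /app/spark3-ha
source /etc/profile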
1、Download the files from the Tsinghua mirror: https://mirrors.tuna.tsinghua.edu.cn/apache/spark/spark-3.5.6/
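For example, with wget (the file name is the one listed on the mirror page above):
wget https://mirrors.tuna.tsinghua.edu.cn/apache/spark/spark-3.5.6/spark-3.5.6-bin-hadoop3.tgz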
2、Extract it
tar zxvf spark-3.5.6-bin-hadoop3.tgz
3、Move it to a suitable location
mv spark-3.5.6-bin-hadoop3 /app/spark-3
4、Download Scala
https://www.scala-lang.org/download/all.html
This version of Spark uses Scala 2.12.20:
https://www.scala-lang.org/download/2.12.20.html
For Linux:
https://downloads.lightbend.com/scala/2.12.20/scala-2.12.20.tgz
tar zxvf scala-2.12.20.tgz
mv scala-2.12.20 /app/scala-2
5、Configure environment variables
nano /etc/profile
Below is the complete environment configuration for Java, Hadoop, Scala and Spark:
export JAVA_HOME=/app/openjdk-8
export HADOOP_HOME=/app/hadoop-3
export SCALA_HOME=/app/scala-2
export SPARK_HOME=/app/spark-3
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$SPARK_HOME/bin:$SPARK_HOME/sbin:$SCALA_HOME/bin
source /etc/profile    # make the variables take effect
Verify:
scala -version
spark-shell
Type :quit to exit.
Single-machine mode:
6、Configuration
Go into /app/spark-3/conf and make a copy of the template:
cd /app/spark-3/conf
cp spark-env.sh.template spark-env.sh
nano spark-env.sh
export SCALA_HOME=/app/scala-2
export JAVA_HOME=/app/openjdk-8
export SPARK_MASTER_IP=hadoop37
export SPARK_WORKER_CORES=2
export SPARK_WORKER_MEMORY=2g
export HADOOP_CONF_DIR=/app/hadoop-3/etc/hadoop
#export SPARK_MASTER_WEBUI_PORT=8080
#export SPARK_MASTER_PORT=7077
7、Configure spark-defaults.conf
cd /app/spark-3/conf
cp spark-defaults.conf.template spark-defaults.conf
nano spark-defaults.conf
spark.master yarn
spark.hadoop.fs.defaultFS hdfs://hadoop37:8020
spark.yarn.jars hdfs://hadoop37:8020/spark-jars/*.jar
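Note: spark.yarn.jars points at jars stored in HDFS, so they need to be uploaded there first; for example (using the path from the setting above):
hdfs dfs -mkdir -p /spark-jars
hdfs dfs -put /app/spark-3/jars/*.jar /spark-jars/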
Now it can be started:
cd /app/spark-3
sbin/start-all.sh
8、Check the Spark version
spark-submit --version
9、Start Spark
cd /app/spark-3
sbin/start-all.sh
Stop everything:
sbin/stop-all.sh
10、Run jps to check whether everything came up. On my master node it looks like the following; once Worker and Master are running, Spark and Scala are installed and started successfully:
19136 JobHistoryServer
18533 ResourceManager
18039 DataNode
18727 NodeManager
20584 Worker
18233 SecondaryNameNode
20681 Jps
17900 NameNode
20479 Master
11、Worker nodes:
After running jps:
1559 DataNode
1687 NodeManager
1853 Jps
12、Open ip:8080, e.g. 172.20.94.37:8080, to check the state of the cluster.
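Note that with spark.master yarn set in spark-defaults.conf above, applications submitted via spark-submit run on YARN and appear in the YARN WebUI at 172.20.94.37:8088 rather than on the standalone Master UI.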
——————————————————————————————————
Spark on YARN cluster mode
I first installed Docker, ran ZooKeeper 3.6.4 on top of it, and then built the cluster.
Containerized installation:
docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/zookeeper:3.6.4
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/zookeeper:3.6.4 docker.io/zookeeper:3.6.4
docker run --restart=always --log-driver json-file --log-opt max-size=100m --log-opt max-file=2 --name zookeeper -p 2181:2181 -p 8090:8080 -v /etc/localtime:/etc/localtime -d docker.io/zookeeper:3.6.4
This maps the ZooKeeper admin port, where the available admin commands can be viewed at a URL like:
http://172.20.94.33:8090/commands
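To confirm ZooKeeper is responding, one of the admin commands listed there can be queried directly, e.g.:
curl http://172.20.94.33:8090/commands/ruok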
1、Extract
tar zxvf spark-3.5.6-bin-hadoop3.tgz
2、Move it to a suitable location
mv spark-3.5.6-bin-hadoop3 /app/spark3-yarn
3、Configure
cd /app/spark3-yarn/conf
cp spark-env.sh.template spark-env.sh
4、nano spark-env.sh
Add the following:
export JAVA_HOME=/app/openjdk-8
HADOOP_CONF_DIR=/app/hadoop-3/etc/hadoop
YARN_CONF_DIR=/app/hadoop-3/etc/hadoop
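A quick way to check the on-YARN setup (assuming HDFS and YARN are already running) is to submit the SparkPi example that ships with Spark, the same way as in the deployment examples at the end of this post:
cd /app/spark3-yarn
bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode client examples/jars/spark-examples_2.12-3.5.6.jar 10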
——————————————————————————————————
Spark standalone cluster mode -------------
1、Extract
tar zxvf spark-3.5.6-bin-hadoop3.tgz
2、Move it to a suitable location
mv spark-3.5.6-bin-hadoop3 /app/spark3-standalone
3、Configure
cd /app/spark3-standalone/conf
cp spark-env.sh.template spark-env.sh
4、nano spark-env.sh
Add the following:
export JAVA_HOME=/app/openjdk-8
export SPARK_MASTER_HOST=hadoop37
export SPARK_MASTER_IP=hadoop37
export SPARK_MASTER_PORT=7077
export SPARK_MASTER_WEBUI_PORT=8080
export SPARK_WORKER_MEMORY=2g
export SPARK_WORKER_WEBUI_PORT=8081
export SPARK_HISTORY_OPTS="
-Dspark.history.fs.cleaner.enabled=true
-Dspark.history.fs.logDirectory=hdfs://hadoop37:8020/spark/logs
-Dspark.history.ui.port=18081"
5、cd /app/spark3-standalone/conf
cp spark-defaults.conf.template spark-defaults.conf
nano spark-defaults.conf
Add the following:
spark.eventLog.enabled true
spark.eventLog.dir hdfs://hadoop37:8020/spark/logs
6、cp workers.template workers
nano workers
Add the following:
hadoop37
hadoop38
hadoop39
7、Create the /spark/logs directory in HDFS
hdfs dfs -mkdir -p /spark/logs
8、Distribute the files to the other nodes
scp -r /app/spark3-standalone/ hadoop38:/app/
scp -r /app/spark3-standalone/ hadoop39:/app/
9、Start Spark
cd /app/spark3-standalone
./sbin/start-all.sh
Stop everything:
./sbin/stop-all.sh
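SPARK_HISTORY_OPTS is configured above, so the history server can also be started in this mode if needed, the same way as in the HA section below:
cd /app/spark3-standalone
sbin/start-history-server.sh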
————————————————————————————————
Spark HA cluster mode -------------
1、Extract
tar zxvf spark-3.5.6-bin-hadoop3.tgz
2、Move it to a suitable location
mv spark-3.5.6-bin-hadoop3 /app/spark3-ha
3、Configure
cd /app/spark3-ha/conf
cp spark-env.sh.template spark-env.sh
4、nano spark-env.sh
Add the following:
export JAVA_HOME=/app/openjdk-8
export SPARK_MASTER_WEBUI_PORT=8080
export SPARK_HISTORY_OPTS="
-Dspark.history.fs.cleaner.enabled=true
-Dspark.history.fs.logDirectory=hdfs://hadoop37:8020/spark/logs
-Dspark.history.ui.port=18081"
export SPARK_DAEMON_JAVA_OPTS="
-Dspark.deploy.recoveryMode=ZOOKEEPER
-Dspark.deploy.zookeeper.url=172.20.94.33:2181
-Dspark.deploy.zookeeper.dir=/app/spark3-ha"
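Note that spark.deploy.zookeeper.dir is a znode path inside ZooKeeper where the Masters store their recovery state, not a local filesystem path. Once the Masters are up it can be inspected from the ZooKeeper container, e.g. (assuming the image's zkCli.sh is on the PATH):
docker exec -it zookeeper zkCli.sh ls /app/spark3-ha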
5、cd /app/spark3-ha/conf
cp spark-defaults.conf.template spark-defaults.conf
nano spark-defaults.conf
Add the following:
spark.eventLog.enabled true
spark.eventLog.dir hdfs://hadoop37:8020/spark/logs
6、cp workers.template workers
nano workers
Add the following:
hadoop37
hadoop38
hadoop39
7、Create the /spark/logs directory in HDFS
hdfs dfs -mkdir -p /spark/logs
8、Distribute the files to the other nodes
scp -r /app/spark3-ha/ hadoop38:/app/
scp -r /app/spark3-ha/ hadoop39:/app/
9、Start Spark
cd /app/spark3-ha
./sbin/start-all.sh
Stop everything:
./sbin/stop-all.sh
10、Start a Master in standby state
For example, on hadoop38:
cd /app/spark3-ha
sbin/start-master.sh
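The WebUI on hadoop38 (172.20.94.38:8080) should then show this Master as STANDBY while the one on hadoop37 is ALIVE. To test failover, stop the active Master on hadoop37 and watch the standby take over:
sbin/stop-master.sh    # run on hadoop37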
11、Start the history server
cd /app/spark3-ha
sbin/start-history-server.sh
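The history server UI is then available at 172.20.94.37:18081, the port set in SPARK_HISTORY_OPTS above.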
————————————————————————————
Deployment and run examples:
Without specifying a master, local mode is used:
cd /app/spark3-standalone
bin/spark-submit --class org.apache.spark.examples.SparkPi --deploy-mode client --executor-memory 1G --executor-cores 1 --num-executors 1 examples/jars/spark-examples_2.12-3.5.6.jar 5
Specifying the YARN mode:
cd /app/spark3-yarn
bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode client --executor-memory 1G --executor-cores 1 --num-executors 1 examples/jars/spark-examples_2.12-3.5.6.jar 5
Upload the input file:
hdfs dfs -put /app/word.txt /
Delete the output directory (if it already exists):
hdfs dfs -rm -r /out
Run:
bin/spark-submit --class org.rainpet.WordCount --master yarn --conf spark.yarn.jars=$SPARK_HOME/jars/* --deploy-mode client --executor-memory 1G --executor-cores 1 --num-executors 1 /app/scala-spark-cluster01-1.0-SNAPSHOT.jar /word.txt /out
On Windows:
spark-submit --class org.rainpet.WordCount --master yarn --conf spark.yarn.jars=%SPARK_HOME%/jars/* --deploy-mode client --executor-memory 1G --executor-cores 1 --num-executors 1 scala-spark-cluster01-1.0-SNAPSHOT.jar /word.txt /out
Local file:
sc.textFile("file:///D:/java/workspace_gitee/cloud-compute-course-demo/scala-spark01/src/main/resources/word.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).collect
HDFS file:
sc.textFile("/word.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).collect