PySpark driver: uploading a pod-local file to object storage


Prerequisite: the PySpark driver runs on Kubernetes, and the jars for your object storage are on the classpath (via environment variables or under $SPARK_HOME/jars). If you hit a missing-class error for the storage connector, add the corresponding jar there.
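
As a sketch (the Maven coordinates and version below are assumptions for the Aliyun OSS connector and will differ for other providers and Hadoop builds), the connector can also be pulled in when the session is built instead of copying jars by hand:

from pyspark.sql import SparkSession

# Assumption: hadoop-aliyun supplies the oss:// FileSystem; match the version to your Hadoop build
spark = (
    SparkSession.builder
        .config("spark.jars.packages", "org.apache.hadoop:hadoop-aliyun:3.3.4")
        .getOrCreate()
)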

from py4j.java_gateway import java_import
from pyspark.sql import SparkSession

# ------------------------------------------------------------------------------
# 1) Start / get the SparkSession
# ------------------------------------------------------------------------------
spark = (
    SparkSession.builder
        .enableHiveSupport()          # keep if you need Hive; remove otherwise
        .getOrCreate()
)

hconf = spark._jsc.hadoopConfiguration()
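
# (Assumption) If the OSS endpoint/credentials are not already provided via
# core-site.xml or the pod environment, they can be set on this Hadoop
# configuration at runtime. The keys below are the Aliyun OSS connector's
# (hadoop-aliyun) names and may differ for other object stores:
# hconf.set("fs.oss.endpoint", "<your-oss-endpoint>")
# hconf.set("fs.oss.accessKeyId", "<access-key-id>")
# hconf.set("fs.oss.accessKeySecret", "<access-key-secret>")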

# ------------------------------------------------------------------------------
# 2) Import the Hadoop Java classes
# ------------------------------------------------------------------------------
java_import(spark._jvm, "org.apache.hadoop.fs.FileSystem")
java_import(spark._jvm, "org.apache.hadoop.fs.Path")

# ------------------------------------------------------------------------------
# 3) Define the source / destination paths
# ------------------------------------------------------------------------------
src_path = spark._jvm.Path("file:///opt/decom.sh")
# The destination can be a directory; FileSystem will keep the original file name
dst_dir  = spark._jvm.Path("oss://aysh-s-data/tmp1/")

# ------------------------------------------------------------------------------
# 4) Get the *destination* FileSystem instance (resolved from the oss:// URI)
# ------------------------------------------------------------------------------
fs_dst = spark._jvm.org.apache.hadoop.fs.FileSystem.get(dst_dir.toUri(), hconf)

# Create the destination directory first if it does not exist
if not fs_dst.exists(dst_dir):
    fs_dst.mkdirs(dst_dir)

# ------------------------------------------------------------------------------
# 5) Copy the file
#    copyFromLocalFile(delSrc, overwrite, src, dst)
#    Here delSrc=False and overwrite=False: the local file is kept, and the copy
#    fails if the destination already exists (pass True as the second argument to overwrite).
#    When the destination is a directory, the original file name is reused.
# ------------------------------------------------------------------------------
fs_dst.copyFromLocalFile(False, False, src_path, dst_dir)

print("Done! file:///opt/decom.sh -> oss://aysh-s-data/tmp1/")

