LLaMA-Factory Installation Tutorial (fixing the error "cannot allocate memory in static TLS block")


Step 1: Pull the base image

# Configure the Docker registry mirror and the insecure registry
vi /etc/docker/daemon.json

# Contents of daemon.json

{
  "insecure-registries": ["https://swr.cn-east-317.qdrgznjszx.com"],
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}

systemctl restart docker.service 
docker pull swr.cn-east-317.qdrgznjszx.com/donggang/llama-factory-ascend910b:cann8-py310-torch2.2.0-ubuntu18.04 
mkdir /root/llama_factory_model
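After restarting the daemon, you can optionally confirm that the mirror settings were picked up. A minimal check, just grepping the relevant sections of docker info:

# Both the insecure registry and the registry mirror should appear in the output
docker info | grep -i -A 3 "registr"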

Step 2: Create the base container

docker create -it -u root --ipc=host --net=host --name=llama-factory -e LANG="C.UTF-8" \
        --device=/dev/davinci0 \
        --device=/dev/davinci1 \
        --device=/dev/davinci2 \
        --device=/dev/davinci3 \
        --device=/dev/davinci4 \
        --device=/dev/davinci5 \
        --device=/dev/davinci6 \
        --device=/dev/davinci7 \
        --device=/dev/davinci_manager \
        --device=/dev/devmm_svm \
        --device=/dev/hisi_hdc \
        -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
        -v /usr/local/Ascend/add-ons/:/usr/local/Ascend/add-ons/ \
        -v /usr/local/sbin/npu-smi:/usr/local/sbin/npu-smi \
        -v /mnt/:/mnt/ \
        -v /root/llama_factory_model:/root/llama_factory_model \
        -v /var/log/npu:/usr/slog \
        swr.cn-east-317.qdrgznjszx.com/donggang/llama-factory-ascend910b:cann8-py310-torch2.2.0-ubuntu18.04 \
        /bin/bash
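At this point the container is only created, not yet started. A quick check to confirm it exists before moving on:

# The container should be listed with status "Created"
docker ps -a --filter name=llama-factory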

Step 3: Install LLaMA-Factory

docker start llama-factory
docker exec -it llama-factory bash

# Download and unpack the LLaMA-Factory source
wget https://codeload.github.com/hiyouga/LLaMA-Factory/zip/refs/heads/main -O LLaMA-Factory.zip
unzip LLaMA-Factory.zip
mv LLaMA-Factory-main LLaMA-Factory

cd LLaMA-Factory
pip install -e ".[torch-npu,metrics]" 
apt install libsndfile1

# Activate the Ascend environment variables (recommended: also add this line to ~/.bashrc)
source /usr/local/Ascend/ascend-toolkit/set_env.sh

# Verify the LLaMA-Factory × Ascend installation with the following command
llamafactory-cli env
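Besides llamafactory-cli env, you can check directly that the NPU backend is visible to PyTorch. A small sketch, assuming the torch_npu adapter is shipped in this image as its tag suggests:

# True means torch_npu can see at least one Ascend device
python -c "import torch, torch_npu; print(torch.npu.is_available())"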

# Run the LLaMA-Factory web UI (served on port 7860 of this machine)
nohup llamafactory-cli webui > llama_factory_output.log 2>&1 &
# Follow the LLaMA-Factory runtime log
tail -f /home/HwHiAiUser/LLaMA-Factory/llama_factory_output.log
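Before opening a browser, you can confirm the web UI is actually listening; a minimal check against the default port 7860:

# Expect 200 once the Gradio server has finished starting
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:7860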

Fixing the error

Problem description

RuntimeError: Failed to import transformers.generation.utils because of the following error (look up to see its traceback):

/usr/local/python3.10.13/lib/python3.10/site-packages/sklearn/utils/../../scikit_learn.libs/libgomp-d22c30c5.so.1.0.0: cannot allocate memory in static TLS block

Solution

The error occurs because libgomp (bundled with scikit-learn) is loaded via dlopen after the process has started, when the static TLS area it needs may already be exhausted. Preloading the library with LD_PRELOAD makes the dynamic loader map it at startup, so its TLS slot is reserved in the initial static TLS block.

vim ~/.bashrc

# Add at the end of the file
export LD_PRELOAD=/usr/local/python3.10.13/lib/python3.10/site-packages/sklearn/utils/../../scikit_learn.libs/libgomp-d22c30c5.so.1.0.0

source ~/.bashrc
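After re-sourcing ~/.bashrc, a quick way to confirm the fix (assuming transformers was the import that originally failed) is to re-import the offending module:

# Should print the transformers version instead of raising the TLS error
python -c "import transformers.generation.utils, transformers; print(transformers.__version__)"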

