Deploying index-tts locally and packaging it as a Docker image

Published: 2025-07-05

Project repository: https://github.com/index-tts/index-tts

Local deployment steps:

git clone https://github.com/index-tts/index-tts.git

-- Create a virtual environment
conda create -n index-tts python=3.10
conda activate index-tts


conda install -c conda-forge ffmpeg
conda install -c conda-forge pynini==2.1.6

pip install WeTextProcessing --no-deps

pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu118
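Once that finishes, a quick one-liner (run inside the activated env) confirms that the cu118 build of torch is active and can see the GPU; the exact version string will vary with the current release:

```shell
# Prints the torch version and whether CUDA is available, e.g. "2.x.x+cu118 True"
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```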

pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple 

-- ModelScope
pip install modelscope  -i https://pypi.tuna.tsinghua.edu.cn/simple   

 -- Download the model weights
modelscope download --model IndexTeam/IndexTTS-1.5 --local_dir ./checkpoints
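A quick sanity check that the download completed; the file names below are the ones the web UI loads at startup (see the run log later in this post), so adjust the list if the release layout changes:

```shell
# Verify the checkpoint files referenced at startup are present
for f in gpt.pth bigvgan_generator.pth bpe.model; do
  [ -f "./checkpoints/$f" ] && echo "ok: $f" || echo "missing: $f"
done
```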

-- Launch

python webui.py --host=0.0.0.0 --port=9999 
 

 Startup result: (screenshot omitted)

 

##################### Building the Docker image #####################

1 Pick the CUDA base image

Browse the available tags at: https://hub.docker.com/r/nvidia/cuda/tags?name=11.8

2 Check the local CUDA environment

  • If nvidia-smi errors out, install the NVIDIA driver first

  • # Check whether the NVIDIA Container Toolkit is installed; if not, install it
    rpm -qa | grep nvidia-container
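With the driver and the toolkit in place, a one-off container run verifies that Docker can actually reach the GPU before you invest in the long image build (the 11.8.0 base tag matches the Dockerfile below):

```shell
# Should print the same nvidia-smi table as on the host
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
```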

 # Write the Dockerfile and place it in the index-tts root directory

# Base image

FROM nvidia/cuda:11.8.0-base-ubuntu22.04

# Set the working directory (roughly mkdir + cd)

WORKDIR /index-tts

# Copy the local contents in; the model files downloaded during local deployment are copied along with everything else

COPY . /index-tts

# Basic Linux packages

RUN apt-get update 
RUN apt-get install -y --no-install-recommends wget curl libgl1 libsm6 
RUN apt-get clean 
RUN rm -rf /var/lib/apt/lists/*

ENV CONDA_ROOT /opt/conda
ENV PATH $CONDA_ROOT/bin:$PATH

# Install Miniconda

RUN wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O /tmp/miniconda.sh && bash /tmp/miniconda.sh -b -p $CONDA_ROOT && rm /tmp/miniconda.sh

# Create the virtual environment

RUN $CONDA_ROOT/bin/conda create -n index-tts python=3.10 -y 
RUN $CONDA_ROOT/bin/conda install -n index-tts -c conda-forge ffmpeg pynini=2.1.6 -y

# Configure the pip mirror

RUN $CONDA_ROOT/envs/index-tts/bin/pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple

# CUDA-enabled PyTorch (cu118 wheels)

RUN $CONDA_ROOT/envs/index-tts/bin/pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu118

# Project dependencies

RUN $CONDA_ROOT/envs/index-tts/bin/pip install WeTextProcessing --no-deps

RUN $CONDA_ROOT/envs/index-tts/bin/pip install -r /index-tts/requirements.txt 

# Expose the port

EXPOSE 9999

# Must listen on 0.0.0.0 because of the container network

CMD ["/bin/bash", "-c", "source $CONDA_ROOT/bin/activate index-tts && python webui.py --host=0.0.0.0 --port=9999"]

 

### Build log. Downloading the CUDA wheels takes a long time; consider downloading them on the host first and copying them into the container

(base) [root@lsp-home-centos7 index-tts]# docker build -t lspindextts .

[+] Building 3775.5s (19/19) FINISHED docker:default

=> [internal] load build definition from Dockerfile 0.0s

=> => transferring dockerfile: 1.20kB 0.0s

=> [internal] load metadata for docker.io/nvidia/cuda:11.8.0-base-ubuntu22.04 32.1s

=> [internal] load .dockerignore 0.0s

=> => transferring context: 65B 0.0s

=> [internal] load build context 0.0s

=> => transferring context: 4.61kB 0.0s

=> [ 1/14] FROM docker.io/nvidia/cuda:11.8.0-base-ubuntu22.04@sha256:f895871972c1c91eb6a896eee68468f40289395a1e58c492e1be7929d0f8703b 0.0s

=> CACHED [ 2/14] WORKDIR /index-tts 0.0s

=> CACHED [ 3/14] RUN apt-get update 0.0s

=> CACHED [ 4/14] RUN apt-get install -y --no-install-recommends wget curl libgl1 libsm6 0.0s

=> CACHED [ 5/14] RUN apt-get clean 0.0s

=> CACHED [ 6/14] RUN rm -rf /var/lib/apt/lists/* 0.0s

=> CACHED [ 7/14] RUN wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O /tmp/miniconda.sh && bash /tmp/miniconda.sh -b -p /opt/conda && rm /tmp/miniconda.sh 0.0s

=> CACHED [ 8/14] RUN /opt/conda/bin/conda create -n index-tts python=3.10 -y 0.0s

=> CACHED [ 9/14] RUN /opt/conda/bin/conda install -n index-tts -c conda-forge ffmpeg pynini=2.1.6 -y 0.0s

=> [10/14] RUN /opt/conda/envs/index-tts/bin/pip install WeTextProcessing --no-deps -i https://pypi.tuna.tsinghua.edu.cn/simple 3.0s

=> [11/14] RUN /opt/conda/envs/index-tts/bin/pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple 0.6s

=> [12/14] RUN /opt/conda/envs/index-tts/bin/pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu118 3416.4s

=> [13/14] COPY . /index-tts 64.6s

=> [14/14] RUN /opt/conda/envs/index-tts/bin/pip install -r /index-tts/requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple 143.9s

=> exporting to image 114.6s

=> => exporting layers 114.5s

=> => writing image sha256:d27bc73d51d8b4721b60c6bc5c58cc23b2fd226279bc33477317fcb81721d44c 0.0s

=> => naming to docker.io/library/lspindextts 0.0s
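Step 12/14 above (the torch download) dominates the build at roughly 57 minutes. To avoid re-paying that whenever the layer cache is invalidated, the wheels can be downloaded once on the host and installed offline. A sketch, where the wheels/ directory name is arbitrary; note that pip download must run under a matching interpreter (Python 3.10, linux x86_64), or the downloaded wheels will not install inside the image:

```dockerfile
# On the host, next to the Dockerfile, download the wheels once:
#   pip download torch torchaudio --index-url https://download.pytorch.org/whl/cu118 -d ./wheels
# Then replace the torch install step with an offline install:
COPY wheels /tmp/wheels
RUN $CONDA_ROOT/envs/index-tts/bin/pip install --no-index --find-links=/tmp/wheels torch torchaudio \
    && rm -rf /tmp/wheels
```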

(base) [root@lsp-home-centos7 index-tts]# docker images

REPOSITORY TAG IMAGE ID CREATED SIZE

lspindextts latest d27bc73d51d8 6 hours ago 18GB

portainer/portainer-ce latest 2a17f0992b45 7 weeks ago 268MB

milvusdb/milvus v2.5.7 9a1923427d52 3 months ago 1.72GB

apache/kafka latest 12b98f0f2c1f 3 months ago 425MB

zookeeper latest 3111c3ba944e 8 months ago 313MB

spoonest/clickhouse-tabix-web-client latest e872c1a905d9 7 years ago 245MB

(base) [root@lsp-home-centos7 index-tts]# docker run -it --name lsptts --gpus all -p 9999:9999 lspindextts

>> GPT weights restored from: checkpoints/gpt.pth

>> DeepSpeed failed to load, falling back to standard inference: No module named 'deepspeed'

See more details Installation Details - DeepSpeed

>> Failed to load custom CUDA kernel for BigVGAN. Falling back to torch. Ninja is required to load C++ extensions

Reinstall with `pip install -e . --no-deps --no-build-isolation` to prebuild `anti_alias_activation_cuda` kernel.

See more details: https://github.com/index-tts/index-tts/issues/164#issuecomment-2903453206

Removing weight norm...

>> bigvgan weights restored from: checkpoints/bigvgan_generator.pth

2025-07-01 23:29:17,382 WETEXT INFO building fst for zh_normalizer ...

2025-07-01 23:29:52,757 WETEXT INFO done

2025-07-01 23:29:52,758 WETEXT INFO fst path: /index-tts/indextts/utils/tagger_cache/zh_tn_tagger.fst

2025-07-01 23:29:52,758 WETEXT INFO /index-tts/indextts/utils/tagger_cache/zh_tn_verbalizer.fst

2025-07-01 23:29:52,766 WETEXT INFO found existing fst: /opt/conda/envs/index-tts/lib/python3.10/site-packages/tn/en_tn_tagger.fst

2025-07-01 23:29:52,766 WETEXT INFO /opt/conda/envs/index-tts/lib/python3.10/site-packages/tn/en_tn_verbalizer.fst

2025-07-01 23:29:52,766 WETEXT INFO skip building fst for en_normalizer ...

>> TextNormalizer loaded

>> bpe model loaded from: checkpoints/bpe.model

* Running on local URL: http://0.0.0.0:9999

* To create a public link, set `share=True` in `launch()`.
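Neither warning above is fatal: DeepSpeed and the custom BigVGAN kernel are optional accelerations. To enable them inside the image you could extend the Dockerfile; the deepspeed install is an assumption on my part (version compatibility with the cu118 torch build is not verified here), while the `pip install -e . --no-deps --no-build-isolation` line comes straight from the warning in the log:

```dockerfile
# Optional: DeepSpeed inference + prebuilt BigVGAN CUDA kernel (needs ninja)
RUN $CONDA_ROOT/envs/index-tts/bin/pip install ninja deepspeed
RUN $CONDA_ROOT/envs/index-tts/bin/pip install -e . --no-deps --no-build-isolation
```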

## The image can be pushed to your own Docker registry; other environments can then pull and use it with docker pull!
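A minimal push sequence, where registry.example.com/myteam is a placeholder for your own registry address and namespace:

```shell
# Tag the local image for your registry, then push it
REGISTRY=registry.example.com/myteam   # placeholder -- replace with your own
docker tag lspindextts "$REGISTRY/lspindextts:v1.5"
docker push "$REGISTRY/lspindextts:v1.5"

# On any other machine with access to the registry:
# docker pull registry.example.com/myteam/lspindextts:v1.5
```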

