Reproducing LDM (with all errors resolved)
Download the project:
https://github.com/CompVis/latent-diffusion
Then set up the environment:
conda env create -f environment.yaml
conda activate ldm
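After activating the environment, a quick sanity check confirms that the key dependencies resolve. This is a minimal sketch; the package list below is an assumption based on what environment.yaml typically installs, so adjust it to your setup:

```python
import importlib.util

def missing_packages(names):
    """Return the subset of top-level packages that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Assumed core dependencies of latent-diffusion
deps = ["torch", "torchvision", "pytorch_lightning", "omegaconf", "transformers"]
print("missing:", missing_packages(deps))
```

An empty list means the environment is ready for the later steps.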
Download the pre-trained weights (official checkpoint):
mkdir -p models/ldm/text2img-large/
wget -O models/ldm/text2img-large/model.ckpt https://ommer-lab.com/files/latent-diffusion/nitro/txt2img-f8-large/model.ckpt
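The checkpoint is several GB, so it is worth verifying that the download completed before moving on. A small sketch that reports the file's size and SHA-256 (the official page does not publish an expected hash alongside the file, so this only tells you what you got):

```python
import hashlib
from pathlib import Path

def sha256_of(path, chunk=1 << 20):
    """Stream the file through SHA-256 so multi-GB checkpoints fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

ckpt = Path("models/ldm/text2img-large/model.ckpt")
if ckpt.exists():
    print(f"{ckpt}: {ckpt.stat().st_size / 2**30:.2f} GiB, sha256={sha256_of(ckpt)}")
else:
    print("checkpoint not found - re-run the wget command")
```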
# Alternative: download a checkpoint from ModelScope:
# pip install modelscope
# modelscope download --model AI-ModelScope/stable-diffusion-v1-5 v1-5-pruned-emaonly.ckpt --local_dir ./models/ldm/stable-diffusion-v1-5
# Symlink it into place:
# ln -s /root/netdisk/latent-diffusion-main/models/ldm/stable-diffusion-v1-5/v1-5-pruned-emaonly.ckpt models/ldm/stable-diffusion-v1/model.ckpt
# 1. Clone the taming-transformers repository
git clone https://github.com/CompVis/taming-transformers.git
cd taming-transformers
# 2. Install the taming module
pip install .
# 3. Return to the project root
cd /root/netdisk/latent-diffusion-main
To import the taming package directly, copy it into the environment's site-packages directory:
cp -r /root/netdisk/latent-diffusion-main/taming-transformers /root/.pyenv/versions/3.8.0/lib/python3.8/site-packages
python -c "import taming; print(taming.__file__)"
Manually download bert-base-uncased: https://huggingface.co/google-bert/bert-base-uncased/tree/main
Modify the tokenizer-loading code (in ldm/modules/encoders/modules.py):
from transformers import BertTokenizerFast  # TODO: add to requirements
# load the tokenizer from the local path
self.tokenizer = BertTokenizerFast.from_pretrained("./bert-base-uncased")
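Before pointing from_pretrained at the local folder, it helps to confirm the required tokenizer files are actually there. A small check; the file list is an assumption based on the standard bert-base-uncased layout on Hugging Face:

```python
from pathlib import Path

# Files the fast BERT tokenizer needs from the bert-base-uncased repo (assumed layout)
REQUIRED = ["vocab.txt", "tokenizer_config.json", "config.json"]

def missing_files(model_dir):
    """Return required tokenizer files that are absent from model_dir."""
    d = Path(model_dir)
    return [name for name in REQUIRED if not (d / name).is_file()]

print("missing from ./bert-base-uncased:", missing_files("./bert-base-uncased"))
```

If anything is reported missing, re-download it from the Hugging Face page above before running the sampling script.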
Run sampling:
python scripts/txt2img.py --prompt "a virus monster is playing guitar, oil on canvas" --ddim_eta 0.0 --n_samples 4 --n_iter 4 --scale 5.0 --ddim_steps 50
python scripts/txt2img.py --prompt "Handsome man and beautiful woman walking in the rain, oil on canvas" --ddim_eta 0.0 --n_samples 4 --n_iter 4 --scale 5.0 --ddim_steps 50
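With --n_samples 4 and --n_iter 4, each run produces 4 × 4 = 16 individual samples, plus any grid output the script saves. A small helper to count the results afterwards (assuming the script's default output directory is outputs/txt2img-samples; adjust the path if yours differs):

```python
from pathlib import Path

def count_pngs(outdir):
    """Count PNG files recursively under the sampling output directory."""
    d = Path(outdir)
    return sum(1 for _ in d.rglob("*.png")) if d.is_dir() else 0

print("generated images:", count_pngs("outputs/txt2img-samples"))
```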
Results: