Setting up and using Tencent's Hunyuan3D generation model locally
Tencent has finally released a genuinely impressive AI: Hunyuan3D can generate a 3D model from text or from an image. Its basic pipeline, as the logs below show, is text → image → background removal → multi-view images → mesh (an image prompt simply skips the first step).
I. Without further ado, the detailed local deployment steps
1. Clone the repository
git clone https://github.com/Tencent/Hunyuan3D-1.git
cd Hunyuan3D-1
2. Download the two models. Note: it is recommended to create the conda environment first (see Part II below) and then install modelscope to download them.
2.1 For the first model, create a weights folder inside Hunyuan3D-1 and put all the downloaded files in it.
mkdir weights
Downloading via ModelScope is recommended here rather than git:
pip install modelscope
modelscope download --model AI-ModelScope/Hunyuan3D-1 --local_dir ./weights
2.2 For the second model, create a hunyuanDiT folder under Hunyuan3D-1\weights and put all the downloaded files in it.
mkdir ./weights/hunyuanDiT
Again, download via ModelScope rather than git (modelscope was installed in the previous step):
modelscope download --model AI-ModelScope/HunyuanDiT-v1.1-Diffusers-Distilled --local_dir ./weights/hunyuanDiT
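If you prefer to script the downloads instead of using the CLI, modelscope also exposes a Python API. A minimal sketch, assuming a recent modelscope release whose snapshot_download accepts local_dir (older versions only support cache_dir):

```python
# Sketch: fetch both checkpoints via the modelscope Python API.
# Assumes a modelscope version whose snapshot_download supports local_dir.
from modelscope import snapshot_download

snapshot_download('AI-ModelScope/Hunyuan3D-1',
                  local_dir='./weights')
snapshot_download('AI-ModelScope/HunyuanDiT-v1.1-Diffusers-Distilled',
                  local_dir='./weights/hunyuanDiT')
```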
II. Create the virtual environment
conda create -n hunyuan3d python=3.10
conda activate hunyuan3d
My CUDA version is 12.1, so I install torch and the related libraries with the following command:
pip install torch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 --index-url https://download.pytorch.org/whl/cu121
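Before going further, it is worth a quick sanity check that the CUDA build of torch actually installed (plain PyTorch calls, nothing Hunyuan3D-specific):

```python
# Verify the CUDA-enabled torch build.
import torch

print(torch.__version__)          # expect 2.2.0+cu121
print(torch.version.cuda)         # expect 12.1
print(torch.cuda.is_available())  # must print True, or generation will fail
```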
Go into the D:\Hunyuan3D-1 directory, edit requirements.txt as follows, and install these libraries:
diffusers==0.31.0
numpy==1.26.4
transformers==4.46.2
rembg==2.0.59
tqdm==4.67.0
omegaconf==2.3.0
matplotlib==3.9.2
opencv-python==4.10.0.84
imageio==2.36.0
jaxtyping==0.2.34
einops==0.8.0
sentencepiece==0.2.0
accelerate==1.1.1
trimesh==4.5.2
PyMCubes==0.1.6
xatlas==0.0.9
libigl==2.5.1
# pytorch3d==0.7.6
git+https://github.com/facebookresearch/pytorch3d@stable
# nvdiffrast==0.3.3
git+https://github.com/NVlabs/nvdiffrast
open3d==0.18.0
ninja==1.11.1.1
Then run:
pip install -r requirements.txt
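pytorch3d and nvdiffrast are built from source and are the installs most likely to fail on Windows, so a quick import check is worthwhile (standard module paths for both libraries):

```python
# Confirm the two source-built dependencies compiled correctly.
import pytorch3d
import nvdiffrast.torch as dr

print(pytorch3d.__version__)    # e.g. 0.7.x
print(dr.RasterizeCudaContext)  # importable => the CUDA extension built
```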
### Text-to-3D test
From the D:\Hunyuan3D-1 directory, generate with the following command (the prompt means "a red willow tree"):
python main.py --text_prompt "一颗红色的柳树" --save_folder ./outputs/liushu/ --max_faces_num 90000 --do_texture_mapping --do_render
Error 1: dust3r is missing.
cd third_party
git clone --recursive https://github.com/naver/dust3r.git
mkdir weights
Download this checkpoint and place it in third_party/weights:
https://download.europe.naverlabs.com/ComputerVision/DUSt3R/DUSt3R_ViTLarge_BaseDecoder_512_dpt.pth
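If downloading through a browser is inconvenient, the same checkpoint can be fetched with a few lines of standard-library Python, run from the repository root:

```python
# Fetch the DUSt3R checkpoint into third_party/weights (run from the repo root).
import os
import urllib.request

url = ("https://download.europe.naverlabs.com/ComputerVision/DUSt3R/"
       "DUSt3R_ViTLarge_BaseDecoder_512_dpt.pth")
dest = os.path.join("third_party", "weights", os.path.basename(url))
os.makedirs(os.path.dirname(dest), exist_ok=True)
urllib.request.urlretrieve(url, dest)
print("saved to", dest)
```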
Error 2: ModuleNotFoundError: No module named 'roma'
pip install roma
Run again; the following output appears:
(hunyuan3d) PS D:\Hunyuan3D-1> python main.py --text_prompt "一颗红色的柳树" --save_folder ./outputs/liushu/ --max_faces_num 90000 --do_texture_mapping --do_render
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:00<00:00, 15.47it/s]
image2views unet model {'Total': 2567463684, 'Trainable': 0}
None pretrained model for dinov2 encoder ...
DEFAULT_RENDERING_KWARGS
{'ray_start': 'auto', 'ray_end': 'auto', 'box_warp': 1.2, 'white_back': True, 'disparity_space_sampling': False, 'clamp_mode': 'softplus', 'sampler_bbox_min': -0.6, 'sampler_bbox_max': 0.6}
SVRMModel has 458.69 M params.
Load model successfully
=====> mv23d model init time: 3.292725086212158
view2mesh model {'Total': 458688965, 'Trainable': 0}
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:01<00:00, 1.22it/s]
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:06<00:00, 1.13it/s]
You have disabled the safety checker for <class 'diffusers.pipelines.pag.pipeline_pag_hunyuandit.HunyuanDiTPAGPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
text2image transformer model {'Total': 1516534048, 'Trainable': 0}
prompt is: 一颗红色的柳树
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:18<00:00, 1.34it/s]
[HunYuan3D]-[text to image], cost time: 20.4802s
[HunYuan3D]-[remove background], cost time: 0.4009s
100%|██████████████████████████████████████████████████████████████████████████████████| 50/50 [00:13<00:00, 3.57it/s]
[HunYuan3D]-[image to views], cost time: 19.2197s
./outputs/liushu/
=====> Triplane forward time: 228.2751338481903
reduce face: 181869 -> 90000
=====> generate mesh with vertex shading time: 3.9938809871673584
Using xatlas to perform UV unwrapping, may take a while ...
=====> generate mesh with texture shading time: 16.907540321350098
[HunYuan3D]-[views to mesh], cost time: 249.4986s
[HunYuan3D]-[gif render], cost time: 9.6563s
Total VRAM usage was about 23.5 GB and the run took roughly 197 s. The outputs (mesh.obj, the rendered GIF, etc.) are written to D:\Hunyuan3D-1\outputs\liushu.
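Since trimesh is already in requirements.txt, the generated mesh can also be inspected straight from Python. A minimal sketch (mesh.show() needs a viewer backend such as pyglet):

```python
# Load and inspect the generated mesh; trimesh comes from requirements.txt.
import trimesh

mesh = trimesh.load("./outputs/liushu/mesh.obj", force="mesh")
print(mesh.vertices.shape, mesh.faces.shape)  # face count capped by --max_faces_num
mesh.show()  # interactive viewer; requires a backend such as pyglet
```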
### Image-to-3D test
I used a real-scene 3D (reality-capture) photo, building.png, shown below:
From the D:\Hunyuan3D-1 directory, generate with the following command:
python main.py --image_prompt ./demos/building.png --save_folder ./outputs/test/ --max_faces_num 90000 --do_texture --do_render
The log shows:
(hunyuan3d) PS D:\Hunyuan3D-1> python main.py --image_prompt ./demos/building.png --save_folder ./outputs/test/ --max_faces_num 90000 --do_texture --do_render
Downloading data from 'https://github.com/danielgatis/rembg/releases/download/v0.0.0/u2net.onnx' to file 'C:\Users\13900K\.u2net\u2net.onnx'.
100%|###############################################| 176M/176M [00:00<?, ?B/s]
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:00<00:00, 9.86it/s]
image2views unet model {'Total': 2567463684, 'Trainable': 0}
None pretrained model for dinov2 encoder ...
DEFAULT_RENDERING_KWARGS
{'ray_start': 'auto', 'ray_end': 'auto', 'box_warp': 1.2, 'white_back': True, 'disparity_space_sampling': False, 'clamp_mode': 'softplus', 'sampler_bbox_min': -0.6, 'sampler_bbox_max': 0.6}
SVRMModel has 458.69 M params.
Load model successfully
=====> mv23d model init time: 4.093821048736572
view2mesh model {'Total': 458688965, 'Trainable': 0}
[HunYuan3D]-[remove background], cost time: 0.2740s
100%|██████████████████████████████████████████████████████████████████████████████████| 50/50 [00:13<00:00, 3.65it/s]
[HunYuan3D]-[image to views], cost time: 14.5492s
./outputs/test/
=====> Triplane forward time: 76.13665747642517
reduce face: 116886 -> 90000
D:\Hunyuan3D-1\svrm\ldm\models\svrm.py:198: RuntimeWarning: invalid value encountered in power
color = [color[0]**color_ratio, color[1]**color_ratio, color[2]**color_ratio]
=====> generate mesh with vertex shading time: 2.1429805755615234
Using xatlas to perform UV unwrapping, may take a while ...
=====> generate mesh with texture shading time: 38.4321174621582
[HunYuan3D]-[views to mesh], cost time: 116.7878s
[HunYuan3D]-[gif render], cost time: 7.3994s
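The RuntimeWarning midway through the log comes from svrm.py raising each color channel to a fractional power: if shading leaves a channel slightly negative, NumPy returns NaN for a negative base with a non-integer exponent. A minimal reproduction (the exponent 0.8 is illustrative, not the value the code uses):

```python
# Reproduce "RuntimeWarning: invalid value encountered in power".
import numpy as np

color = np.array([-0.01, 0.5, 0.9])  # one slightly negative channel
color_ratio = 0.8                    # illustrative fractional exponent
print(color ** color_ratio)          # -> [nan 0.574 0.919] plus the warning
```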
Total VRAM usage was about 23.4 GB and the run took roughly 135 s. The outputs are written to D:\Hunyuan3D-1\outputs\test.
Open mesh.obj to view the result. Pretty impressive: a 3D model of an entire building, generated automatically.
Other generated examples
Using Gradio
Two versions of the multi-view generation are provided: std and lite.
std mode
# std
python3 app.py
python3 app.py --save_memory
lite mode
python3 app.py --use_lite
python3 app.py --use_lite --save_memory
The demo can then be opened at http://0.0.0.0:8080. Note that 0.0.0.0 here should be replaced with your server's actual IP address (X.X.X.X).
References:
- https://huggingface.co/tencent/Hunyuan3D-1