I recently needed to compute the FLOPs and parameter count of an LLM, so I'm sharing my code here for anyone who wants a reference:
First, install thop:
pip install thop
Then load the model and run the following:
import torch
from thop import profile
from transformers import AutoTokenizer, AutoModelForCausalLM
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model_dir = "<your model path>"
# note: device_map="auto" is not a tokenizer argument, and mixing
# device_map="auto" with .to(device) can conflict once accelerate has
# dispatched the model, so placement is left to .to(device) here
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_dir, trust_remote_code=True).to(device)
message = [
    {"role": "system", "content": "You are a helpful assistant."},
    {
        "role": "user",
        "content": "Below is an instruction that describes a task, paired with an input that provides further context.\nWrite a response that appropriately completes the request.\n\n\n### Instruction:\nGenerate a sequence of motion tokens matching the following human motion description.Your output should be valid JSON object:\n{\n\"motion_sequences\": <list of motion sequence>\n}\n\n### Input:\n\nperson has arms extended to side of body shoulder height then moves both hands into centre and holds together\n\n### Response:"
    }
]
input_text = tokenizer.apply_chat_template(message, tokenize=False, add_generation_prompt=True)
print(input_text)
input_ids = tokenizer.encode(input_text, return_tensors="pt").to(device)
print(input_ids)
# use thop.profile to compute FLOPs and parameter count;
# eval mode and no_grad keep the measured forward pass clean
model.eval()
with torch.no_grad():
    flops, params = profile(model, inputs=(input_ids,))
print(f"FLOPs: {flops}")
print(f"Parameters: {params}")
That's my test with the chat_template applied.
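Finally, as a sanity check, you can compare thop's output against a direct parameter count and the usual decoder-only rule of thumb (forward-pass FLOPs ≈ 2 × parameters × tokens, ignoring the attention term that grows with sequence length). This is a minimal sketch that reuses the model and input_ids from above:

# direct parameter count; should roughly match thop's params
# (thop may skip layers it has no counter registered for)
n_params = sum(p.numel() for p in model.parameters())
print(f"Direct parameter count: {n_params}")

# rule-of-thumb forward FLOPs for a decoder-only transformer
n_tokens = input_ids.shape[1]
print(f"Rule-of-thumb forward FLOPs: {2 * n_params * n_tokens}")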