Introduction to the DeepSeek Model
DeepSeek-V3 is a self-developed MoE model with 671B parameters (37B activated per token), pretrained on 14.8T tokens. It outperforms other open-source models such as Qwen2.5-72B and Llama-3.1-405B on many benchmarks, and is on par with leading closed-source models such as GPT-4o and Claude-3.5-Sonnet.
DeepSeek is also a mainland-China model, so its API is reachable without a VPN. The following shows how to call the DeepSeek-V3 API from Unity.
First, register an account on the official DeepSeek platform, log in, and apply for an API Key.
Following the official documentation, wrap the POST request as follows:
/// <summary>
/// Sends a POST request to the DeepSeek chat completions endpoint.
/// </summary>
/// <typeparam name="T">Type to deserialize the JSON response into.</typeparam>
/// <param name="token">Authorization header value, e.g. "Bearer &lt;API Key&gt;".</param>
/// <param name="onOver">Callback invoked with the deserialized response, or default(T) on failure.</param>
/// <param name="data">Object serialized to JSON as the request body.</param>
/// <param name="urlTail">Optional suffix appended to the endpoint URL.</param>
/// <returns>Coroutine enumerator.</returns>
public static IEnumerator PostRequest<T>(string token, Action<T> onOver, object data, string urlTail = null)
{
    using (UnityWebRequest request = new UnityWebRequest("https://api.deepseek.com/chat/completions" + urlTail))
    {
        request.method = UnityWebRequest.kHttpVerbPOST;
        // Serialize the request body to JSON
        string jsonStr = JsonConvert.SerializeObject(data);
        byte[] bodyRaw = Encoding.UTF8.GetBytes(jsonStr);
        request.uploadHandler = new UploadHandlerRaw(bodyRaw);
        request.SetRequestHeader("Content-Type", "application/json");
        request.downloadHandler = new DownloadHandlerBuffer();
        if (!string.IsNullOrEmpty(token))
        {
            // Add the Authorization header
            request.SetRequestHeader("Authorization", token);
        }
        // Send the request and wait for the response
        yield return request.SendWebRequest();
        // Handle the response
        if (request.result == UnityWebRequest.Result.Success)
        {
            Debug.Log("POST request (" + request.url + ") succeeded, response: " + request.downloadHandler.text);
            // Process the API response here
            onOver.Invoke(JsonConvert.DeserializeObject<T>(request.downloadHandler.text));
        }
        else
        {
            Debug.LogError("POST request (" + request.url + ") failed, response: " + request.downloadHandler.text);
            // Handle the request failure here
            onOver.Invoke(default(T));
        }
    }
}
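To check what the coroutine above actually puts on the wire, the same request setup can be sketched in Python. This is a minimal sketch, not the Unity code: `build_request` is a hypothetical helper, and the request is only constructed, never sent.

```python
import json
import urllib.request

def build_request(token: str, data: dict, url_tail: str = "") -> urllib.request.Request:
    """Mirror the Unity PostRequest setup: JSON body, Content-Type, optional Bearer auth."""
    url = "https://api.deepseek.com/chat/completions" + url_tail
    body = json.dumps(data).encode("utf-8")
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    if token:  # only attach the header when a token was supplied
        req.add_header("Authorization", token)
    return req

# Build (but do not send) a request with a placeholder key.
req = build_request("Bearer sk-placeholder", {"model": "deepseek-chat"})
```

Sending it would require a real key; here the object is only useful for inspecting the method, URL, headers, and body.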
Calling the Method
Pass token as "Bearer <DeepSeek API Key>", where the DeepSeek API Key is the key obtained above. The JSON body class is composed as follows:
JSON Property Breakdown
1. Set model = "deepseek-chat" to call DeepSeek-V3.
2. Set stream to true to enable streaming output. messages carries the conversation text.
The JSON class is defined as follows:
public class DeepSeekJson
{
    public string model { get; set; }
    public List<DeepSeekItem> messages { get; set; }
    public bool stream { get; set; }

    public class DeepSeekItem
    {
        public string role { get; set; }
        public string content { get; set; }
    }
}
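For reference, serializing a DeepSeekJson instance should produce a body like the one below. This is a Python sketch of the expected payload shape (the example message text is made up), built and round-tripped with the standard json module:

```python
import json

# The same shape DeepSeekJson serializes to: model, messages, stream.
payload = {
    "model": "deepseek-chat",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "stream": False,
}
body = json.dumps(payload)  # this string becomes the POST body
```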
Python sample code for JSON Output:
import json
from openai import OpenAI

client = OpenAI(
    api_key="<your api key>",
    base_url="https://api.deepseek.com",
)

system_prompt = """
The user will provide some exam text. Please parse the "question" and "answer" and output them in JSON format.

EXAMPLE INPUT:
Which is the highest mountain in the world? Mount Everest.

EXAMPLE JSON OUTPUT:
{
    "question": "Which is the highest mountain in the world?",
    "answer": "Mount Everest"
}
"""

user_prompt = "Which is the longest river in the world? The Nile River."

messages = [{"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt}]

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=messages,
    response_format={
        'type': 'json_object'
    }
)

print(json.loads(response.choices[0].message.content))
Notes
Set the response_format parameter to {'type': 'json_object'}. The system or user prompt must contain the word "json" and include an example of the desired JSON output, to guide the model toward emitting valid JSON. Also set max_tokens appropriately to keep the JSON string from being truncated mid-output.
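Since a too-small max_tokens can cut the JSON off mid-string, it is worth guarding the parse. A minimal sketch (the helper name parse_json_reply is made up for illustration): a truncated reply fails json.loads, so the helper returns None instead of raising.

```python
import json

def parse_json_reply(content):
    """Return the parsed JSON object, or None if the string is invalid/truncated."""
    try:
        return json.loads(content)
    except json.JSONDecodeError:
        return None

ok = parse_json_reply('{"question": "Q?", "answer": "A"}')   # complete reply
bad = parse_json_reply('{"question": "Q?", "ans')            # truncated reply
```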
Python sample code for multi-round conversation via context concatenation:
from openai import OpenAI

client = OpenAI(api_key="<DeepSeek API Key>", base_url="https://api.deepseek.com")

# Round 1
messages = [{"role": "user", "content": "What's the highest mountain in the world?"}]
response = client.chat.completions.create(
    model="deepseek-chat",
    messages=messages
)
messages.append(response.choices[0].message)
print(f"Messages Round 1: {messages}")

# Round 2
messages.append({"role": "user", "content": "What is the second?"})
response = client.chat.completions.create(
    model="deepseek-chat",
    messages=messages
)
messages.append(response.choices[0].message)
print(f"Messages Round 2: {messages}")
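The context-concatenation pattern above can be sketched without calling the API at all: each round appends the model's reply and then the next user turn, so the message list grows by two entries per round. The assistant replies below are placeholder strings standing in for response.choices[0].message.

```python
# Round 1: user question, then the (placeholder) assistant reply.
messages = [{"role": "user", "content": "What's the highest mountain in the world?"}]
messages.append({"role": "assistant", "content": "Mount Everest."})

# Round 2: follow-up question, then the (placeholder) assistant reply.
messages.append({"role": "user", "content": "What is the second?"})
messages.append({"role": "assistant", "content": "K2."})
```

Because the full list is resent each round, the model sees the prior Q&A and can resolve "the second" against it.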