Testing an Ollama Server's Configuration and Status over the Network with curl and Python

Published: 2025-03-21

Once an Ollama server has been set up, besides testing it with the ollama command, you can also use curl and Python to verify over the network that Ollama is configured correctly and that remote clients can connect and interact with the server.

Contents

Starting the Ollama Server

Downloading a Model

Testing with curl

Checking Server Status

Listing Installed Models

Calling the deepseek-r1:1.5b Model

Checking Tool-Calling Support

API Testing

Generating Text

Getting the Model List

Showing Model Information

Checking Whether a Model Supports Tools or MCP


Starting the Ollama Server

The server is normally started with the command: ollama serve

That said, Ollama is usually configured to start at boot, so starting the service manually is rarely necessary.

Downloading a Model

If you haven't downloaded a model yet, pull one with:

ollama pull deepseek-r1:1.5b

Testing with curl

Checking Server Status

curl http://localhost:11434

Output: Ollama is running
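Since the point is testing over the network, the same check should also work from another machine. A minimal Python sketch; 192.168.1.100 is a placeholder for your server's actual address, and the server must listen on a non-loopback interface (e.g. started with the environment variable OLLAMA_HOST=0.0.0.0) to accept remote connections:

import requests

# Placeholder address -- replace with your Ollama host.
OLLAMA_URL = "http://192.168.1.100:11434"

try:
    r = requests.get(OLLAMA_URL, timeout=5)
    print(r.status_code, r.text)  # expect: 200 Ollama is running
except requests.exceptions.RequestException as e:
    print("Cannot reach the Ollama server:", e)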

Listing Installed Models

curl http://localhost:11434/api/tags

Output:

{"models":[{"name":"deepseek-r1:14b","model":"deepseek-r1:14b","modified_at":"2025-02-25T12:33:39.9452166+08:00","size":8988112040,"digest":"ea35dfe18182f635ee2b214ea30b7520fe1ada68da018f8b395b444b662d4f1a","details":{"parent_model":"","format":"gguf","family":"qwen2","families":["qwen2"],"parameter_size":"14.8B","quantization_level":"Q4_K_M"}},{"name":"test:latest","model":"test:latest","modified_at":"2025-02-25T09:54:20.5132579+08:00","size":91739413,"digest":"b0b2a46174385c0adcaa77ff245ffeced5fc4a61447b6f221b2beb5c5a760133","details":{"parent_model":"","format":"gguf","family":"llama","families":["llama"],"parameter_size":"134.52M","quantization_level":"Q4_0"}},{"name":"deepseek-r1:7b","model":"deepseek-r1:7b","modified_at":"2025-02-25T09:01:47.6178023+08:00","size":4683075271,"digest":"0a8c266910232fd3291e71e5ba1e058cc5af9d411192cf88b6d30e92b6e73163","details":{"parent_model":"","format":"gguf","family":"qwen2","families":["qwen2"],"parameter_size":"7.6B","quantization_level":"Q4_K_M"}},{"name":"smollm:135m","model":"smollm:135m","modified_at":"2025-02-20T15:48:18.8525987+08:00","size":91739413,"digest":"b0b2a46174385c0adcaa77ff245ffeced5fc4a61447b6f221b2beb5c5a760133","details":{"parent_model":"","format":"gguf","family":"llama","families":["llama"],"parameter_size":"134.52M","quantization_level":"Q4_0"}},{"name":"snowflake-arctic-embed2:latest","model":"snowflake-arctic-embed2:latest","modified_at":"2025-02-13T22:47:06.1934237+08:00","size":1160296718,"digest":"5de93a84837d0ff00da872e90830df5d973f616cbf1e5c198731ab19dd7b776b","details":{"parent_model":"","format":"gguf","family":"bert","families":["bert"],"parameter_size":"566.70M","quantization_level":"F16"}},{"name":"deepseek-r1:8b","model":"deepseek-r1:8b","modified_at":"2025-02-13T09:09:39.7652506+08:00","size":4920738407,"digest":"28f8fd6cdc677661426adab9338ce3c013d7e69a5bea9e704b364171a5d61a10","details":{"parent_model":"","format":"gguf","family":"llama","families":["llama"],"parameter_size":"8.0B","quantization_level":"Q4_K_M"}},{"name":"EntropyYue/chatglm3:latest","model":"EntropyYue/chatglm3:latest","modified_at":"2025-02-08T09:24:07.2296428+08:00","size":3583423760,"digest":"8f6f34227356b30a94c02b65ab33e2e13b8687a05e0f0ddfce4c75e36c965e25","details":{"parent_model":"","format":"gguf","family":"chatglm","families":["chatglm"],"parameter_size":"6.2B","quantization_level":"Q4_0"}},{"name":"nomic-embed-text:latest","model":"nomic-embed-text:latest","modified_at":"2025-02-07T19:57:28.4688464+08:00","size":274302450,"digest":"0a109f422b47e3a30ba2b10eca18548e944e8a23073ee3f3e947efcf3c45e59f","details":{"parent_model":"","format":"gguf","family":"nomic-bert","families":["nomic-bert"],"parameter_size":"137M","quantization_level":"F16"}},{"name":"llama3.2:latest","model":"llama3.2:latest","modified_at":"2025-02-07T19:55:39.7382597+08:00","size":2019393189,"digest":"a80c4f17acd55265feec403c7aef86be0c25983ab279d83f3bcd3abbcb5b8b72","details":{"parent_model":"","format":"gguf","family":"llama","families":["llama"],"parameter_size":"3.2B","quantization_level":"Q4_K_M"}},{"name":"deepseek-r1:1.5b","model":"deepseek-r1:1.5b","modified_at":"2025-02-02T21:38:01.4217684+08:00","size":1117322599,"digest":"a42b25d8c10a841bd24724309898ae851466696a7d7f3a0a408b895538ccbc96","details":{"parent_model":"","format":"gguf","family":"qwen2","families":["qwen2"],"parameter_size":"1.8B","quantization_level":"Q4_K_M"}}]}

You can see that the deepseek-r1 14b, 7b, and 1.5b models are all present.

Calling the deepseek-r1:1.5b Model

# Linux
curl -X POST http://localhost:11434/api/generate \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-r1:1.5b",
    "prompt": "hello",
    "stream": true
  }'
# Windows, single line, streaming output
curl -X POST http://localhost:11434/api/generate -H "Content-Type: application/json" -d "{ \"model\": \"deepseek-r1:1.5b\", \"prompt\": \"hello\", \"stream\": true }"

# Windows, multi-line, streaming output
curl -X POST http://localhost:11434/api/generate ^
  -H "Content-Type: application/json" ^
  -d "{\"model\": \"deepseek-r1:1.5b\", \"prompt\": \"hello\", \"stream\": true }"

# Windows, multi-line, non-streaming output
curl -X POST http://localhost:11434/api/generate ^
  -H "Content-Type: application/json" ^
  -d "{\"model\": \"deepseek-r1:1.5b\", \"prompt\": \"hello\", \"stream\": false}"

If you get a reply along the lines of "Hello! How can I assist you today?", the Ollama call is working correctly.
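With "stream": true, /api/generate returns one JSON object per line rather than a single document. A minimal sketch of consuming that stream in Python, using the same endpoint and fields as above:

import json
import requests

url = "http://localhost:11434/api/generate"
data = {"model": "deepseek-r1:1.5b", "prompt": "hello", "stream": True}

# stream=True makes requests yield the body incrementally; each non-empty
# line is a JSON object carrying a partial "response" field.
with requests.post(url, json=data, stream=True) as r:
    for line in r.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        print(chunk.get("response", ""), end="", flush=True)
        if chunk.get("done"):
            break
print()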

Checking Tool-Calling Support

On Windows:

curl http://localhost:11434/api/generate -d "{\"model\": \"deepseek-r1:1.5b\",\"prompt\": \"Answer in JSON, including the current temperature in Beijing\",\"format\": \"json\", \"stream\":false}"

If the response contains something like the following, everything is working:

"response":"{\"temperature\": \"10°C\"}"

If the model cannot produce structured JSON output, that confirms it does not support tool calling.
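The same check works cross-platform from Python, using the documented "format": "json" option of /api/generate:

import requests

url = "http://localhost:11434/api/generate"
data = {
    "model": "deepseek-r1:1.5b",
    "prompt": "Answer in JSON, including the current temperature in Beijing",
    "format": "json",  # ask Ollama to constrain the output to valid JSON
    "stream": False
}

response = requests.post(url, json=data)
print(response.json().get("response"))  # e.g. {"temperature": "10°C"}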

Of course, supporting tool calling and supporting MCP are two different things, and MCP is our current focus.

So far, the models we have found to support MCP are DeepSeek-v3 and llama3.2 3b (see the test results at the end).

API Testing

Common AI integration applications, such as Cherry Studio and 5ire, call large models programmatically through the API, so API calls also need to be tested; the examples below use Python.

Generating Text

import requests
import json

url_generate = "http://localhost:11434/api/generate"

data = {
    "model": "deepseek-r1:1.5b",  # 替换为你使用的模型名称
    "prompt": "hello",
    "stream": False
}

response = requests.post(url_generate, json=data)
response_dict = response.json()

print("生成的文本:", response_dict.get("response"))

Output: Hello! How can I assist you today?

In fact, once this test passes, the Ollama server is working properly and can be called through the local API.
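Integration apps usually talk to the chat endpoint rather than /api/generate, so it is worth testing /api/chat as well. A minimal sketch against Ollama's documented chat API:

import requests

url_chat = "http://localhost:11434/api/chat"

data = {
    "model": "deepseek-r1:1.5b",
    "messages": [{"role": "user", "content": "hello"}],
    "stream": False
}

response = requests.post(url_chat, json=data)
# The reply lives under message.content rather than a top-level "response" field.
print(response.json()["message"]["content"])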

Getting the Model List

import requests

url_list = "http://localhost:11434/api/tags"

response = requests.get(url_list)
response_dict = response.json()

print("可用模型列表:")
for model in response_dict.get("models", []):
    print(model["name"])

Output:

deepseek-r1:14b
test:latest
deepseek-r1:7b
smollm:135m
snowflake-arctic-embed2:latest
deepseek-r1:8b
EntropyYue/chatglm3:latest
nomic-embed-text:latest
llama3.2:latest
deepseek-r1:1.5b

In many AI applications, the model-configuration step does exactly this: after you set the base_url and key, the app calls http://localhost:11434/api/tags to fetch the model list and lets you pick the model to use.
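Many of these apps actually go through Ollama's OpenAI-compatible endpoints under /v1 (available in recent Ollama versions), which is why they ask for a key even though Ollama does not validate it. A quick sketch of checking that path; the key value here is an arbitrary placeholder:

import requests

# OpenAI-compatible model list; the API key is accepted but not checked.
url_v1_models = "http://localhost:11434/v1/models"
headers = {"Authorization": "Bearer ollama"}  # placeholder key

response = requests.get(url_v1_models, headers=headers)
for model in response.json().get("data", []):
    print(model["id"])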

Showing Model Information

import requests
import json

url_show_info = "http://localhost:11434/api/show"

data = {
    "name": "deepseek-r1:1.5b"  # 替换为你想要查询的模型名称
}

response = requests.post(url_show_info, json=data)
response_dict = response.json()

print("模型信息:", json.dumps(response_dict, indent=2, ensure_ascii=False))

Output:

Model info: {
  "license": "MIT License\n\nCopyright (c) 2023 DeepSeek\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n",
  "modelfile": "# Modelfile generated by \"ollama show\"\n# To build a new Modelfile based on this, replace FROM with:\n# FROM deepseek-r1:1.5b\n\nFROM E:\\ai\\models\\blobs\\sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc\nTEMPLATE \"\"\"{{- if .System }}{{ .System }}{{ end }}\n{{- range $i, $_ := .Messages }}\n{{- $last := eq (len (slice $.Messages $i)) 1}}\n{{- if eq .Role \"user\" }}<|User|>{{ .Content }}\n{{- else if eq .Role \"assistant\" }}<|Assistant|>{{ .Content }}{{- if not $last }}<|end▁of▁sentence|>{{- end }}\n{{- end }}\n{{- if and $last (ne .Role \"assistant\") }}<|Assistant|>{{- end }}\n{{- end }}\"\"\"\nPARAMETER stop <|begin▁of▁sentence|>\nPARAMETER stop <|end▁of▁sentence|>\nPARAMETER stop <|User|>\nPARAMETER stop <|Assistant|>\nLICENSE \"\"\"MIT License\n\nCopyright (c) 2023 DeepSeek\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation 

Checking Whether a Model Supports Tools or MCP

On Linux:

curl -X POST \
  http://localhost:11434/api/generate \
  -H 'Content-Type: application/json' \
  -d '{"model": "deepseek-r1:1.5b", "prompt": "计算 2 + 2", "stream": false, "tools": ["calculator"]}'

The same test from Python:

import requests
import json

url_generate = "http://localhost:11434/api/generate"

data = {
    "model": "deepseek-r1:1.5b",  # 替换为你使用的模型名称
    "prompt": "计算2+2",
    "stream": False, 
    "tools": ["calculator"]
}

response = requests.post(url_generate, json=data)
response_dict = response.json()

print("生成的文本:", response_dict.get("response"))

And one that fetches a temperature:

import requests
import json  # needed below for json.loads and json.JSONDecodeError

# MCP protocol test endpoint
url_generate = "http://localhost:11434/api/generate"

# Test prompt (explicitly asks for a JSON response that follows the MCP protocol)
test_prompt = '''Return the response to the following tool-call request according to the MCP protocol:
{
  "action": "call_tool",
  "tool_name": "get_weather",
  "params": {"city": "Beijing"}
}'''

data = {
    "model": "deepseek-r1:1.5b",  # replace with a model that actually supports MCP
    "prompt": test_prompt,
    "stream": False,
    "tools": ["get_weather"],  # the tool name must match tool_name in the request
    "response_format": {"type": "json"}
}

try:
    response = requests.post(url_generate, json=data)
    response.raise_for_status()  # check the HTTP status code
    response_dict = response.json()

    # Extract the structured response
    generated_response = response_dict.get("response", "")
    print("Generated JSON response:", generated_response)

    # Parse the JSON string into a dict (if the model returned a string)
    tool_response = json.loads(generated_response) if isinstance(generated_response, str) else generated_response

    # Assert the key fields of the MCP protocol
    assert tool_response.get("action") == "tool_response", "response action does not follow the MCP protocol"
    assert "status" in tool_response.get("tool_response", {}), "missing status field"
    assert tool_response["tool_response"]["status"] == "success", "tool call failed"

except requests.exceptions.RequestException as e:
    print("Request failed:", e)
except json.JSONDecodeError:
    print("Response is not valid JSON")
except AssertionError as e:
    print("Assertion error:", e)

 

In testing, some models' responses came back mixed with extraneous noise...

Both deepseek-v3 and the llama3.2 3b model support MCP, though a correct response is not guaranteed every time.