Hands-On with a Local, Private LLM Using LangChain + Llama 2

LangChain Features

As a development framework for large language models (LLMs), LangChain is a key building block of LLM application architecture. With LangChain, we can create a wide range of applications, including chatbots and intelligent question-answering tools.

Models: LangChain's interfaces and invocation details for the major large language models, along with output-parsing mechanisms.

Prompts: prompt templates that draw out the full capabilities of a large language model.

Retrieval: build your own knowledge base for Retrieval-Augmented Generation (RAG), covering document loading, text splitting, embedding into vectors, vector storage, and knowledge retrieval (see the sketch after this list).

Vector databases: store the embedded corpus for similarity search.
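
To make the retrieval pipeline concrete, here is a minimal local RAG sketch. It is an illustration under stated assumptions rather than part of the original tutorial: it assumes faiss-cpu is installed, reuses the local GGUF model for embeddings via llama.cpp, and knowledge.txt is a placeholder file name.

from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.embeddings import LlamaCppEmbeddings
from langchain_community.vectorstores import FAISS

# 1. Document loading ("knowledge.txt" is a placeholder)
docs = TextLoader("knowledge.txt", encoding="utf-8").load()
# 2. Text splitting
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(docs)
# 3. Embedding into vectors, reusing the local GGUF model
embeddings = LlamaCppEmbeddings(model_path="llama-2-7b-chat.Q4_K_M.gguf")
# 4. Vector storage (in-memory FAISS index; requires `pip install faiss-cpu`)
store = FAISS.from_documents(chunks, embeddings)
# 5. Knowledge retrieval: fetch the 3 chunks most similar to the query
hits = store.similarity_search("your question", k=3)
for doc in hits:
    print(doc.page_content)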

Components

Architecture | 🦜️🔗 LangChain framework documentation

Quick Start

Install Dependencies

pip install langchain
pip install -qU langchain-openai
pip install "langserve[all]"
pip install -U langchain-community
pip install llama-cpp-python
python
Python 3.10.0 | packaged by conda-forge | (default, Nov 10 2021, 13:20:59) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
pip list
Package                  Version
------------------------ -----------
aiohappyeyeballs         2.5.0
aiohttp                  3.11.13
aiosignal                1.3.2
annotated-types          0.7.0
anyio                    4.8.0
async-timeout            4.0.3
attrs                    25.1.0
build                    1.2.2.post1
certifi                  2025.1.31
charset-normalizer       3.4.1
cmake                    3.31.6
colorama                 0.4.6
dataclasses-json         0.6.7
diskcache                5.6.3
dpcpp-cpp-rt             2024.0.2
exceptiongroup           1.2.2
frozenlist               1.5.0
greenlet                 3.1.1
h11                      0.14.0
httpcore                 1.0.7
httpx                    0.28.1
httpx-sse                0.4.0
idna                     3.10
importlib_metadata       8.6.1
intel-cmplr-lib-rt       2024.0.2
intel-cmplr-lic-rt       2024.0.2
intel-opencl-rt          2024.0.2
intel-openmp             2024.0.2
jsonpatch                1.33
jsonpointer              3.0.0
langchain                0.3.20
langchain-community      0.3.19
langchain-core           0.3.41
langchain-text-splitters 0.3.6
langsmith                0.3.12
llama_cpp_python         0.2.23
marshmallow              3.26.1
mkl                      2024.0.0
mkl-dpcpp                2024.0.0
multidict                6.1.0
mypy-extensions          1.0.0
numpy                    2.2.3
onednn                   2024.0.0
onemkl-sycl-blas         2024.0.0
onemkl-sycl-datafitting  2024.0.0
onemkl-sycl-dft          2024.0.0
onemkl-sycl-lapack       2024.0.0
onemkl-sycl-rng          2024.0.0
onemkl-sycl-sparse       2024.0.0
onemkl-sycl-stats        2024.0.0
onemkl-sycl-vm           2024.0.0
orjson                   3.10.15
packaging                24.2
pip                      25.0
propcache                0.3.0
pydantic                 2.10.6
pydantic_core            2.27.2
pydantic-settings        2.8.1
pyproject_hooks          1.2.0
python-dotenv            1.0.1
PyYAML                   6.0.2
requests                 2.32.3
requests-toolbelt        1.0.0
setuptools               75.8.2
sniffio                  1.3.1
SQLAlchemy               2.0.38
tbb                      2021.13.1
tenacity                 9.0.0
tomli                    2.2.1
typing_extensions        4.12.2
typing-inspect           0.9.0
urllib3                  2.3.0
wheel                    0.45.1
yarl                     1.18.3
zipp                     3.21.0
zstandard                0.23.0
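
A quick import check confirms the key packages are available (a minimal sketch; the version numbers in the comments come from the listing above and may differ on your machine):

# Verify that LangChain and the llama.cpp bindings import cleanly.
import langchain
import langchain_community
import llama_cpp

print(langchain.__version__)            # e.g. 0.3.20
print(langchain_community.__version__)  # e.g. 0.3.19
print(llama_cpp.__version__)            # e.g. 0.2.23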

Inference Example

Complete code:

from langchain_community.llms import LlamaCpp
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.callbacks import CallbackManager, StreamingStdOutCallbackHandler

# 1. Define a Chinese prompt template, using Llama 2's chat format:
#    <<SYS>>...<</SYS>> wraps the system prompt, [INST]...[/INST] wraps the user turn.
#    The system prompt reads: "You are a smart assistant; answer users' questions
#    concisely and conversationally. If a question is unclear, ask for details."
template_zh = """[INST] <<SYS>>
你是一个智能助手,需用简洁且口语化的方式回答用户问题。若问题不明确,请主动询问细节。
<</SYS>>

{question} [/INST]"""

prompt = PromptTemplate(template=template_zh, input_variables=["question"])
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])  # stream tokens to stdout

# 2. Load the local model
# Configuration
n_gpu_layers = 40  # number of layers to offload to the GPU; adjust to your model and VRAM
n_batch = 512  # should be between 1 and n_ctx; take your GPU's VRAM into account

llm = LlamaCpp(
    model_path="llama-2-7b-chat.Q4_K_M.gguf",
    n_gpu_layers=n_gpu_layers,
    n_batch=n_batch,
    callback_manager=callback_manager,
    verbose=True,
)

# 3. Build the chain (LCEL: prompt -> model -> string output parser)
chain = prompt | llm | StrOutputParser()

# 4. Invocation example; the question asks: "How do I implement quicksort in Python?"
question = "如何用Python实现快速排序?"
response = chain.invoke({"question": question})
print(f"\n回答:{response}")  # "回答" means "Answer"

Output

回答:  Hey there! 😊
To implement quicksort in Python, you can use the following code:
```python
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    else:
        pivot = arr[0]
        less = [x for x in arr[1:] if x < pivot]
        greater = [x for x in arr[1:] if x >= pivot]
        return quicksort(less), pivot, quicksort(greater)
```
This is a basic implementation of the quicksort algorithm. The function takes an array as input and returns three values: the sorted list (or lists), the pivot element, and the results of recursively calling the quicksort function on the greater and less than elements.
Please let me know if you have any questions! 😃
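
Note that the model's answer contains a bug: `return quicksort(less), pivot, quicksort(greater)` returns a nested tuple rather than a flat sorted list, and its prose description of the return value reflects the same mistake. A corrected version of the same partition-based approach:

def quicksort(arr):
    # Base case: lists of length 0 or 1 are already sorted.
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    less = [x for x in arr[1:] if x < pivot]
    greater = [x for x in arr[1:] if x >= pivot]
    # Concatenate the recursively sorted partitions around the pivot
    # (instead of returning a tuple, as the model's version did).
    return quicksort(less) + [pivot] + quicksort(greater)

print(quicksort([3, 1, 4, 1, 5]))  # [1, 1, 3, 4, 5]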
