【SpringAI】3. Structured Output, Basic Edition


SpringAI structured output

Official docs: https://www.spring-doc.cn/spring-ai/1.0.0/api_structured-output-converter.html
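That page describes Spring AI's StructuredOutputConverter family, which maps a complete (non-streaming) response straight onto a Java type. A minimal sketch in the spirit of that doc, where the JokeVO record and the prompt are placeholders of my own:

ChatClient chatClient = ChatClient.create(chatModel);
record JokeVO(String setup, String punchline) {}
JokeVO joke = chatClient.prompt()
        .user("讲个笑话")
        .call()
        .entity(JokeVO.class); // parses the model's JSON answer into JokeVO

This post takes the "basic" route instead: since the response is streamed, I map each chunk onto a hand-written VO rather than using the converter.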

We start from a plain streaming endpoint:

    @Operation(summary = "Basic streaming generation")
    @CrossOrigin(origins = "*")
    @GetMapping(value = "/ai/generateStream", produces = "text/event-stream;charset=UTF-8")
    public Flux<ChatResponse> generateStream(@RequestParam(value = "message", defaultValue = "讲个笑话") String message) { // default prompt: "tell a joke"
        List<Message> messageList = new ArrayList<>();
        messageList.add(new UserMessage(message));
        messageList.add(new SystemMessage("中文回答")); // system prompt: "answer in Chinese"
        Prompt prompt = new Prompt(messageList);
        return this.chatModel.stream(prompt);
    }

This endpoint returns Spring AI's default ChatResponse; the chunks received on the frontend look like this:

{"result":{"metadata":{"finishReason":null,"contentFilters":[],"empty":true},"output":{"messageType":"ASSISTANT","metadata":{"messageType":"ASSISTANT"},"toolCalls":[],"media":[],"text":"好的"}},"metadata":{"id":"","model":"qwen2.5vl:3b","rateLimit":{"requestsRemaining":0,"requestsLimit":0,"tokensLimit":0,"tokensRemaining":0,"tokensReset":"PT0S","requestsReset":"PT0S"},"usage":{"promptTokens":0,"completionTokens":0,"totalTokens":0},"promptMetadata":[],"empty":false},"results":[{"metadata":{"finishReason":null,"contentFilters":[],"empty":true},"output":{"messageType":"ASSISTANT","metadata":{"messageType":"ASSISTANT"},"toolCalls":[],"media":[],"text":"好的"}}]}
{"result":{"metadata":{"finishReason":"stop","contentFilters":[],"empty":true},"output":{"messageType":"ASSISTANT","metadata":{"messageType":"ASSISTANT"},"toolCalls":[],"media":[],"text":""}},"metadata":{"id":"","model":"deepseek-r1:1.5b","rateLimit":{"requestsRemaining":0,"tokensReset":"PT0S","requestsReset":"PT0S","requestsLimit":0,"tokensRemaining":0,"tokensLimit":0},"usage":{"promptTokens":48,"completionTokens":905,"totalTokens":953},"promptMetadata":[],"empty":false},"results":[{"metadata":{"finishReason":"stop","contentFilters":[],"empty":true},"output":{"messageType":"ASSISTANT","metadata":{"messageType":"ASSISTANT"},"toolCalls":[],"media":[],"text":""}}]}

When the frontend parses these chunks it really only cares about the generated text; the rest of the metadata is noise. For some reasoning models the frontend additionally needs to know whether a piece of text belongs to the model's thinking process or to the actual answer, so that the two can be rendered with different styles.
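Pulling those two pieces of information out of each ChatResponse chunk only needs the accessor chain that also appears in the streaming method further down:

String text = response.getResult().getOutput().getText();                        // delta text of this chunk
boolean finished = response.getResult().getMetadata().getFinishReason() != null; // non-null only on the final chunk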

To simplify the output, I define a VO as follows:

@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class FluxVO {
    /** delta text of this chunk */
    private String text;
    /** "think" | "running" | "stop" */
    @Builder.Default // without this, Lombok's builder would ignore the field initializer
    private String status = "running";
}
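Spring serializes each FluxVO into one SSE "data:" line. A quick sketch of what a single chunk ends up as on the wire (the ObjectMapper here just stands in for the JSON serialization Spring WebFlux performs internally):

ObjectMapper mapper = new ObjectMapper();
String json = mapper.writeValueAsString(
        FluxVO.builder().text("您好").status("running").build()); // throws JsonProcessingException
// json -> {"text":"您好","status":"running"} — exactly the payload of each "data:" line shown later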

Modify the original endpoint so that it returns Flux<FluxVO>:

    @PostMapping(value = "/generateStreamWithFile", consumes = "application/json", produces = "text/event-stream;charset=UTF-8")
    public Flux<FluxVO> generateStreamWithFile(@RequestBody QuestionVO body) {
        String model = body.getModel();

        List<Message> messageList = new ArrayList<>();
        UserMessage userMessage = new UserMessage(body.getMessage());
        // other messages (images, chat history) would be added here as well
        messageList.add(userMessage);

        // use the caller's system prompt if provided, otherwise fall back to the default
        String systemPrompt = body.getSystemPrompt();
        String finalSystemPrompt = (systemPrompt != null && !systemPrompt.trim().isEmpty())
                ? systemPrompt
                : "中文回答"; // default system prompt: "answer in Chinese"
        messageList.add(new SystemMessage(finalSystemPrompt));

        // look up the model through DynamicModelFactory
        DynamicModelFactory.MyModel myModel = dynamicModelFactory.getModelByName(model);
        if (myModel == null) {
            throw new RuntimeException("Model not found: " + model);
        }
        return getFluxVOFlux(messageList, myModel);
    }
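QuestionVO and DynamicModelFactory are not shown in this post. Judging from the getters used above they look roughly like the sketch below; the field names are inferred from the code, so treat them as an assumption rather than the real classes:

@Data
public class QuestionVO {
    private String model;        // name of the model to call, e.g. "deepseek-r1:1.5b"
    private String message;      // the user's question
    private String systemPrompt; // optional custom system prompt
}

public class DynamicModelFactory {

    @Data
    public static class MyModel {
        private String vendor;       // e.g. "ollama"
        private ChatModel chatModel; // the Spring AI ChatModel behind that vendor
    }

    // getModelByName(String) resolves one of the configured MyModel instances; implementation omitted
}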

I extract the output handling into a separate method:

/**
 * Turns the raw model stream into the simplified FluxVO stream.
 * @param messageList model messages: system prompt, user prompt, chat history and files
 * @param myModel     the model wrapper resolved by DynamicModelFactory
 * @return a stream with the <think> tags filtered out and the thinking text tagged with status "think"
 */
private Flux<FluxVO> getFluxVOFlux(List<Message> messageList, DynamicModelFactory.MyModel myModel) {
        if ("ollama".equalsIgnoreCase(myModel.getVendor()) && myModel.getChatModel() != null) {
            Prompt prompt = new Prompt(messageList);
            AtomicBoolean inThinking = new AtomicBoolean(false);
            return myModel.getChatModel().stream(prompt)
                    .filter(response -> {
                        String text = response.getResult().getOutput().getText();
                        if ("<think>".equals(text)) {
                            inThinking.set(true);
                            return false; // drop the opening tag itself
                        }
                        if ("</think>".equals(text)) {
                            inThinking.set(false);
                            return false; // drop the closing tag itself
                        }
                        return true; // everything else is passed through
                    })
                    .map(response -> {
                        String text = response.getResult().getOutput().getText();
                        boolean isStop = response.getResult().getMetadata().getFinishReason() != null;
                        String status = inThinking.get() ? "think" : (isStop ? "stop" : "running");
                        return FluxVO.builder()
                                .text(text != null ? text : "")
                                .status(status)
                                .build();
                    })
                    .onErrorResume(error -> {
                        System.err.println("Streaming error: " + error.getMessage());
                        return Flux.just(FluxVO.builder()
                                .text("AI service error, please try again later")
                                .status("stop")
                                .build());
                    });
        }
        throw new RuntimeException("Vendor not supported yet: " + myModel.getVendor());
}

After this processing, the stream the frontend receives looks like this:

data:{"text":"我会","status":"think"}

data:{"text":"尽","status":"think"}

data:{"text":"我","status":"think"}

data:{"text":"所能","status":"think"}

data:{"text":"为您提供","status":"think"}

data:{"text":"帮助","status":"think"}

data:{"text":"。\n","status":"think"}

data:{"text":"\n\n","status":"running"}

data:{"text":"您好","status":"running"}

