【mllm】QNN Backend Walkthrough

Published: 2025-04-12

Tags: llm, qnn, qualcomm
Source code: https://github.com/UbiquitousLearning/mllm

1. Integrating the QNN Backend

This post only sketches the core ideas; it does not walk through every detail of the implementation.

Preface:

  • The Qualcomm HTP backend supports only a limited set of data types (int8/int16/float16), so accelerating with integer types requires quantizing the model. mllm uses int8, but not across the whole network: only parts of it run in int8, and QNN NPU inference actually runs as a CPU+NPU hybrid. A generic quantization sketch follows below.
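
As a concrete illustration of the quantization step, below is a generic symmetric int8 quantization sketch. This is not mllm's actual quantization code; the function and variable names are illustrative.

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Generic symmetric int8 quantization sketch (not mllm's actual code):
// map float values in [-absmax, absmax] onto [-127, 127] with one scale,
// so dequantization is simply float_val = q * scale.
std::vector<int8_t> quantizeInt8(const std::vector<float> &x, float &scale) {
    float absmax = 0.f;
    for (float v : x) absmax = std::max(absmax, std::fabs(v));
    scale = (absmax > 0.f) ? absmax / 127.f : 1.f;
    std::vector<int8_t> q(x.size());
    for (size_t i = 0; i < x.size(); ++i) {
        long r = std::lround(x[i] / scale);
        q[i] = static_cast<int8_t>(std::min<long>(127, std::max<long>(-127, r)));
    }
    return q;
}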

Implementation:

  • Because of how the Qualcomm SDK is meant to be used, there is no way to invoke an individual Qualcomm op on its own; computation must be expressed as a graph. mllm therefore splits the decoder into several QNN graphs. The relevant code:

// Notify the target backend that graph setup is starting for this module.
Backend::global_backends[device_]->onSetUpStart(inputs_vec, outputs_vec, getUinqueName());

// for xnnpack currently: sync each input's uuid with its activation tensor
for (auto &i : inputs) {
    i.uuid() = inputs[0].module()->activation_tensors[i.name()]->uuid();
}

// Run Forward() once to trace the ops into the backend graph,
// then collect the resulting activation tensors as graph outputs.
auto outputs = Forward(inputs, anyArgs);
for (auto &output : outputs) {
    outputs_vec.push_back(inputs[0].module()->activation_tensors[output.name()]);
}

// Graph construction for this module is complete; finalize setup.
Backend::global_backends[device_]->onSetUpEnd(inputs_vec, outputs_vec, getUinqueName());
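
For orientation, here is a hedged sketch of the backend-side hooks this code calls; the exact signatures in mllm may differ, and this only conveys the shape of the interface that QNNBackend overrides.

#include <memory>
#include <string>
#include <vector>

class Tensor; // mllm's tensor type (assumed forward declaration)

// Hedged sketch of the setup hooks called above (exact mllm signatures may
// differ). A graph-based backend such as QNNBackend overrides these to open
// one backend graph before tracing and to finalize it afterwards.
class Backend {
public:
    virtual ~Backend() = default;
    // Called before Forward() is traced: create and initialize a new graph.
    virtual void onSetUpStart(std::vector<std::shared_ptr<Tensor>> &inputs,
                              std::vector<std::shared_ptr<Tensor>> &outputs,
                              std::string graphName) {}
    // Called after tracing: register the traced outputs and finish graph setup.
    virtual void onSetUpEnd(std::vector<std::shared_ptr<Tensor>> &inputs,
                            std::vector<std::shared_ptr<Tensor>> &outputs,
                            std::string graphName) {}
};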

QNNBackend::onSetUpStart initializes a QnnModel:


qnnModels_.push_back(qnn_wrapper_api::QnnModel());

// create qnn context, assign context to qnn memory manager
if (StatusCode::SUCCESS != this->createContext()) {
    this->reportError("Context Creation failure");
}
#ifdef QNN_ARM
auto qnnMM = std::static_pointer_cast<QNNMemoryManager>(mem_manager_);
qnnMM->setQnnInterfaceAndContext(m_context);
#endif

// initialize qnn graph info, set graph info, graph count
// NOTE: currently not using it
QnnHtpGraph_CustomConfig_t customConfig;
// customConfig.option = QNN_HTP_GRAPH_CONFIG_OPTION_NUM_HVX_THREADS;
// customConfig.numHvxThreads = 4; // set a number. MAX = number of HVX HW blocks for that SoC
customConfig.option = QNN_HTP_GRAPH_CONFIG_OPTION_VTCM_SIZE;
customConfig.vtcmSizeInMB = 8;

QnnGraph_Config_t graphConfig;
graphConfig.option = QNN_GRAPH_CONFIG_OPTION_CUSTOM;
graphConfig.customConfig = &customConfig;

const QnnGraph_Config_t *pGraphConfig[] = {&graphConfig, NULL};
const QnnGraph_Config_t **graphConfigs = pGraphConfig;

m_graphConfigsInfoCount = 1;

qnn_wrapper_api::ModelError_t err = qnn_wrapper_api::getQnnGraphConfigFromInfo(
    graphName.c_str(), (const qnn_wrapper_api::GraphConfigInfo_t **)m_graphConfigsInfo,
    m_graphConfigsInfoCount, graphConfigs);
if (err != qnn_wrapper_api::MODEL_NO_ERROR) {
    this->reportError("Graph Config Info failure");
}

err = qnnModels_[qnnModelIndex_].initialize(m_backendHandle,
                                            m_qnnFunctionPointers.qnnInterface,
                                            m_context,
                                            graphName.c_str(),
                                            m_debug,
                                            DO_GRAPH_NODE_VALIDATIONS,
                                            graphConfigs);
if (err != qnn_wrapper_api::MODEL_NO_ERROR) {
    this->reportError("Graph Initialization failure: " + graphName);
}
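
Note that the VTCM size is pinned at 8 MB while the HVX-thread option is left commented out. If both were wanted, the custom-config pattern composes: each QnnHtpGraph_CustomConfig_t gets wrapped in its own QnnGraph_Config_t, and the NULL-terminated array carries both entries. A hedged sketch based on the commented-out lines above (variable names like hvxConfig are illustrative):

// Hedged sketch: enabling the HVX-thread config that the code above leaves
// commented out, alongside the existing VTCM config. Names are illustrative.
QnnHtpGraph_CustomConfig_t hvxConfig;
hvxConfig.option = QNN_HTP_GRAPH_CONFIG_OPTION_NUM_HVX_THREADS;
hvxConfig.numHvxThreads = 4; // MAX = number of HVX HW blocks for that SoC

QnnGraph_Config_t hvxGraphConfig;
hvxGraphConfig.option = QNN_GRAPH_CONFIG_OPTION_CUSTOM;
hvxGraphConfig.customConfig = &hvxConfig;

// Both configs go into the same NULL-terminated list passed at graph creation.
const QnnGraph_Config_t *pGraphConfig[] = {&graphConfig, &hvxGraphConfig, NULL};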

Afterwards, each op's setup method adds its tensors and nodes to the QnnModel. QNNLinearINT8, for example:

graphAddNode(name() + ".linearint8", "Conv2d", {inputs[0]->name(), weight_.name()}, matmulOut, params_InceptionV3_InceptionV3_Conv2d_1a_3x3_Conv2D);
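
Two things stand out here. First, the node type is "Conv2d": the int8 linear layer is lowered to a QNN convolution node. Second, the params variable name is lifted from QNN's InceptionV3 converter sample, which suggests the parameter list was reused from a converted model. For reference, here is a hedged sketch of one entry of such a Conv2d params array in the QNN API; Conv2d's stride/padding are tensor params needing full Qnn_Tensor_t definitions, so only the scalar "group" param is shown, and the value is illustrative.

// Hedged sketch of a single scalar entry in a Conv2d params array.
// QNN_PARAM_INIT and the field names follow QnnTypes.h; value is illustrative.
Qnn_Param_t groupParam = QNN_PARAM_INIT;
groupParam.paramType = QNN_PARAMTYPE_SCALAR;
groupParam.name = "group";
groupParam.scalarParam.dataType = QNN_DATATYPE_UINT_32;
groupParam.scalarParam.uint32Value = 1; // a 1x1, group-1 Conv2d acts as a Linear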

2. Other

All of this comes back to the Qualcomm SDK not supporting single-op invocation at the backend level: even one op has to be wrapped in a model/graph before it can be called. Not developer-friendly.
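
To make the constraint concrete, here is a hedged sketch of the minimal QNN call sequence needed to run even a single op. The names follow the QNN interface function-pointer table; error handling and the tensor/op definitions are omitted.

// Hedged sketch of the minimal QNN lifecycle: even one op must live inside
// a graph that is created, populated, and finalized before it can execute.
// qnnInterface is the function-pointer table loaded from the backend .so;
// inputTensor/outputTensor/opConfig are assumed to be defined elsewhere.
Qnn_GraphHandle_t graph = nullptr;
qnnInterface.graphCreate(m_context, "single_op_graph", graphConfigs, &graph);

// 1. Register every input/output tensor with the graph.
qnnInterface.tensorCreateGraphTensor(graph, &inputTensor);
qnnInterface.tensorCreateGraphTensor(graph, &outputTensor);

// 2. Add the single op as a node wired to those tensors.
qnnInterface.graphAddNode(graph, opConfig);

// 3. Finalize: only now is the graph validated and compiled for the HTP.
qnnInterface.graphFinalize(graph, /*profileHandle=*/nullptr, /*signalHandle=*/nullptr);

// 4. Execute. There is no shortcut that skips steps 1-3 for a lone op.
qnnInterface.graphExecute(graph, &inputTensor, 1, &outputTensor, 1,
                          nullptr, nullptr);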