[Over-the-Air Computation] Notes on OAC for Edge Inference

Published: 2025-06-25

1-On the View-and-Channel Aggregation Gain in Integrated Sensing and Edge AI
2-Progressive Feature Transmission for Split Classification at the Wireless Edge
3-Over-the-Air Multi-View Pooling for Distributed Sensing
4-Energy-Efficient Edge Inference in Integrated Sensing, Communication, and Computation Networks
5-Over-the-Air Edge Inference for Low-Altitude Airspace: Generative AI-Aided Multi-Task Batching and Beamforming Design

1-The Basic Problem

Detailed derivation of the receiver-side zero-forcing beamformer $\mathbf{b}_k = \mathbf{H}(\mathbf{H}^H\mathbf{H})^{-1}\mathbf{e}_k$

Here the channels of different users are assumed orthogonal $\Rightarrow$ the users' channels are mutually independent (or at least distinguishable).
$\mathbf{Y} = \sum_{k=1}^K \rho_k \mathbf{h}_k \mathbf{x}_k + \mathbf{Z} = \mathbf{H} \boldsymbol{\rho} \mathbf{X} + \mathbf{Z}$

  • $\mathbf{H} = [\mathbf{h}_1, \cdots, \mathbf{h}_K] \in \mathbb{C}^{M \times K}$: channel matrix;
  • $\boldsymbol{\rho} = \mathrm{diag}(\rho_1, \cdots, \rho_K)$: power-control matrix;
  • $\mathbf{X} = [\mathbf{x}_1^\top, \cdots, \mathbf{x}_K^\top]^\top \in \mathbb{C}^{K \times T}$: transmit signals of the users.

The goal is for $\mathbf{b}_k$ to satisfy $\mathbf{H}^H \mathbf{b}_k = \mathbf{e}_k$, i.e., $\mathbf{b}_k^H \mathbf{h}_i = 0$ for $i \neq k$ and $\mathbf{b}_k^H \mathbf{h}_k = 1$.
Least-squares solution: $\mathbf{b}_k = \arg\min_{\mathbf{b}} \left\|\mathbf{H}^H \mathbf{b} - \mathbf{e}_k\right\|_2^2$

$\boxed{\mathbf{b}_k = \left(\mathbf{H}^H\right)^{\dagger} \mathbf{e}_k = \left(\mathbf{H}^H\right)^{+} \mathbf{e}_k}$
$\left(\mathbf{H}^H\right)^{+} = \mathbf{H}\left(\mathbf{H}^H\mathbf{H}\right)^{-1}$ (when $M \ge K$ and $\mathbf{H}$ has full column rank)

Summary:

1-The inverse $(\mathbf{H}^H\mathbf{H})^{-1}$ decouples the users' signals at the receiver, yielding a linear projection filter for the $k$-th user.
2-This is essentially one class of linear detector in the MIMO detection problem (linear detection).
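
To make the decoupling concrete, here is a minimal NumPy sketch of the zero-forcing receive beamformer $\mathbf{B} = \mathbf{H}(\mathbf{H}^H\mathbf{H})^{-1}$; the antenna/user counts, power-control values, and test signals are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K, T = 8, 4, 100          # M antennas, K users, T symbols (assumed values)

# i.i.d. Rayleigh channel H ∈ C^{M×K}, assumed full column rank (M ≥ K)
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

# Zero-forcing receive beamformers: B = H (H^H H)^{-1}, so that H^H B = I
B = H @ np.linalg.inv(H.conj().T @ H)
print(np.allclose(H.conj().T @ B, np.eye(K)))    # column b_k satisfies H^H b_k = e_k

# Decoupling check on the model Y = H diag(rho) X + Z
rho = np.ones(K)                                 # unit power control for simplicity
X = (rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))) / np.sqrt(2)
Z = 0.01 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
Y = H @ np.diag(rho) @ X + Z
X_hat = B.conj().T @ Y                           # b_k^H Y isolates user k (up to noise)
print(np.abs(X_hat - X).max())                   # small, noise-limited residual
```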

The meaning of $\mathbf{g}$

$\mathbf{g}$ denotes the complete, high-quality "ground-truth feature" of the current object extracted under ideal conditions, i.e., the global feature representation aggregated over all views; it is the ideal representation of the object's class in feature space.
Modeling: $\mathrm{Pr}\left(\mathbf{g}=\boldsymbol{\mu}_{\ell}\right) = \frac{1}{L},\ \forall \ell$

It can only be modeled probabilistically: the discrete random variable ranges over all possible classes, with a uniform prior assumed here.

Why is the optimal linear classifier $\ell^{\star}=\arg\min_{\ell}\ \left(\bar{\mathbf{f}}-\bar{\mathbf{P}}\boldsymbol{\mu}_{\ell}\right)^{\top}\mathbf{C}^{-1}\left(\bar{\mathbf{f}}-\bar{\mathbf{P}}\boldsymbol{\mu}_{\ell}\right)$?

Below we show that maximizing the Gaussian likelihood is equivalent to minimizing the Mahalanobis distance.

$\bar{\mathbf{f}} \sim \frac{1}{L} \sum_{\ell=1}^{L} \mathcal{N}\left(\bar{\mathbf{P}}\boldsymbol{\mu}_\ell, \tfrac{1}{K} \mathbf{C}\right)$
$\mathrm{Pr}(\bar{\mathbf{f}} \mid \boldsymbol{\mu}_\ell) = \frac{1}{(2\pi)^{M/2} \left|\frac{1}{K}\mathbf{C}\right|^{1/2}} \exp\left( -\frac{1}{2} (\bar{\mathbf{f}} - \bar{\mathbf{P}}\boldsymbol{\mu}_\ell)^\top \left(\tfrac{1}{K}\mathbf{C}\right)^{-1} (\bar{\mathbf{f}} - \bar{\mathbf{P}}\boldsymbol{\mu}_\ell) \right)$
$\log \mathrm{Pr}(\bar{\mathbf{f}} \mid \boldsymbol{\mu}_\ell) = -\frac{K}{2} (\bar{\mathbf{f}} - \bar{\mathbf{P}}\boldsymbol{\mu}_\ell)^\top \mathbf{C}^{-1} (\bar{\mathbf{f}} - \bar{\mathbf{P}}\boldsymbol{\mu}_\ell) + \text{const}$

Since each class-conditional distribution is Gaussian with the same covariance, the optimal classifier minimizes the Mahalanobis distance; because the shared quadratic term in $\bar{\mathbf{f}}$ cancels across classes, the resulting decision rule is linear in $\bar{\mathbf{f}}$.
This is Linear Discriminant Analysis (LDA).
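
A small NumPy sketch of this minimum-Mahalanobis-distance rule; the centroids $\boldsymbol{\mu}_\ell$, the matrix $\bar{\mathbf{P}}$, and the shared covariance $\mathbf{C}$ are placeholder values, not taken from any of the papers.

```python
import numpy as np

rng = np.random.default_rng(1)
L, M, K = 5, 6, 4                       # L classes, M-dim features, K views (assumed)

mu = rng.standard_normal((L, M))        # hypothetical class centroids mu_l
P_bar = np.eye(M)                       # hypothetical aggregation matrix P̄
A = rng.standard_normal((M, M))
C = A @ A.T + M * np.eye(M)             # shared positive-definite covariance C
C_inv = np.linalg.inv(C)

def classify(f_bar):
    """Return argmin_l (f̄ - P̄ μ_l)^T C^{-1} (f̄ - P̄ μ_l)."""
    diffs = f_bar - mu @ P_bar.T        # row l is f̄ - P̄ μ_l
    d2 = np.einsum('li,ij,lj->l', diffs, C_inv, diffs)   # squared Mahalanobis distances
    return int(np.argmin(d2))

# Draw an averaged feature f̄ ~ N(P̄ μ_l, C/K) for a random true class and classify it
true_l = rng.integers(L)
f_bar = P_bar @ mu[true_l] + rng.multivariate_normal(np.zeros(M), C / K)
print(true_l, classify(f_bar))
```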

The meaning of voxel

Voxel is short for Volumetric Pixel, i.e., a "volume element";
It is the basic unit cell of 3D space, analogous to the pixel (Pixel) of a 2D image, but used for 3D environment modeling.
My understanding: the sensed region is volumetric; each sensor perceives only part of the voxels, and those voxels are then mapped onto subcarriers for transmission.
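
A toy sketch of that reading, assuming an invented voxel grid, field-of-view mask, and a simple modulo voxel-to-subcarrier mapping (none of these choices come from the papers).

```python
import numpy as np

rng = np.random.default_rng(2)
grid = (8, 8, 4)                          # toy voxel grid (invented dimensions)
n_subcarriers = 64                        # invented number of subcarriers

# Each sensor sees only part of the 3D scene: a boolean field-of-view mask over voxels
observed = rng.random(grid) < 0.3
values = rng.random(grid)                 # toy occupancy value per voxel

# Flatten the observed voxels and assign each one a subcarrier (simple modulo mapping)
idx = np.flatnonzero(observed)            # linear indices of observed voxels
payload = values.ravel()[idx]             # the values this sensor would transmit
subcarrier = idx % n_subcarriers          # which subcarrier carries each voxel value
print(f"{idx.size} observed voxels mapped onto {n_subcarriers} subcarriers")
```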

On the upper and lower bounds of sensing uncertainty

$H= \mathsf{E}_{\tilde{\mathbf{f}}}\left[-\sum_{\ell=1}^{L}\mathrm{Pr}\left(\boldsymbol{\mu}_{\ell}\mid\tilde{\mathbf{f}}\right)\log\mathrm{Pr}\left(\boldsymbol{\mu}_{\ell}\mid\tilde{\mathbf{f}}\right)\right].$

Expanding the expectation into integral form: $H = -\sum_{\ell=1}^{L}\int\mathrm{Pr}\left(\boldsymbol{\mu}_{\ell}\mid\bar{\mathbf{f}}\right)\log\mathrm{Pr}\left(\boldsymbol{\mu}_{\ell}\mid\bar{\mathbf{f}}\right)p(\bar{\mathbf{f}})\,\mathrm{d}\bar{\mathbf{f}}.$

Rewriting via Bayes' rule:
$\begin{aligned}
H &= -\sum_{\ell=1}^{L}\int \mathrm{Pr}\left(\boldsymbol{\mu}_{\ell}\mid\bar{\mathbf{f}}\right)\log \mathrm{Pr}\left(\boldsymbol{\mu}_{\ell}\mid\bar{\mathbf{f}}\right) p(\bar{\mathbf{f}})\,\mathrm{d}\bar{\mathbf{f}} \\
&= -\sum_{\ell=1}^{L}\int p(\bar{\mathbf{f}}\mid\bar{\mathbf{P}}\boldsymbol{\mu}_{\ell})\, p(\bar{\mathbf{P}}\boldsymbol{\mu}_{\ell}) \log \frac{p(\bar{\mathbf{f}}\mid\bar{\mathbf{P}}\boldsymbol{\mu}_{\ell})\, p(\bar{\mathbf{P}}\boldsymbol{\mu}_{\ell})}{p(\bar{\mathbf{f}})}\,\mathrm{d}\bar{\mathbf{f}} \\
&= -\sum_{\ell=1}^{L} p(\bar{\mathbf{P}}\boldsymbol{\mu}_{\ell}) \int p(\bar{\mathbf{f}}\mid\bar{\mathbf{P}}\boldsymbol{\mu}_{\ell}) \log \frac{p(\bar{\mathbf{f}}\mid\bar{\mathbf{P}}\boldsymbol{\mu}_{\ell})\, p(\bar{\mathbf{P}}\boldsymbol{\mu}_{\ell})}{\sum_{\ell'} p(\bar{\mathbf{f}}\mid\bar{\mathbf{P}}\boldsymbol{\mu}_{\ell'})\, p(\bar{\mathbf{P}}\boldsymbol{\mu}_{\ell'})}\,\mathrm{d}\bar{\mathbf{f}} \\
&= -\sum_{\ell=1}^{L} p(\bar{\mathbf{P}}\boldsymbol{\mu}_{\ell}) \int p(\bar{\mathbf{f}}\mid\bar{\mathbf{P}}\boldsymbol{\mu}_{\ell}) \log p(\bar{\mathbf{f}}\mid\bar{\mathbf{P}}\boldsymbol{\mu}_{\ell})\,\cdots
\end{aligned}$
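
For intuition, here is a Monte-Carlo sketch that estimates this uncertainty $H$ under the Gaussian-mixture model above; the centroids, covariance, and sample count are placeholders, and the true $H$ lies between $0$ and $\log L$.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)
L, M, K = 5, 6, 4
mu = rng.standard_normal((L, M))                 # hypothetical centroids (already projected, i.e. P̄ μ_l)
A = rng.standard_normal((M, M))
C = A @ A.T + M * np.eye(M)                      # shared covariance C
cov = C / K                                      # averaged-feature covariance C/K

def posterior_entropy(n_samples=20_000):
    """Monte-Carlo estimate of H = E_f̄[ -Σ_l Pr(μ_l | f̄) log Pr(μ_l | f̄) ] with a uniform prior."""
    labels = rng.integers(L, size=n_samples)
    f_bar = mu[labels] + rng.multivariate_normal(np.zeros(M), cov, size=n_samples)
    # likelihoods p(f̄ | μ_l) for every sample and class
    lik = np.stack([multivariate_normal.pdf(f_bar, mean=mu[l], cov=cov) for l in range(L)], axis=1)
    post = lik / lik.sum(axis=1, keepdims=True)  # Bayes' rule with uniform prior
    ent = -np.sum(post * np.log(post + 1e-300), axis=1)
    return ent.mean()

print(posterior_entropy())                       # 0 ≤ H ≤ log L
```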

