
Difference of Convex Relaxation (DC)

Problem Background

$$
\begin{aligned}
&\underset{\boldsymbol{m}}{\operatorname*{minimize}}\quad \|\boldsymbol{m}\|^2\\
&\mathrm{subject~to}\quad \|\boldsymbol{m}^\mathsf{H}\boldsymbol{h}_k^e\|^2\geq 1,\ \forall k.
\end{aligned}
$$

$$
\begin{aligned}
\mathscr{P}_{1}:\quad &\underset{\boldsymbol{M}}{\operatorname*{minimize}}\quad \mathrm{Tr}(\boldsymbol{M})\\
&\mathrm{subject~to}\quad \mathrm{Tr}(\boldsymbol{M}\boldsymbol{H}_{k})\geq 1,\ \forall k,\\
&\qquad\qquad\quad\ \boldsymbol{M}\succeq 0,\ \mathrm{rank}(\boldsymbol{M})=1,
\end{aligned}
$$

$$
\begin{aligned}
\text{find}\quad & \boldsymbol{v}\\
\text{subject to}\quad & \tilde{\boldsymbol{v}}^\mathsf{H}\boldsymbol{R}_k\tilde{\boldsymbol{v}}+|c_k|^2\geq 1,\ \forall k,\\
& |v_n|^2=1,\ \forall n=1,\cdots,N,
\end{aligned}
$$

$$
\begin{aligned}
\mathscr{P}_{2}:\quad \mathrm{find}\quad & \boldsymbol{V}\\
\mathrm{subject~to}\quad & \mathrm{Tr}(\boldsymbol{R}_{k}\boldsymbol{V})+|c_{k}|^{2}\geq 1,\ \forall k,\\
& \boldsymbol{V}_{n,n}=1,\ \forall n=1,\cdots,N,\\
& \boldsymbol{V}\succeq 0,\quad \mathrm{rank}(\boldsymbol{V})=1.
\end{aligned}
$$

To further deal with the nonconvexity in problems $\mathscr{P}_1$ and $\mathscr{P}_2$, a popular approach is to simply drop the nonconvex rank-one constraints via the SDR technique [17]. The resulting SDP problems can be solved efficiently by existing convex optimization solvers (e.g., CVX [19]). If the optimal solution of the SDP is rank one, the optimal solution of the original problem can be recovered by a rank-one decomposition. Otherwise, if the optimal SDP solution is not rank one, additional steps such as Gaussian randomization [17] need to be applied to extract a suboptimal solution of the original problem. However, it is observed that for high-dimensional optimization problems (e.g., as the number of antennas $N$ grows), the probability that the SDR approach returns a rank-one solution becomes low, which causes significant performance deterioration [7], [18]. To overcome this limitation of the SDR approach, we propose a novel DC framework to solve problems $\mathscr{P}_1$ and $\mathscr{P}_2$ in the following sections.
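For concreteness, the following is a minimal sketch of this SDR baseline for problem $\mathscr{P}_1$, written in Python with CVXPY as a stand-in for the CVX solver mentioned above. The random channel data $\boldsymbol{H}_k=\boldsymbol{h}_k\boldsymbol{h}_k^\mathsf{H}$, the problem sizes, the rank-one test, and the Gaussian-randomization variant are illustrative assumptions rather than details from the original text.

```python
import numpy as np
import cvxpy as cp

# Illustrative problem data (assumption): H_k = h_k h_k^H for random channels h_k.
rng = np.random.default_rng(0)
N, K = 8, 4
h = [(rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2) for _ in range(K)]
H = [np.outer(hk, hk.conj()) for hk in h]

# SDR: drop the rank-one constraint of P1 and solve the remaining SDP.
M = cp.Variable((N, N), hermitian=True)
cons = [M >> 0] + [cp.real(cp.trace(M @ Hk)) >= 1 for Hk in H]
cp.Problem(cp.Minimize(cp.real(cp.trace(M))), cons).solve(solver=cp.SCS)
M_opt = M.value

# If the SDP solution is (numerically) rank one, recover m from the leading eigenpair.
eigval, eigvec = np.linalg.eigh(M_opt)
if eigval[-1] / eigval.sum() > 0.999:
    m = np.sqrt(eigval[-1]) * eigvec[:, -1]
else:
    # Otherwise, Gaussian randomization (one common variant): sample xi ~ CN(0, M_opt)
    # and rescale each sample so that every constraint |m^H h_k|^2 >= 1 holds.
    L = np.linalg.cholesky(M_opt + 1e-9 * np.eye(N))
    best, best_power = None, np.inf
    for _ in range(200):
        xi = L @ ((rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2))
        scale = 1.0 / np.sqrt(min(abs(xi.conj() @ hk) ** 2 for hk in h))
        cand = scale * xi
        if np.linalg.norm(cand) ** 2 < best_power:
            best, best_power = cand, np.linalg.norm(cand) ** 2
    m = best
```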

The General Problem Considered

DC Framework for Rank-One Constrained Problems

For ease of exposition, we first present the DC algorithm for a general low-rank matrix optimization problem with a rank-one constraint, given as follows,

$$
\begin{aligned}
&\underset{\boldsymbol{X}\in\mathcal{C}}{\operatorname*{minimize}}\quad \mathrm{Tr}(\boldsymbol{A}_{0}\boldsymbol{X})\\
&\mathrm{subject~to}\quad \mathrm{Tr}(\boldsymbol{A}_{k}\boldsymbol{X})\geq d_{k},\ \forall k,\\
&\qquad\qquad\quad\ \boldsymbol{X}\succeq 0,\ \mathrm{rank}(\boldsymbol{X})=1, \tag{18}
\end{aligned}
$$

where the constraint set $\mathcal{C}$ is convex. A key observation on the rank-one constraint is that it can be equivalently rewritten as a DC function constraint, which is formally stated in the following proposition.

Proposition 1. For a positive semidefinite (PSD) matrix $\boldsymbol{X}\in\mathbb{C}^{N\times N}$ with $\mathrm{Tr}(\boldsymbol{X})\geq 1$, we have

$$
\operatorname{rank}(\boldsymbol{X})=1 \Longleftrightarrow \operatorname{Tr}(\boldsymbol{X})-\|\boldsymbol{X}\|_2=0, \tag{19}
$$

where the trace norm is $\operatorname{Tr}(\boldsymbol{X})=\sum_{i=1}^{N}\sigma_i(\boldsymbol{X})$ and the spectral norm is $\|\boldsymbol{X}\|_2=\sigma_1(\boldsymbol{X})$, with $\sigma_i(\boldsymbol{X})$ denoting the $i$-th largest singular value of matrix $\boldsymbol{X}$.
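As a quick numerical sanity check of (19), the sketch below (the randomly generated matrices are purely illustrative assumptions) evaluates $\mathrm{Tr}(\boldsymbol{X})-\|\boldsymbol{X}\|_2$ for a rank-one and for a full-rank PSD matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6

def dc_gap(X):
    """Tr(X) - ||X||_2, i.e., sum of singular values minus the largest one (X is PSD)."""
    sigma = np.linalg.svd(X, compute_uv=False)
    return sigma.sum() - sigma[0]

# Rank-one PSD matrix: the gap is (numerically) zero.
u = rng.standard_normal(N) + 1j * rng.standard_normal(N)
print(dc_gap(np.outer(u, u.conj())))     # ~0

# Full-rank PSD matrix: the gap is strictly positive.
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
print(dc_gap(A @ A.conj().T))            # > 0
```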

To promote a low-rank solution to problem (18), instead of removing the nonconvex rank-one constraint via the SDR approach, we propose to add the DC function in (19) to the objective function as a penalty term, which yields

$$
\begin{aligned}
&\underset{\boldsymbol{X}\in\mathcal{C}}{\operatorname*{minimize}}\quad \mathrm{Tr}(\boldsymbol{A}_0\boldsymbol{X})+\rho\cdot\big(\mathrm{Tr}(\boldsymbol{X})-\|\boldsymbol{X}\|_2\big)\\
&\mathrm{subject~to}\quad \mathrm{Tr}(\boldsymbol{A}_k\boldsymbol{X})\geq d_k,\ \forall k,\\
&\qquad\qquad\quad\ \boldsymbol{X}\succeq 0, \tag{20}
\end{aligned}
$$

where $\rho>0$ is the penalty parameter. Note that we are able to obtain an exact rank-one solution $\boldsymbol{X}^*$ when the nonnegative component $\mathrm{Tr}(\boldsymbol{X}^*)-\|\boldsymbol{X}^*\|_2$ in the objective function is enforced to be zero.

DC Algorithm

Although problem (20) is still nonconvex, it can be solved in an iterative fashion by exploiting the majorization-minimization (MM) technique (see my earlier blog post), which yields the DC algorithm [20]. The main idea is to linearize the concave term $-\rho\|\boldsymbol{X}\|_{2}$ in the objective function, thereby turning problem (20) into a sequence of simple subproblems. Specifically, at the $t$-th iteration we need to solve the subproblem given by

$$
\begin{aligned}
&\underset{\boldsymbol{X}\in\mathcal{C}}{\operatorname*{minimize}}\quad \mathrm{Tr}(\boldsymbol{A}_{0}\boldsymbol{X})+\rho\cdot\big\langle\boldsymbol{X},\,\boldsymbol{I}-\partial\|\boldsymbol{X}^{t-1}\|_{2}\big\rangle\\
&\mathrm{subject~to}\quad \mathrm{Tr}(\boldsymbol{A}_{k}\boldsymbol{X})\geq d_{k},\ \forall k,\\
&\qquad\qquad\quad\ \boldsymbol{X}\succeq 0, \tag{21}
\end{aligned}
$$

where $\boldsymbol{X}^{t-1}$ is the optimal solution of the subproblem at iteration $t-1$. It is clear that subproblem (21) is convex and can be solved efficiently by existing solvers such as CVX [19]. In addition, the subgradient $\partial\|\boldsymbol{X}^{t-1}\|_2$ can be computed efficiently according to the following proposition [3].

Proposition 2. For a given PSD matrix $\boldsymbol{X}\in\mathbb{C}^{N\times N}$, the subgradient $\partial\|\boldsymbol{X}\|_2$ can be computed as $\boldsymbol{v}_1\boldsymbol{v}_1^\mathsf{H}$, where $\boldsymbol{v}_1\in\mathbb{C}^{N}$ is the leading eigenvector of matrix $\boldsymbol{X}$.
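A minimal sketch of Proposition 2 in Python (the helper name is mine, not from the original text): the subgradient of the spectral norm at a PSD matrix is taken as the outer product of its leading eigenvector.

```python
import numpy as np

def spectral_norm_subgradient(X):
    """Return v1 v1^H, where v1 is the leading eigenvector of the PSD matrix X."""
    eigval, eigvec = np.linalg.eigh(X)   # eigenvalues in ascending order
    v1 = eigvec[:, -1]                   # eigenvector of the largest eigenvalue
    return np.outer(v1, v1.conj())
```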

The proposed DC algorithm converges to a critical point of problem (20) from an arbitrary initial point [20]. Accordingly, we summarize the proposed DC algorithm in Algorithm 1.

[Algorithm 1: proposed DC algorithm for the rank-one constrained problem (20)]
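For concreteness, here is a sketch of Algorithm 1 applied to the generic penalized problem (20), again using CVXPY in place of CVX. The penalty value, the initial point, the stopping rule, and the SCS solver choice are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

def dc_algorithm(A0, A_list, d_list, rho=10.0, max_iter=50, tol=1e-6):
    """DC algorithm for problem (20): at each iteration, linearize -rho*||X||_2
    around the previous iterate and solve the convex subproblem (21)."""
    N = A0.shape[0]
    X_prev = np.eye(N, dtype=complex)        # arbitrary initial point (assumption)
    for _ in range(max_iter):
        # Subgradient of the spectral norm at X_prev (Proposition 2).
        eigval, eigvec = np.linalg.eigh(X_prev)
        v1 = eigvec[:, -1]
        subgrad = np.outer(v1, v1.conj())

        # Convex subproblem (21).
        X = cp.Variable((N, N), hermitian=True)
        obj = cp.real(cp.trace(A0 @ X)) + rho * cp.real(cp.trace((np.eye(N) - subgrad) @ X))
        cons = [X >> 0] + [cp.real(cp.trace(Ak @ X)) >= dk for Ak, dk in zip(A_list, d_list)]
        cp.Problem(cp.Minimize(obj), cons).solve(solver=cp.SCS)
        X_new = X.value

        # Stop once the DC penalty Tr(X) - ||X||_2 vanishes and the iterates stabilize.
        gap = np.real(np.trace(X_new)) - np.linalg.norm(X_new, 2)
        if gap < tol and np.linalg.norm(X_new - X_prev) < tol:
            return X_new
        X_prev = X_new
    return X_prev
```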

Proposed Alternating DC Approach

In this subsection, we apply the proposed DC framework to problem $\mathscr{P}_1$ and problem $\mathscr{P}_2$. Specifically, to find a rank-one solution to problem $\mathscr{P}_1$, we propose to solve the following DC programming problem

$$
\begin{aligned}
&\underset{\boldsymbol{M}}{\operatorname*{minimize}}\quad \mathrm{Tr}(\boldsymbol{M})+\rho\big(\mathrm{Tr}(\boldsymbol{M})-\|\boldsymbol{M}\|_{2}\big)\\
&\mathrm{subject~to}\quad \mathrm{Tr}(\boldsymbol{M}\boldsymbol{H}_{k})\geq 1,\ \forall k,\\
&\qquad\qquad\quad\ \boldsymbol{M}\succeq 0, \tag{22}
\end{aligned}
$$

where $\rho>0$ is the penalty parameter. When the penalty component is enforced to be zero, problem (22) yields a rank-one solution $\boldsymbol{M}^\star$, and we can thus recover the solution $\boldsymbol{m}$ to problem (12) through the Cholesky decomposition $\boldsymbol{M}^\star=\boldsymbol{m}\boldsymbol{m}^\mathsf{H}$.
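Problem (22) is the generic DC program (20) with $\boldsymbol{A}_0=\boldsymbol{I}$, $\boldsymbol{A}_k=\boldsymbol{H}_k$, and $d_k=1$, so, continuing the illustrative sketches above (this snippet reuses `dc_algorithm`, `H`, `N`, and `K` defined there and is not self-contained), it can be handled as follows.

```python
import numpy as np

# Solve the penalized problem (22) with the generic DC routine sketched above.
M_star = dc_algorithm(A0=np.eye(N, dtype=complex), A_list=H, d_list=[1.0] * K, rho=10.0)

# When the penalty term Tr(M*) - ||M*||_2 is driven to zero, M* is rank one and
# m can be read off from its leading eigenpair, i.e., M* = m m^H.
eigval, eigvec = np.linalg.eigh(M_star)
m = np.sqrt(max(eigval[-1], 0.0)) * eigvec[:, -1]
```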

To detect the feasibility of problem $\mathscr{P}_2$, we propose to minimize the difference between the trace norm and the spectral norm as follows,

$$
\begin{aligned}
&\underset{\boldsymbol{V}}{\operatorname*{minimize}}\quad \mathrm{Tr}(\boldsymbol{V})-\|\boldsymbol{V}\|_{2}\\
&\mathrm{subject~to}\quad \mathrm{Tr}(\boldsymbol{R}_{k}\boldsymbol{V})+|c_{k}|^{2}\geq 1,\ \forall k,\\
&\qquad\qquad\quad\ \boldsymbol{V}_{n,n}=1,\ \forall n=1,\cdots,N,\\
&\qquad\qquad\quad\ \boldsymbol{V}\succeq 0. \tag{23}
\end{aligned}
$$

When the objective value of problem (23) becomes zero, we obtain an exact rank-one optimal solution $\boldsymbol{V}^{\star}$. By the Cholesky decomposition $\boldsymbol{V}^\star=\tilde{\boldsymbol{v}}\tilde{\boldsymbol{v}}^\mathsf{H}$, we can then obtain a feasible solution $\tilde{\boldsymbol{v}}$ to problem (14). If the objective value fails to reach zero, we declare that problem $\mathscr{P}_2$ (i.e., problem (14)) is infeasible.
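Below is a sketch of the corresponding DC iterations for the feasibility-detection problem (23), under the same CVXPY assumptions as before: the matrix dimension is inferred from the supplied $\boldsymbol{R}_k$, and the final projection of the recovered vector onto the unit circle is an extra practical step, not something stated in the original text.

```python
import numpy as np
import cvxpy as cp

def dc_feasibility(R_list, c_list, max_iter=50, tol=1e-6):
    """DC iterations for problem (23): linearize -||V||_2 around the previous
    iterate and check whether Tr(V) - ||V||_2 can be driven to zero."""
    N = R_list[0].shape[0]
    V_prev = np.eye(N, dtype=complex)
    for _ in range(max_iter):
        # Subgradient of the spectral norm at the previous iterate (Proposition 2).
        _, eigvec = np.linalg.eigh(V_prev)
        subgrad = np.outer(eigvec[:, -1], eigvec[:, -1].conj())

        V = cp.Variable((N, N), hermitian=True)
        obj = cp.real(cp.trace(V)) - cp.real(cp.trace(subgrad @ V))
        cons = [V >> 0, cp.diag(V) == 1]
        cons += [cp.real(cp.trace(Rk @ V)) + abs(ck) ** 2 >= 1
                 for Rk, ck in zip(R_list, c_list)]
        cp.Problem(cp.Minimize(obj), cons).solve(solver=cp.SCS)
        V_prev = V.value

    gap = np.real(np.trace(V_prev)) - np.linalg.norm(V_prev, 2)
    if gap < tol:
        # Rank-one solution found: recover v~ from the leading eigenpair and
        # project its entries back onto the unit circle (practical extra step).
        eigval, eigvec = np.linalg.eigh(V_prev)
        v_tilde = np.sqrt(max(eigval[-1], 0.0)) * eigvec[:, -1]
        return np.exp(1j * np.angle(v_tilde))
    return None   # treat problem (23), and hence problem (14), as infeasible
```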

In summary, the proposed alternating DC algorithm for solving problem $\mathscr{P}$ is summarized in Algorithm 2.

[Algorithm 2: proposed alternating DC algorithm for problem P]

