for $k = r_0 + 1, \dots, r$. Let $H \in \mathbb{C}^{m \times \infty}$ be as in (III.6) with $A = HP_M$. Let $x \in \ell^2(\mathbb{N})$ and $e \in \mathbb{C}^m$ with $\|e\|_2 \leq \eta$ for some $\eta \geq 0$. Set $\tilde{y} = Ax + e$. Then any solution $\hat{x}$ of the optimization problem
\[
\min_{z \in \mathbb{C}^M} \|z\|_{1,\omega} \quad \text{subject to} \quad \|Az - \tilde{y}\|_2 \leq \eta
\]
satisfies
\[
\|P_M x - \hat{x}\|_{1,\omega} \leq C\,\sigma_{s,M}(P_M x)_{1,\omega} + D\sqrt{r}\,\eta,
\]
\[
\|P_M x - \hat{x}\|_2 \leq \big(1 + r^{1/4}\big)\left(\frac{C\,\sigma_{s,M}(P_M x)_{1,\omega}}{\sqrt{r}} + D\eta\right),
\]
with probability $1 - \epsilon$, where $C = 2(2+\sqrt{3})/(2-\sqrt{3})$ and $D = 8\sqrt{2}/(2-\sqrt{3})$.

Proof. Proposition III.4.10 gives $G = \sqrt{P_M U^* P_N U P_M} = \sqrt{I} = I$. Next notice that $S_{\omega,s} = r$ and that $P_M x \in \{z \in \mathbb{C}^M : \|Az - \tilde{y}\|_2 \leq \eta\}$, since $\|HP_M^\perp\| = 0$. Using Theorem III.3.5 we see that we can guarantee recovery of $(s,M)$-sparse vectors if $A$ satisfies the RIPL with constant $\delta_{t,M} \leq 1/2$, where $t_l = \min\{M_l - M_{l-1}, 8rs_l\}$. Applying Theorem III.4.12 gives the result.
III.5 Proofs of results in Section III.3
When deriving uniform recovery guarantees via the RIP, it is typical to proceed as follows. First, one shows that the RIP implies the so-called robust null space property (rNSP) of order $s$ (see Def. 4.17 in [16]). Second, one shows that the rNSP implies stable and robust recovery. Thus the line of implications reads
(RIP) =⇒ (rNSP) =⇒ (uniform recovery).
A similar line of implications holds for the RIPL and the corresponding robust null space property in levels (rNSPL) (see Def. 3.6 in [7]).
Both of the recovery guarantees for matrices satisfying the rNSP and the rNSPL consider minimizers of the unweighted quadratically-constrained basis pursuit (QCBP) optimization problem. In our setup, we consider minimizers of the weighted QCBP. We have therefore generalized the rNSPL to what we call the weighted robust null space property in levels.
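As a concrete reference point, the weighted $\ell^1$-norm appearing in the weighted QCBP can be sketched in a few lines of Python. This is a minimal illustration, assuming $\|z\|_{1,\omega} = \sum_l \omega_l \sum_{i \in \text{level } l} |z_i|$ with level boundaries $M_1 < \dots < M_r$; the function name is illustrative, not from the text.

```python
# Minimal sketch of the weighted l1-norm in levels, assuming
# ||z||_{1,w} = sum_l w_l * sum_{i in level l} |z_i|.
# `weighted_l1` is an illustrative name, not from the text.
def weighted_l1(z, M, w):
    """z: vector; M = (M_1, ..., M_r): level boundaries; w: one weight per level."""
    total, lo = 0.0, 0
    for Ml, wl in zip(M, w):
        total += wl * sum(abs(zi) for zi in z[lo:Ml])
        lo = Ml
    return total

# Two levels {1, 2} and {3}, weights 0.5 and 1.0:
value = weighted_l1([1.0, -2.0, 3.0], M=[2, 3], w=[0.5, 1.0])  # 0.5*3 + 1.0*3 = 4.5
```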
For the sufficient condition for the G-RIPL in Theorem III.3.6, the proof follows along similar lines to those in [23]. We only sketch the main differences here.
III.5.1 The weighted rNSPL and norm bounds
For a set $\Theta \subseteq \{1, \dots, M\}$ and a vector $x \in \mathbb{C}^M$ we let the vector $x_{\Theta}$ be given by
\[
(x_{\Theta})_i = \begin{cases} x_i & i \in \Theta \\ 0 & i \notin \Theta. \end{cases}
\]
We also define
\[
E_{s,M} = \big\{\Theta \subseteq \{1, \dots, M\} : |\Theta \cap \{M_{l-1}+1, \dots, M_l\}| \leq s_l, \text{ for } l = 1, \dots, r\big\}.
\]
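The membership condition defining $E_{s,M}$ is straightforward to check computationally. The sketch below (illustrative helper name, indices 1-indexed as in the text) counts the entries of $\Theta$ falling in each level:

```python
# Check whether Theta lies in E_{s,M}: at most s_l indices of Theta may fall
# in the l-th level {M_{l-1}+1, ..., M_l} (1-indexed). Illustrative name.
def in_E_sM(Theta, M, s):
    lo = 0
    for Ml, sl in zip(M, s):
        if sum(1 for i in Theta if lo < i <= Ml) > sl:
            return False
        lo = Ml
    return True

ok = in_E_sM({1, 3, 4}, M=[2, 5], s=[1, 2])      # True: 1 index in level 1, 2 in level 2
too_many = in_E_sM({1, 2}, M=[2, 5], s=[1, 2])   # False: 2 indices in level 1
```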
Definition III.5.1 (weighted rNSP in levels). Let $M, s \in \mathbb{N}^r$ be sparsity levels and local sparsities, respectively. For positive weights $\omega \in \mathbb{R}^{r+1}$, we say that $A \in \mathbb{C}^{m \times M}$ satisfies the weighted robust null space property in levels (weighted rNSPL) of order $(s,M)$ with constants $0 < \rho < 1$ and $\gamma > 0$ if
\[
\|x_{\Theta}\|_2 \leq \frac{\rho\,\|x_{\Theta^c}\|_{1,\omega}}{\sqrt{S_{\omega,s}}} + \gamma\|Ax\|_2 \tag{III.28}
\]
for all $x \in \mathbb{C}^M$ and all $\Theta \in E_{s,M}$.
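For intuition, inequality (III.28) can be spot-checked numerically for a given $x$ and $\Theta$; this does not certify the rNSPL, which quantifies over all $x \in \mathbb{C}^M$ and all $\Theta \in E_{s,M}$. The helper below is illustrative (0-indexed) and assumes one weight per level with $\|x_{\Theta^c}\|_{1,\omega} = \sum_{i \notin \Theta} \omega_{l(i)}|x_i|$:

```python
# Numeric spot-check of ||x_Theta||_2 <= rho*||x_{Theta^c}||_{1,w}/sqrt(S) + gamma*||Ax||_2
# for ONE pair (x, Theta); the rNSPL itself quantifies over all x and Theta.
def check_III_28(A, x, Theta, M, w, S, rho, gamma):
    def weight(i):  # weight of index i, from the level containing it
        lo = 0
        for Ml, wl in zip(M, w):
            if lo <= i < Ml:
                return wl
            lo = Ml
    lhs = sum(abs(x[i]) ** 2 for i in Theta) ** 0.5
    tail = sum(weight(i) * abs(x[i]) for i in range(len(x)) if i not in Theta)
    Ax = [sum(row[j] * x[j] for j in range(len(x))) for row in A]
    rhs = rho * tail / S ** 0.5 + gamma * sum(abs(v) ** 2 for v in Ax) ** 0.5
    return lhs <= rhs

# The 2x2 identity trivially satisfies the bound with gamma = 1:
ok = check_III_28([[1.0, 0.0], [0.0, 1.0]], [1.0, -2.0], {0},
                  M=[2], w=[1.0], S=1.0, rho=0.5, gamma=1.0)
```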
Lemma III.5.2 (weighted rNSPL implies $\ell^1_\omega$-distance bound). Suppose that $A \in \mathbb{C}^{m \times M}$ satisfies the weighted rNSPL of order $(s,M)$ with constants $0 < \rho < 1$ and $\gamma > 0$. Let $x, z \in \mathbb{C}^M$. Then
\[
\|z - x\|_{1,\omega} \leq \frac{1+\rho}{1-\rho}\big(2\sigma_{s,M}(x)_{1,\omega} + \|z\|_{1,\omega} - \|x\|_{1,\omega}\big) + \frac{2\gamma}{1-\rho}\sqrt{S_{\omega,s}}\,\|A(z - x)\|_2. \tag{III.29}
\]

Proof. Let $v = z - x$ and let $\Theta \in E_{s,M}$ be such that $\|x_{\Theta^c}\|_{1,\omega} = \sigma_{s,M}(x)_{1,\omega}$. Then
\[
\|x\|_{1,\omega} + \|v_{\Theta^c}\|_{1,\omega} \leq 2\|x_{\Theta^c}\|_{1,\omega} + \|x_{\Theta}\|_{1,\omega} + \|z_{\Theta^c}\|_{1,\omega} = 2\|x_{\Theta^c}\|_{1,\omega} + \|x_{\Theta}\|_{1,\omega} + \|z\|_{1,\omega} - \|z_{\Theta}\|_{1,\omega} \leq 2\sigma_{s,M}(x)_{1,\omega} + \|v_{\Theta}\|_{1,\omega} + \|z\|_{1,\omega},
\]
which implies that
\[
\|v_{\Theta^c}\|_{1,\omega} \leq 2\sigma_{s,M}(x)_{1,\omega} + \|z\|_{1,\omega} - \|x\|_{1,\omega} + \|v_{\Theta}\|_{1,\omega}. \tag{III.30}
\]
Now consider $\|v_{\Theta}\|_{1,\omega}$. By the Cauchy–Schwarz inequality (using that $\Theta \in E_{s,M}$) and the weighted rNSPL, we have
\[
\|v_{\Theta}\|_{1,\omega} \leq \sqrt{S_{\omega,s}}\,\|v_{\Theta}\|_2 \leq \rho\|v_{\Theta^c}\|_{1,\omega} + \sqrt{S_{\omega,s}}\,\gamma\|Av\|_2.
\]
Hence (III.30) gives
\[
\|v_{\Theta}\|_{1,\omega} \leq \rho\big(2\sigma_{s,M}(x)_{1,\omega} + \|z\|_{1,\omega} - \|x\|_{1,\omega} + \|v_{\Theta}\|_{1,\omega}\big) + \sqrt{S_{\omega,s}}\,\gamma\|Av\|_2,
\]
and after rearranging we get
\[
\|v_{\Theta}\|_{1,\omega} \leq \frac{\rho}{1-\rho}\big(2\sigma_{s,M}(x)_{1,\omega} + \|z\|_{1,\omega} - \|x\|_{1,\omega}\big) + \frac{\gamma}{1-\rho}\sqrt{S_{\omega,s}}\,\|Av\|_2.
\]
Therefore, using this and (III.30) once more, we deduce that
\[
\|z - x\|_{1,\omega} = \|v_{\Theta}\|_{1,\omega} + \|v_{\Theta^c}\|_{1,\omega} \leq 2\sigma_{s,M}(x)_{1,\omega} + \|z\|_{1,\omega} - \|x\|_{1,\omega} + 2\|v_{\Theta}\|_{1,\omega},
\]
which gives the result.
Lemma III.5.3 (weighted rNSPL implies $\ell^2$-distance bound). Suppose that $A \in \mathbb{C}^{m \times M}$ satisfies the weighted rNSPL of order $(s,M)$ with constants $0 < \rho < 1$ and $\gamma > 0$. Let $x, z \in \mathbb{C}^M$, set $v = z - x$, and let $\Theta \in E_{s,M}$ contain, in each level, the indices of the largest entries of $v$ in absolute value. Then
\[
\cdots
\]

Proof. $\cdots$ We now use the weighted rNSPL to get
\[
\|v\|_2 \leq \cdots
\]

III.5.2 Weighted rNSPL implies uniform recovery
Theorem III.5.4. Let $M, s \in \mathbb{N}^r$ be sparsity levels and local sparsities, respectively, and let $\omega \in \mathbb{R}^{r+1}$ be positive weights. Let $x \in \mathbb{C}^K$ with $K > M$, let $e \in \mathbb{C}^m$ with $\|e\|_2 \leq \eta$, and set $y = Ax + e$. Then any solution $\hat{x}$ of the optimization problem
\[
\min_{z \in \mathbb{C}^K} \|z\|_{1,\omega} \quad \text{subject to} \quad \|Az - y\|_2 \leq \eta \tag{III.33}
\]
satisfies
\[
\cdots
\]
Proof. We also note that (III.32) implies
\[
\omega_{r+1} \geq \left(\frac{1}{3\big(1 + (S_{\omega,s}/\zeta_{s,\omega})^{1/4}\big)} + 2\gamma\|AP_M^{\perp}\|_{1\to2}\right)\sqrt{S_{\omega,s}} \geq \left(\frac{2}{C\big(1 + (S_{\omega,s}/\zeta_{s,\omega})^{1/4}\big)} + \frac{D}{C}\,\gamma\|AP_{\{M+1,\dots,K\}}\|_{1\to2}\right)\sqrt{S_{\omega,s}},
\]
using that $2/C \leq 1/3$, $D/C \leq 2$ and $\|AP_{\{M+1,\dots,K\}}\|_{1\to2} \leq \|AP_M^{\perp}\|_{1\to2}$. This can be written as
\[
\frac{C}{2}\big(1 + (S_{\omega,s}/\zeta_{s,\omega})^{1/4}\big)\frac{1}{\sqrt{S_{\omega,s}}} \geq \Big(\frac{D}{2}\big(1 + (S_{\omega,s}/\zeta_{s,\omega})^{1/4}\big)\gamma\|AP_{\{M+1,\dots,K\}}\|_{1\to2} + 1\Big)\Big/\omega_{r+1}. \tag{III.38}
\]
Next set $v = x - \hat{x}$ and consider the $\ell^1_\omega$-bound. First notice that since $AP_M$ satisfies the weighted rNSPL, Lemma III.5.2 gives
\[
\|P_M v\|_{1,\omega} \leq \frac{C}{2}\big(2\sigma_{s,M}(P_M x)_{1,\omega} + \|P_M \hat{x}\|_{1,\omega} - \|P_M x\|_{1,\omega}\big) + \frac{D}{2}\gamma\sqrt{S_{\omega,s}}\,\|AP_M v\|_2. \tag{III.39}
\]
Here the last term can be bounded by
\[
\|AP_M v\|_2 \leq \|Av\|_2 + \|AP_{\{M+1,\dots,K\}} v\|_2 \leq 2\eta + \frac{\|AP_{\{M+1,\dots,K\}}\|_{1\to2}}{\omega_{r+1}}\,\|P_{\{M+1,\dots,K\}} v\|_{1,\omega} \tag{III.40}
\]
\[
\leq 2\eta + \frac{\|AP_{\{M+1,\dots,K\}}\|_{1\to2}}{\omega_{r+1}}\big(\|P_{\{M+1,\dots,K\}} x\|_{1,\omega} + \|P_{\{M+1,\dots,K\}} \hat{x}\|_{1,\omega}\big), \tag{III.41}
\]
where $\|Av\|_2 \leq \|Ax - y\|_2 + \|A\hat{x} - y\|_2 \leq 2\eta$ since both $x$ and $\hat{x}$ are feasible, and where we used $\|P_{\{M+1,\dots,K\}} v\|_1 \leq \omega_{r+1}^{-1}\|P_{\{M+1,\dots,K\}} v\|_{1,\omega}$. Combining (III.37), (III.39) and (III.41) gives
\[
\|v\|_{1,\omega} \leq \|P_M v\|_{1,\omega} + \|P_{\{M+1,\dots,K\}} x\|_{1,\omega} + \|P_{\{M+1,\dots,K\}} \hat{x}\|_{1,\omega}
\]
\[
\leq \frac{C}{2}\big(2\sigma_{s,M}(P_M x)_{1,\omega} + \|P_M \hat{x}\|_{1,\omega} - \|P_M x\|_{1,\omega}\big) + \|P_{\{M+1,\dots,K\}} x\|_{1,\omega} + \|P_{\{M+1,\dots,K\}} \hat{x}\|_{1,\omega} + \frac{D}{2}\gamma\sqrt{S_{\omega,s}}\,\|AP_M v\|_2
\]
\[
\leq \frac{C}{2}\big(2\sigma_{s,M}(P_M x)_{1,\omega} + \|P_M \hat{x}\|_{1,\omega} - \|P_M x\|_{1,\omega}\big) + D\gamma\sqrt{S_{\omega,s}}\,\eta + \left(1 + \frac{D}{2}\gamma\sqrt{S_{\omega,s}}\,\frac{\|AP_{\{M+1,\dots,K\}}\|_{1\to2}}{\omega_{r+1}}\right)\big(\|P_{\{M+1,\dots,K\}} x\|_{1,\omega} + \|P_{\{M+1,\dots,K\}} \hat{x}\|_{1,\omega}\big)
\]
\[
\leq \frac{C}{2}\big(2\sigma_{s,M}(x)_{1,\omega} + \|\hat{x}\|_{1,\omega} - \|x\|_{1,\omega}\big) + D\gamma\sqrt{S_{\omega,s}}\,\eta.
\]
Using that $\hat{x}$ is a minimizer of (III.33) gives the desired bound.
We now consider the $\ell^2$-bound. First note that
\[
\|v\|_2 \leq \|P_M v\|_2 + \|P_{\{M+1,\dots,K\}} v\|_2 \leq \|P_M v\|_2 + \frac{1}{\omega_{r+1}}\|P_{\{M+1,\dots,K\}} v\|_{1,\omega}. \tag{III.42}
\]
Again, since $AP_M$ satisfies the weighted rNSPL, we can apply Lemma III.5.3, Lemma III.5.2 and inequality (III.43) to obtain the bound
\[
\|P_M v\|_2 \leq \cdots
\]
Combining (III.38), (III.41), (III.42) and (III.44) now gives
\[
\|v\|_2 \leq \cdots
\]
Using that $\hat{x}$ is a minimizer of (III.33) completes the proof.
III.5.3 G-RIPL implies weighted rNSPL
Theorem III.5.5. Let $A \in \mathbb{C}^{m \times M}$ and let $G \in \mathbb{C}^{M \times M}$ be invertible. Let $M \in \mathbb{N}^r$ be sparsity levels, let $s, t \in \mathbb{N}^r$ be local sparsities and let $\omega \in \mathbb{R}^r$ be positive weights. Suppose that $A$ satisfies the G-RIPL of order $(t,M)$ with constant $0 < \delta_{t,M} < 1$. Then $A$ satisfies the weighted rNSP in levels of order $(s,M)$ with constants $0 < \rho < 1$ and $\gamma = \|G^{-1}\|_2/\sqrt{1 - \delta_{t,M}}$.

Proof. $\cdots$ let $T_{l,k}$ be the index set of the next $t_l/2$ largest values, and so forth. In the case where there are fewer than $t_l/2$ values left at iteration $k$, we let $T_{l,k}$ be the remaining indices. Let $T_k = T_{1,k} \cup \dots \cup T_{r,k}$ and let $T_{\{0,1\}} = T_0 \cup T_1$. Since $\Theta \subseteq T_{\{0,1\}}$ we have $\cdots$, which establishes the weighted rNSPL of order $(s,M)$ with $0 < \rho < 1$ and $\gamma = \|G^{-1}\|_2/\sqrt{1 - \delta}$.
III.5.4 Proof of Theorem III.3.5

Proof of Theorem III.3.5. First notice that for $0 < \delta \leq 1/2$, equation (III.45) simplifies to equation (III.10). This implies that $AP_M$ satisfies the weighted rNSPL of order $(s,M)$ with constants $\rho = \sqrt{3}/2$ and $\gamma = \|G^{-1}\|_2/\sqrt{1 - \delta_{t,M}}$. From Theorem III.5.4 we then know that any solution $\hat{x}$ of (III.11) satisfies (III.12) and (III.13).
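The constant $C = 2(2+\sqrt{3})/(2-\sqrt{3})$ appearing in the recovery bounds is consistent with $\rho = \sqrt{3}/2$ via the identity $(1+\rho)/(1-\rho) = (2+\sqrt{3})/(2-\sqrt{3}) = C/2$; this linking identity is our own sanity check, not a statement from the text:

```python
import math

# Sanity check: with rho = sqrt(3)/2, (1 + rho)/(1 - rho) equals C/2
# for C = 2(2 + sqrt(3))/(2 - sqrt(3)).
rho = math.sqrt(3) / 2
C = 2 * (2 + math.sqrt(3)) / (2 - math.sqrt(3))
lhs = (1 + rho) / (1 - rho)
assert abs(lhs - C / 2) < 1e-12
```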
III.5.5 Proof of Theorem III.3.6
Proof of Theorem III.3.6. We recall that $U \in \mathcal{B}(\ell^2)$ is an isometry and $\cdots$ notice that the matrix $P_{\Omega_k}$ can be written as $P_{\Omega_k} = \sum_i X_{k,i}$, where the $X_{k,i}$ are independent, and also that
\[
\mathbb{E}(A^*A) = P_M U^* P_{N_{r_0}} U P_M + \cdots = G^*G,
\]
where $G \in \mathbb{C}^{M \times M}$ is non-singular by assumption. Let
\[
D_{s,M,G} = \big\{\eta \in \mathbb{C}^M : \|G\eta\|_2 \leq 1,\; |\mathrm{supp}(\eta) \cap \{M_{k-1}+1, \dots, M_k\}| \leq s_k,\; k = 1, \dots, r\big\}.
\]
We now define the following seminorm on $\mathbb{C}^{M \times M}$:
\[
|||B|||_{s,M,G} := \sup_{\eta \in D_{s,M,G}} |\langle B\eta, \eta\rangle|.
\]
Due to (III.46) and (III.47), we may rewrite this as $\delta_{s,M} = |||A^*A - G^*G|||_{s,M,G}$. Having detailed the setup, the remainder of the proof now follows along very similar lines to that of [23, Thm. 3.2]. Hence we only sketch the details.

The first step is to estimate $\mathbb{E}(\delta_{s,M})$. Using the standard techniques of symmetrization, Dudley's inequality and properties of covering numbers, and arguing as in [23, Sec. 4.2], we deduce an upper bound on $\mathbb{E}(\delta_{s,M})$ in which $C_2 > 0$ is a constant. Using this bound, Talagrand's theorem and the fact that $\|P_N U P_M\|_2 \leq \|U\|_2 = 1$ (see [23, Sec. 4.3]), we deduce a corresponding tail bound. Combining this with (III.50) and (III.51) now completes the proof.
III.5.6 Proof of Corollary III.3.7 and Lemma III.3.3
Proof of Corollary III.3.7. We must ensure that all the conditions are met so that Theorem III.3.5 can be applied with $P_K x$. First notice that for the weights $\omega = (s_1^{-1/2}, \dots, s_r^{-1/2}, \omega_{r+1})$ we have $S_{\omega,s} = r$ and $\zeta_{s,\omega} = 1$. Next we note that condition (ii) implies that $P_K x$ is a feasible point, since $\|HP_K x - \tilde{y}\|_2 \leq \|HP_K^{\perp} x\|_2 + \|e\|_2 = \eta + \eta'$.
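The claim $S_{\omega,s} = r$ and $\zeta_{s,\omega} = 1$ for these weights can be checked directly, assuming the definitions $S_{\omega,s} = \sum_{l=1}^r \omega_l^2 s_l$ and $\zeta_{s,\omega} = \min_l \omega_l^2 s_l$; these formulas are our assumption, chosen to be consistent with the values quoted in the text:

```python
# Assuming S_{w,s} = sum_l w_l^2 s_l and zeta_{s,w} = min_l w_l^2 s_l,
# the weights w_l = s_l^{-1/2} give S_{w,s} = r and zeta_{s,w} = 1.
def weighted_sparsity(s, w):
    terms = [wl * wl * sl for sl, wl in zip(s, w)]
    return sum(terms), min(terms)

s = [4, 9, 16]                      # local sparsities, r = 3 levels
w = [sl ** -0.5 for sl in s]        # w_l = s_l^{-1/2}
S, zeta = weighted_sparsity(s, w)   # S = 3 = r, zeta = 1
```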
Let $G = \sqrt{P_M U^* P_N U P_M}$. Combining condition (i) and Lemma III.3.3 gives $\|G^{-1}\|_2 \leq 1/\sqrt{\theta}$, and since $\|G\|_2 \leq 1$ we also have $\kappa(G) = \|G\|_2\|G^{-1}\|_2 \leq 1/\sqrt{\theta}$. Inserting the above equalities and inequalities into the weight condition for $\omega_{r+1}$ in Theorem III.3.5 gives condition (iii).
Next we must ensure that $AP_M$ satisfies the G-RIPL of order $(t,M)$ with $\delta_{t,M} \leq 1/2$, where
\[
t_l = \min\big\{M_l - M_{l-1},\; 2\lceil 4\theta^{-1}\rceil r s_l\big\}. \tag{III.52}
\]
According to Theorem III.3.6 this occurs if the $m_k$'s satisfy condition (iv). The error bounds (III.12) and (III.13) now follow directly from Theorem III.3.5.
Proof of Lemma III.3.3. First notice that the balancing property is equivalent to requiring
\[
\sigma_M(P_N U P_M) \geq \sqrt{\theta}, \tag{III.53}
\]
where $\sigma_M(P_N U P_M)$ is the $M$-th largest singular value of $P_N U P_M$. Indeed, since $U$ is an isometry, the matrix $P_M - P_M U^* P_N U P_M$ is nonnegative definite, and therefore
\[
\|P_M U^* P_N U P_M - P_M\|_2 = \sup_{x \in \mathbb{C}^M, \|x\|_2 \leq 1} \langle (P_M - P_M U^* P_N U P_M)x, x\rangle \tag{III.54}
\]
\[
= \sup_{x \in \mathbb{C}^M, \|x\|_2 \leq 1} \big(\|P_M x\|_2^2 - \|P_N U P_M x\|_2^2\big) \tag{III.55}
\]
\[
= 1 - \inf_{x \in \mathbb{C}^M, \|x\|_2 = 1} \|P_N U P_M x\|_2^2. \tag{III.56}
\]
This gives (III.53). Next let $G = \sqrt{P_M U^* P_N U P_M}$ and notice that $\sigma_M(G) = \sigma_M(P_N U P_M)$. This gives $\|G^{-1}\|_2 = 1/\sigma_M(G) \leq 1/\sqrt{\theta}$.
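The final step can be illustrated with a toy computation. For $M = N = 1$ and a $2 \times 2$ rotation $U$ (an isometry), $P_N U P_M$ keeps only the $(1,1)$ entry, so $\sigma_M(P_N U P_M) = |U_{11}|$ and $G = |U_{11}|$; taking $\theta = |U_{11}|^2$ makes the bound $\|G^{-1}\|_2 \leq 1/\sqrt{\theta}$ an equality. The setup below is an illustrative toy case, not from the text.

```python
import math

# Toy illustration with M = N = 1 and a 2x2 rotation U (an isometry).
a = 0.3
U = [[math.cos(a), -math.sin(a)],
     [math.sin(a),  math.cos(a)]]

sigma_M = abs(U[0][0])          # sigma_M(P_N U P_M) = |U_11|
theta = sigma_M ** 2            # balancing constant for this toy case
G = sigma_M                     # G = sqrt(P_M U* P_N U P_M) = |U_11|
G_inv_norm = 1.0 / G            # ||G^{-1}||_2 = 1/sigma_M(G)
assert G_inv_norm <= 1.0 / math.sqrt(theta) + 1e-12
```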