
Define an $n\times N$ sensing matrix $A$ by $A_{ij} = 0$ with probability $p$, and $A_{ij} = 1/\sqrt{n}$ with probability $1-p$. Does $A$ satisfy the restricted isometry property?
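
For concreteness, a minimal MATLAB sketch of this construction (the sizes and variable names below are only illustrative):

n = 200; N = 1000;                    % number of rows and columns, chosen only for illustration
p = 0.5;                              % probability of a zero entry
A = double(rand(n,N) >= p) / sqrt(n); % A_ij = 0 with probability p, 1/sqrt(n) with probability 1-p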

For reference, the symmetric case is answered in the following paper:

R.G. Baraniuk, M.A. Davenport, R.A. DeVore, and M.B. Wakin, "A simple proof of the restricted isometry property for random matrices," Constructive Approximation, 28(3), pp. 253-263, December 2008.

olivia
  • This may be a pointer: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5512379 (unfortunately, it is paywalled and I have not found an OA copy of it). I do not know the paper in detail, but what I can see from a quick glance is that they do not consider as general a case as you ask for; they consider p = 1/2. Also, I do not know how thorough they are about the RIP of such matrices. – Thomas Arildsen Jun 20 '13 at 13:09
  • This could also be a hint: http://rauhut.ins.uni-bonn.de/RauhutSlidesLinz.pdf (page 98). Unfortunately, it looks like what he calls Bernoulli random variables are random +/-1 - not 0/1 (I would call these Rademacher). – Thomas Arildsen Jun 20 '13 at 13:16
  • Allow me to repeat the gist of a comment I made on the identical post (now deleted) on stats.SE: It would help to make this question more precise and indicate what exactly you are interested in and what you are struggling to adapt. @Thomas' comment is relevant; we also do not know what degree (i.e., order) of sparsity you are interested in. Even if we consider Rademacher functions, the answer is clearly no in any uniform (in $p$) sense, for let $p$ be $1$ (or, sufficiently close) so that there is (a high probability of) a submatrix being all ones. (cont.) – cardinal Jun 21 '13 at 20:15
  • By choosing a sequence $p_n \in (0,1)$ as a function of $n$, this will be made true for some $p$ for any size matrix. On the other hand, for fixed $p$, if we modify the construction so that $A_{ij} = (1-p)/\sqrt{n}$ with probability $p$ and $-p/\sqrt{n}$ with probability $(1-p)$, then the answer is clearly yes, for this follows from the much more general theory related to zero-mean subgaussian random matrices. – cardinal Jun 21 '13 at 20:16
  • Thanks @cardinal, the matrix $A$ is not zero-mean, but the theory of subgaussian random matrices does answer this question. I was wondering how $A$ could satisfy the RIP given that it does not preserve norms, but it is obvious there is an appropriate scaling of $A$ that does. – olivia Jun 24 '13 at 19:16
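
A minimal MATLAB sketch of the centered construction described in cardinal's comments above (sizes and variable names are illustrative, not from the thread); this is the zero-mean variant to which the subgaussian theory applies, not the matrix from the question:

n = 200; N = 1000; p = 0.5;     % illustrative dimensions and probability
W = double(rand(n,N) < p);      % 1 with probability p, 0 otherwise
Ac = (W - p) / sqrt(n);         % entries equal (1-p)/sqrt(n) w.p. p and -p/sqrt(n) w.p. (1-p)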

1 Answer


As others have stated in the comments, the answer is "No". Because the entries of $A$ have a nonzero mean, a vector with nonzero mean (say, the all-ones vector) is amplified substantially more by $A$ than a zero-mean random vector (say, one with uniformly random $\pm 1$ entries).

Here, following the code below, take the entries of $A$ to be $1$ with probability $p$ and $0$ otherwise (the $1/\sqrt{n}$ scaling and the swap of $p$ and $1-p$ relative to the question do not change the argument).

The squared norm of $A$ times the constant all-ones vector $y$ has expectation approximately $n\,(pN)^2$: each row of $A$ sums to a Binomial$(N,p)$ variable with mean $pN$, and the result follows by iterated expectations.

The squared norm of $A$ times a vector $x$ with entries drawn uniformly from $\{-1,+1\}$ has expectation $n\,(pN)$: each row inner product has zero mean, and its variance is the sum of the $N$ per-entry variances, each equal to $p$.

The norms of $x$ and $y$ are the same ($\|x\|^2 = \|y\|^2 = N$), but the expected squared norms after applying $A$ differ by a factor of $pN$, which diverges as the dimensions grow large.

Here's MATLAB code to help demonstrate:

n=2000;                   % rows
N=1000;                   % columns
p=.9;                     % probability that an entry equals 1
A=double(rand(n,N)<p);    % 0/1 sensing matrix (no 1/sqrt(n) scaling; the ratio below is unaffected)
x=sign(randn(N,1));       % random +/-1 vector (zero mean)
y=ones(N,1);              % constant all-ones vector (nonzero mean)
Ex_normSqAx = n*(N*p);    % E[ squared norm of A times random signs ]
Ex_normSqAy = n*(N*p)^2;  % approx. E[ squared norm of A times constant vector ]
normSqAx = norm(A*x)^2;
normSqAy = norm(A*y)^2;
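
As a quick sanity check one might append to the script (variable names as above), comparing the empirical ratio of the two squared norms with the predicted factor $pN$:

ratio_empirical = normSqAy / normSqAx;   % observed gain of the constant vector over random signs
ratio_predicted = N*p;                   % predicted factor pN from the expectations above
disp([ratio_empirical ratio_predicted])  % the two values should be close for large n and N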
Mark Borgerding