
I was wondering whether there is a compressive sensing algorithm to estimate a sparse vector when both the number of non-zero values and the amplitude of every non-zero value are known. For example, assume we have a vector $x$ of size $N \times 1$ with only $N/2$ non-zero entries, all equal to a known value, but at unknown locations. That vector is convolved with a channel $h$, resulting in a vector $y$. That is:

$y = h⊗x$, where $⊗$ is the convolution operation.

Is it possible to use compressive sensing to estimate the locations of the non-zero values in $x$ from $y$?

Here is MATLAB code for an example where the length of the vector $x$ is $32$ and the length of the channel $h$ is $16$:

clear; clc;

%%%% Build the sparse vector
X  = hadamard(32);
X2 = randi([1 length(X)-1], 1);  % random row index (excluding the all-ones first row)

x = X(1,:) + X(X2+1,:);          % sum of the all-ones row and a random row: entries in {0, 2}
x = x / max(x);                  % normalise so the sparse vector is either one or zero

h = randn(1,16);                 % channel
y = conv(x,h);
y = y(1:end-length(h)+1);        % keep the first length(x) samples of the full convolution

Thank you

Gze
  • Do you know something about $ h $? How big is the problem ($ N $)? What's the noise level? – Royi Apr 22 '20 at 15:35
  • @Royi For example, let's take $N = 128$ and $h$ a random vector of size $32 \times 1$. We don't care about the noise for the moment. – Gze Apr 23 '20 at 06:44
  • Do we know anything about $ h $? – Royi Apr 23 '20 at 06:58
  • No, we don't know anything about it. "We know it's sparse too, but I don't know if that will help or not." – Gze Apr 23 '20 at 07:23
  • @Royi I was thinking of using the frequency domain, since abs(fft($x$)) is constant, hence we can estimate $h$ in that case. Is that right? – Gze Apr 23 '20 at 10:01
  • 1
    So our prior is $ h $ is sparse yet unknown and $ x $ is sparse with known constant values for all non zero elements? – Royi Apr 23 '20 at 13:30
  • Yes, exactly @Royi – Gze Apr 23 '20 at 13:33
  • Could you post mat file with example of $ y $, $ h $ and $ x $? – Royi Apr 23 '20 at 19:44
  • @Royi I added the MATLAB code to the question. – Gze Apr 24 '20 at 07:22
  • Can we assume $ h $ to be an LPF channel? So at least it has a sum of 1 or something? If you have no prior on $ h $, the problem is ill-posed. – Royi Apr 24 '20 at 07:35
  • LPF = low-pass filter, right? I don't work on that case; how will that help? Maybe I can extend your idea to the general case. – Gze Apr 24 '20 at 13:06
  • LPF filters usually maintain the DC component. So we can at least say their sum is 1. – Royi Apr 24 '20 at 13:37
  • Yep, we can assume it has a constant sum $N$. @Royi – Gze Apr 26 '20 at 09:55

2 Answers


Basically, your problem is called Blind Deconvolution.
It means we want to estimate both the operator and the input given the output.

Your model is a Linear Time Invariant (LTI) operator, so we have LTI Blind Deconvolution.
In general, blind deconvolution is an ill-posed problem.
So we need to make assumptions about the model. The more assumptions, the better the chance to solve this really hard problem.

What we have in your case:

  1. The input signal is sparse.
  2. The input signal takes only two values: zero or a known constant.

What's missing is some assumption on the operator $ h $.

Deconvolution in Image Processing

The field that pushes the deconvolution problem farther and farther is mostly image processing.
There are many models of real-world images and convolution kernels.

Let's talk about the most common for each:

  • In most cases the convolution kernel is assumed to be an LPF with its sum of coefficients equal to 1 and each coefficient non-negative.
  • In most cases the image is assumed to be "Piecewise Smooth", enforced using the Total Variation (TV) model, which basically says the gradients are distributed according to a Laplace distribution.

With those 2 models we can model the problem as:

$$\begin{aligned} \arg \min_{h, x} \quad & \frac{1}{2} {\left\| h \ast x - y \right\|}_{2}^{2} + \lambda \operatorname{TV} \left( x \right) \\ \text{subject to} \quad & \sum h = 1 \\ & {h}_{i} \geq 0 \\ \end{aligned}$$

As can be seen, this is a highly non-convex problem. The method used to solve it is splitting: alternating minimization over $ x $ and $ h $.

So we solve it by iterations:

We set $ {h}_{i}^{0} = \frac{1}{N} $, then:

  • For the estimated signal:

$$\begin{aligned} {x}^{k + 1} = \arg \min_{x} \quad & \frac{1}{2} {\left\| {h}^{k} \ast x - y \right\|}_{2}^{2} + \lambda \operatorname{TV} \left( x \right) \\ \end{aligned}$$

  • For the estimated kernel:

$$\begin{aligned} {h}^{k + 1} = \arg \min_{h} \quad & \frac{1}{2} {\left\| h \ast {x}^{k + 1} - y \right\|}_{2}^{2} \\ \text{subject to} \quad & \sum h = 1 \\ & {h}_{i} \geq 0 \\ \end{aligned}$$

So, in your case we can do the following:

  1. Replace the TV regularization by the sparsity model. Solve the $ x $ iteration by the methods in Thomas' answer (Yaghoobi, Blumensath, Davies, 2007, Quantized Sparse Approximation with Iterative Thresholding for Audio Coding - DOI; Nagahara, 2015, Discrete Signal Reconstruction by Sum of Absolute Values - DOI). Solve for $ h $ as least squares with a simplex constraint.

  2. Use a model without explicit convolution, with a dictionary, and apply dictionary learning methods like K-SVD. For the signal estimation iteration you should still use the methods above.
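The alternating iterations above can be sketched in code. This Python sketch is only an illustration under the stated assumptions (non-negative $h$ summing to 1, $x$ taking values in $\{0, \text{amp}\}$): the $x$-update is replaced by a cheap greedy correlation step rather than the TV / sparsity solvers discussed here, and the constrained $h$-update is approximated by an unconstrained least-squares solve followed by a simplex projection. All function names are mine, not from any of the referenced papers.

```python
import numpy as np

def conv_matrix(v, n):
    """Toeplitz matrix C such that C @ x == np.convolve(v, x) for x of length n."""
    C = np.zeros((len(v) + n - 1, n))
    for j in range(n):
        C[j:j + len(v), j] = v   # column j is v shifted down by j samples
    return C

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (sum 1, non-negative)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def blind_deconv(y, n, k, amp, h_len, n_iter=50):
    """Alternating-minimization sketch: x has exactly k entries equal to amp
    (rest zero); h is unknown, assumed non-negative with sum 1."""
    h = np.full(h_len, 1.0 / h_len)                # h^0: uniform kernel
    for _ in range(n_iter):
        # x-update (greedy surrogate): correlate y with shifted copies of h
        # and place the k non-zeros where the correlation is largest.
        H = conv_matrix(h, n)
        support = np.argsort(H.T @ y)[-k:]
        x = np.zeros(n)
        x[support] = amp
        # h-update: unconstrained least squares with x fixed, then projection
        # onto the simplex as a crude surrogate for the constrained LS step.
        X = conv_matrix(x, h_len)
        h = project_simplex(np.linalg.lstsq(X, y, rcond=None)[0])
    return x, h
```

There is no convergence guarantee for such a scheme; it is only meant to make the structure of the two alternating sub-problems concrete.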

Some related questions:

  1. Using Total Variation Denoising to Clean Accelerometer Data.
  2. The Meaning of the Terms Isotropic and Anisotropic in the Total Variation Framework.
  3. Why Sparse Priors Like Total Variation Opts to Concentrate Derivatives at a Small Number of Pixels?
  4. How Can I Use MATLAB to Solve a Total Variation Denoising / Deblurring Problem?
  5. Intuitive Meaning of Regularization in Imaging Inverse Problems.
  6. Estimation / Reconstruction of an Image from Its Missing Data 2D DFT.
  7. Deconvolution of an Image Acquired by a Square Uniform Detector.
Royi
  • Thank you so much. I will try to write the MATLAB code for those methods and let you know. Do you have MATLAB code for them? What is $TV$? – Gze Apr 27 '20 at 20:05
  • @Gze, I added some related references. Each of them should assist with some angle. Please mark those which assisted you. – Royi Apr 28 '20 at 05:21
  • We set the first value of $h$, i.e. $h_i^0$, as known, is that right? – Gze May 01 '20 at 18:04
  • Yep. You can set it uniformly. – Royi May 01 '20 at 18:18
  • When setting $h_i^0 x$, what do you mean by $x$? Is it $x_i^0$ too, which can be found as $y_i^0 / h_i^0$? And is the operation multiplication or convolution? – Gze May 02 '20 at 20:48
  • We don't set it. We're after getting $ x $ from the optimization problem. – Royi May 02 '20 at 21:56
  • I tried building the MATLAB code for that method, but I couldn't. Could you please send me the MATLAB code for it? – Gze May 08 '20 at 19:27
  • I don't have such code. It is not trivial and takes time. If you open a new question about implementation and what you did I will certainly have a look and try to assist. – Royi May 08 '20 at 22:27

You can approach this problem as a special case of the "$k$-simple bounded signal" class described in (Donoho & Tanner, 2010 - Precise Undersampling Theorems), see page 2, Example 3. Particularly, your signal is a "0-simple" signal, i.e. your values are either 0 or some constant. The problem can easily be scaled to 0 or "some constant" instead of 0 or 1.
In addition, you need to re-write your sensing equation with a matrix-vector product instead of the convolution, as explained in my answer.
Notice that you will not be able to successfully under-sample by more than a factor ½ with this interpretation of the problem - see (Donoho & Tanner, 2010 - Precise Undersampling Theorems), page 5, Fig. 3.
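To make the matrix-vector form concrete: with a known $h$, $y = h \ast x$ becomes $y = A x$, where $A$ is a Toeplitz matrix whose columns are shifted copies of $h$. A minimal Python sketch of building $A$ (SciPy also ships `scipy.linalg.convolution_matrix` for exactly this):

```python
import numpy as np

def convolution_matrix(h, n):
    """Toeplitz matrix A with A @ x == np.convolve(h, x) for x of length n,
    so the sensing equation y = h * x becomes the linear model y = A x."""
    A = np.zeros((len(h) + n - 1, n))
    for j in range(n):
        A[j:j + len(h), j] = h   # column j holds h shifted down by j samples
    return A
```

With $A$ in hand, $x$ can be sought by any sparse solver (OMP, $\ell_1$ minimisation, or the quantised approaches below) applied to $y = A x$.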

Edit - two more solutions: Another approach can be Masaaki Nagahara's (Nagahara, 2015, Discrete Signal Reconstruction by Sum of Absolute Values - DOI). In particular, your case corresponds to the binary case in the mentioned paper. That is, $r_1 = 0$ and $r_2$ is your known amplitude or vice versa if the amplitude is negative. Use the probabilities $p_1$ and $p_2$ to express your known sparsity.

Finally, a third solution I came to think of is (Yaghoobi, Blumensath, Davies, 2007, Quantized Sparse Approximation with Iterative Thresholding for Audio Coding - DOI). In this framework, your case corresponds to having two quantisation levels; 0 and your known amplitude. The philosophy here is a bit similar to (Nagahara, 2015), but the algorithm is a greedy thresholding algorithm as opposed to the convex optimisation approach in (Nagahara, 2015).
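To illustrate the thresholding philosophy, here is a hypothetical projected-gradient sketch with two quantisation levels, assuming the sensing matrix $A$ is known. It is not the exact algorithm of either paper; names and the support-selection rule are my own simplifications.

```python
import numpy as np

def quantized_iht(A, y, amp, k, n_iter=100):
    """Iterative-thresholding sketch over the two-level alphabet {0, amp}:
    gradient step on 0.5*||A x - y||^2, then project onto signals with
    exactly k entries equal to amp."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1 / sigma_max(A)^2 keeps the step stable
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + step * A.T @ (y - A @ x)           # gradient step on the data-fit term
        support = np.argsort(np.abs(g - amp))[:k]  # k entries closest to amp
        x = np.zeros(A.shape[1])
        x[support] = amp
    return x
```

The projection step here replaces the usual hard-thresholding of IHT with "snap the $k$ most likely entries to the known amplitude", which is where the known-amplitude prior enters.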

I do not know which of these approaches would work best for your case.

Thomas Arildsen
  • Thank you so much for your continuous help, I really appreciate it, but I don't know $h$, so how can I build the measurement matrix? I still didn't understand your exact meaning; is it possible to add more details to your answer? I maybe should read the paper (Donoho & Tanner, 2010) carefully. – Gze Apr 23 '20 at 11:27
  • 1
    Please, when you link to papers, write their titles. One day the link won't be available yet the search will always work :-). – Royi Apr 23 '20 at 19:42
  • @Royi I'm trying to explore this idea too, but I still didn't understand it well. Awaiting Mr. Thomas, in case he can provide more details about it. – Gze Apr 24 '20 at 07:24
  • @Royi good point. It's a reflex from writing research papers – Thomas Arildsen Apr 24 '20 at 13:42
  • @Gze if you do not know $h$, then you do not know the measurement matrix. In that case, the problem does not really lend itself to compressed sensing. In compressed sensing (and lots of estimation theory in general), it is assumed that you know the measurement matrix. Do you have a way of estimating it? – Thomas Arildsen Apr 24 '20 at 13:45
  • Thank you for clarifying. I thought there was such an algorithm, where we can detect the sparse entries based on their number and amplitude. I got it. @ThomasArildsen – Gze Apr 24 '20 at 16:17
  • @ThomasArildsen In case we know the number of non-zeros and their amplitudes, is it possible to use another CS algorithm with lower complexity than OMP, and similar or better performance? I'm trying to read about Stagewise Orthogonal Matching Pursuit, but I don't think it's the best choice in this case. Thanks again. – Gze Apr 24 '20 at 17:41
  • @Gze I am not sure, actually, if there exist specialised algorithms for the known-amplitude case. My overview of the literature is getting a bit rusty. I may be getting an idea to try, though. I need to think about that... – Thomas Arildsen Apr 25 '20 at 21:11
  • I edited my answer with another possible solution I thought of. – Thomas Arildsen Apr 25 '20 at 21:37
  • 1
    Any of the approaches you added assume known $ h $? – Royi Apr 25 '20 at 22:33
  • @ThomasArildsen Thank you so much for your help .. – Gze Apr 26 '20 at 10:06
  • @Gze, Have you verified the methods Thomas suggested can handle unknown $ h $? – Royi Apr 26 '20 at 11:12
  • @Royi The first method assumes the channel $h$ is known; now I'm checking the second suggestion. But, as you said, if we assume the sum, amplitude, and number of the non-zeros are known, is it possible to estimate them? – Gze Apr 26 '20 at 11:30
  • I didn't understand why you marked this as an answer as I think all those methods assume the channel $ h $ is known. I think I have idea how to take care the case $ h $ is unknown. But first verify that indeed those papers do not fit. If they fit to your case, enjoy, you got the solution. – Royi Apr 26 '20 at 11:32
  • @Royi I marked it as an answer because I thought it was impossible to solve without knowing the channel $h$, but when you asked about the sum of $h$, I expected you could provide a better way based on the description of the sparse signal, without knowing $h$. I verified those papers, and they all assume that $h$ is known. Thanks for your understanding. – Gze Apr 26 '20 at 13:00
  • Of course there are methods for when $ h $ isn't known. I will write about it later. – Royi Apr 26 '20 at 13:23
  • @Royi OK, thank you so much. I'm waiting. – Gze Apr 26 '20 at 16:07
  • @Royi you are right, these methods assume known $h$. When $h$ is unknown, it is indeed blind deconvolution and my suggestions are not suitable for that. – Thomas Arildsen Apr 27 '20 at 07:36
  • @Gze, I edited my answer to give you a full guideline on how to solve it. – Royi Apr 27 '20 at 18:06