I am playing with some compressed sensing (think single-pixel camera) applications and would like to have a Mathematica equivalent of the MATLAB package called CVX (Convex Optimization). [...] very slow compared to the MATLAB code written by a colleague. I don't want to give him the pleasure of thinking MATLAB is superior
CVX is the result of years of theoretical and applied research, a book on convex optimization, and a company focused on researching, developing, and supporting convex optimization tools. You simply cannot create a Mathematica clone overnight that matches the performance and features of CVX, and certainly not via a question on Stack Exchange! :)
There are plenty (way more than you'll ever need!) of examples with code for doing compressed sensing/$\ell_1$ optimizations in MATLAB under different constraints, and your best bet would be to leverage those existing scripts.
Use CVX via MATLink
The best way to do convex optimization (beyond what NMinimize and friends allow) in Mathematica, if you also have a MATLAB license, is to use CVX via MATLink. In fact, I co-wrote it with Szabolcs specifically because I personally needed to use CVX + Mathematica.
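Assuming MATLink is installed and MATLAB is available on your system, you first load the package and start the MATLAB engine (Needs and OpenMATLAB are the standard MATLink entry points):
Needs["MATLink`"]
OpenMATLAB[]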
As an example workflow of using CVX with Mathematica, we can solve a small convex optimization problem, fitting a degree-5 polynomial to noisy data in the $L_\infty$ norm, as follows:
MEvaluate["
n = 6;
m = 40;
randn('state',0);
% generate m = 40 points u_i, v_i
u = linspace(-1,1,m);
v = 1./(5+40*u.^2) + 0.1*u.^3 + 0.01*randn(1,m);
A = vander(u');
A = A(:,m-n+[1:n]);
% L-infty fit
cvx_begin quiet
variable x_inf(n)
minimize (norm(A*x_inf - v', inf))
cvx_end
"]
CVX is quite verbose, and it is a bit more efficient to use the quiet option when running it via MATLink. There are other CVX globals (starting with cvx_) that you can use to programmatically check which solver was used, whether the problem was solved or is infeasible, etc., rather than relying on the textual output. If you are doing active algorithm development with CVX, it may be better to work in MATLAB directly for that part and switch to MATLink once you are satisfied and have frozen the routine.
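For example, here is a minimal sketch of checking the status programmatically (cvx_status and cvx_optval are standard CVX globals; I'm assuming MATLink's usual conversion of MATLAB strings):
status = MGet["cvx_status"]; (* a string such as "Solved", "Infeasible", or "Unbounded" *)
If[status =!= "Solved",
 Print["CVX did not solve the problem: ", status],
 optval = MGet["cvx_optval"] (* the optimal objective value *)
]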
Next, we transfer the optimal solution and the original quantities to Mathematica using MGet:
{u, v, A, xinf} = MGet /@ {"u", "v", "A", "x_inf"};
Finally, we compute the optimal $L_2$ solution with LeastSquares, construct the polynomials using Horner's method, and plot everything in Mathematica:
x2 = LeastSquares[A, v]; (* the ordinary least-squares (L2) fit *)
(* the coefficients come from vander, highest power first, so a left fold is Horner's method *)
With[{hornerPoly = Function[{var, coeffs}, Fold[#1 var + #2 &, coeffs]]},
 Show[
  ListPlot[Transpose@{u, v}, PlotStyle -> {AbsolutePointSize[5], Gray}],
  Plot[{hornerPoly[x, x2], hornerPoly[x, xinf]}, {x, -1, 1}, Evaluated -> True,
   PlotLegends -> {Subscript["L", 2], Subscript["L", "\[Infinity]"]}]
  ]
 ]
Compressed sensing/Sparse representations in Mathematica
If you don't have access to MATLAB, then you'll have to implement the algorithms yourself. You'll find NMinimize thoroughly lacking when you want to implement $\ell_1$ minimization and enforce sparsity, or do $\ell_p$ minimizations for $p < 1$, etc. There's no magic way around actually learning the theory behind sparse representations, learning the standard algorithms that are the building blocks of more sophisticated algorithms for your specific problem, and understanding the standard computational tricks for enforcing sparsity, the restricted isometry property, and so on. I can recommend Michael Elad's book Sparse and Redundant Representations, which I found quite helpful (the implementations below follow its pseudocode).
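To see why NMinimize struggles, here is a naive basis pursuit formulation, minimize $\|x\|_1$ subject to $Ax = b$, on a toy problem of my own (not from the original post). It runs, but it tends to be slow and the answer is only approximately sparse, which is exactly the problem:
SeedRandom[42];
mm = 5; nn = 15;
Amat = RandomReal[NormalDistribution[], {mm, nn}];
xtrue = Normal@SparseArray[{3 -> 1., 11 -> -0.5}, nn]; (* 2-sparse ground truth *)
bvec = Amat.xtrue;
vars = Array[z, nn];
(* minimize the l1 norm subject to the linear measurement constraints *)
sol = NMinimize[{Total[Abs[vars]], And @@ Thread[Amat.vars == bvec]}, vars];
Chop[vars /. Last[sol], 10^-3]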
Not to leave you completely in the dark, here are my implementations of a few basic pursuit algorithms (written for version 9). Note that I did not write these (they are 2-3 years old at this point) with the intent of sharing them or as expository code, so they may be quite cryptic, though reasonably idiomatic. Nevertheless, they are based on the pseudocode in Chapter 3 of Elad's book, so if you're using that, you should be able to follow.
Matching Pursuit
MatchingPursuit[A_?MatrixQ, b_, epsilon_] := Module[
 {solution, residual},
 (* solution[] stores the current coefficient vector *)
 solution[] = ConstantArray[0, Last@Dimensions@A];
 (* given the correlations z = Abs[A'.r], update the coefficient of the most correlated column and return its index *)
 solution[z_] := With[
   {pos = First@Position[z, Max@z]},
   solution[] = ReplacePart[solution[], pos -> First@(solution[][[pos]] +
       Transpose[A[[;; , pos]]].residual[])];
   pos
   ];
 (* residual[] stores the current residual; residual[pos] removes its projection onto column pos (unit-norm columns assumed) *)
 residual[] = b;
 residual[pos_] := residual[] = residual[] - (A[[;; , pos]].Transpose@A[[;; , pos]]).residual[];
 (* iterate correlate -> update coefficient -> update residual until the squared residual norm falls below epsilon *)
 Sow@NestWhile[Composition[residual, solution, Abs[Transpose@A.#] &], b, Flatten@#.Flatten@# > epsilon &];
 solution[] // Chop
 ]
Iteratively Reweighted Least Squares (IRLS)
IRLS[A_?MatrixQ, b_, epsilon_, p_: 1] := NestWhile[
   Composition[
    (* 3. build the next weight matrix diag(|x|^p) from the new solution x *)
    DiagonalMatrix@SparseArray@Flatten@Abs@#^p &,
    (* 2. weighted least squares: x = W.A'.(A.W.A')^+ . b *)
    #.Transpose@A.PseudoInverse[A.#.Transpose@A].b &,
    (* 1. square the current weights to get W *)
    #.# &],
   Last@Dimensions@A ~IdentityMatrix~ SparseArray,
   (* stop once successive weight matrices agree to within epsilon *)
   Sow@Norm[# - #2] > epsilon &, 2
   ]["NonzeroValues"] // Chop
(* note: this returns the magnitudes |x_i|^p on the support, which suffices for the nonnegative test vectors below *)
Orthogonal Matching Pursuit
OrthogonalMatchingPursuit[A_?MatrixQ, b_, epsilon_] := Module[
 {support, residual},
 (* support[] accumulates the indices of the chosen columns *)
 support[] = {};
 (* given the correlations z = Abs[A'.r], add the most correlated column to the support *)
 support[z_] := support[] = Flatten@Sort[support[] ~Join~ First@Position[z, Max@z]];
 (* residual: project b onto the span of the support columns and return what is left *)
 residual := With[{AS = A[[;; , #]]}, b - (AS.PseudoInverse@AS).b] &;
 (* iterate until the squared residual norm falls below epsilon *)
 Sow@NestWhile[Composition[residual, support, Abs[Transpose@A.#] &], b, Flatten@#.Flatten@# > epsilon &];
 (* least-squares coefficients on the final support, scattered into a full-length vector *)
 SparseArray[Thread[# -> Flatten[PseudoInverse@A[[;; , #]].b] &@support[]], Last@Dimensions@A] // Normal // Chop
 ]
Finally, here's a little snippet showing that each of these algorithms recovers the sparse solution (again, this will make more sense if you're following Elad's book, but nevertheless):
(* random vector of length n with k nonzero (nonnegative) entries *)
SparseRandomVector[n_Integer, k_Integer] /; k < n :=
 RandomSample[Table[RandomReal[], {k}] ~Join~ ConstantArray[0, n - k], n]
RandomMatrix[n_Integer, m_Integer] := RandomReal[NormalDistribution[], {n, m}]
(* the pursuit algorithms above assume unit-norm columns *)
NormalizeColumns[mat_?MatrixQ] := Normalize /@ Transpose[mat] // Transpose
SeedRandom[1234];
{#, MatrixForm@With[{A = NormalizeColumns@RandomMatrix[20, 40],
x = SparseRandomVector[40, 2], tol = 10^-4}, {x, #[A, A.x, tol]}]
} & /@ {MatchingPursuit, OrthogonalMatchingPursuit, IRLS} // TableForm

Comments:

- FindMinimum is likely to be much faster than NMinimize. – Daniel Lichtblau Jul 31 '14 at 23:33
- [...] Parallelize[] around your code. This is a highly parallelizable operation. You can be even faster if you explicitly split your iterations and such into the number of your processors. – David G. Stork Jan 28 '15 at 17:41
- [...] (the NMinimize-approach) and some timing comparisons, so as to know what we are up against? – Jinxed Feb 02 '15 at 17:51
- [...] t and n1 in your code? – M.R. Feb 03 '15 at 23:14
- The restriction "equidistant" does not match the question here. I am confused. – Jinxed Feb 07 '15 at 18:07