
I have this optimization problem:

$$ \arg \min_{ X \left( i, j \right) } \sum_{i, j} \left\| X \left( i, j \right) - 255 \right\|_{2}^{2} + \lambda \sum_{i, j} \left\| \nabla X \left( i, j \right) - \nabla Y \left( i, j \right) \right\|_{2}^{2} $$

Where $ X $ is the output image and $ Y $ is the input image.

Vectorizing the images (transforming each image matrix into a vector), let the input image be $ y $ and the output image be $ x $; then the problem can be rewritten as:

$$ \hat{x} = \arg \min_{x} \frac{1}{2} {\left\| x - 255 \cdot \boldsymbol{1} \right\|}_{2}^{2} + \frac{\lambda}{2} {\left\| {D}_{h} \left( x - y \right) \right\|}_{2}^{2} + \frac{\lambda}{2} {\left\| {D}_{v} \left( x - y \right) \right\|}_{2}^{2} $$

Where $ D_h $ is the horizontal derivative operator, $ D_v $ is the vertical derivative operator, and $ \boldsymbol{1} $ is a vector of ones.

Then the solution is given by:

$$ \hat{x} = { \left( I + \lambda {D}_{h}^{T} {D}_{h} + \lambda {D}_{v}^{T} {D}_{v} \right) }^{-1} \left( \lambda {D}_{h}^{T} {D}_{h} y + \lambda {D}_{v}^{T} {D}_{v} y + 255 \cdot \boldsymbol{1} \right) $$
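For concreteness, here is a small NumPy sketch (sizes and $\lambda$ are arbitrary, chosen only for illustration) of this closed-form solution on a toy image, building $ D_h $ and $ D_v $ as Kronecker products of 1-D difference matrices with identities; this assumes row-major vectorization:

```python
import numpy as np

# Hypothetical toy setup: a small m-by-n "image" Y, vectorized row-major.
m, n = 4, 5
rng = np.random.default_rng(0)
Y = rng.uniform(0.0, 255.0, size=(m, n))

def first_diff(k):
    """(k-1) x k forward-difference matrix: (D @ v)[i] = v[i+1] - v[i]."""
    D = np.zeros((k - 1, k))
    D[np.arange(k - 1), np.arange(k - 1)] = -1.0
    D[np.arange(k - 1), np.arange(1, k)] = 1.0
    return D

# Derivative operators acting on the vectorized image
Dh = np.kron(np.eye(m), first_diff(n))  # differences along each row (horizontal)
Dv = np.kron(first_diff(m), np.eye(n))  # differences along each column (vertical)

lam = 2.0
y = Y.ravel()  # row-major vectorization

# x_hat = (I + lam*Dh'Dh + lam*Dv'Dv)^{-1} (lam*Dh'Dh y + lam*Dv'Dv y + 255*1)
A = np.eye(m * n) + lam * (Dh.T @ Dh) + lam * (Dv.T @ Dv)
b = lam * (Dh.T @ (Dh @ y)) + lam * (Dv.T @ (Dv @ y)) + 255.0 * np.ones(m * n)
x_hat = np.linalg.solve(A, b).reshape(m, n)
```

Note that $ D_h y $ reproduces exactly the horizontal forward differences of the image (and likewise $ D_v y $ the vertical ones), which is a quick way to check the operators were built correctly.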

My question is: given the input $ y $, how do I apply $ {D}_{h} $ and $ {D}_{v} $ in this specific equation?

Thanks for your reply.

lafi raed
  • I'm confused how you can write your in- and output images as vectors $x$ and $y$ and still differentiate them. I'm missing something here, or you're doing a transformation that doesn't work. What exactly is the set from which $x$ and $y$ come? (also, bad choice re-using letters that you already used as coordinates in the first equation) – Marcus Müller Jul 06 '18 at 21:48
  • i edited my question, also here is the reference https://dsp.stackexchange.com/questions/50329/automatic-image-enhancement-of-images-of-scanned-documents/50330?noredirect=1#comment98462_50330 – lafi raed Jul 06 '18 at 23:57
  • Why did you remove the better half of your question? That was your own attempt! Without that, your question really is just "solve this problem for me"! Instead of deleting, you should have explained your approach! – Marcus Müller Jul 07 '18 at 00:06
  • i add the resolution details – lafi raed Jul 07 '18 at 00:52
  • That's just now copies of the answer you've got there. What is your question, precisely? – Marcus Müller Jul 07 '18 at 00:53
  • i don't know how to solve the last equation using python or matlab, how i can proceed to solve it? – lafi raed Jul 07 '18 at 00:54
  • Again, no idea what your x and y are supposed to be. We're running in circles here. Do you understand every symbol in that equation? – Marcus Müller Jul 07 '18 at 00:56
  • yes, i already mentioned that x and y are vectors, since we can vectorize an image (matrix); please review this link: https://en.wikipedia.org/wiki/Vectorization_(mathematics). i already mentioned in the question that we can transform a matrix to a vector: y is the vectorized input image and x is the vectorized output image – lafi raed Jul 07 '18 at 00:59
  • I don't mean to be rude, but please understand that I know what vectorization is, but a) there's more than one way to vectorize your image, and I was hoping you'd tell me which one you're planning to use so that we have common ground and b) I'm really not sure what your exact problem is if you understand every single symbol in the last equation. I, myself, find it nontrivial to find the inverse of the sum of the identity, and dyadic products of directional derivative operators on vectorised matrices, but if that's clear to you, you're only asking for code written to a spec - that's explicitly off-topic here. – Marcus Müller Jul 07 '18 at 01:06
  • can you tell me why off-topic? – lafi raed Jul 07 '18 at 01:06
  • There is a rule that "questions for code written to a specification are off-topic" mainly because they bear no future value for other readers and we don't want this to become a free code-writing service. We mostly consider programming to be "handiwork" that becomes more or less trivial once you understand the algorithm, and there's better sister sites for programming questions. The idea is that this site helps you understand the thing you want to implement well enough so that you can implement it yourself. That's why I'm really desperately trying to get out of you where you need help! – Marcus Müller Jul 07 '18 at 01:13
  • i don't need code here, i just need to know how to proceed on the solution. yes, i have the mathematical expression of the solution; how can i proceed to solve it? there is the conjugate gradient descent method to solve this equation, but my problem is how to get Dh and Dv since i just have the input matrix (vector) y – lafi raed Jul 07 '18 at 01:17
  • These are operators. You don't "get" them from the input. – Marcus Müller Jul 07 '18 at 01:18
  • can you explain more please, how to apply them in this specific equation – lafi raed Jul 07 '18 at 01:20
  • No, can't. But maybe that's the precise question you should be asking? – Marcus Müller Jul 07 '18 at 01:21
  • yes i will edit the question – lafi raed Jul 07 '18 at 01:22
  • @lafiraed, Please use the site LaTeX capabilities instead of pasting images. I answered your question. Enjoy... – Royi Jul 07 '18 at 10:54

1 Answer


It is pretty simple to create those matrices.
The real issue is their size, which is enormous for real-world images.

For small kernels, however, they are sparse, which saves the day.
Indeed, for the derivative operator, which has only 2 elements, they are highly sparse.

I built them in MATLAB using:

mI = im2double(imread(imageFileName));
mI = mI(11:410, 201:600, 1); %<! Square crop - the construction below assumes numRows == numCols

% mI = mI(11:20, 201:210, 1);

numRows   = size(mI, 1);
numCols   = size(mI, 2);
numPixels = numRows * numCols;

vDx = [1, -1]; %<! The matrix applies correlation; this kernel is for convolution
vDy = [1; -1];

% Sanity Check - reference derivatives by convolution
mIxRef = conv2(mI, vDx, 'valid');
mIyRef = conv2(mI, vDy, 'valid');

mDh = sparse(numPixels - numCols, numPixels);
mDv = sparse(numPixels - numRows, numPixels);

tic();
colShift = 0;
for ii = 1:(numPixels - numRows)
    if(mod(ii + colShift, numRows) == 0)
        colShift = colShift + 1;
    end
    mDv(ii, ii + colShift)     = -1;
    mDv(ii, ii + colShift + 1) = 1;
end

for ii = 1:(numPixels - numCols)
    mDh(ii, ii)           = -1;
    mDh(ii, ii + numCols) = 1;
end
toc();

mIx = reshape(mDh * mI(:), numRows, numCols - 1);
mIy = reshape(mDv * mI(:), numRows - 1, numCols);

mE = mIy - mIyRef;
disp(['Maximum Absolute Error Between Matrix Form and Convolution (Vertical Derivative) - ', num2str(max(abs(mE(:))))]);

mE = mIx - mIxRef;
disp(['Maximum Absolute Error Between Matrix Form and Convolution (Horizontal Derivative) - ', num2str(max(abs(mE(:))))]);

If you load an image into mI, you will see the result is identical to that of the convolution operator.
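Since the question also mentions Python, here is a sketch of the same sparse construction with SciPy (image size is arbitrary; this version uses row-major vectorization and Kronecker products rather than the explicit loops above), including the same sanity check against direct differencing:

```python
import numpy as np
import scipy.sparse as sp

# Hypothetical toy image Y; in practice this would be the loaded input image.
m, n = 40, 40
rng = np.random.default_rng(1)
Y = rng.uniform(0.0, 255.0, size=(m, n))

def first_diff(k):
    # Sparse (k-1) x k forward-difference matrix.
    return sp.diags([-np.ones(k - 1), np.ones(k - 1)], [0, 1],
                    shape=(k - 1, k), format='csr')

Dh = sp.kron(sp.eye(m), first_diff(n), format='csr')  # horizontal differences
Dv = sp.kron(first_diff(m), sp.eye(n), format='csr')  # vertical differences

y = Y.ravel()  # row-major vectorization

# Same sanity check as the MATLAB code: matrix form vs. direct differencing
err_h = np.max(np.abs(Dh @ y - np.diff(Y, axis=1).ravel()))
err_v = np.max(np.abs(Dv @ y - np.diff(Y, axis=0).ravel()))
```

Each row of these operators holds only 2 nonzeros, so the sparse storage stays linear in the number of pixels even though the dense matrices would be numPixels x numPixels.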

The full MATLAB code which implements the solution is at my Signal Processing StackExchange Q50329 - GitHub Repository (Look at the SignalProcessing\Q50329 folder).

Remark
In practice, these kinds of equations are usually solved using the Preconditioned Conjugate Gradient method.
The nice trick is that the method only requires the result of the operators applied to a vectorized image, which can be computed with the convolution operator itself instead of the explicit matrix form.
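A matrix-free sketch of that idea in SciPy (using plain, unpreconditioned CG for brevity; sizes and $\lambda$ are arbitrary): the operators $D_h$, $D_v$ and their adjoints are applied by differencing directly, so no matrix is ever formed:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Hypothetical toy setup; Y stands in for the vectorized input image.
m, n = 32, 32
lam = 2.0
rng = np.random.default_rng(2)
Y = rng.uniform(0.0, 255.0, size=(m, n))

def grad_h(X):  # forward horizontal difference, shape (m, n-1)
    return np.diff(X, axis=1)

def grad_v(X):  # forward vertical difference, shape (m-1, n)
    return np.diff(X, axis=0)

def div_h(G):   # adjoint of grad_h, shape (m, n)
    out = np.zeros((m, n))
    out[:, :-1] -= G
    out[:, 1:]  += G
    return out

def div_v(G):   # adjoint of grad_v, shape (m, n)
    out = np.zeros((m, n))
    out[:-1, :] -= G
    out[1:, :]  += G
    return out

def apply_A(x):
    # Applies (I + lam*Dh'Dh + lam*Dv'Dv) without building any matrix.
    X = x.reshape(m, n)
    AX = X + lam * (div_h(grad_h(X)) + div_v(grad_v(X)))
    return AX.ravel()

A = LinearOperator((m * n, m * n), matvec=apply_A)
b = lam * (div_h(grad_h(Y)) + div_v(grad_v(Y))).ravel() + 255.0 * np.ones(m * n)
x_hat, info = cg(A, b)  # info == 0 on convergence
```

The system matrix is symmetric positive definite (identity plus Gram matrices), so CG is applicable; a preconditioner (e.g. diagonal or circulant) would speed up convergence on large images.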

Royi