
I would like to write an algorithm in a very verbose manner: a couple of for loops, and the rest is pretty much text. I include here the code I am using, with algpseudocode imported as follows:

\usepackage[noend]{algpseudocode}
\usepackage{algorithm}

Code for the algorithm:

\begin{algorithm}
  \caption{Backpropagation learning algorithm}
  \begin{algorithmic}
  \For {d in data}
    Forwards Pass \hfill \\
      Starting from the input layer, use eq. \ref{update} to do a forward pass through the network, computing the activities of the neurons at each layer.
    Backwards Pass \hfill \\
    Compute the derivatives of the error function with respect to the output layer activities
      \For {layer in layers}
        Compute the derivatives of the error function with respect to the inputs of the upper layer neurons
        Compute the derivatives of the error function with respect to the weights between the outer layer and the layer below
        Compute the derivatives of the error function with respect to the activities of the layer below
      \EndFor
      Update the weights.
\EndFor
\end{algorithmic}

\end{algorithm}

Of course this works, but it is a complete hack and it is rendered horribly. From the package's documentation, I gather that it is not meant to support this.

Am I right? Is there a different package I should use?

Edit: I would like the final output to look something like this:

for e in epochs:
  for d in data:
     Forward pass:
        Starting from the input layer, use eq. \ref{update} to do a forward pass through the network, computing the activities of the neurons at each layer.
     Backward pass:
          Compute the derivatives of the error function with respect to the output layer activities 
          for layer in layers:
              Compute the derivatives of the error function with respect to the inputs of the upper layer neurons
              Compute the derivatives of the error function with respect to the weights between the outer layer and the layer below
              Compute the derivatives of the error function with respect to the activities of the layer below

          Update the weights.

Thank you!

elaRosca
  • Welcome to TeX.SX! Without an idea of what you expect, it's difficult to say. – egreg Jan 23 '14 at 15:58
  • Thank you! I made an edit to show how I would like the output to be. I am generally interested in the spacing and the clear division. I want it to be easy to see that ForwardPass is a name that describes the line that comes after it, and that Backward pass is just a name for what is performed later on. – elaRosca Jan 23 '14 at 17:24
  • Would a verbatim like environment such as lstlisting work? See e.g. http://tex.stackexchange.com/a/145143/586 – Torbjørn T. Jan 23 '14 at 17:41
  • I would really like to keep it in the algorithms package so that I can use \listofalgorithms at the end of the report and have it included – elaRosca Jan 23 '14 at 20:19

1 Answer


You can use the standard algorithmic environment (I suppose you can do the same with algpseudocode, if you insist).

If you want the headers "Forward Pass" and "Backward Pass" to be formatted as you have them in your question, you'll have to provide the extra indentation; I'm defining two extra commands (intended to be nested) for this purpose, using underlined small caps for the header.

\usepackage{algorithmic}
\usepackage{algorithm}

% Define a \HEADER{Title} ... \ENDHEADER block
\makeatletter
\newcommand{\HEADER}[1]{\ALC@it\underline{\textsc{#1}}\begin{ALC@g}}
\newcommand{\ENDHEADER}{\end{ALC@g}}
\makeatother

And then:

\begin{algorithm}
  \caption{Backpropagation learning algorithm}
  \begin{algorithmic}
  \FOR{d in data}
    \HEADER{Forwards Pass}
      \STATE Starting from the input layer, use eq.~\ref{update} to do
      a forward pass through the network, computing the activities of the
      neurons at each layer.
    \ENDHEADER
    \HEADER{Backwards Pass}
      \STATE Compute the derivatives of the error function with respect
      to the output layer activities
      \FOR{layer in layers}
        \STATE Compute the derivatives of the error function with respect
        to the inputs of the upper layer neurons
        \STATE Compute the derivatives of the error function with respect
        to the weights between the outer layer and the layer below
        \STATE Compute the derivatives of the error function with respect
        to the activities of the layer below
      \ENDFOR
      \STATE Update the weights.
    \ENDHEADER
  \ENDFOR
\end{algorithmic}
\end{algorithm}
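For the record, the answer's aside about algpseudocode can be made concrete: the algorithmicx engine that powers algpseudocode provides \algblockdefx for defining custom blocks and \algnotext for suppressing an end line. The following is an untested sketch of an equivalent Header block (the names Header/EndHeader are my own choice, not part of the package):

```latex
\usepackage{algorithm}
\usepackage[noend]{algpseudocode}

% Sketch: define a \Header{Title} ... \EndHeader block via algorithmicx.
% \algblockdefx takes the block name, start/end commands, the number of
% arguments of the start text, and the start/end text itself.
\algblockdefx[Header]{Header}{EndHeader}%
  [1]{\underline{\textsc{#1}}}%
  {\textbf{end}}
% Suppress the "end" line entirely, matching the [noend] style.
\algnotext{EndHeader}
```

Usage then mirrors the algorithmic version: \Header{Forwards Pass} \State ... \EndHeader inside an algorithmic environment.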

result


UPDATE: If you want the extra indentation mentioned in your comment, you can add a \STATEI command alongside \HEADER ... \ENDHEADER in your preamble, and use it instead of \STATE wherever a statement spans more than one line:

% Define a \STATE variant with hanging indentation
% (\ALC@tlm is internal to algorithmic, so \makeatletter is needed)
\makeatletter
\newcommand{\STATEI}[1]{\STATE
  \begin{tabular}{@{}p{\dimexpr\textwidth-\labelwidth-\ALC@tlm\relax}@{}}%
    \hangindent \algorithmicindent
    \hangafter 1
    #1
  \end{tabular}%
}
\makeatother

then use it like:

\STATEI{Compute the derivatives of the error function with respect
        to the output layer activities.}

updated result

nickie
  • Thank you. It definitely looks better than what I had. Is there a way to somehow also ensure that the Compute the derivatives of the error function with respect to the inputs of the upper layer neurons \STATE Compute the derivatives of the error function with respect to the weights between the outer layer and the layer below \STATE Compute the derivatives of the error function with respect to the activities of the layer below lines are aligned? In the sense that the compute will not be exactly above the line below it, in the same sentence. – elaRosca Jan 24 '14 at 21:33
  • Sorry, I didn't understand what you want to achieve. Aligned w.r.t. what? The three Compute are aligned w.r.t. each other, and are all one indentation level to the right of the enclosing for loop. If that's not what you want, please explain. – nickie Jan 25 '14 at 10:16
  • That is what I want, yes, but ideally when the sentence wraps around (like in the case of the upper layer neurons) the continuation will not be at the same level as the beginning of the sentence, but will have a difference in alignment, just as with the for loop and the compute below it. – elaRosca Jan 25 '14 at 12:47
  • I added an update to the answer. Cheers... – nickie Jan 25 '14 at 22:08