Possible Duplicate:
How to select a finite number of samples from the file when plotting using pgfplot
Compiling my document, which contains many pgfplots plots, each with thousands of data points read from a CSV file, takes far too long (several minutes). I realize that it does not make sense to have this many data points: a tenth of them, say, would be enough to recreate the same plot on paper.
One of my files, for example, consists of 216000 lines in the following format:
0.000000000000000000e+00 0.000000000000000000e+00
1.388888888888888888e-04 -2.182787284255027771e-11
Of course, that is far too many points. I currently read the file as follows:
\documentclass{scrartcl}
\usepackage{pgfplots}
\pgfplotsset{compat=1.5.1}

\begin{document}
\begin{tikzpicture}
  \begin{loglogaxis}
    \addplot[mark=*, color=red] file {Data/plotXYZ.dat};
  \end{loglogaxis}
\end{tikzpicture}
\end{document}
Is it possible to make pgfplots use only every x-th line of the data file, to speed up compilation? Or are there other ways to make it (significantly) faster?
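For illustration, this is roughly what I am hoping for. I came across each nth point under the coordinate filters in the pgfplots manual, but I am not sure whether it is the right key, whether it works with file input (or only with table input), and whether it actually speeds up compilation, since the whole file presumably still has to be read:

\begin{loglogaxis}
  % my guess based on the manual's coordinate-filter examples:
  % keep every 10th point and quietly discard the rest
  \addplot[mark=*, color=red,
           each nth point=10,
           filter discard warning=false,
           unbounded coords=discard]
    file {Data/plotXYZ.dat};
\end{loglogaxis}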
I am running Ubuntu, and a friend of mine recommended writing an awk script that goes through the .csv file and retains only every x-th line. However, I have no clue how to do that, as I have never used awk before. If this turns out to be the option of choice, could someone help me set up such a script?
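In case it makes helping easier: after skimming an awk tutorial, the following one-liner is my best guess at what such a script could look like. It is supposed to keep lines 1, 11, 21, ... and write them to a new file (plotXYZ_reduced.dat is just a name I made up), so the \addplot line would then point at the reduced file instead of the original one.

# keep every 10th line of the data file (NR is awk's current line number)
awk 'NR % 10 == 1' Data/plotXYZ.dat > Data/plotXYZ_reduced.dat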