I have a program that counts RGB pixel values with a precision of two decimal places. Each image is 280 x 250 pixels, and the images are stored in a folder containing 1200 of them. At the end of my computation I get a matrix of around 70 million elements, each element consisting of three components, so around 210 million values in total. It is not only time-consuming, but I also run out of memory: I have 16 GB of RAM and added around 56 GB of swap space, and all of the RAM plus around 29 GB of swap was consumed.
I have four questions.
1. Is there any way to speed up and memory-optimize my code?
2. While I use SetPrecision with 2, I still see elements with more than 2 digits of precision. Why?
3. Can I do a parallel sort with ascending/descending output?
4. How about parallel counting?
Here is the code:
Export["/home/rjo/FINAL-DATA-TESTING.csv", SetPrecision[Flatten[ParallelTable[Flatten[ImageData[Import["/home/rjo/Documents/Wolfram Mathematica/delta/"<>"delta"<>ToString[x]<>".bmp"]], 1],{x, FileNames["*","/home/rjo/Documents/Wolfram Mathematica/delta/",Infinity]//Length}], 1], 2], "CSV"];
SetPrecision may force arbitrary-precision arithmetic, which will slow things down and cause higher memory consumption. If you use machine-precision numbers (avoid SetPrecision) and ensure that you always work with packed arrays, then 210 million elements will take 8*210 = 1680 MB = 1.7 GB. If the array ever gets unpacked, it will typically inflate to at least 3 times this size. – Szabolcs May 09 '17 at 08:35

$HistoryLength = 0 (or some other small number). Pack the data with ToPackedArray[]. Don't use Parallel or SetPrecision; do this with machine arithmetic. – Kelly Lowder May 09 '17 at 15:34
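Putting the comments' advice together, here is a minimal sketch (not the original poster's code) of a machine-precision version: it drops ParallelTable and SetPrecision, uses Round[..., 0.01] to get two-decimal values while staying at machine precision, and keeps the data as a packed array. The directory and the "delta<n>.bmp" naming pattern are taken from the question; everything else is an assumption.

(* keep no In/Out history, so intermediate results can be garbage-collected *)
$HistoryLength = 0;

dir = "/home/rjo/Documents/Wolfram Mathematica/delta/";
n = Length[FileNames["*", dir, Infinity]];   (* number of images, as in the question *)

(* read each image as machine reals and flatten to a list of {r, g, b} rows;
   the result stays a packed array as long as all images have the same size *)
data = Flatten[
    Table[
      Flatten[ImageData[Import[dir <> "delta" <> ToString[x] <> ".bmp"], "Real"], 1],
      {x, n}],
    1];

(* round to two decimal places without leaving machine precision *)
data = Round[data, 0.01];

Export["/home/rjo/FINAL-DATA-TESTING.csv", data, "CSV"];

With machine reals the packed array should stay on the order of a couple of gigabytes, so it fits comfortably in 16 GB of RAM; the CSV export itself is then likely to dominate the running time.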