
I don't know if this is the right place to ask this question, since both chemistry and programming are involved.

I was working on particle tracking of a series of pictures (.tiff) of colloids, using trackpy in Python.

The code gives me the position of each particle to subpixel accuracy; the relevant snippet is sketched below.
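Roughly, the calls look like this (a minimal sketch, not my exact script; the file pattern and the diameter/minmass values are placeholders, not my actual settings):

    import pims
    import trackpy as tp

    # Load the image series (path pattern is a placeholder)
    frames = pims.open('colloids/*.tiff')

    # Locate bright features in the first frame; diameter must be an odd integer
    # roughly matching the particle size, minmass filters out dim spurious features
    f = tp.locate(frames[0], diameter=11, minmass=200)

    # f is a DataFrame whose 'x' and 'y' columns hold the subpixel positions
    print(f[['x', 'y', 'mass']].head())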

There is another step involved: checking the subpixel accuracy. In this method we check the uniformity of the decimal part of the positions.

A quick way to check for subpixel accuracy is to check that the decimal parts of the x and/or y positions are evenly distributed. Trackpy provides a convenience plotting function for this called subpx_bias.
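For reference, the call is along these lines (assuming f is the DataFrame returned by tp.locate above):

    import matplotlib.pyplot as plt
    import trackpy as tp

    # subpx_bias plots histograms of the decimal part of the x and y columns
    tp.subpx_bias(f)
    plt.show()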

[Histograms of the decimal parts of the x and y positions, produced by subpx_bias]

The mask size is more or less the diameter of a particle.

I do not understand how an even distribution of the decimal part ensures that we are on the right track, or how a dip shows that we are wrong somewhere.

You can also refer to Eric Weeks' website, where it is briefly mentioned that:

One failure mode is if the length scale in feature is made too small, then all the x and y coords get 'rounded off' to the nearest pixel value. The above command plots a histogram of the fractional part of the x-coords of the image. The physically distributed positions should be random -- giving a flat histogram. If the histogram has two peaks (near 0 and 1), set the size parameter in 'feature' a little bigger, determine a new masscut, and repeat until everything is happy.
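This is not part of trackpy, just a toy numpy illustration of the reasoning in that quote: real particle centres fall anywhere within a pixel, so their fractional parts should fill a flat histogram, while an estimator that gets pulled toward whole-pixel values piles the fractional parts up near 0 and 1 (the pull factor of 0.7 below is an arbitrary made-up bias, purely for illustration):

    import numpy as np
    import matplotlib.pyplot as plt

    # True particle positions land anywhere along the axis
    true_x = np.random.uniform(0, 100, 10000)

    good = true_x % 1                                       # unbiased: fractional parts are uniform
    biased_x = true_x + 0.7 * (np.round(true_x) - true_x)   # estimate dragged toward the nearest pixel
    bad = biased_x % 1                                       # fractional parts pile up near 0 and 1

    fig, axes = plt.subplots(1, 2, sharey=True)
    axes[0].hist(good, bins=20, range=(0, 1))
    axes[0].set_title('subpixel accurate (flat)')
    axes[1].hist(bad, bins=20, range=(0, 1))
    axes[1].set_title('pixel-biased (peaks near 0 and 1)')
    plt.show()

The dip in the middle of the right-hand histogram is exactly the kind of dip that signals the mask size (diameter) is too small.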

  • Suppose you generate uniformly distributed random real numbers between 2 and 3, having 3 decimal places. Then subtracting 2 from each one will give uniformly distributed (roughly flat histogram) decimal fractions between 0 and 1. But if you had simply rounded off the original numbers to integers, you would only get 2 and 3. So that test is just to make sure the “length scale” is not corrupting the data. Make sense? – Ed V May 26 '21 at 02:16
  • But then how does increasing the diameter (the size parameter of feature) make the histogram flat? – crabNebula May 26 '21 at 02:24
  • If the mask size is too small, you are biasing toward integers: the particles are being sorted, as it were, into integer sizes. This is a quantization error or discretization error. With a larger mask size, you effectively “dither” the size estimation and reduce the quantization error. Kind of like adding a little white noise to dither the pixels and avoid the chunky quantization. – Ed V May 26 '21 at 02:38
  • Can you please write an answer? I am just a beginner and not so clear about mask size either. – crabNebula May 26 '21 at 02:44
  • This is not really a chemistry question: it is an instrument-behavior or statistics question. I have not done any particle tracking experiments. Maybe this question should be migrated (not cross-posted) to the signal processing stack exchange or the CV stack exchange. – Ed V May 26 '21 at 02:57
  • This reads a little bit like the automated blob and cell counting in microbiology. Maybe there is already a library for this in ImageJ / Fiji (example). Because you aim to tackle the problem with Python, and it is a kind of image processing, I infer from OpenCV's advertised use in ML that OpenCV's SimpleBlobDetector could be helpful. – Buttonwood May 26 '21 at 10:14

0 Answers