There is some generally applicable advice, and some application-specific advice.
Shi and Tomasi's paper, "Good Features to Track," explains the criterion for choosing patterns: two-dimensional localizability, or "cornerness".
To put it simply, suppose you are trying to find an object at position (x, y), but the object actually appears in the image at (x + dx, y + dy). A vision system that can only tell us "no, the position is wrong" is not very useful. Instead, we expect it to estimate the offsets dx and dy, provided the initial guess is not too far off.
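The estimation step above can be sketched as one Gauss-Newton iteration of a Lucas-Kanade-style translation solve. This is a minimal NumPy sketch under my own naming; real trackers iterate this step, weight by a window, and work on image pyramids:

```python
import numpy as np

def estimate_shift(template, image):
    """One Gauss-Newton step of a Lucas-Kanade-style translation estimate.

    Linearizes image(u) ~ template(u) - d . grad(template)(u) and solves
    the least-squares problem for d = (dx, dy) over the whole patch.
    """
    gy, gx = np.gradient(image.astype(float))   # axis 0 = y, axis 1 = x
    e = template.astype(float) - image          # intensity residual
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    (dx, dy), *_ = np.linalg.lstsq(A, e.ravel(), rcond=None)
    return dx, dy

# Demo: a Gaussian blob shifted by a known sub-pixel amount.
xx, yy = np.meshgrid(np.arange(64), np.arange(64))
blob = lambda cx, cy: np.exp(-((xx - cx)**2 + (yy - cy)**2) / (2 * 5.0**2))
dx, dy = estimate_shift(blob(32, 32), blob(32.5, 32.3))
```

For small shifts on a smooth pattern, the recovered (dx, dy) lands close to the true (0.5, 0.3); the linearization breaks down once the shift is large compared to the pattern's scale, which is the "not too far off" caveat above.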
A sharp point (dot) is the most cornerful pattern, but it is also easily buried in random pixel noise. Following through with the mathematics shows that other patterns are just as cornerful as a sharp point. (Think of a 1D edge as a 1D delta transformed by integration.)
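Cornerness can be made concrete as the Shi-Tomasi score: the smaller eigenvalue of the structure tensor accumulated over a patch. A rough sketch (the synthetic patches and function name are mine) comparing a dot, an edge, and a step corner:

```python
import numpy as np

def shi_tomasi_score(patch):
    """Smaller eigenvalue of the structure tensor summed over the patch."""
    gy, gx = np.gradient(patch.astype(float))
    ixx, iyy, ixy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    half_tr = (ixx + iyy) / 2
    det = ixx * iyy - ixy * ixy
    return half_tr - np.sqrt(max(half_tr**2 - det, 0.0))

size = 16
dot = np.zeros((size, size));    dot[8, 8] = 1.0        # sharp point
edge = np.zeros((size, size));   edge[:, 8:] = 1.0      # 1D step edge
corner = np.zeros((size, size)); corner[8:, 8:] = 1.0   # 2D step corner

scores = {name: shi_tomasi_score(p)
          for name, p in [("dot", dot), ("edge", edge), ("corner", corner)]}
```

The edge scores zero (it constrains only one direction), while both the dot and the corner score well above zero; the corner packs far more gradient energy than the lone dot, which is why it survives noise better.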
Some applications call for localizability in fewer, or more, dimensions.
Added 8/25
Two line-like patterns can also be "intersected" to yield a point during calibration, provided that lens distortion is not significant or has been parameterized.
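The intersection step itself is a small computation once lines have been fitted to the two line-like patterns. A sketch using homogeneous coordinates (the helper names are mine):

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two 2D points (cross product of the points)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    """Intersection of two homogeneous lines (cross product of the lines)."""
    x = np.cross(l1, l2)
    return x[:2] / x[2]  # back to Euclidean; x[2] = 0 means parallel lines

point = intersect(line_through((0, 0), (2, 2)),   # the line y = x
                  line_through((0, 2), (2, 0)))   # the line y = 2 - x
```

Note this is exactly where the lens-distortion caveat bites: under distortion the patterns project to curves, not lines, so either the distortion must be modeled or the fit restricted to a region where it is negligible.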
In deblurring applications, a sharp point is often used for recovering the point spread function (PSF). However, in theory an object of any shape could be used, provided that the ground truth is available to the calibration software.
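With ground truth in hand, PSF recovery reduces to a deconvolution. A minimal sketch, assuming periodic boundary conditions and a noise-free observation, using Wiener-style regularized division in the Fourier domain:

```python
import numpy as np

def estimate_psf(sharp, blurred, eps=1e-3):
    """Estimate a blur kernel from a ground-truth image and its blurred
    observation by regularized division in the Fourier domain:
    K ~ conj(S) * B / (|S|^2 + eps)."""
    S = np.fft.fft2(sharp)
    B = np.fft.fft2(blurred)
    K = np.conj(S) * B / (np.abs(S)**2 + eps)
    return np.fft.fftshift(np.real(np.fft.ifft2(K)))  # kernel, centered

# Demo: blur a random "ground truth" with a 3x3 box kernel and recover it.
rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
kernel = np.zeros((32, 32)); kernel[:3, :3] = 1 / 9   # circular 3x3 box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel)))
psf = estimate_psf(sharp, blurred)
```

The random "sharp" image works here precisely because its spectrum has no deep nulls; a target with weak frequencies (and any real sensor noise) makes the division ill-conditioned, which is what the regularizer eps is hedging against.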
In some applications, we deliberately make the pattern un-sharp. Depth from defocus uses the blurriness to reason about the position of the focal plane relative to the object, which gives an estimate of the object depth.
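The blur-to-depth relationship can be sketched with the thin-lens model; under that model the blur-circle diameter is c = A·f·|u − u_f| / (u·(u_f − f)), which is invertible for depth once you pick a side of the focal plane. The function names and the far-side assumption are mine:

```python
def blur_diameter(u, u_focus, f, aperture):
    """Thin-lens blur-circle diameter for an object at depth u, with the lens
    (focal length f, aperture diameter 'aperture') focused at depth u_focus.
    All quantities share one unit (e.g. mm)."""
    return aperture * f * abs(u - u_focus) / (u * (u_focus - f))

def depth_from_blur(c, u_focus, f, aperture):
    """Invert the model, assuming the object lies beyond the focal plane
    (u > u_focus); the near side of the focal plane gives a second solution."""
    return aperture * f * u_focus / (aperture * f - c * (u_focus - f))

# Demo: a 50 mm lens with a 25 mm aperture, focused at 2 m, object at 4 m.
c = blur_diameter(4000, 2000, 50, 25)
recovered = depth_from_blur(c, 2000, 50, 25)
```

The two-sided ambiguity is why practical depth-from-defocus systems compare two or more images taken at different focus or aperture settings rather than inverting a single blur measurement.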