
Assume that I have samples from an image, but the samples aren't on a grid. Instead, there are multiple lowres versions of the original where the sampling locations are slightly jittered, so the pixels in the lowres images haven't been sampled from exactly the same locations (i.e. the lowres images aren't identical; however, assume I know the precise sampling location for each pixel in the lowres images). What methods exist for reconstructing the original image?

The most general case of this is to assume you have random samples from the image (treating the image as a continuous signal). Are there optimal algorithms for reconstruction? In particular, this allows the sampling density to vary spatially across the image, so that some regions have very few samples while others are densely sampled.
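For concreteness, here is a minimal sketch of the naive baseline I have in mind: pool all the samples with their known locations into one scattered point set and interpolate it onto a regular grid. This assumes SciPy's griddata (Delaunay-based interpolation) and uses a synthetic stand-in for the image samples; I'm not suggesting this is the optimal reconstruction, it's just the kind of thing I'd want to compare a better method against.

```python
import numpy as np
from scipy.interpolate import griddata  # Delaunay-based scattered-data interpolation

# Scattered samples: known (x, y) locations (not on a grid) and sample values.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1.0, size=(5000, 2))    # the known sampling locations
vals = np.sin(8 * np.pi * xy[:, 0]) * np.cos(8 * np.pi * xy[:, 1])  # stand-in for image samples

# Reconstruct on a regular W x H pixel grid.
W = H = 256
gx, gy = np.meshgrid(np.linspace(0, 1, W), np.linspace(0, 1, H))
recon = griddata(xy, vals, (gx, gy), method='linear')  # also 'nearest' or 'cubic'
```

This does nothing clever in sparsely sampled regions, which is exactly where I would hope a more principled method helps.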

My DSP books seem to concentrate on methods where a signal has been sampled at fixed, periodic intervals, so they don't seem to be of much use here.

EDIT: Pointers to literature that discusses this would be great; even just knowing what such reconstruction problems are called would help!

eof
  • Welcome to SE.DSP. I am not sure about "multiple lowres versions of the original". Do you consider single "bigger" pixels, or patches of pixels? – Laurent Duval Jun 05 '19 at 18:22
  • Assume that the original has resolution 2Wx2H while the lowres is WxH. Then each pixel in the lowres corresponds to a 2x2 patch in the high-res. What I mean is that the lowres pixel is not sampled at the center of the 2x2 patch, but at a small random subpixel offset from the center, i.e. jittered. – eof Jun 05 '19 at 18:53
  • That sounds a bit like anti-aliasing random sampling in raytracing, which raises a worry: is the continuous-space image, which is being randomly sampled, band-limited? – Olli Niemitalo Jun 05 '19 at 19:00
  • You're right. I wanted to test various filtering algorithms without having to ray trace lots of images, and the internet is full of high-res images of complex scenes. – eof Jun 05 '19 at 19:12
  • The standard way to produce lowres images is to apply a lowpass filter and then sample the pixels. This produces a good-looking lowres image without aliasing. It would be impossible to reconstruct the original image from these samples, since the high-frequency information is gone. However, if I rendered the same frame N times with a slight modification of the camera angle, then I would get "good looking" lowres images that together contain enough information to reconstruct a rendering of the same scene at higher resolution. However, I didn't want to worry about this part yet... – eof Jun 05 '19 at 19:12
  • There are some point-cloud codes on the MathWorks community site that rasterize lidar point clouds –  Jun 05 '19 at 20:00
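
The lowres-generation step described in the last comment can be simulated without re-rendering anything. The sketch below is only an approximation of that pipeline: it point-samples a single-channel high-res image at jittered patch centres using bilinear lookup (scipy.ndimage.map_coordinates) and skips the lowpass prefilter mentioned in the comment, so each frame is aliased.

```python
import numpy as np
from scipy.ndimage import map_coordinates  # bilinear lookup at non-integer positions

def jittered_lowres_frames(hires, scale=2, n_frames=8, jitter=0.5, seed=0):
    """Simulate n_frames lowres 'renders' of a single-channel (H*scale) x (W*scale)
    image, each sampled at the patch centres plus a random subpixel offset.
    Returns a list of (coords, values); coords are in hires pixel coordinates."""
    rng = np.random.default_rng(seed)
    H, W = hires.shape[0] // scale, hires.shape[1] // scale
    rows = (np.arange(H) + 0.5) * scale - 0.5   # nominal patch-centre rows
    cols = (np.arange(W) + 0.5) * scale - 0.5   # nominal patch-centre columns
    rr, cc = np.meshgrid(rows, cols, indexing='ij')
    frames = []
    for _ in range(n_frames):
        dr, dc = rng.uniform(-jitter, jitter, size=(2, H, W))
        coords = np.stack([rr + dr, cc + dc])   # the known, jittered sample locations
        frames.append((coords, map_coordinates(hires, coords, order=1, mode='nearest')))
    return frames
```

Pooling the sample locations and values from all N frames gives exactly the scattered point set that the reconstruction sketch in the question would operate on.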

0 Answers