It seems this add-on is for rendering light fields.
From the wiki:
Description
This script helps set up the rendering of light fields. It also
supports the projection of light fields with textured spotlights.
Usage
A simple interface can be accessed in the tool shelf panel in 3D View
(T key).
A base mesh has to be provided, which will normally be a subdivided
plane. The script will then create a camera rig and a light rig with
adjustable properties. A sample camera and a spotlight will be created
at each vertex of the base mesh, oriented along the object axis (maybe
along the vertex normal in future versions); a minimal sketch of this
idea follows.
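As a rough illustration of what that rig setup amounts to, here is a
minimal sketch (not the add-on’s actual code) that places a camera at
every vertex of the active base mesh, assuming a Blender 2.8+ Python
API; the LF_Cam naming is made up:

```python
# Hypothetical sketch: one camera per vertex of the active base mesh.
import bpy

base = bpy.context.active_object  # assumed: the base mesh is active
for i, v in enumerate(base.data.vertices):
    cam_data = bpy.data.cameras.new(f"LF_Cam_{i}")
    cam_obj = bpy.data.objects.new(f"LF_Cam_{i}", cam_data)
    # place the camera at the vertex position, in world space
    cam_obj.location = base.matrix_world @ v.co
    # orient along the base object's axis, as the add-on does
    # (vertex normals are mentioned as a possible future option)
    cam_obj.rotation_euler = base.rotation_euler
    bpy.context.collection.objects.link(cam_obj)
```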
Vertex order
The user has to provide the number of cameras or lights in one row of
the base mesh, which may be an unevenly spaced grid. The right vertex
order can then be computed as shown below.
  6-7-8
  | | |
^ 3-4-5
| | | |
y 0-1-2
  x->
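In code, that numbering is plain row-major order counted from the
bottom-left corner; a minimal sketch, assuming an NxN grid:

```python
# Vertex order sketch: index = row * n + column, rows counted along +Y.
def vertex_index(row, col, n):
    return row * n + col

n = 3
for row in reversed(range(n)):  # print the top row first
    print("-".join(str(vertex_index(row, col, n)) for col in range(n)))
# 6-7-8
# 3-4-5
# 0-1-2
```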
There is also a tool to create a base mesh, which is an evenly spaced
grid; the row length parameter is used to construct such an NxN grid.
One would start out by adding a rectangular plane as the slice plane
of the frustum of the central camera of the light field rig. The
spacing parameter then places the other cameras so that each has an
offset of n pixels from its neighbour on this plane, as sketched below.
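This is my reading of the spacing parameter, not the add-on’s code:
under a pinhole camera model, one pixel projected onto a plane at
distance d covers d * sensor_width / (focal_length * resolution_x), so
an n-pixel offset translates to a world-space spacing like this:

```python
# Hypothetical sketch of the pixel-offset-to-spacing conversion.
def camera_spacing(n_pixels, plane_distance, focal_length_mm,
                   sensor_width_mm, resolution_x):
    # size of one pixel, projected onto the slice plane
    pixel_on_plane = plane_distance * sensor_width_mm / (
        focal_length_mm * resolution_x)
    return n_pixels * pixel_on_plane

# e.g. 1 px offset, plane 2 m away, 35 mm lens, 32 mm sensor, 1920 px wide
print(camera_spacing(1, 2.0, 35.0, 32.0, 1920))  # ~0.00095 m
```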
What are light fields?
Light fields were originally invented to allow advanced tweaking of a rendered image that would normally only be possible with defined geometry (i.e. a 3D model):
Light fields were introduced into computer graphics in 1996 by Marc
Levoy and Pat Hanrahan. Their proposed application was image-based
rendering: computing new views of a scene from pre-existing views
without the need for scene geometry.
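To make that concrete, here is a heavily simplified sketch of the
two-plane lookup from the 1996 paper: a new ray is addressed by where
it crosses the camera plane (u, v) and the image plane (s, t), and
here we merely blend the four nearest cameras (the array layout and
the blending are illustrative assumptions; a full renderer
interpolates in all four dimensions):

```python
import numpy as np

def sample_light_field(lf, u, v, s, t):
    """lf: array of shape (U, V, S, T), a U x V grid of S x T images."""
    u0, v0 = int(u), int(v)
    fu, fv = u - u0, v - v0
    # bilinear blend of pixel (s, t) from the four neighbouring cameras
    return ((1 - fu) * (1 - fv) * lf[u0, v0, s, t]
            + fu * (1 - fv) * lf[u0 + 1, v0, s, t]
            + (1 - fu) * fv * lf[u0, v0 + 1, s, t]
            + fu * fv * lf[u0 + 1, v0 + 1, s, t])
```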
This even works for photographs taken with a special camera:
Light field cameras (also called plenoptic cameras) have a microlens
array just in front of the imaging sensor. Such arrays consist of many
microscopic lenses (often in the range of 100,000) with tiny focal
lengths (as low as 0.15 mm), and split up what would have become a
single 2D pixel into individual light rays just before they reach the
sensor. The resulting raw image is a composition of as many tiny
images as there are microlenses. Here’s the fascinating part: every
sub-image differs a little from its neighbours, because the light rays
were diverted slightly differently depending on the corresponding
microlens’s position in the array.
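As an illustration: assuming an idealized sensor with exactly s x s
pixels behind every microlens (real cameras need calibration for lens
rotation and hexagonal packing), each sub-image can be pulled out with
a strided slice:

```python
import numpy as np

def sub_aperture_view(raw, u, v, s=8):
    """raw: 2D array whose height and width are multiples of s.
    Returns the image made of pixel (u, v) under every microlens."""
    return raw[v::s, u::s]

raw = np.zeros((800, 1200))        # 100 x 150 microlenses, 8x8 px each
print(sub_aperture_view(raw, 3, 3).shape)  # (100, 150)
```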
Next, sophisticated software is used to find matching light rays
across all these images. Once it has collected a list of (1) matching
light rays, (2) their positions in the microlens array, and (3) their
positions within the sub-images, this information can be used to
reconstruct a sharp 3D model of the scene.
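The reconstruction step boils down to triangulation: a feature that
shifts by d pixels between two sub-views separated by baseline b lies
at depth z = f * b / (d * p) for focal length f and pixel pitch p. A
tiny sketch with illustrative numbers:

```python
# Hypothetical depth-from-disparity helper (numbers are made up).
def depth_from_disparity(disparity_px, baseline_mm, focal_mm, pitch_mm):
    return focal_mm * baseline_mm / (disparity_px * pitch_mm)

# 2 px disparity, 0.5 mm baseline, 10 mm focal length, 0.01 mm pitch
print(depth_from_disparity(2, 0.5, 10.0, 0.01))  # 250.0 mm
```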
Using this model, you have all of the light field capabilities at your
fingertips: you can define which parts of the image should be in or
out of focus, adjust the depth of field, set everything in focus, or
shift the perspective (parallax) a bit. You can even use the parallax
data to create 3D pictures from a single light field lens and capture.
All of this can be done after you’ve recorded the image, as in the
refocusing sketch below.
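The classic way to refocus after capture is "shift and add": shift
every sub-view in proportion to its offset from the central view and
average, with a factor alpha selecting the focal plane. A minimal
sketch, with the (U, V, H, W) array layout being my assumption:

```python
import numpy as np

def refocus(views, alpha):
    """views: array of shape (U, V, H, W) of sub-aperture images."""
    U, V, H, W = views.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # integer shift; real implementations interpolate sub-pixel
            dy = int(round(alpha * (v - cv)))
            dx = int(round(alpha * (u - cu)))
            out += np.roll(views[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```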