The eigenvector corresponding to the largest eigenvalue of the autocorrelation matrix indicates the direction of fastest change, while the eigenvector of the smallest eigenvalue indicates the direction of slowest change. Nothing strange here.
What's confusing is the uncertainty ellipse (Figure 4.6, page 213). The axis along the direction of slowest change is longer because each axis is scaled by the square root of the inverse of the corresponding eigenvalue of A. What it really represents is the direction of highest uncertainty (a sketch after the list below makes this concrete).
Uncertainty and speed of change are two sides of the same coin:
- high uncertainty <--> slow change
- low uncertainty <--> fast change
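To make the scaling concrete, here is a minimal sketch in NumPy (the 15×15 random patch and the function name are placeholders of my own, not from the book):

```python
import numpy as np

def autocorrelation_matrix(patch):
    """Autocorrelation (structure tensor) matrix A of a grayscale patch,
    built from sums of products of its partial derivatives."""
    Iy, Ix = np.gradient(patch.astype(float))  # d/drow, d/dcol
    return np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                     [np.sum(Ix * Iy), np.sum(Iy * Iy)]])

patch = np.random.rand(15, 15)        # stand-in for a real image patch
A = autocorrelation_matrix(patch)
eigvals, eigvecs = np.linalg.eigh(A)  # eigenvalues in ascending order

# eigvecs[:, 1] (largest eigenvalue): direction of fastest change.
# eigvecs[:, 0] (smallest eigenvalue): direction of slowest change.
# Each ellipse axis scales as 1/sqrt(eigenvalue), so the slow-change,
# high-uncertainty direction gets the longer axis.
axis_lengths = 1.0 / np.sqrt(eigvals)
```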
The purpose of all this is to help us find good match points in an image. An example of a good match point is shown in Figure 4.5b. The reason it's "good" is that it has a well-defined optimum, which occurs when the image has fast change in more than one direction (i.e. both eigenvalues of the autocorrelation matrix are large).
Figure 4.5c shows a match point that doesn't have a well-defined optimum (it's more like a ridge). The fact that we cannot precisely point to the optimum means there is uncertainty along the direction of the ridge. The autocorrelation matrix of this patch will have one large eigenvalue and one small eigenvalue. The eigenvector of the large eigenvalue points in the direction of fastest change (perpendicular to the ridge), while the eigenvector of the small eigenvalue points along the ridge.
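To see the two cases side by side, here is a hypothetical contrast using synthetic step patches as stand-ins for Figures 4.5b and 4.5c (reusing `autocorrelation_matrix` from the sketch above):

```python
# A corner (intensity steps in both x and y) vs. a vertical ridge
# (intensity steps in x only).
corner = np.zeros((15, 15)); corner[7:, 7:] = 1.0
ridge = np.zeros((15, 15));  ridge[:, 7:] = 1.0

for name, p in [("corner", corner), ("ridge", ridge)]:
    vals = np.linalg.eigvalsh(autocorrelation_matrix(p))
    print(name, "eigenvalues:", vals)
# corner: both eigenvalues large -> well-defined optimum (good match point)
# ridge:  one eigenvalue ~zero   -> uncertainty along the ridge direction
```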
In the linked answer, matrix A was described as a covariance matrix. But how can partial derivatives become a covariance?
– nglinh Dec 02 '14 at 20:41

i.e. covarianceMat = [[var(x), covar(x,y)], [covar(x,y), var(y)]]. How do partial derivatives become var?
– nglinh Dec 03 '14 at 08:33
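One way to read this, assuming the linked answer meant the distribution of the gradients themselves: the entries of A are sums of products of the partial derivatives over the window $W$,

$$
A \;=\; \sum_{(x,y)\in W} \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}
\;=\; \sum_{(x,y)\in W} \nabla I \, \nabla I^{\mathsf T},
$$

which is exactly the form of a scatter matrix. If you treat the gradient components $(I_x, I_y)$ sampled over $W$ as a zero-mean random vector, then $A/|W|$ is their empirical covariance: $\sum I_x^2/|W|$ plays the role of var(x), $\sum I_y^2/|W|$ of var(y), and $\sum I_x I_y/|W|$ of covar(x, y).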