I am trying to use a stereo camera for scene reconstruction, but I can usually only obtain sparse point clouds (i.e., over half the image has no valid depth information).
I realize that stereo-matching algorithms rely on the presence of texture in the images and have a few parameters that can be tweaked to obtain better results, such as the disparity range or the correlation window size. As much as I tune these parameters, though, I am never able to get results that are even remotely close to what an active sensor such as the Kinect produces.
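For concreteness, here is a minimal sketch of the two parameters I mean, using OpenCV's block matcher (the file names are placeholders for an already rectified pair, and the values are just typical starting points, not a recommendation):

```python
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical paths
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# numDisparities is the disparity range (must be a multiple of 16);
# blockSize is the correlation window size (odd, typically 5..21).
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)

# Returns a 16-bit fixed-point disparity map scaled by 16; pixels where
# matching failed are left at the minimum value, which is what produces
# the holes in the resulting point cloud.
disparity = matcher.compute(left, right)
```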
The reason I want denser point clouds is that, very often, the clouds corresponding to adjacent regions don't have enough overlap for registration to find a match, so reconstruction is severely impaired.
My question to the Computer Vision experts out there is the following: what can I do to obtain denser point clouds in general (without arbitrarily modifying my office environment)?
I would be very interested in hearing about parameter settings that might not be directly accessible to me through the ROS node, or about other algorithms that are known to provide better results.
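As an illustration of the kind of alternative I have in mind, below is a hedged sketch using semi-global matching (SGBM) followed by WLS disparity post-filtering from the opencv-contrib module `cv2.ximgproc`. The file names are placeholders, and the parameter values are generic defaults rather than settings tuned for my rig; the point is only that the filtering step propagates confident disparities into weakly textured regions, which is exactly where plain block matching leaves holes:

```python
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # hypothetical paths
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

block = 5
left_matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=96,       # disparity range, multiple of 16
    blockSize=block,
    P1=8 * block * block,    # smoothness penalties: P2 > P1 encourages
    P2=32 * block * block,   # piecewise-smooth disparity in low texture
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)
# A matched right-to-left matcher, needed for consistency checking.
right_matcher = cv2.ximgproc.createRightMatcher(left_matcher)

left_disp = left_matcher.compute(left, right)
right_disp = right_matcher.compute(right, left)

# WLS filtering fills low-confidence regions using the left image as a
# guide, yielding a considerably denser disparity map.
wls = cv2.ximgproc.createDisparityWLSFilter(left_matcher)
wls.setLambda(8000.0)
wls.setSigmaColor(1.5)
filtered = wls.filter(left_disp, left, None, right_disp)
```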