Thursday, April 14, 2011

First LUMIS2 Data: Calibration + Registration

Reflections on First LUMIS2 Image Set
Paul sent me calibration images today (pictured above) that were a little different from what I expected; the ones shown are from the upper right camera. As a training set it has some pretty radical changes in position (which is a good thing), but I did not count on features like, say, the big rock sitting on the calibration pattern. Last week's OpenCV calibration code turns out to be really difficult to modify to handle the fact that half the pattern is occluded, that my world coordinate system's origin can't be the extreme upper-left corner of the pattern (because it's missing in a lot of views), and that there are 90-degree changes in angle between views. Bouguet's toolbox should handle all of that fine, though. It was kind of silly of me to assume I'd be handed a perfect training set when I have so much trouble collecting images from a compound camera system ABOVE water.

Recap: Last week I was writing calibration code in OpenCV C++ rather than just using Bouguet's Matlab toolbox, because I thought OpenCV's object-oriented code would be easier to modify to 'augment the distortion model.' It looks like I'm back to Matlab, though, at least for the time being. Writing the OpenCV calibration code was good practice, and I may come back to it later.
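
For reference, the OpenCV side is not a lot of code once you have clean corner detections. Here's a minimal sketch of the happy path, assuming a fully visible planar checkerboard in every view, which is exactly the assumption this data set breaks; the board dimensions and file names are made up:

// Minimal single-camera calibration sketch (OpenCV C++).
// Assumes every view shows the FULL checkerboard -- the assumption the
// LUMIS2 images break. Board size, square size, and file names are made up.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    const cv::Size boardSize(9, 6);   // inner corners (hypothetical board)
    const float squareSize = 30.0f;   // mm (hypothetical)

    // One planar model of the board, reused for every view.
    std::vector<cv::Point3f> board;
    for (int r = 0; r < boardSize.height; ++r)
        for (int c = 0; c < boardSize.width; ++c)
            board.push_back(cv::Point3f(c * squareSize, r * squareSize, 0.0f));

    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    cv::Size imageSize;
    for (int i = 1; i <= 20; ++i) {
        cv::Mat img = cv::imread(cv::format("view%02d.png", i), cv::IMREAD_GRAYSCALE);
        if (img.empty()) continue;
        imageSize = img.size();
        std::vector<cv::Point2f> corners;
        // This is the step that fails outright when the rock occludes the pattern.
        if (!cv::findChessboardCorners(img, boardSize, corners)) continue;
        cv::cornerSubPix(img, corners, cv::Size(11, 11), cv::Size(-1, -1),
            cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
        imagePoints.push_back(corners);
        objectPoints.push_back(board);
    }

    // Fit intrinsics K and distortion coefficients; returns RMS reprojection error.
    cv::Mat K, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                     K, distCoeffs, rvecs, tvecs);
    std::cout << "RMS reprojection error: " << rms << "\nK = " << K
              << "\ndist = " << distCoeffs << std::endl;
    return 0;
}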

Bouguet Calibration
Next up, I tested how well distortion models in the Bouguet toolbox accounted for radial distortion.
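
For reference, the model the toolbox fits (following Bouguet's documentation and his kc ordering, with x_n = (x, y) the normalized pinhole coordinate, x_d its distorted position, and r^2 = x^2 + y^2):

x_d = \left(1 + kc_1\, r^2 + kc_2\, r^4 + kc_5\, r^6\right) x_n
      + \begin{bmatrix} 2\, kc_3\, x y + kc_4\,(r^2 + 2x^2) \\ kc_3\,(r^2 + 2y^2) + 2\, kc_4\, x y \end{bmatrix}

The kc_1, kc_2, kc_5 terms are the radial part I'm worried about here; kc_3 and kc_4 are tangential.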

'Typical' consumer camera radial distortion

LUMIS2's radial distortion components

Yikes! But even with this extreme distortion I got pretty good reprojection error (measured in pixels):

Upper Left Camera
Before recomp corners + 2nd calib: [ 0.65965   0.88664 ]
After  recomp corners + 2nd calib: [ 0.61634   0.83943 ]

Upper Right Camera
Before recomp corners + 2nd calib: [ 0.55489   0.85606 ]
After  recomp corners + 2nd calib: [ 0.54128   0.70904 ]

Lower Left Camera
Before recomp corners + 2nd calib: [ 0.55489   0.85606 ]
After  recomp corners + 2nd calib: [ 0.54128   0.70904 ]

As a frame of reference, I can usually get the FUJI 2-view camera to calibrate to around 0.2 pixel error, so these are pretty good results. The error is small enough that we can work with it, but large enough that there's room for improvement and more investigation.

Undistortion in OpenCV
A practical piece of software for the LUMIS2 GUI will be undistorting images given these estimated parameters. I spent a good part of today getting Matlab to write out OpenCV matrix files and getting OpenCV to undistort based on them (the documentation was awful).
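
The OpenCV end is short once the parameters load. Here's a minimal sketch, assuming the Matlab script wrote the intrinsics into a YAML file; the file name and the "camera_matrix" / "distortion_coefficients" keys are hypothetical stand-ins for whatever the export actually writes:

// Undistort one frame using intrinsics exported from Matlab (OpenCV C++).
// "intrinsics.yml" and its keys are hypothetical names, not the real export.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat K, distCoeffs;
    cv::FileStorage fs("intrinsics.yml", cv::FileStorage::READ);
    fs["camera_matrix"] >> K;                     // 3x3 intrinsic matrix
    fs["distortion_coefficients"] >> distCoeffs;  // distortion vector
    fs.release();

    cv::Mat distorted = cv::imread("frame.png");
    cv::Mat undistorted;
    cv::undistort(distorted, undistorted, K, distCoeffs);
    cv::imwrite("frame_undistorted.png", undistorted);
    return 0;
}

For the GUI it would be worth precomputing the warp once with cv::initUndistortRectifyMap and applying it per frame with cv::remap, since cv::undistort rebuilds the maps on every call.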

Distorted Image

After Undistortion

Image Registration
There's a large class of image registration algorithms that assume your two views have a small baseline. In CSE 252a we implemented one to do video stabilization. The gist is that, given two frames, you use Lucas-Kanade to compute the optical flow, use the flow to fit an affine transformation from one pose to the other, and then warp one image onto the other. The advantage of an algorithm like this is that you don't need to know any scene geometry, and it's extremely well documented. I thought this would be out of the question given our wide baseline relative to the scene depth, but the disparity didn't seem too extreme.

Left Camera Image Subtracted from Right

I remember from CSE 252a that a 'jump' between frames like this happens often and can be corrected, especially if you refine the algorithm with coarse-to-fine techniques.
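
Here's roughly that pipeline in OpenCV, as a sketch of the idea rather than the code I actually ran; the pyramidal LK tracker supplies the coarse-to-fine part, and I've swapped in a RANSAC affine fit for the least-squares version we did in class:

// Small-baseline registration sketch (OpenCV C++): track corners with
// pyramidal Lucas-Kanade, fit an affine transform to the flow, warp one
// view onto the other. File names are placeholders.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

    // 1. Pick trackable corners in the left image.
    std::vector<cv::Point2f> ptsL, ptsR;
    cv::goodFeaturesToTrack(left, ptsL, 500, 0.01, 10);

    // 2. Sparse optical flow into the right image; the image pyramid is
    //    what makes this coarse-to-fine.
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(left, right, ptsL, ptsR, status, err);

    // 3. Keep only the successfully tracked pairs.
    std::vector<cv::Point2f> src, dst;
    for (size_t i = 0; i < status.size(); ++i)
        if (status[i]) { src.push_back(ptsL[i]); dst.push_back(ptsR[i]); }

    // 4. The flow field defines a 2x3 affine map; RANSAC rejects bad tracks.
    cv::Mat A = cv::estimateAffine2D(src, dst);

    // 5. Warp the left view into the right view's frame.
    cv::Mat warped;
    cv::warpAffine(left, warped, A, right.size());
    cv::imwrite("left_registered.png", warped);
    return 0;
}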

These algorithms are so common that I knew there had to be good starter code somewhere, and I hit pay dirt here: http://www.codeproject.com/KB/recipes/ImgAlign.aspx

I fired up the algorithm and here are my results:

Cropped Portion of the Right Camera's View

Left Camera View with the Above Image Outlined in White

The algorithm was slow as dirt, though, so there's definitely plenty of room for refinement. Initializing the affine transformation matrix from the extrinsic parameters we already get out of calibration should speed things up (a sketch of that follows). Also, Serge pointed me to a paper that uses a similar algorithm tailored specifically to multi-spectral images. And one weak point of this implementation: it only works well when the cropped portion of one image is ENTIRELY contained in the other.
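
On that initialization point: given stereo extrinsics (R, t) and intrinsics K_l, K_r, the warp induced by a plane with normal n at distance d from the left camera is the homography H = K_r (R - t nᵀ / d) K_l⁻¹, and the top two rows of the normalized H make a reasonable affine seed. A sketch, where the fronto-parallel normal and the working distance are guesses rather than measurements:

// Seed the affine registration with the warp implied by the stereo
// extrinsics, assuming a roughly planar scene at an assumed depth d.
#include <opencv2/opencv.hpp>

cv::Mat initialAffineFromExtrinsics(const cv::Mat& Kl, const cv::Mat& Kr,
                                    const cv::Mat& R,  const cv::Mat& t,
                                    double d /* assumed plane depth */) {
    cv::Mat n = (cv::Mat_<double>(3, 1) << 0, 0, 1); // fronto-parallel guess
    cv::Mat H = Kr * (R - t * n.t() / d) * Kl.inv(); // plane-induced homography
    H /= H.at<double>(2, 2);                         // normalize H(2,2) to 1
    return H(cv::Rect(0, 0, 3, 2)).clone();          // top 2x3 of H = affine seed
}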

Finally, stitching the images together and presenting them is a whole science of its own.
