Wednesday, December 7, 2011

How we get the 3D coordinates from the depth data.

With the disparity, we can get the distance of each feature based on the formula we cited in our former blog post:
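For reference, this is presumably the standard stereo relation, where f is the focal length in pixels, b is the baseline between the two cameras, and d is the disparity:

zs = f * b / d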


Then we get the distance zs, which is the z-axis coordinate in the camera-centered real-world coordinate system. Based on the z value, we can calculate the x-axis and y-axis coordinates with the following code:


% calculate x and y values
xs = -(xis-cx) ./ fx .* zs;
ys = -(yis-cy) ./ fy .* zs;

Here 'fx' and 'fy' are the focal lengths in the x- and y-directions, respectively. 'cx' and 'cy' are the coordinates of the principal point, in pixels. 'xis' and 'yis' are the pixel coordinates of the feature points, and 'xs', 'ys', and 'zs' are the world coordinates we get.
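As a self-contained sketch tying these pieces together (the disparity map 'd' and the baseline 'b' are assumed inputs; the intrinsics are the values from our calibration post below):

% Sketch: back-project a full 800x600 disparity map to 3D points.
% 'd' is the disparity map in pixels and 'b' the camera baseline in
% meters -- both assumed inputs here.
fx = 1762.62; fy = 1771.13;             % focal lengths (pixels)
cx = 657.11;  cy = 494.15;              % principal point (pixels)
[xis, yis] = meshgrid(1:800, 1:600);    % pixel coordinates of every point
zs = fx * b ./ d;                       % depth from disparity
xs = -(xis - cx) ./ fx .* zs;           % back-project x
ys = -(yis - cy) ./ fy .* zs;           % back-project y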

Thus, we get the 3D coordinates of those feature points. And as the next step, we would like to use RANSAC to estimate the 3D transformation between the two point sets.
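We haven't written that step yet, but here is a rough sketch of the idea, using a rigid motion model: given matched 3D point sets P and Q (both 3xN), repeatedly fit a minimal 3-point sample and keep the transform with the largest consensus. The point sets, iteration count, and inlier threshold below are all assumptions, not final choices.

% RANSAC sketch (not our final code): find R, t with Q ~ R*P + t.
N = size(P, 2);
best_inliers = -1;
for iter = 1:500                          % assumed iteration count
    idx = randperm(N);
    [R, t] = rigid_fit(P(:, idx(1:3)), Q(:, idx(1:3)));
    pred = R * P + repmat(t, 1, N);
    err  = sqrt(sum((pred - Q).^2, 1));   % per-match residual
    inliers = sum(err < 0.02);            % assumed 2 cm threshold
    if inliers > best_inliers
        best_inliers = inliers;
        best_R = R;
        best_t = t;
    end
end

The helper rigid_fit is a small least-squares fit (Kabsch/SVD):

function [R, t] = rigid_fit(P, Q)
% Least-squares rigid fit: Q ~ R*P + t
cp = mean(P, 2);  cq = mean(Q, 2);
Pc = P - repmat(cp, 1, size(P, 2));
Qc = Q - repmat(cq, 1, size(Q, 2));
[U, ~, V] = svd(Pc * Qc');
R = V * diag([1, 1, sign(det(V * U'))]) * U';
t = cq - R * cp;
end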

Tuesday, December 6, 2011

Camera Alignment

For simplicity of use, we want our cameras aligned in the same direction.
The approach we tried first is to draw two points on a piece of paper, as shown in the figure:
The distance between the two points is exactly the distance between the two cameras.
Then we can open the Image Acquisition Toolbox in Matlab and aim the center of the first camera at the first point:
And aim the center of the second camera at the second point:
After aiming both cameras at their points, they should be aligned: since the points are separated by exactly the camera baseline, the two optical axes end up parallel. (See figure)

Scripts that we wrote to acquire the images.

The first way we thought of is to use the Matlab Image Acquisition Toolbox (use imaqtool to open the toolbox).

It provides pretty good results for a single camera, but we need to get two images from the two cameras at the same time.
So I wrote a simple script to capture the images synchronously.
% Get the videoinput objects:
cam1 = videoinput('dcam', 1, 'Y422_800x600');
cam2 = videoinput('dcam', 2, 'Y422_800x600');
% Change the camera properties:
src = getselectedsource(cam1);
set(src, 'AutoExposure', 80);
% ... a lot of property adjustments here ...

% Start capturing:
preview(cam1);
preview(cam2);

img_pairs = {};   % collected image pairs
index = 1;
while 1
    key = input('x = exit acquisition, enter = acquire a pair of frames: ', 's');
    if strcmp(key, 'x')
        break;
    else
       im1 = getsnapshot(cam1);
       im2 = getsnapshot(cam2);
       % the frames come in as YCbCr, so convert them to RGB
       im1 = ycbcr2rgb(im1);
       im2 = ycbcr2rgb(im2);
       img_pair = {im1, im2};
       img_pairs{index} = img_pair;
       index = index + 1;
    end
end
stoppreview(cam1);
stoppreview(cam2);
closepreview(cam1);
closepreview(cam2);

This code takes keyboard input: it grabs a pair of frames each time I hit Enter and exits when I type 'x'.
If I want continuous frames as a video, I can simply comment out the keyboard-input line and let it acquire a pair of images every iteration, as sketched below.
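For reference, here is a minimal sketch of that continuous variant (the pair count N is an assumed parameter; cam1 and cam2 are the objects from above):

% Continuous variant: no keyboard prompt, just grab N pairs in a row.
N = 100;                        % assumed number of frame pairs
img_pairs = cell(1, N);
for index = 1:N
    im1 = ycbcr2rgb(getsnapshot(cam1));
    im2 = ycbcr2rgb(getsnapshot(cam2));
    img_pairs{index} = {im1, im2};
end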

Monday, December 5, 2011

Camera Calibration with the help of MATLAB toolbox

In order to get an accurate result from disparity to depth, we need to know the intrinsic parameters, such as the focal length and skew coefficient, of each camera. So we used the Caltech Camera Calibration Toolbox to obtain those parameters for the two cameras separately.

We took around 200 pictures of the checkerboard and picked 12 of them per camera for calibration, then followed the instructions on the website to obtain the parameters.


Picture 1. Calibration images
Picture 2. Extracting the grid corners

And here is the result of calibration:

Calibration results (with uncertainties):
Focal Length:          fc = [ 1762.62339   1771.13354 ] ± [ 33.53921   33.64017 ]
Principal point:       cc = [ 657.10654   494.14503 ] ± [ 45.24948   41.64433 ]
Skew:             alpha_c = [ 0.00000 ] ± [ 0.00000  ]   => angle of pixel axes = 90.00000 ± 0.00000 degrees
Distortion:            kc = [ -0.24440   0.25448   0.00050   -0.00360  0.00000 ] ± [ 0.06230   0.19754   0.00330   0.00715  0.00000 ]
Pixel error:          err = [ 0.25417   0.35936 ]
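In the standard pinhole form used by the toolbox, these numbers assemble into the intrinsic matrix (a small sketch using the values above):

% Intrinsic matrix K from the calibration results above
fc = [1762.62339, 1771.13354];    % focal lengths (pixels)
cc = [657.10654, 494.14503];      % principal point (pixels)
alpha_c = 0;                      % skew coefficient
K = [fc(1), alpha_c * fc(1), cc(1);
     0,     fc(2),           cc(2);
     0,     0,               1];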

The link to the website for the calibration toolbox is as follows:
http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html

And we would like to use the extrinsic parameters (such as the rotation and translation) from the toolbox to help us adjust the relative pose between our cameras as well.
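Once we have a relative rotation R and translation T between the cameras, 3D points expressed in one camera's frame map into the other's as follows (a one-line sketch; the variable names are our assumptions, not toolbox output names):

% Map 3D points X2 (3xN) from camera 2's frame into camera 1's frame,
% given the relative rotation R (3x3) and translation T (3x1).
X1 = R * X2 + repmat(T, 1, size(X2, 2));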