In Step 7 (Backprojection) of the stereo vision example, two parameters (knownDs and knownZs) are used. Can somebody please explain what these parameters are? Here is the 7th step as obtained from the MATLAB 2012b documentation:

%% Step 7. Backprojection
% With a stereo depth map and knowledge of the intrinsic parameters of the
% camera, it is possible to backproject image pixels into 3D points [1,2].
% One way to compute the camera intrinsics is with the MATLAB Camera
% Calibration Toolbox [6] from the California Institute of Technology®.
% Such a tool will produce an intrinsics matrix, K, of the form:
%
%   K = [focal_length_x  skew_x          camera_center_x;
%        0               focal_length_y  camera_center_y;
%        0               0               1];
%
% This relates 3D world coordinates to homogenized camera coordinates via:
%
% $$[x_{camera} \, y_{camera} \, 1]^T = K \cdot [x_{world} \, y_{world} \, z_{world}]^T$$
%
% With the intrinsics matrix, we can backproject each image pixel into a 3D
% ray that describes all the world points that could have been projected
% onto that pixel on the image. This leaves unknown the distance of that
% point to the camera. This is provided by the disparity measurements of
% the stereo depth map as:
%
% $$z_{world} = focal\_length \cdot \frac{stereo\_baseline}{disparity}$$
%
% Note that unitless pixel disparities cannot be used directly in this
% equation. Also, if the stereo baseline (the distance between the two
% cameras) is not well-known, it introduces more unknowns. Thus we
% transform this equation into the general form:
%
% $$z_{world} = a + \frac{b}{disparity}$$
%
% We solve for the two unknowns via least squares by collecting
% a few corresponding depth and disparity values from the scene and using
% them as tie points. The full technique is shown below.
% Camera intrinsics matrix
K = [409.4433        0 204.1225;
            0 416.0865 146.4133;
            0        0   1.0000];
% Create a sub-sampled grid for backprojection.
dec = 2;
[X,Y] = meshgrid(1:dec:size(leftI,2),1:dec:size(leftI,1));
P = K\[X(:)'; Y(:)'; ones(1,numel(X), 'single')];
Disp = max(0,DdynamicSubpixel(1:dec:size(leftI,1),1:dec:size(leftI,2)));
hMedF = vision.MedianFilter('NeighborhoodSize',[5 5]);
Disp = step(hMedF,Disp); % Median filter to smooth out noise.
% Derive conversion from disparity to depth with tie points:
knownDs = [15 9 2]'; % Disparity values in pixels
knownZs = [4 4.5 6.8]'; % World z values in meters based on scene measurements.
ab = [1./knownDs ones(size(knownDs), 'single')] \ knownZs; % least squares
% Convert disparity to z (distance from camera)
ZZ = ab(1)./Disp(:)' + ab(2);
% Threshold to [0,8] meters.
ZZdisp = min(8,max(0, ZZ ));
Pd = bsxfun(@times,P,ZZ);
% Remove near points
bad = Pd(3,:)>8 | Pd(3,:)<3;
Pd = Pd(:,~bad);
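To make the backprojection step concrete, here is a minimal sketch that backprojects a single pixel using the intrinsics matrix from the example. The pixel coordinates and the depth value are illustrative assumptions, not values from the example:

```matlab
% Sketch: backprojecting one pixel (hypothetical pixel and depth).
K = [409.4433        0 204.1225;
            0 416.0865 146.4133;
            0        0   1.0000];
pix = [150; 100; 1];   % homogeneous pixel coordinates (illustrative)
ray = K \ pix;         % 3-D ray through that pixel, with ray(3) == 1
z   = 5;               % depth in meters, as supplied by the disparity fit
P3d = ray * z;         % backprojected 3-D point; its third component is z
```

This is exactly what `P = K\[X(:)'; Y(:)'; ones(1,numel(X),'single')]` followed by `Pd = bsxfun(@times,P,ZZ)` does for the whole sub-sampled grid at once.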
  2 Comments
ESHAN on 12 Apr 2013
What exactly are knownDs and knownZs? And how have their values been determined to be:
knownDs = [15 9 2]
and
knownZs = [4 4.5 6.8]
snehalata on 13 Apr 2013
Disparity is inversely proportional to depth. I think it is assigning a proportionality, such that disparity values ranging from 15 down to 9 map to depths ranging from 4 to 4.5 meters. I am not sure about this logic.


Answers (1)

Dima Lisin on 20 Jul 2014
This example assumes that you know distances to certain points in the scene. knownZs are those distances in meters. knownDs are the corresponding disparity values in pixels.
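In other words, each tie point pairs a hand-measured distance to a scene point (knownZs) with the disparity the algorithm produced at that same point (knownDs), and the mapping z = a + b/disparity is then recovered by least squares. A minimal sketch of that fit, where the query disparity of 12 pixels is an illustrative value:

```matlab
knownDs = [15 9 2]';    % disparities (pixels) read from the depth map
knownZs = [4 4.5 6.8]'; % distances (meters) measured in the scene
% Fit z = a + b/d in the least-squares sense: columns are [1/d, 1].
ab = [1./knownDs ones(size(knownDs))] \ knownZs;  % ab = [b; a]
z  = ab(1)/12 + ab(2);  % depth estimate for a disparity of 12 pixels
```

With three tie points and two unknowns the system is overdetermined, which is why the backslash operator returns the least-squares solution rather than an exact one.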
As of release R2014a, the Computer Vision System Toolbox supports stereo camera calibration and 3-D scene reconstruction from a calibrated stereo pair of cameras, which does not require any scene measurements. Please see this example.
