
How can I find the spatial calibration factor?

Asked by Tomas on 8 Dec 2012

Hi, according to this link: real height of the image (mm) = image height (pixels) / image resolution (ppp)

So can the spatial calibration factor be found with this formula?

calibration factor = real height of the image (mm) / image height (pixels)

Thanks in advance





1 Answer

Answer by Image Analyst on 10 Dec 2012
Edited by Image Analyst on 10 Dec 2012

The calibration factor is (the actual "true" height of a known object) / (the height in pixels of that same object in your image).

Then when you measure any other distances in that image, multiply by that factor. See the demo code below:

% Code to spatially calibrate an image.
% Code asks user to draw a line and then to specify the length
% of the line in real world units.  It then calculates a spatial calibration factor.
% User can then draw lines and have them reported in real world units.
% Take out the next two lines if you're transferring this to your program.
close all;  % Close all figures (except those of imtool.)
clear;  % Erase all existing variables.
clc;    % Clear the command window.
workspace;  % Make sure the workspace panel is showing.
format longg;
format compact;
fontSize = 20;
% Check that user has the Image Processing Toolbox installed.
hasIPT = license('test', 'image_toolbox');
if ~hasIPT
	% User does not have the toolbox installed.
	message = sprintf('Sorry, but you do not seem to have the Image Processing Toolbox.\nDo you want to try to continue anyway?');
	reply = questdlg(message, 'Toolbox missing', 'Yes', 'No', 'Yes');
	if strcmpi(reply, 'No')
		% User said No, so exit.
		return;
	end
end
% Read in a standard MATLAB gray scale demo image.
folder = fullfile(matlabroot, '\toolbox\images\imdemos');
baseFileName = 'cameraman.tif';
% Get the full filename, with path prepended.
fullFileName = fullfile(folder, baseFileName);
% Check if file exists.
if ~exist(fullFileName, 'file')
	% File doesn't exist -- didn't find it there.  Check the search path for it.
	fullFileName = baseFileName; % No path this time.
	if ~exist(fullFileName, 'file')
		% Still didn't find it.  Alert user.
		errorMessage = sprintf('Error: %s does not exist in the search path folders.', fullFileName);
		uiwait(warndlg(errorMessage));
		return;
	end
end
grayImage = imread(fullFileName);
% Get the dimensions of the image.  
% numberOfColorBands should be = 1.
[rows, columns, numberOfColorBands] = size(grayImage);
% Display the original gray scale image.
subplot(2, 1, 1);
imshow(grayImage, []);
title('Original Grayscale Image', 'FontSize', fontSize);
% Enlarge figure to full screen.
set(gcf, 'units','normalized','outerposition',[0 0 1 1]);
% Give a name to the title bar.
set(gcf,'name','Demo by ImageAnalyst','numbertitle','off') 
% Initialize
units = 'pixels';
spatialCalibration = 1.0;
button = 1;
while button ~= 3
	% Get which action the user wants to do.
	button = menu('Choose an action', 'Calibrate', 'Measure', 'Exit');
	if button == 3
		% Bail out because they clicked Exit.
		break;
	end
	% Make caption the instructions.
	subplot(2, 1, 1);
	title('Left-click first point.  Right click last point.', 'FontSize', fontSize);
	% Ask user to plot a line.
	[x, y, profile] = improfile();
	% Restore caption.
	title('Original Grayscale Image', 'FontSize', fontSize);
	% Calculate distance
	distanceInPixels = sqrt((x(1)-x(end))^2 + (y(1)-y(end))^2);
	% Plot the intensity profile along the line.
	subplot(2, 1, 2);
	plot(profile, 'b-', 'LineWidth', 2);
	grid on;
	% Initialize
	realWorldNumericalValue = distanceInPixels;
	caption = sprintf('Intensity Profile Along Line\nThe distance = %f pixels', ...
		distanceInPixels);
	title(caption, 'FontSize', fontSize);
	ylabel('Gray Level', 'FontSize', fontSize);
	xlabel('Pixels Along Line', 'FontSize', fontSize);
	if button == 1
		% They want to calibrate.
		% Ask user for a number.
		userPrompts = {'Enter true size:','Enter units:'};
		defaultValues = {'180', 'cm'};
		titleBar = 'Enter known distance';
		caUserInput = inputdlg(userPrompts, titleBar, 2, defaultValues);
		if isempty(caUserInput),return,end; % Bail out if they clicked Cancel.
		% Initialize.
		realWorldNumericalValue = str2double(caUserInput{1});
		units = char(caUserInput{2});
		% Check for a valid integer.
		if isnan(realWorldNumericalValue)
			% They didn't enter a number.
			% They clicked Cancel, or entered a character, symbols, or something else not allowed.
			message = sprintf('I said it had to be a number.\nI will use %f pixels and continue.', distanceInPixels);
			uiwait(warndlg(message));
			realWorldNumericalValue = distanceInPixels;
			units = 'pixels';
			spatialCalibration = 1.0;
		end
		spatialCalibration = realWorldNumericalValue / distanceInPixels;
	end
	realWorldDistance = distanceInPixels * spatialCalibration;
	caption = sprintf('Intensity Profile Along Line\nThe distance = %f pixels = %f %s', ...
		distanceInPixels, realWorldDistance, units);
	title(caption, 'FontSize', fontSize);
	ylabel('Gray Level', 'FontSize', fontSize);
	xlabel('Pixels Along Line', 'FontSize', fontSize);
end
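Outside the MATLAB demo, the underlying two-step arithmetic (compute the factor once from a known reference object, then multiply every later pixel measurement by it) is language-independent; a minimal sketch, with made-up numbers rather than anything from this thread:

```python
# Spatial calibration: factor = known real-world length / measured length in pixels.
# All numbers below are illustrative, not from the thread.
known_length_cm = 180.0   # true size of a reference object in the scene (e.g. a ruler)
measured_pixels = 450.0   # length of that same object measured in the image

calibration = known_length_cm / measured_pixels   # cm per pixel
print(calibration)                                # -> 0.4 cm/pixel

# Any other pixel distance in the same image converts the same way:
other_distance_px = 120.0
print(other_distance_px * calibration)            # -> 48.0 cm
```

Note this only holds while the camera-to-scene geometry is unchanged; move the camera or refocus, and the factor must be recomputed.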


Image Analyst on 15 Dec 2012

No. That device has a sensor that is 960 pixels high and 1280 pixels wide. It can also operate in a mode with 480 pixels high by 640 pixels wide. Perhaps the most common way that is done is to just take the central chunk out of the sensor. The LCD information refers to a display, not the sensor.

Walter Roberson on 15 Dec 2012

I would have thought that a more common way would be averaging sensor pixels in the camera firmware?

Image Analyst on 16 Dec 2012

You can do that, but I don't think it's more common. It's called binning, and it's where you combine adjacent pixels. It might depend on the camera sensor architecture. I just had a case where I wanted to reduce the resolution of a camera to get a higher frame rate so the users could position an object in front of the camera with no delay/lag. When set up at a lower resolution, it took the central portion of the sensor, so the field of view changed drastically. When I asked for the same field of view using subsampling (say, every 4th pixel), the engineer thought at first he could do it, but then said the chip architecture wouldn't allow it, though he could do binning. However, with binning the image had to be monochrome, because it combined adjacent pixels, which are different colors (remember, it couldn't do subsampling). And this was with a CMOS sensor, which is supposed to be more addressable than a CCD sensor. Anyway, neither of these was what I wanted, so I just went with a lower resolution camera, because the higher frame rate was really what was needed more than the resolution.
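The three resolution-reduction schemes discussed here can be sketched on a toy 4x4 "sensor" array; this is a conceptual illustration only, not how any particular camera firmware implements them:

```python
# Toy 4x4 "sensor" frame; values are just pixel indices for illustration.
frame = [[r * 4 + c for c in range(4)] for r in range(4)]

# 1) Center crop: keep the middle 2x2 -> the field of view shrinks drastically.
crop = [row[1:3] for row in frame[1:3]]
print(crop)       # -> [[5, 6], [9, 10]]

# 2) Subsampling: keep every 2nd pixel -> same field of view, sparser samples.
subsample = [row[::2] for row in frame[::2]]
print(subsample)  # -> [[0, 2], [8, 10]]

# 3) 2x2 binning: average each 2x2 block -> same field of view, adjacent pixels
#    combined (which is why color is lost on a Bayer-pattern sensor, where
#    neighboring pixels sit under different color filters).
binned = [[(frame[2*r][2*c] + frame[2*r][2*c+1]
            + frame[2*r+1][2*c] + frame[2*r+1][2*c+1]) / 4
           for c in range(2)] for r in range(2)]
print(binned)     # -> [[2.5, 4.5], [10.5, 12.5]]
```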
