Multi Face Detection-Tracking

antonio on 13 Jun 2014
Commented: Dima Lisin on 14 Jul 2014
Hi, I'm a student at Politecnico di Milano; I'm preparing a project about face detection and tracking using this code:
:::START MODIFIED CODE BY ME:::
clear all
close all
clc
% Create a cascade detector object.
faceDetector = vision.CascadeObjectDetector();
% Read a video frame and run the face detector.
videoFileReader = vision.VideoFileReader('video.avi');
videoFrame = step(videoFileReader);
bbox = step(faceDetector, videoFrame);
N = size(bbox, 1); % number of detected faces
% Draw the returned bounding box around the detected face.
for i = 1:N
videoFrame = insertShape(videoFrame, 'rectangle', bbox(i,:));
end
figure; imshow(videoFrame); title('Detected face');
% Detect feature points in the face region.
figure, imshow(videoFrame), hold on, title('Detected features');
for i = 1:N
points = detectMinEigenFeatures(rgb2gray(videoFrame), 'ROI', bbox(i,:));
punti{i} = points.Location;
plot(points);
end
::: START ORIGINAL CODE :::
% Create a point tracker and enable the bidirectional error constraint to
% make it more robust in the presence of noise and clutter.
pointTracker = vision.PointTracker('MaxBidirectionalError', 2);
% Initialize the tracker with the initial point locations and the initial
% video frame.
points = points.Location;
initialize(pointTracker, points, videoFrame);
videoPlayer = vision.VideoPlayer('Position', ...
    [100 100 [size(videoFrame, 2), size(videoFrame, 1)]+30]);
% Make a copy of the points to be used for computing the geometric
% transformation between the points in the previous and the current frames
oldPoints = points;
% Convert the rectangular bounding box [x y w h] into a polygon
% (x1 y1 x2 y2 x3 y3 x4 y4) so it can be transformed and drawn.
x = bbox(1, 1); y = bbox(1, 2); w = bbox(1, 3); h = bbox(1, 4);
bboxPolygon = [x, y, x+w, y, x+w, y+h, x, y+h];
while ~isDone(videoFileReader)
% get the next frame
videoFrame = step(videoFileReader);
% Track the points. Note that some points may be lost.
[points, isFound] = step(pointTracker, videoFrame);
visiblePoints = points(isFound, :);
oldInliers = oldPoints(isFound, :);
if size(visiblePoints, 1) >= 2 % need at least 2 points
% Estimate the geometric transformation between the old points
% and the new points and eliminate outliers
[xform, oldInliers, visiblePoints] = estimateGeometricTransform(...
oldInliers, visiblePoints, 'similarity', 'MaxDistance', 4);
% Apply the transformation to the bounding box
[bboxPolygon(1:2:end), bboxPolygon(2:2:end)] ...
= transformPointsForward(xform, bboxPolygon(1:2:end), bboxPolygon(2:2:end));
% Insert a bounding box around the object being tracked
videoFrame = insertShape(videoFrame, 'Polygon', bboxPolygon);
% Display tracked points
videoFrame = insertMarker(videoFrame, visiblePoints, '+', ...
'Color', 'white');
% Reset the points
oldPoints = visiblePoints;
setPoints(pointTracker, oldPoints);
end
% Display the annotated video frame using the video player object
step(videoPlayer, videoFrame);
end
% Clean up
release(videoFileReader);
release(videoPlayer);
release(pointTracker);
__________________________________________________________________________
I need to modify it to detect and track multiple faces in a video stream. I want to ask if it's possible to set up the point tracker so that it is initialized with all the faces (the code initializes only one face!) and tracks multiple faces over the remaining frames. Thanks a lot for helping me!!
  2 Comments
antonio on 13 Jun 2014
I adjusted the code like this:
clear all
close all
clc
% Create a cascade detector object.
faceDetector = vision.CascadeObjectDetector();
% Read a video frame and run the face detector.
videoFileReader = vision.VideoFileReader('video2.avi');
videoFrame = step(videoFileReader);
bbox = step(faceDetector, videoFrame);
[N, M] = size(bbox); % N detected faces, M = 4 box coordinates
% Draw the returned bounding box around the detected face.
for i = 1:N
videoFrame = insertShape(videoFrame, 'rectangle', bbox(i,:));
end
figure; imshow(videoFrame); title('Detected face');
% Detect feature points in the face region.
figure, imshow(videoFrame), hold on, title('Detected features');
for i = 1:N
points = detectMinEigenFeatures(rgb2gray(videoFrame), 'ROI', bbox(i,:));
punti{i} = points.Location;
plot(points);
end
% Create a point tracker per face and enable the bidirectional error
% constraint to make it more robust in the presence of noise and clutter.
for i = 1:N
pointTracker{i} = vision.PointTracker('MaxBidirectionalError', 2);
% Initialize the tracker with the initial point locations and the initial
% video frame.
initialize(pointTracker{i}, punti{i}, videoFrame);
% Make a copy of the points to be used for computing the geometric
% transformation between the points in the previous and the current frames
oldPoints{i} = punti{i};
end
videoPlayer = vision.VideoPlayer('Position', ...
    [100 100 [size(videoFrame, 2), size(videoFrame, 1)]+30]);
while ~isDone(videoFileReader)
% get the next frame
videoFrame = step(videoFileReader);
% Track the points. Note that some points may be lost.
for i = 1:N
[punti{i}, isFound] = step(pointTracker{i}, videoFrame);
visiblePoints{i} = punti{i}(isFound, :);
oldInliers{i} = oldPoints{i}(isFound, :);
if size(visiblePoints{i}, 1) >= 2 % need at least 2 points
% Estimate the geometric transformation between the old points
% and the new points and eliminate outliers
[xform, oldInliers{i}, visiblePoints{i}] = estimateGeometricTransform(...
oldInliers{i}, visiblePoints{i}, 'similarity', 'MaxDistance', 4);
% Apply the transformation to the corners of this face's bounding box
% and rebuild an axis-aligned [x y w h] rectangle from them.
x = bbox(i, 1); y = bbox(i, 2); w = bbox(i, 3); h = bbox(i, 4);
corners = transformPointsForward(xform, [x, y; x+w, y; x+w, y+h; x, y+h]);
bbox(i, :) = [min(corners), max(corners) - min(corners)];
% Insert a bounding box around the object being tracked
videoFrame = insertShape(videoFrame, 'Rectangle',bbox(i,:));
% Reset the points
oldPoints{i} = visiblePoints{i};
setPoints(pointTracker{i}, oldPoints{i});
% Display the tracked points on the frame
videoFrame = insertMarker(videoFrame, visiblePoints{i}, '+', 'Color', 'white');
end
end
% Display the annotated video frame using the video player object
step(videoPlayer, videoFrame);
end
% Clean up
release(videoFileReader);
release(videoPlayer);
for i = 1:N
release(pointTracker{i});
end
_____________________________________________________________________
but it doesn't work very well... Any help, please?
Dima Lisin on 14 Jul 2014
I have added a link to a working example of multiple face tracking to my answer.


Answers (1)

Dima Lisin on 21 Jun 2014
Edited: Dima Lisin on 14 Jul 2014
A better way to do this is to concatenate all the detected corner points from all faces into one M-by-2 matrix and call the step() method of the point tracker only once for each video frame. You would have to create a 1-D array of indices to remember which point belongs to which face.
Here's a working example.
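A minimal sketch of that idea, reusing the `N`, `punti`, `bbox`, and `videoFrame` variables from the code above (this is an illustration of the approach, not the linked example; `allPoints` and `faceIdx` are names introduced here):

```matlab
% Concatenate the feature points of all faces into one M-by-2 matrix, and
% keep a parallel index array recording which face each point belongs to.
allPoints = zeros(0, 2);
faceIdx = zeros(0, 1);
for i = 1:N
    allPoints = [allPoints; punti{i}];                     %#ok<AGROW>
    faceIdx = [faceIdx; i * ones(size(punti{i}, 1), 1)];   %#ok<AGROW>
end

% A single tracker handles the points of all faces at once.
pointTracker = vision.PointTracker('MaxBidirectionalError', 2);
initialize(pointTracker, allPoints, videoFrame);
oldPoints = allPoints;

% Inside the frame loop, call step() once per frame, then split per face.
[points, isFound] = step(pointTracker, videoFrame);
for i = 1:N
    inFace = isFound & (faceIdx == i);
    visiblePoints = points(inFace, :);
    oldInliers = oldPoints(inFace, :);
    if size(visiblePoints, 1) >= 2
        % Estimate this face's transform with estimateGeometricTransform
        % and update bbox(i, :), exactly as in the single-face code.
    end
end
```

Calling step() once per frame avoids the overhead of running N separate trackers, and the index array makes it easy to drop a face whose point count falls too low.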
