# Hand Tracking And Gesture Detection (OpenCV)

The aim of the project was to devise a program that can detect our hands, track them in real time, and perform some gesture recognition, using simple signal processing on images obtained from a regular laptop webcam. It was an under-two-week project, so the threshold values in the code, including those for the Canny filter, need to be tweaked a bit more. It doesn't work well with a changing background. If a better background detection and subtraction algorithm is used, you can get better results.

1. Detecting Background
Given the feed from the camera, the first thing to do is to remove the background. We use a running average over a sequence of images to obtain an average image, which will be the background.
$CurBG[i][j] = \alpha \, CurBG[i][j] + (1 - \alpha)\, CurFrame[i][j]$
This equation works because of the assumption that the background is mostly static. For stationary items, the pixels are unaffected by the weighted averaging, since $\alpha x + (1-\alpha)x = x$. Pixels that are constantly changing, on the other hand, are not part of the background, so they get weighed down. Thus the stationary pixels, i.e. the background, become more and more prominent with every iteration, while the moving pixels get averaged out. After a few iterations, the average contains only the background. In this case, even my face is part of the background, as the program needs to detect only my hands.
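As a minimal sketch of this update in pure NumPy (the `update_background` helper is illustrative; OpenCV's `accumulateWeighted` performs the same update, with $\alpha$ applied to the new frame instead of the accumulator):

```python
import numpy as np

def update_background(cur_bg, cur_frame, alpha=0.95):
    """One step of the running average: stationary pixels converge to
    their true value, while briefly-moving pixels get averaged out."""
    return alpha * cur_bg + (1.0 - alpha) * cur_frame

# Simulate one static background pixel (value 100) with a hand
# passing in front of it (value 255) for a few frames.
bg = np.full((1, 1), 0.0)
frames = [100.0] * 30 + [255.0] * 3 + [100.0] * 30
for f in frames:
    bg = update_background(bg, np.full((1, 1), f))
print(bg)  # close to 100: the brief motion barely disturbs the estimate
```

The closer `alpha` is to 1, the slower the background adapts, and the longer an object must stay still before it is absorbed into the background.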
2. Background Subtraction
A simple method to start with is to subtract the pixel values. However, this produces negative values and values greater than 255, the maximum an 8-bit channel can store. And what if we have a black background? Nothing gets subtracted in that case. Instead we use an inbuilt background subtractor based on a Gaussian mixture-based background/foreground segmentation algorithm. Background subtraction involves calculating a reference image, subtracting each new frame from this image, and thresholding the result, which yields a binary segmentation of the image highlighting the regions of non-stationary objects. We then use erosion and dilation to make the changes more prominent.
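A minimal sketch of the subtract-and-threshold idea in pure NumPy (the actual code relies on OpenCV's `BackgroundSubtractorMOG2`, and the erosion/dilation step would follow this):

```python
import numpy as np

def foreground_mask(frame, background, thresh=30):
    """Binary segmentation: absolute difference against the reference
    image, thresholded to mark non-stationary pixels as foreground.
    Working on a wider signed type avoids the negative/overflow
    problems of naive subtraction on 8-bit images."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > thresh).astype(np.uint8) * 255

background = np.full((4, 4), 100, dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200          # a "hand" enters this 2x2 region
mask = foreground_mask(frame, background)
print(mask)                    # 255 inside the moving region, 0 elsewhere
```

Note that this naive version still fails when the hand happens to match the background intensity, which is one reason the mixture-of-Gaussians subtractor is preferable.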
3. Contour Extraction
Contour extraction is performed using OpenCV's inbuilt edge extraction function, which uses a Canny filter. You can tweak its parameters to get better edge detection.
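To make the role of the Canny thresholds concrete, here is a rough NumPy sketch of the detector's core: gradient magnitude split by the two hysteresis thresholds. The helper name and threshold values are illustrative, and real Canny additionally smooths the image, thins edges via non-maximum suppression, and links weak edges to strong ones:

```python
import numpy as np

def edge_strength(img, low=50, high=150):
    """Gradient magnitude via central differences, split by Canny-style
    hysteresis thresholds: `high` marks strong edges, `low` marks weak
    candidates that full Canny would keep only when connected to a
    strong edge."""
    img = img.astype(np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    return mag >= high, (mag >= low) & (mag < high)

img = np.zeros((5, 8))
img[:, 4:] = 255                      # a vertical step edge
strong, weak = edge_strength(img)
print(strong.any())                   # True: the step yields strong edges
```

Raising `high` suppresses weak, noisy contours at the risk of breaking the hand outline; this is the trade-off being tuned when the thresholds are tweaked.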
4. Convex Hull and Defects

Now, given the set of points of the contour, we find the smallest-area convex hull that covers the contour. The observation here is that the convex hull points are most likely to be on the fingers, as they are the extremities, so this fact can be used to detect the number of fingers. But since our entire arm is in the frame, there will be other points of convexity too. So we find the convexity defects, i.e., between each arm of the hull we find the deepest point of deviation on the contour.
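As a sketch of what the hull step does (Andrew's monotone chain algorithm here; the actual code uses OpenCV's `convexHull` and `convexityDefects`):

```python
def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices in counter-clockwise
    order. Interior contour points (e.g. inside the finger valleys) are
    discarded; only extremities such as fingertips survive."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:                       # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

contour = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3)]
hull = convex_hull(contour)
print(hull)  # the four corners; interior points are dropped
```

The convexity defects are then the contour points that lie farthest inside each hull edge, which is exactly where the valleys between fingers sit.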
5. Tracking and Finger Detection
The defect points are most likely to be the centers of the finger valleys, as pointed out by the picture. We average all these defect points to get a rough estimate of the palm center: it is bound to lie inside the palm, but it is imprecise. We then assume the palm is angled such that it is roughly a circle. To find the true palm center, we take the 3 defect points closest to the rough palm center and compute the center and radius of the circle passing through them. This gives us the center of the palm. Due to noise this center keeps jumping, so to stabilize it we average it over a few iterations. The radius of the palm is an indication of the depth of the palm, so knowing the center and radius we can track the position of the palm in real time and even estimate its depth.

The next challenge is detecting the number of fingers. We use a couple of observations for this. For each maximal defect point, which will be a fingertip, there will be 2 minimal defect points marking the valleys. Hence the maximum and the 2 minima should form a triangle, with the distances from the maximum to each minimum being more or less the same. Also, the minima should lie on, or pretty close to, the circumference of the palm; we use this factor too. Finally, the ratio of the palm radius to the length of the finger triangle should be more or less constant. Using these properties, we collect the list of maximal defect points that satisfy the above conditions, and the count gives us the number of fingers. If the number of fingers is 0, the user is showing a fist.
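The palm circle described above is just the circumcenter of the 3 chosen defect points. A minimal sketch, assuming the points are not collinear (the helper name is illustrative):

```python
import math

def circle_from_3_points(p1, p2, p3):
    """Center and radius of the circle through three points (the
    circumcenter), used here as the palm center and palm radius.
    Assumes the points are not collinear (d != 0)."""
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    d = 2.0 * (ax*(by-cy) + bx*(cy-ay) + cx*(ay-by))
    ux = ((ax**2+ay**2)*(by-cy) + (bx**2+by**2)*(cy-ay)
          + (cx**2+cy**2)*(ay-by)) / d
    uy = ((ax**2+ay**2)*(cx-bx) + (bx**2+by**2)*(ax-cx)
          + (cx**2+cy**2)*(bx-ax)) / d
    r = math.hypot(ax-ux, ay-uy)
    return (ux, uy), r

# Three valley points roughly around a palm of radius 5 centered at (10, 10)
center, radius = circle_from_3_points((15, 10), (10, 15), (5, 10))
print(center, radius)  # (10.0, 10.0) 5.0
```

Averaging this center (and radius) over a few frames, as the text describes, is what smooths out the frame-to-frame jitter.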

In theory it all sounds perfect and good. In practice, the number of fingers given by the program is a bit so-so, so use it at your own risk. I didn't get time to play with the thresholds, so I guess you should get pretty good results if you find the right values for thresholds like the ratio of palm radius to finger length, the finger length itself, etc. Currently, with the program given, you can move the mouse pointer and click.
The code is located at: https://github.com/jujojujo2003/OpenCVHandGuesture
The report is located at: https://s-ln.in/?attachment_id=320

## 32 thoughts on “Hand Tracking And Gesture Detection (OpenCV)”

1. tha'er

Hi
thank you for your great article.
When I compile the code I get these errors, and I need help as soon as possible:
#include
#include
—————————————
BackgroundSubtractorMOG2 bg;
bg.set("nmixtures",3);
error in bg.set();
——————————————-
identifier "convexityDefects" is undefined
——————————————-
rough_palm_center+=ptFar+ptStart+ptEnd
error in sum “ptFar+ptStart+ptEnd”
———————————————
Please, I need your help, and thank you again.
I am sorry for my English.
thaer.at@gmail.com

1. Vandana Parekh

Hello, I want the header files. I am facing a problem because I don't have these header files:
#include
#include

How can I get them?

2. me

Hi, how do you integrate X11 on Windows?
Or must I make the changes for the Windows environment,
e.g. change the code so that it works on Windows?

1. sanjayslnarayanan Post author

You can try Cygwin, but I haven't tried it there. If you want it to work natively on Windows, yes, you need to modify the click function. That's the only place where I have used X11-specific functions.

3. me

Thanks, I will try. Your code will help me a lot.
I am doing a real-time hand gesture app on an Android device, and using your code might increase the speed, because I am interested in tracking only the hand and not the entire environment.
thanks.

1. Mario

Have you made it run on Android? I am trying the same, so far without success… Would you like to share your code, or do you have any hints for me?

Thanks a lot. Greeez Mario

2. yasmin

Hello, can you please update me with your findings? I understood his code up to the background subtraction, but can't follow it after that.

4. Mollie

I am completely new to all this. I am trying to build a virtual mouse application for my Windows laptop. I'm using OpenCV 2.4.6 and Visual Studio 2012.

I don't know what the X11 libraries you have included are, and if I'm to replace them with some Windows equivalent, what should that be? Please help me out here…

1. sanjayslnarayanan Post author

I haven't included the X11 libraries; you need to install them. If you are using Windows, try Cygwin.
If you want it to run natively on Windows, you only need to change the mouse-clicking code, that's it.

5. Mollie

I have to ask: what version of OpenCV did you use while writing your code? I'm using the newest, 2.4.6, and it's giving me the weirdest errors in Visual Studio. I'm not entirely sure the errors have anything to do with the version, but I'd still like to try another one.

6. Mollie

I'm sorry to be troubling you with this, but Visual Studio wasn't working, so I switched to Ubuntu now.
And I'm having a most basic problem of including header files. How do I make g++ look for the header files in the "include" folder? It only locates them if I give the entire path… home\…\include\opencv2\…
I really have been struggling here, and I know my question isn't related to this post much, but I would really appreciate any help.

7. Mollie

After the cap >> frame; I wrote the line std::cout<<frame.channels(); The result is 1, meaning that cap is giving a single-channel image to frame. getBackgroundImage and other functions need a 3-channel image. This is giving me errors. How do I fix this?

1. Mollie

I also tried cv::imshow("view",frame); after cap >> frame;

I got this error in my application window, along with an unhandled exception:
OpenCV Error: Assertion failed (size.width>0 && size.height>0) in unknown function in file ..\..\..\src\opencv\modules\highgui\src\window.cpp, line 261

The line 261 of window.cpp is:

261 CV_Assert(size.width>0 && size.height>0);
262 {
263 Mat img = _img.getMat();
264 CvMat c_img = img;
265 cvShowImage(winname.c_str(), &c_img);
266 }

I really appreciate all your help. I’m really thankful, for taking the time to give all those prompt replies.

1. sanjayslnarayanan Post author

I am not exactly sure why; it works perfectly for me under Linux. I think the problem is with the image received from the camera. Try to get that working in separate code first, and then replace that portion.

Works great!!! Thanks for the code. By the way, it seems it is not illumination invariant, i.e. the results differ a great deal with lighting conditions when trying to find the contour of the hand. Nevertheless, great job.

Works perfectly. Great job. However, this system is not illumination invariant, i.e. it behaves quite differently under different lighting conditions. Nevertheless, good work buddy! 🙂

10. Vatsala

In your code, while drawing contours,

drawContours(frame,tcontours,-1,cv::Scalar(0,0,255),2);

is not drawing any contours. But if instead of 'tcontours' I use 'contours', the contours get drawn. What is the difference between the two? I'm unable to understand. If I replace tcontours with contours in the whole code, would that change the semantics?

11. Misbah

Hi,

Great work!
However, I am getting stuck at convexityDefects(); it says the identifier is not found. I have tried adding all the libraries and directories, but it's not working.
You have any idea in this regards ?
Thanks buddy for open sourcing it.

12. Mike

This looks really interesting. The GitHub repo appears empty though. Does anybody know where I can obtain the source code? It would be highly appreciated.

13. Vatsala

http://s229.photobucket.com/user/Mollie_VX/media/1-1.jpg.html

The blue circles come from here:
vector<Point> palm_points;
for(int j=0;j<defects.size();j++)
{
    int startidx=defects[j][0]; Point ptStart( tcontours[0][startidx] );
    circle(fgimg, ptStart, 5, Scalar(255,0,0), -1);
    int endidx=defects[j][1]; Point ptEnd( tcontours[0][endidx] );
    int faridx=defects[j][2]; Point ptFar( tcontours[0][faridx] );

Why are there so many blue circles? They are supposed to be defects, right? Can you please explain why this is happening?

14. 007

Hi, can you please tell me briefly how I can detect hand gestures on Android or an iDevice?