Ever since I started playing Portal and TF2 I have had an infatuation with the “Sentry Gun” from each game. Getting a computer to track an object is both an interesting and a hard problem. Looking online, I found that many projects use a “master” frame technique: they capture an image when the device starts up and compare every subsequent frame against that master. Each frame is “diffed” using OpenCV to find what has changed, and that information is used to construct a target. However, this approach has a major downfall: if the master image is corrupted, or the background of the scene changes, the system can “lock” into detecting a target even when one is not there. This third generation of my Sentry Tracking program attempts to solve that problem by comparing each frame with the frames before and after it, building a better understanding of the target and actively adjusting to the environment.
This project is written in C++ and uses the OpenCV imaging library to analyze each frame. The system accepts both live webcam feeds and pre-saved videos. Demos below: