I created this as a final project for one of my favorite courses in college — “Computer Vision.” This was a very difficult project. Simple tracking is not very difficult, because the background (everything except the object you want to track) is mostly static. However, when both the object AND the background may move (as is the case when the tracking camera is mounted on a servo), picking out and tracking the object you want becomes much harder. I wrote this program and tested a number of different methods, comparing them on processing speed, accuracy of results, and difficulty of implementation.
The simplest implementation, and the quickest to process, is simple color tracking. Either full-frame or localized color thresholding/weighting gives decent results, as long as a single distinguishing color can be picked out easily.
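As a rough illustration of the thresholding variant, here is a minimal NumPy sketch — the function name, tolerance parameter, and per-channel distance metric are my own choices, not taken from the original program:

```python
import numpy as np

def track_by_color(frame, target_rgb, tol=40):
    """Return the (row, col) centroid of pixels within `tol` of the
    target color, or None if nothing matched.  frame: HxWx3 uint8."""
    # Per-pixel distance to the target color: the largest channel difference.
    diff = np.abs(frame.astype(int) - np.array(target_rgb)).max(axis=2)
    mask = diff < tol
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()

# Example: a pure-red patch on a black background.
frame = np.zeros((60, 80, 3), dtype=np.uint8)
frame[10:20, 30:40] = (255, 0, 0)
print(track_by_color(frame, (255, 0, 0)))  # centroid of the patch: (14.5, 34.5)
```

Because only one pass over the frame is needed, this is cheap enough to run at full frame rate even on modest hardware, which is why it is the fastest method of the bunch.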
Another way is to use multi-frame differencing algorithms. The delta of two frames shows where the scene has changed (i.e. the movement). However, the direction of motion cannot be easily determined from two frames alone; a third frame is needed for reference. With it, you get a full picture of where an object has been and where it is going, assuming the object is not oscillating above the Nyquist frequency (half the camera’s frame rate). This method of tracking is slower, but works where color tracking does not.
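The three-frame idea can be sketched like this — a minimal NumPy version of classic three-frame differencing, with names and threshold chosen for illustration rather than taken from the original program:

```python
import numpy as np

def motion_mask(f0, f1, f2, thresh=25):
    """Three-frame differencing: a pixel counts as 'moving' only if it
    differs from BOTH the previous and the next frame.  Intersecting the
    two deltas suppresses the ghost an object leaves behind at its old
    position in a plain two-frame difference.  Frames: HxW grayscale."""
    d01 = np.abs(f1.astype(int) - f0.astype(int)) > thresh
    d12 = np.abs(f2.astype(int) - f1.astype(int)) > thresh
    return d01 & d12

# Example: a bright square sliding right by its own width each frame.
frames = []
for start in (10, 20, 30):
    f = np.zeros((20, 60), dtype=np.uint8)
    f[5:15, start:start + 10] = 255
    frames.append(f)

mask = motion_mask(*frames)  # highlights the square where it sits in f1
```

Comparing where the mask lands between successive triples of frames gives the direction of travel, which is exactly what the two-frame delta cannot provide on its own.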
Active contours, or “snakes,” are yet another method, and are well suited to tracking objects that morph, disappear, or are otherwise affected by noise or motion within a computer vision system. However, they require a large amount of processing, which is slow on older hardware.
The following video was my first attempt at tracking using “active vision,” where both the camera and the object to track are mobile. I used multiple methods to track; the video here is based on multi-frame differencing with color-skewing.
Video download available here.