Tracking Color Blobs in Webcam Feed Using Tracking.js

September 25, 2017 Jean-Philippe Côté
This article will show you how you can pick a color from a live webcam feed and track its movement in real-time. To do that, we will be using the tracking.js library and a bit of custom code.

If you want to see what the end result is before diving right in, you can check out the completed example right away.


If you wish, you can also skip ahead and download the finished code.

Getting started

Create a folder for our new project. Then, download tracking.js and uncompress the zip package. Grab  tracking-min.js from the build subfolder and put it inside your new project folder. 

Create an index.html file at the root of the project folder. Ours looks like this:
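The original markup is not reproduced here, but based on the elements described in this article, a minimal index.html might look like the following sketch. The id values and the styles.css and script.js filenames are assumptions:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Color Tracking</title>
  <link rel="stylesheet" href="styles.css">
</head>
<body>
  <div id="container">
    <!-- The canvas is layered on top of the video via CSS -->
    <video id="webcam" width="640" height="480" autoplay></video>
    <canvas id="canvas" width="640" height="480"></canvas>
  </div>
  <input id="tolerance" type="range" min="0" max="255" value="50">
  <div id="swatch"></div>
  <script src="tracking-min.js"></script>
  <script src="script.js"></script>
</body>
</html>
```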

There are a few things to notice. First, we are linking a CSS file that handles the display of the video and controls. This is all standard stuff so we won’t look at it in detail. Here’s the content of the file:
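A minimal stylesheet matching that description might look like this sketch; the selector names and the commented-out mirror rule (mentioned later in the article) are assumptions:

```css
#container {
  position: relative;
}

#webcam, #canvas {
  position: absolute;
  top: 0;
  left: 0;
  width: 640px;
  height: 480px;
  /* Uncomment to mirror the feed horizontally */
  /* transform: scaleX(-1); */
}

#canvas {
  z-index: 1; /* draw on top of the video */
}
```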

Then, we have links to the library and to our own JavaScript file, which we will look at in a few moments.

Next in our HTML file, we have a video and a canvas tag. The canvas tag is layered on top of the video tag using the above CSS. We will be using the canvas to draw the detected color zones so we can actually see what’s happening. Obviously, we set both to be the same size. Also notice that the video tag has the autoplay property set so the webcam feed actually plays (even if it’s live…).

Then, we have a range input slider which will control the color tolerance and a div to display the currently selected color. Pretty standard stuff. Let’s look at the code.

The code

The first step, as usual, is to wait for the page to be ready. We do that by listening to the  load event:
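A minimal sketch of that listener, assuming our script file is loaded from the HTML page:

```javascript
window.addEventListener("load", function () {
  console.log("Page loaded!");
  // All of the setup code below goes inside this callback.
});
```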

To be extra methodical, open the index.html page in a browser, open the development tools and check if “Page loaded!” is being output to the console. If it is, you are good to go. If it’s not, double-check that your files are properly linked.

Now, let’s create an object to hold the currently selected color. This object will have properties r, g and b to hold the red, green and blue values respectively. We’ll use red as the default color:
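The object itself is straightforward; bright red in RGB is (255, 0, 0):

```javascript
// Target color to track; bright red by default
var color = { r: 255, g: 0, b: 0 };
```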

Then, we grab references to the tags we will be using in our project. This is simply for convenience:
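Assuming the element ids used in our markup (the exact ids are assumptions), grabbing the references might look like this:

```javascript
var video = document.getElementById("webcam");
var canvas = document.getElementById("canvas");
var context = canvas.getContext("2d");
var slider = document.getElementById("tolerance");
var swatch = document.getElementById("swatch");
```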

In my opinion, the tracking.js API works in a peculiar way. The reason I say this is because even before creating the ColorTracker object, we need to statically register a color tracking function inside the ColorTracker class:
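Using tracking.js’ registerColor() method, the registration might look like this sketch. It relies on the getColorDistance() helper and the tolerance slider defined elsewhere in this article:

```javascript
tracking.ColorTracker.registerColor("dynamic", function (r, g, b) {
  // Match if the pixel is within the tolerance set by the slider
  return getColorDistance(color, { r: r, g: g, b: b }) < parseInt(slider.value);
});
```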

This code defines a function that will be called for each pixel of each frame of our webcam feed. This function receives the red, green and blue values of said pixel and should return true if we consider the pixel’s color to match the target color, or false otherwise.

The word 'dynamic' that is passed as the first parameter is simply an identifier for that color tracking function. We could define multiple color tracking functions, each with a different name.

The way we will be using that is by calculating the Euclidean distance between the pixel’s color and our target color. If that distance is within the tolerance, we will consider it a match.

The getColorDistance() function returns the actual distance. Then, we check if that distance is smaller than the value in our GUI slider (slider.value).

The  getColorDistance() function takes two colors as arguments – the target color and the actual pixel color – and returns the distance between the two. This is what the function looks like:
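A sketch of that function, computing the standard Euclidean distance in RGB space:

```javascript
function getColorDistance(target, actual) {
  // Euclidean distance between two colors in RGB space
  return Math.sqrt(
    Math.pow(target.r - actual.r, 2) +
    Math.pow(target.g - actual.g, 2) +
    Math.pow(target.b - actual.b, 2)
  );
}
```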

The colors are expressed as objects with r, g and b properties (integers between 0 and 255), which is what we used for our target color property.

Now that we have a way to know if a pixel is within our desired target zone, we can create a  ColorTracker() object:
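With the 'dynamic' color registered, creating the tracker is a one-liner:

```javascript
var tracker = new tracking.ColorTracker(["dynamic"]);
```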

As a parameter, we pass in the name we used to identify our color tracking function from earlier. 

There’s one last thing to do before starting the actual tracking. We must listen for the  track event. This will allow us to execute code whenever the target color has been tracked:
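A sketch of that listener:

```javascript
tracker.on("track", function (event) {
  // event.data holds one rect per detected color blob
  event.data.forEach(function (rect) {
    console.log("Blob at", rect.x, rect.y, rect.width, rect.height);
  });
});
```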

In this case, we will simply log the position of the tracked color if found. The reason there is a forEach() block is because the function may have tracked more than one isolated color blob. As you can see, data contains rect objects with x, y, width and height properties. There will be one rect for each detected color blob.

The last thing to do is to start the tracking:
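The call itself might look like this, assuming the video tag has the id webcam:

```javascript
tracking.track("#webcam", tracker, { camera: true });
```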

We pass this function three things:

  1. The container ( webcam) which is the video tag we added to the HTML file. The library can also work with static images or video files;
  2. The tracker we just created;
  3. Some options. In this case, it’s not just a regular  video tag, it’s the feed from a webcam. Knowing this, tracking.js will attach the first webcam it finds to the  video tag for us.

At this stage, if you open the  index.html page in the browser, you should see the feed from your webcam. If you put a bright red object within camera view and increase the tolerance, you should also see some output in the console.

However, you will quickly realize two things:

  1. It is hard to guess the color to track. The ambient lighting affects color rendition, a lot. Our hardcoded bright red (255, 0, 0) probably does not exist in your current environment.
  2. It is also hard to know if the tracking is working properly by simply looking at the console output.

Let’s fix those two things.

First, instead of manually entering a color to track, we will pick a color by clicking on it in the video feed. This way, we will know the color actually exists under the current lighting conditions.

To do that, we will fetch the color of the pixel we are clicking on by creating a getColorAt() function:
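A sketch of such a function, following the approach described below (the function name comes from the article; the parameter names are assumptions):

```javascript
function getColorAt(video, x, y) {
  // Draw the current video frame onto a temporary canvas
  var temp = document.createElement("canvas");
  temp.width = video.width;
  temp.height = video.height;
  var tempContext = temp.getContext("2d");
  tempContext.drawImage(video, 0, 0, video.width, video.height);
  // Read the pixel under the pointer
  var pixel = tempContext.getImageData(x, y, 1, 1).data;
  return { r: pixel[0], g: pixel[1], b: pixel[2] };
}
```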

To get access to pixel color values, we actually need to create a temporary canvas and draw the current video frame onto it. Then, we are able to use  getImageData() to fetch the color. 

Since we want to do that upon  click events, add the following bit of code at the end of the  load event callback function:
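A sketch of that handler, assuming the canvas overlay and swatch elements described earlier:

```javascript
canvas.addEventListener("click", function (event) {
  var picked = getColorAt(video, event.offsetX, event.offsetY);
  // Update the target color being tracked
  color.r = picked.r;
  color.g = picked.g;
  color.b = picked.b;
  // Show the selected color in the swatch div
  swatch.style.backgroundColor =
    "rgb(" + picked.r + ", " + picked.g + ", " + picked.b + ")";
});
```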

This simply calls the getColorAt() function, assigns the returned RGB values to our color object and displays the selected color as the background of our color swatch div.

Click on the video. The color swatch should reflect the color you picked. This way, it’s going to be much easier to pick the right color to track.

But is it really working? The only way to know for sure is to actually draw the detected zones in the canvas sitting on top of the video. To do that, let’s create another function:
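A sketch of such a function; the drawRectangle() name is an assumption, and it draws on the 2D context of the overlay canvas using the currently selected color:

```javascript
function drawRectangle(rect) {
  // Outline the detected zone in the target color
  context.strokeStyle = "rgb(" + color.r + ", " + color.g + ", " + color.b + ")";
  context.lineWidth = 2;
  context.strokeRect(rect.x, rect.y, rect.width, rect.height);
}
```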

This simply draws a rectangle matching the zone that was detected. We are using the same color to draw the rectangle’s outline. 

This function will need to be called each time a track event is triggered. Also, we need to wipe old rectangles before drawing new ones. This means our track callback function will now look like this:
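A sketch of the updated callback:

```javascript
tracker.on("track", function (event) {
  // Wipe the rectangles from the previous frame
  context.clearRect(0, 0, canvas.width, canvas.height);
  // Draw one rectangle per detected blob
  event.data.forEach(function (rect) {
    drawRectangle(rect);
  });
});
```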

Go ahead and try it! You should be able to pick a color from the video feed and see rectangles showing zones of that color in the feed. You can also use the tolerance slider to increase or decrease the precision of the detected area.

Finishing touches

You may have noticed that the webcam image is horizontally inverted. If you want the webcam to act like a mirror, you can flip the video along the x-axis using CSS. Just look inside the CSS file and uncomment that line for both the  video and  canvas tags:
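In a stylesheet like the one described above, the rule in question might look like this once uncommented (selector names are assumptions):

```css
video, canvas {
  transform: scaleX(-1); /* mirror the image along the x-axis */
}
```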

By now you might be wondering about the performance of this library. I have not done any benchmarking, but I can offer a way to simplify the heavy calculations if performance is an issue for you. What you can do is use a lower-resolution version of the webcam feed. Color tracking often works very well with a video as small as 160×120 for objects that are not too small.

However, tracking.js (as far as I can tell) does not offer a way to control the size or frame rate of the video that is retrieved from the webcam. That’s why I submitted a pull request on GitHub that allows you to do that. If you want the following code to work, you will have to grab the modified version of tracking.js from my own GitHub (that is until the PR is merged in the main repo).

As a sidenote, I noticed tracking.js’ GitHub repo seems to have been somewhat abandoned. Some pull requests and issues are 3-4 years old, which is never a good sign. 

Basically, my version allows you to pass in a media constraints object to the  track() method like so:
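The exact shape of the option depends on the author’s patched build; a plausible sketch, assuming it accepts a standard MediaTrackConstraints object, might look like this:

```javascript
tracking.track("#webcam", tracker, {
  camera: true,
  // Hypothetical option from the patched build: request a smaller feed
  constraints: {
    video: { width: 160, height: 120 }
  }
});
```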

This will allow you to retrieve a lower resolution version of the video feed. If you do that, you should also modify the  video and  canvas tags to use the same dimensions.

You could also use the media constraints object to do things such as pick a specific camera (if you have more than one connected) or use a lower frame rate.

You can learn more about media constraints by checking out the relevant MDN article.

One last thing: if you want to use this example online (as opposed to in an Electron or NW.js app), you will need to host it via SSL. Otherwise, browsers will block camera access on the insecure origin and you will see a getUserMedia() security error in the console.

Wrapping things up

I guess that’s it. If you have not done so already, you can download the full code to help you get started. 

As I stated earlier, I do have some reservations with regards to the way tracking.js’ API was built. I’m also a bit concerned by the current state of the repo on GitHub. But, having said that, the code does work and you are free to contribute to make it better.

Also know that alternatives exist. A great one is the JavaScript port of OpenCV, simply called OpenCV.js. The learning curve is steeper but so is the payoff. When I get around to it, I’m probably going to write an intro tutorial for it.

I hope this tutorial was useful and, as usual, do not hesitate to leave comments below.
