Object tracking in video with OpenCV and Deep Learning


This code pattern is part of the Getting started with IBM Maximo Visual Inspection learning path.

Summary

Whether you are counting cars on a road or products on a conveyor belt, there are many use cases for computer vision with video. With video as input, you can use automatic labeling to create a better classifier with less manual effort. This code pattern shows you how to create and use a classifier to identify objects in motion and then track and count the objects as they enter designated regions of interest.

Description

Whether it is car traffic, foot traffic, or products on a conveyor belt, there are many applications for keeping track of potential customers, actual customers, products, or other assets. With video cameras everywhere, a business can extract useful information from their footage with a little computer vision. Applying this technology to video is far more practical than older approaches, such as installing special-purpose hardware or having a person count vehicles by hand.

This code pattern explains how to create a video car counter using the IBM Maximo Visual Inspection Video Data Platform, OpenCV, and a Jupyter Notebook. You’ll use a little manual labeling and a lot of automatic labeling to train an object classifier to recognize cars on a highway. You’ll load another car video into a Jupyter Notebook where you’ll process the individual frames and annotate the video.
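For example, a minimal sketch of the frame-processing step, assuming the video has been downloaded locally (the cars_sample.mp4 path and the 1-in-10 sampling rate are placeholders, not values from this pattern):

```python
import cv2

VIDEO_IN = "cars_sample.mp4"   # hypothetical local copy of the car video
SAMPLE_EVERY = 10              # keep every 10th frame (placeholder interval)

cap = cv2.VideoCapture(VIDEO_IN)
idx, saved = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break                  # end of video
    if idx % SAMPLE_EVERY == 0:
        # Each sampled frame is written out as a JPEG, ready to be sent to
        # the inference API or inspected directly in the notebook.
        cv2.imwrite(f"frame_{idx:05d}.jpg", frame)
        saved += 1
    idx += 1
cap.release()
print(f"saved {saved} of {idx} frames")
```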

You’ll use the deployed model for inference to detect cars on a sample of frames taken at a regular interval, and you’ll use OpenCV to track the cars from frame to frame between inference calls. In addition to counting the cars as they are detected, you’ll also count them as they cross a “finish line” in each lane and report cars per second.
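A minimal sketch of that detect-then-track loop is shown below. The detect_cars function stands in for the IBM Maximo Visual Inspection inference call, and the interval, finish-line position, and tracker choice are assumptions for illustration, not values from this pattern:

```python
import cv2

DETECT_EVERY = 10    # run inference every N frames, track in between (assumed)
FINISH_Y = 400       # y-coordinate of the lane "finish line" (assumed)

def detect_cars(frame):
    """Placeholder for the IBM Maximo Visual Inspection inference call.
    Returns a list of (x, y, w, h) bounding boxes."""
    return []

cap = cv2.VideoCapture("cars_sample.mp4")   # hypothetical input video
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0

trackers, centers = [], []
crossed = 0
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break

    if frame_idx % DETECT_EVERY == 0:
        # Re-seed the OpenCV trackers from fresh detections.
        boxes = detect_cars(frame)
        trackers = []
        for (x, y, w, h) in boxes:
            tracker = cv2.TrackerCSRT_create()  # may live under cv2.legacy on some builds
            tracker.init(frame, (x, y, w, h))
            trackers.append(tracker)
        centers = [y + h / 2 for (x, y, w, h) in boxes]
    else:
        # Between inference calls, let OpenCV follow each bounding box.
        new_centers = []
        for tracker, cy_prev in zip(trackers, centers):
            ok_t, (x, y, w, h) = tracker.update(frame)
            cy = y + h / 2 if ok_t else cy_prev
            # Count the car once, when its center crosses the finish line.
            if cy_prev < FINISH_Y <= cy:
                crossed += 1
            new_centers.append(cy)
        centers = new_centers

    frame_idx += 1

cap.release()
if frame_idx:
    print(f"cars counted: {crossed}, cars per second: {crossed / (frame_idx / fps):.2f}")
```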

When you’ve completed this code pattern, you will understand how to:

  • Use automatic labeling to create an object detection classifier from a video
  • Process frames of a video using a Jupyter Notebook, OpenCV, and IBM Maximo Visual Inspection
  • Detect objects in video frames with IBM Maximo Visual Inspection
  • Track objects from frame to frame with OpenCV
  • Count objects in motion as they enter a region of interest
  • Annotate a video with bounding boxes, labels, and statistics
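For the last item, the per-frame annotation boils down to a few OpenCV drawing calls. A minimal sketch, assuming your detection/tracking step produces (label, confidence, box) tuples (the names and tuple layout are illustrative, not from this pattern):

```python
import cv2

def annotate(frame, detections, counted, cars_per_sec):
    """Draw bounding boxes, labels, and running statistics on one frame.

    `detections` is assumed to be a list of (label, confidence, (x, y, w, h))
    tuples in integer pixel coordinates.
    """
    for label, conf, (x, y, w, h) in detections:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, f"{label} {conf:.2f}", (x, max(y - 5, 15)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.putText(frame, f"counted: {counted}   cars/s: {cars_per_sec:.2f}",
                (10, 25), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)
    return frame
```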

Flow

[Flow diagram: object detection]

  1. Upload a video using the IBM Maximo Visual Inspection web UI.
  2. Use automatic labeling and train a model.
  3. Deploy the model to create an IBM Maximo Visual Inspection inference API.
  4. Use a Jupyter Notebook to detect, track, and count cars in a video.
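Step 3 exposes the trained model behind a REST endpoint that the notebook in step 4 calls for each sampled frame. A minimal sketch of such a call with the requests library is shown below; the URL, model ID, token header, and multipart field name are assumptions, so check the API documentation for your own deployment:

```python
import requests

# Hypothetical values; copy the real endpoint and deployed model ID from
# your IBM Maximo Visual Inspection instance after deployment.
MVI_ENDPOINT = "https://mvi.example.com/api/dlapis/<deployed-model-id>"
API_TOKEN = "<your-api-token>"   # authentication details vary by deployment

def detect_cars_mvi(frame_path):
    """Send one frame (saved as a JPEG) to the inference endpoint and
    return the parsed JSON response with the detected objects."""
    with open(frame_path, "rb") as f:
        resp = requests.post(
            MVI_ENDPOINT,
            headers={"X-Auth-Token": API_TOKEN},  # assumed header name
            files={"files": f},                   # assumed multipart field name
            verify=False,                         # test instances often use self-signed certs
        )
    resp.raise_for_status()
    return resp.json()

# Example: result = detect_cars_mvi("frame_00010.jpg")
```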

Instructions

Find the detailed steps for this pattern in the README. The steps will show you how to:

  1. Create a data set in Video Data Platform.
  2. Train and deploy the model.
  3. Automatically label objects.
  4. Run the notebook.
  5. Create the annotated video.
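For step 5, OpenCV's VideoWriter can reassemble the processed frames into an output file. A minimal sketch, assuming placeholder input/output paths and the mp4v codec (the notebook in this pattern may use different values):

```python
import cv2

VIDEO_IN = "cars_sample.mp4"       # hypothetical input video
VIDEO_OUT = "cars_annotated.mp4"   # hypothetical output path

cap = cv2.VideoCapture(VIDEO_IN)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# 'mp4v' is widely available; another fourcc may work better on your platform.
writer = cv2.VideoWriter(VIDEO_OUT, cv2.VideoWriter_fourcc(*"mp4v"),
                         fps, (width, height))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # In the real notebook, detection, tracking, and annotation would
    # modify `frame` here before it is written out.
    writer.write(frame)

cap.release()
writer.release()
```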

Conclusion

This code pattern showed how to create and use a classifier to identify objects in motion and then track and count the objects as they enter designated regions of interest. The code pattern is part of the Getting started with IBM Maximo Visual Inspection learning path. To continue the series and learn about more IBM Maximo Visual Inspection features, look at the next code pattern, Validate deep learning models.

Source: https://developer.ibm.com/patterns/detect-track-and-count-cars-in-a-video/
