Glean insights with AI on live camera streams and videos


This code pattern is part of the Getting started with IBM Maximo Visual Inspection learning path.

Summary

This code pattern provides a web application that can display live RTSP camera streams or prerecorded videos. Frames from these video streams can then be captured at an interval (1 fps default) and analyzed by an object detection or classification model. The inference results are rendered in several different visualizations such as a list, a pie chart, and a data table.
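To make the capture interval concrete, here is a minimal sketch of how capture timestamps could be derived for a prerecorded video, assuming the pattern's default rate of 1 frame per second. The function name and shape are illustrative, not part of the actual application code.

```javascript
// Sketch: given a video duration (in seconds) and a capture rate in
// frames per second (1 fps is this pattern's default), list the
// timestamps at which frames would be grabbed for analysis.
function captureTimestamps(durationSec, fps = 1) {
  const step = 1 / fps;
  const times = [];
  for (let t = 0; t < durationSec; t += step) {
    times.push(t);
  }
  return times;
}

// A 5-second clip at the default 1 fps yields five capture points.
console.log(captureTimestamps(5, 1)); // [0, 1, 2, 3, 4]
```

For a live RTSP stream the same idea applies, except the timer runs against wall-clock time (for example with `setInterval`) rather than against a known duration.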

Description

There are millions of security cameras deployed around the world. However, most of the useful footage captured by these cameras goes unwatched because the operator must either monitor the video feeds in real time or manually sift through hours of footage to track incidents.

In this application, a dashboard streams video from any internet-connected camera and detects specific objects or classifications in real time. This lets operators view a high-level analytics report of activity observed over a time period, as well as customizable visualizations.

This enables you to quickly get an overview of trends, such as the most commonly recognized objects or the busiest times and days, as well as annotated images with the inference results drawn as bounding boxes or a heat map. Captured images are forwarded to the IBM® Maximo® Visual Inspection service, which can apply object detection, image classification, and action detection to images and videos. Each observed event can be logged in a database with relevant metadata, such as the date, time, location, and objects observed.
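A trend like "most commonly recognized objects" boils down to counting labels across the captured frames. The sketch below shows one way that aggregation could look; the detection shape (`label`, `confidence`) mirrors typical object-detection output, but the function and threshold are illustrative assumptions, not the application's actual code.

```javascript
// Sketch: aggregate per-frame detections into counts per label --
// the shape a pie chart or data table visualization could consume.
function countByLabel(detections, minConfidence = 0.5) {
  const counts = {};
  for (const d of detections) {
    if (d.confidence < minConfidence) continue; // skip low-confidence hits
    counts[d.label] = (counts[d.label] || 0) + 1;
  }
  return counts;
}

// Example: detections gathered from a few captured frames.
const detections = [
  { label: "person", confidence: 0.92 },
  { label: "car", confidence: 0.81 },
  { label: "person", confidence: 0.34 }, // below threshold, ignored
  { label: "person", confidence: 0.77 },
];
console.log(countByLabel(detections)); // { person: 2, car: 1 }
```

Grouping the same counts by hour or weekday, instead of by label alone, would yield the "busiest times and days" view described above.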

Flow

*Figure: Glean insights with AI flow diagram*

  1. The user accesses the web application and provides login credentials for both the camera system and IBM Maximo Visual Inspection. The user also fills out a form to select the model and to classify images as positive or negative.
  2. The Node.js back end connects to the camera RTSP stream and forwards it to the front-end web application.
  3. As the front end plays live video, the user clicks Capture Frame or Start Interval.
  4. The captured frames are forwarded to the IBM Maximo Visual Inspection back end for analysis.
  5. The analysis results are rendered in the web app and grouped by the user-defined positive or negative labels.
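Step 5 above can be sketched as a small grouping routine: labels that the user marked as positive go into one bucket, everything else into the other. The bucket contents and function name here are hypothetical; in the real application the positive/negative definitions come from the configuration form.

```javascript
// Sketch of step 5: split observed labels into user-defined
// "positive" and "negative" groups for rendering in the web app.
function groupResults(labels, positiveSet) {
  const groups = { positive: [], negative: [] };
  for (const label of labels) {
    (positiveSet.has(label) ? groups.positive : groups.negative).push(label);
  }
  return groups;
}

// Hypothetical configuration: hard hats and vests count as positive.
const positive = new Set(["helmet", "safety-vest"]);
const observed = ["helmet", "person", "safety-vest", "person"];
console.log(groupResults(observed, positive));
// { positive: ["helmet", "safety-vest"], negative: ["person", "person"] }
```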

Instructions

Find the detailed steps for this pattern in the readme file. The steps show you how to:

  1. Build a visual analysis model using IBM Maximo Visual Inspection.
  2. Set up the web application and display the video or live camera feed.
  3. Configure the model, and select the objects or classes of interest.
  4. Configure the frame capture interval, and begin the video analysis.
Source: https://developer.ibm.com/patterns/glean-insights-with-ai-on-live-camera-streams-and-videos/
