Create a web app to interact visually with objects detected using machine learning


Summary

The IBM Model Asset eXchange (MAX) models that are hosted on the Machine Learning eXchange (https://ml-exchange.org/models/) give application developers without data science experience easy access to prebuilt machine learning models. This code pattern shows how to create a simple web application to visualize the output of a MAX model. The web app uses the Object Detector from MAX and creates a simple web UI that displays bounding boxes around detected objects in an image and lets you filter the objects based on their label and the prediction accuracy given by the model.

Description

This code pattern uses one of the models from the Model Asset eXchange, an exchange where you can find and experiment with open source deep learning models. Specifically, it uses the Object Detector to create a web application that recognizes objects in an image and lets you filter the objects based on their detected label and prediction accuracy. The web application provides an interactive user interface backed by a lightweight Node.js server using Express. The server hosts the client-side web UI and relays API calls from the web UI to the model's REST endpoint. The web UI takes in an image, sends it to the model's REST endpoint through the server, and displays the detected objects. The model's REST endpoint is set up using the Docker image provided on MAX. The web UI displays each detected object in the image with a bounding box and label, and includes a toolbar to filter the detected objects based on their labels or a threshold for the prediction accuracy.
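
In practical terms, the server is just a thin relay between the browser and the model container. Below is a minimal sketch of that relay, assuming the model container exposes its prediction endpoint at http://localhost:5000/model/predict (the usual MAX convention) and using multer and form-data for the multipart hand-off; the /predict route, port 8090, and the MODEL_API_URL variable are illustrative assumptions, not the pattern's actual code.

```typescript
// server.ts: hedged sketch of the relay server described above.
import express from "express";
import multer from "multer";
import FormData from "form-data";
import fetch from "node-fetch";

// Assumption: the MAX model container is reachable at this URL.
const MODEL_API_URL = process.env.MODEL_API_URL ?? "http://localhost:5000/model/predict";

const app = express();
const upload = multer(); // no options: uploaded files stay in memory

// Serve the static client-side web UI (HTML, JS, CSS).
app.use(express.static("public"));

// Relay an image uploaded by the web UI to the model's REST endpoint
// and return the model's JSON response (the detected objects) unchanged.
app.post("/predict", upload.single("image"), async (req, res) => {
  if (!req.file) {
    res.status(400).json({ error: "No image provided" });
    return;
  }
  const form = new FormData();
  form.append("image", req.file.buffer, { filename: req.file.originalname });

  const modelResponse = await fetch(MODEL_API_URL, {
    method: "POST",
    body: form,
    headers: form.getHeaders(), // sets the multipart boundary header
  });
  const result = await modelResponse.json();
  res.status(modelResponse.status).json(result);
});

app.listen(8090, () => console.log("Web app listening on http://localhost:8090"));
```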

When you have completed this code pattern, you understand how to:

  • Build a Docker image of the Object Detector MAX model
  • Deploy a deep learning model with a REST endpoint
  • Recognize objects in an image using the MAX model's REST API (see the sketch after this list)
  • Run a web app that uses the model's REST API
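
The first three items amount to starting the model container and posting an image to its REST endpoint. The sketch below shows that call from a small Node script; the Docker image name, the /model/predict path, and the response fields follow the usual MAX Object Detector conventions, so treat them as assumptions and confirm them against the model's README and its Swagger UI (served at http://localhost:5000).

```typescript
// detect.ts: hedged sketch of calling the deployed model's REST API.
// Assumes the container was started with something like:
//   docker run -it -p 5000:5000 quay.io/codait/max-object-detector
// (check the model's README for the current image name).
import { readFileSync } from "fs";
import FormData from "form-data";
import fetch from "node-fetch";

// Response shape assumed from the usual MAX model convention.
interface Prediction {
  label: string;           // detected object class, e.g. "person"
  probability: number;     // prediction confidence between 0 and 1
  detection_box: number[]; // normalized [ymin, xmin, ymax, xmax]
}

async function detectObjects(imagePath: string): Promise<Prediction[]> {
  const form = new FormData();
  form.append("image", readFileSync(imagePath), { filename: imagePath });

  const response = await fetch("http://localhost:5000/model/predict", {
    method: "POST",
    body: form,
    headers: form.getHeaders(),
  });
  if (!response.ok) {
    throw new Error(`Model API returned ${response.status}`);
  }
  const body = (await response.json()) as { predictions: Prediction[] };
  return body.predictions;
}

// Example: print each detected object with its confidence.
detectObjects("test.jpg")
  .then((predictions) => {
    for (const p of predictions) {
      console.log(`${p.label}: ${(p.probability * 100).toFixed(1)}%`);
    }
  })
  .catch(console.error);
```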

Flow

(Flow diagram)

  1. The user uses the web UI to send an image to the Model API.
  2. The Model API returns the object data and the web UI displays the detected objects.
  3. The user interacts with the web UI to view and filter the detected objects, as sketched below.
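
Steps 2 and 3 come down to filtering the model's predictions on the client and converting the normalized detection boxes into pixel coordinates for drawing. The sketch below shows one way to do that, assuming the response shape used above; the function names are illustrative, not taken from the pattern's web UI code.

```typescript
// filter.ts: hedged sketch of the filtering behavior in steps 2 and 3.
interface Prediction {
  label: string;
  probability: number;
  detection_box: number[]; // normalized [ymin, xmin, ymax, xmax]
}

// Keep only the detections whose label is selected in the toolbar and whose
// confidence meets the threshold chosen by the user.
function filterPredictions(
  predictions: Prediction[],
  selectedLabels: Set<string>,
  threshold: number
): Prediction[] {
  return predictions.filter(
    (p) => selectedLabels.has(p.label) && p.probability >= threshold
  );
}

// Convert a normalized detection box into pixel coordinates so a labeled
// bounding box can be drawn over the displayed image.
function toPixelBox(box: number[], imgWidth: number, imgHeight: number) {
  const [ymin, xmin, ymax, xmax] = box;
  return {
    x: xmin * imgWidth,
    y: ymin * imgHeight,
    width: (xmax - xmin) * imgWidth,
    height: (ymax - ymin) * imgHeight,
  };
}
```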

Instructions

Ready to put this code pattern to use? Complete details on how to get started running and using this application are in the README.

Source: https://developer.ibm.com/patterns/create-a-web-app-to-interact-with-objects-detected-using-machine-learning/
