Image classification and object detection using Amazon Rekognition Custom Labels and Amazon SageMaker JumpStart

Over the last decade, computer vision use cases have grown rapidly across industries such as insurance, automotive, ecommerce, energy, retail, and manufacturing. Customers are building computer vision machine learning (ML) models to bring operational efficiency and automation to their processes. Such models help automate the classification of images or the detection of objects of interest in images that are specific to your business.

To simplify the ML model building process, we introduced Amazon SageMaker JumpStart in December 2020. JumpStart helps you get started with ML quickly and easily. It provides one-click deployment and fine-tuning of a wide variety of pre-trained models, as well as a selection of end-to-end solutions. This removes the heavy lifting from each step of the ML process, making it easier to develop high-quality models and reducing time to deployment. However, selecting a model from a catalog of over 200 pre-trained computer vision models still requires some prior ML knowledge. You then have to benchmark model performance with different hyperparameter settings and select the best model to deploy in production.

To simplify this experience and allow developers with little to no ML expertise to build custom computer vision models, we’re releasing a new example notebook within JumpStart that uses Amazon Rekognition Custom Labels, a fully managed service for building custom computer vision models. Rekognition Custom Labels builds on the pre-trained models in Amazon Rekognition, which are already trained on tens of millions of images across many categories. Instead of thousands of images, you can get started with a small set of training images (a few hundred or fewer) that are specific to your use case. Rekognition Custom Labels abstracts away the complexity involved in building a custom model: it automatically inspects the training data, selects the right ML algorithms, selects the instance type, trains multiple candidate models with different hyperparameters, and outputs the best trained model. Rekognition Custom Labels also provides an easy-to-use interface on the AWS Management Console for the entire ML workflow, including labeling images, training, deploying a model, and visualizing the test results.

This example notebook within JumpStart uses Rekognition Custom Labels to address image classification and object detection tasks, making it easy for customers who are familiar with Amazon SageMaker to build the computer vision solution that best fits their use case, requirements, and skill set.

In this post, we provide step-by-step directions for using this example notebook within JumpStart. The notebook demonstrates how to use the existing Rekognition Custom Labels training and inference APIs to create an image classification model, a multi-label classification model, and an object detection model. To make it easy for you to get started, we have provided example datasets for each model.
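In general, training a Rekognition Custom Labels model programmatically boils down to a handful of boto3 calls. The following is a minimal sketch of that general shape, not code taken from the notebook; the project name, S3 bucket, and manifest paths are placeholders you would replace with your own.

import boto3

rekognition = boto3.client("rekognition")

# 1. Create a project to hold datasets and model versions.
project = rekognition.create_project(ProjectName="my-cv-project")  # placeholder name
project_arn = project["ProjectArn"]

# 2. Create training and test datasets from SageMaker Ground Truth-format
#    manifest files stored in Amazon S3 (bucket and keys are placeholders).
for dataset_type, manifest_key in [("TRAIN", "manifests/train.manifest"),
                                   ("TEST", "manifests/test.manifest")]:
    rekognition.create_dataset(
        ProjectArn=project_arn,
        DatasetType=dataset_type,
        DatasetSource={
            "GroundTruthManifest": {
                "S3Object": {"Bucket": "my-training-bucket", "Name": manifest_key}
            }
        },
    )

# 3. Train a model version; Rekognition Custom Labels chooses the algorithm
#    and hyperparameters automatically.
rekognition.create_project_version(
    ProjectArn=project_arn,
    VersionName="v1",
    OutputConfig={"S3Bucket": "my-training-bucket", "S3KeyPrefix": "output/"},
)

# 4. Block until training completes (duration depends on dataset size).
waiter = rekognition.get_waiter("project_version_training_completed")
waiter.wait(ProjectArn=project_arn, VersionNames=["v1"])

The notebook walks you through equivalent steps against the provided sample datasets, so you don’t have to assemble this flow yourself.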

Train and deploy a computer vision model using Rekognition Custom Labels

In this section, we locate the desired notebook in JumpStart and demonstrate how to train a model and run inference against the deployed model.

Let’s start from the Amazon SageMaker Studio Launcher.

  1. On the Studio Launcher, choose Go to SageMaker JumpStart.

    The JumpStart landing page has carousels for solutions, text models, and vision models. It also has a search bar.
  2. In the search bar, enter Rekognition Custom Labels and choose the Rekognition Custom Labels for Vision notebook.

    The notebook opens in read-only mode.
  3. Choose Import Notebook to import the notebook into your environment.

The notebook provides a step-by-step guide for training and running inference using Rekognition Custom Labels from the JumpStart console. It provides the following four sample datasets to demonstrate single- and multi-label image classification and object detection; an illustrative manifest entry for importing labeled data appears after the list.

    • Single-label image classification – This dataset demonstrates how to classify images as belonging to one of a set of predefined labels. For example, real estate companies can use Rekognition Custom Labels to categorize their images of living rooms, backyards, bedrooms, and other household locations. The following is a sample image from this dataset, which is included as part of the notebook.
    • Multi-label image classification – This dataset demonstrates how to classify images into multiple categories, such as the color, size, texture, and type of a flower. For example, plant growers can use Rekognition Custom Labels to distinguish between different types of flowers and determine whether they are healthy, damaged, or infected. The following image is an example from this dataset.
    • Object detection – This dataset demonstrates object localization to locate parts used in production or manufacturing lines. For example, in the electronics industry, Rekognition Custom Labels can help count the number of capacitors on a circuit board. The following image is an example from this dataset.
    • Brand and logo detection – This dataset demonstrates locating logos or brands in an image. For example, in the media industry, an object detection model can help identify the location of sponsor logos in photographs. The following is a sample image from this dataset.
  4. Follow the steps in the notebook by running each cell.
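For reference, when labeled data is imported through the CreateDataset API, Rekognition Custom Labels expects a SageMaker Ground Truth-format manifest: a JSON Lines file with one JSON object per image. The snippet below writes an illustrative single-label classification entry; the bucket, image key, label attribute ("room-type"), and job name are placeholders, not values taken from the notebook’s datasets.

import json

# Illustrative Ground Truth-style manifest entry for single-label image classification.
# All names below are placeholders for demonstration purposes.
entry = {
    "source-ref": "s3://my-training-bucket/images/living_room_01.jpg",
    "room-type": 0,  # numeric class index
    "room-type-metadata": {
        "class-name": "living_room",
        "confidence": 1.0,
        "type": "groundtruth/image-classification",
        "job-name": "labeling-job/room-type",
        "human-annotated": "yes",
        "creation-date": "2021-06-01T00:00:00",
    },
}

# Append the entry as one line of the JSON Lines manifest.
with open("train.manifest", "a") as f:
    f.write(json.dumps(entry) + "\n")

Object detection datasets follow the same JSON Lines pattern, but each entry carries bounding box annotations and object-detection metadata instead of a single class label.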

This notebook demonstrates how you can address both image classification and object detection use cases from a single notebook via the Rekognition Custom Labels APIs.
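After a model version has finished training, inference against it generally follows this pattern: start the model, call DetectCustomLabels, and stop the model when you’re done so you aren’t billed for idle running time. The sketch below is illustrative only and uses placeholder ARNs and S3 locations.

import boto3

rekognition = boto3.client("rekognition")

project_arn = "arn:aws:rekognition:..."  # placeholder: your project ARN
model_arn = "arn:aws:rekognition:..."    # placeholder: the ProjectVersionArn from training

# Start the trained model version and wait until it reports RUNNING.
rekognition.start_project_version(ProjectVersionArn=model_arn, MinInferenceUnits=1)
rekognition.get_waiter("project_version_running").wait(
    ProjectArn=project_arn, VersionNames=["v1"]
)

# Run inference on a single image in Amazon S3 (works for classification and detection).
response = rekognition.detect_custom_labels(
    ProjectVersionArn=model_arn,
    Image={"S3Object": {"Bucket": "my-images-bucket", "Name": "test/image.jpg"}},
    MinConfidence=50,
)
for label in response["CustomLabels"]:
    # Object detection results include a Geometry (bounding box); classification results don't.
    print(label["Name"], label["Confidence"], label.get("Geometry"))

# Stop the model when you're finished to stop accruing inference charges.
rekognition.stop_project_version(ProjectVersionArn=model_arn)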

As you proceed with the notebook, you have the option to select one of the aforementioned sample datasets. We encourage you to try running the notebook for each of the datasets.

Conclusion

In this post, we showed you how to use the Rekognition Custom Labels APIs to build an image classification or object detection computer vision model that classifies and identifies objects in images specific to your business needs. To train a model, you can get started by providing tens to hundreds of labeled images instead of thousands. Rekognition Custom Labels simplifies model training by taking care of choices such as the machine type, the algorithm, and algorithm-specific hyperparameters (including the number of layers in the network, learning rate, and batch size). Rekognition Custom Labels also simplifies hosting a trained model and provides a simple operation for running inference against it.

Rekognition Custom Labels provides an easy-to-use console experience for the training process, model management, and visualization of dataset images. We encourage you to learn more about Rekognition Custom Labels and try it out with your business-specific datasets.

To get started, you can navigate to the Rekognition Custom Labels example notebook in SageMaker JumpStart.


About the Authors

Pashmeen Mistry is the Senior Product Manager for Amazon Rekognition Custom Labels. Outside of work, Pashmeen enjoys adventurous hikes, photography, and spending time with his family.

Abhishek Gupta is a Senior AI Services Solutions Architect at AWS. He helps customers design and implement computer vision solutions.
