Oracle Vision: Harnessing Machine Learning for Smarter Insights - Part 3

Author: Phil Godfrey

In the previous blogs, we introduced Oracle Vision and walked through creating a Custom Vision model, referencing a previously created dataset and training the model.

This blog in the series works through reviewing the model metrics of a provisioned model and testing the model by passing in new, unseen data.

What is Oracle Vision?

One of many AI Services introduced by Oracle, OCI Vision is an advanced AI tool for large-scale image analysis, powered by deep learning algorithms.

It streamlines the process of integrating image and text recognition features into applications, eliminating the need for specialised knowledge in machine learning.

Accessing Oracle Vision

In OCI, navigate to Analytics & AI, and under the AI Services subheading, you will find Vision.


Once you select Vision, provided the necessary pre-requisites have been granted, you should see the overview of Oracle Vision.
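As an aside, those pre-requisites typically include an IAM policy granting your group access to the Vision service. A minimal sketch is shown below; the group name is a placeholder, and in practice you may want to scope the statement to a specific compartment rather than the tenancy:

```
allow group vision-users to use ai-service-vision-family in tenancy
```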

In the previous blogs we created a project and trained a model using our labelled dataset; make sure to look back on those posts if you missed them.


Review performance of a trained model

When we select the trained model, we can review the relevant training metrics:

[Screenshot: training metrics graph]

Evaluation metrics (or training metrics in our case) are quantitative measures used to assess the performance and effectiveness of a statistical or machine learning model.

These metrics provide insights into how well the model is performing and help us to compare different models or approaches.

The fields available to us are:

  • Mean Average Precision: The mean Average Precision (mAP) score, with a threshold of 0.5, is provided only for custom object detection models. It is calculated by taking the mean of the Average Precision over all classes, and ranges from 0.0 to 1.0, where 1.0 is the best result.

  • Precision: The fraction of relevant instances among the retrieved instances (see the worked example after this list).

  • Recall: The fraction of relevant instances that were retrieved.

  • Confidence Threshold: The minimum confidence score at which a prediction is counted (always 0.5 for custom models).

In addition, we receive the total number of valid images, the number of test images, and the training duration (in hours).

These figures allow us to compare different model versions and understand which one performs best for our requirements.
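To make precision and recall concrete, here is a minimal worked sketch. The counts are hypothetical, not taken from the model above; the console computes these metrics for you during training:

```python
# Hypothetical detection counts; OCI Vision computes these during training.
def precision_recall(true_positives: int, false_positives: int, false_negatives: int):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Example: the model drew 10 bounding boxes, 8 matched a labelled car,
# and 2 labelled cars were missed entirely.
p, r = precision_recall(true_positives=8, false_positives=2, false_negatives=2)
print(f"Precision: {p:.2f}, Recall: {r:.2f}")  # Precision: 0.80, Recall: 0.80
```

Averaging the precision over recall levels for each class, and then over all classes, gives the mAP figure shown in the console.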

Analyze

Below the model metrics we have the Analyze section, where we can invoke our Custom Vision model.



We can do so from Object Storage, or by passing in a local file; the invoked model then provides an output with a confidence score for each identified object.
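The same call can also be made programmatically. Below is a minimal sketch using the OCI Python SDK's ai_vision client; the OCIDs and file name are placeholders, and the class and field names should be checked against the current SDK documentation:

```python
# A minimal sketch of invoking a custom Vision model with the OCI Python SDK.
# OCIDs and the file name are placeholders.
import base64
import oci

config = oci.config.from_file()  # reads credentials from ~/.oci/config
client = oci.ai_vision.AIServiceVisionClient(config)

# Local-file variant: image bytes are passed inline, base64-encoded.
with open("parking_lot.jpg", "rb") as f:
    image_data = base64.b64encode(f.read()).decode("utf-8")

details = oci.ai_vision.models.AnalyzeImageDetails(
    image=oci.ai_vision.models.InlineImageDetails(data=image_data),
    features=[
        oci.ai_vision.models.ImageObjectDetectionFeature(
            model_id="ocid1.aivisionmodel.oc1..example"  # our trained custom model
        )
    ],
    compartment_id="ocid1.compartment.oc1..example",
)

response = client.analyze_image(details)
for obj in response.data.image_objects:
    print(obj.name, round(obj.confidence, 3))
```

For images already in Object Storage, swap InlineImageDetails for ObjectStorageImageDetails with the relevant namespace, bucket, and object name.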

[Screenshot: parking lot test image with detected objects]


Training metrics are also available through the console, and these provide additional information on per-label performance during model training, explaining how both labelled metrics are performing in the current model.

As an output of the model invocation, we are presented with bounding boxes around each identified object (in our case cars, as there are no spaces in this test image). For each detection we receive a confidence score on the right-hand side of the screen. Underneath that we have two JSON fields (request and response).
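For illustration, the response JSON for object detection follows this general shape (the values here are made up, and the exact fields are documented in the Vision API reference):

```json
{
  "imageObjects": [
    {
      "name": "car",
      "confidence": 0.98,
      "boundingPolygon": {
        "normalizedVertices": [
          {"x": 0.12, "y": 0.34},
          {"x": 0.28, "y": 0.34},
          {"x": 0.28, "y": 0.52},
          {"x": 0.12, "y": 0.52}
        ]
      }
    }
  ]
}
```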


Code Inferencing

A new feature that has recently been included is code inferencing. This provides a boilerplate code sample for working with the Vision model, currently available for Python and Java. This is an excellent addition to the service and will certainly assist anyone working with Vision to integrate the model output into other services and applications.
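As a hypothetical example of the kind of integration this enables, the snippet below takes the image_objects returned by the earlier analyze call and summarises them per label, e.g. to feed a parking occupancy count into another application:

```python
# A hypothetical follow-on to the generated boilerplate: summarise detections
# per label so another application can consume them.
from collections import Counter

def summarise_detections(image_objects, min_confidence: float = 0.5) -> Counter:
    """Count detections per label, keeping only those above a confidence cut-off."""
    return Counter(
        obj.name for obj in image_objects if obj.confidence >= min_confidence
    )

# `response` comes from the analyze_image sketch shown earlier.
# counts = summarise_detections(response.data.image_objects)
# print(f"Cars detected: {counts.get('car', 0)}")
```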

