Ango Hub Feature Preview: Magic Prediction

Our data science and ML teams are working hard on delivering an all-new AI assistance feature to our customers: Magic Prediction. We aim to release this feature in the upcoming weeks. In the meantime, we’d like to share the progress made so far and explain exactly what Magic Prediction does.

The rise of data-centric ML has shown that many of the problems in ML model development can only be solved effectively with supervised learning. As such, data annotation is highly relevant, especially for industrial AI projects. It is, however, also a labor-intensive activity, which makes even small improvements to the annotation process extremely valuable. At Ango, constantly improving end-to-end data labeling processes is our north star.

To speed up the process of data labeling even further, we are pleased to introduce our new feature, which we call “Magic Prediction”. It is a simple yet effective technique that we believe will speed up the data annotation pipeline drastically.

In a typical bounding-box annotation scenario, an annotator performs two tasks:

  1. Drawing a bounding box containing the object, and
  2. Selecting the class name among possible candidates.

With Magic Prediction, we eliminate this second step by classifying the object inside the drawn bounding box automatically. To make this happen, we train image classifiers as labeled data accumulates: as the number of annotated objects increases, we are able to train better-performing image classifiers.
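The mechanism can be sketched in a few lines. This is a minimal illustration, not our production code: `crop_box`, `predict_class`, and the stand-in classifier below are hypothetical names introduced for this example, and the real classifier is the model trained on the labels accumulated so far.

```python
import numpy as np

def crop_box(image, box):
    """Crop the region inside a bounding box from an image.

    `box` is (x, y, w, h) in pixel coordinates, as produced by a
    bounding-box annotation tool.
    """
    x, y, w, h = box
    return image[y:y + h, x:x + w]

def predict_class(image, box, classifier):
    """Classify the object inside a drawn bounding box.

    `classifier` is any callable mapping an image crop to a class
    name; the annotator only draws the box, and the class is filled
    in automatically.
    """
    crop = crop_box(image, box)
    return classifier(crop)

# Toy demonstration with a stand-in classifier.
image = np.zeros((100, 100, 3), dtype=np.uint8)
image[20:60, 30:80] = 255  # a bright "object"
dummy_classifier = lambda crop: "cat" if crop.mean() > 127 else "background"
print(predict_class(image, (30, 20, 50, 40), dummy_classifier))  # prints "cat"
```

In the product, only the drawing step remains manual; the classifier call runs behind the scenes as soon as the box is drawn.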

Here’s a sample illustration of the class predictor:

Let us introduce our lovely Smoky. She is cute and crazy at the same time; and of course, she is a cat!


To test the effectiveness of Magic Prediction, we conducted a number of experiments with a wide variety of images, classes, and labeling conditions. We are publishing the results here.

For the experiments, we used the COCO object detection and segmentation dataset. COCO contains 80 classes in total, but for the sake of simplicity, only the animal classes (sheep, bird, cow, horse, elephant, dog, giraffe, zebra, cat, and bear) were selected. The distribution of these classes is shown in the figure below:

Class Distribution of COCO Dataset (Only Animal Classes)
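Filtering a dataset down to a subset of classes and counting objects per class is a common preprocessing step. As a hedged sketch, here is one way it could be done; the annotation format below is a deliberate simplification (a flat list of dicts with a `"category"` key), not the actual COCO JSON schema, and `class_distribution` is a hypothetical helper.

```python
from collections import Counter

# The ten animal classes kept for the experiment.
ANIMAL_CLASSES = {"sheep", "bird", "cow", "horse", "elephant",
                  "dog", "giraffe", "zebra", "cat", "bear"}

def class_distribution(annotations):
    """Count annotated objects per class, keeping only the animal
    classes used in the experiment."""
    counts = Counter(a["category"] for a in annotations
                     if a["category"] in ANIMAL_CLASSES)
    return dict(counts.most_common())

# Toy annotation list standing in for the real COCO annotations.
annotations = [
    {"category": "cat"}, {"category": "dog"}, {"category": "cat"},
    {"category": "car"},  # non-animal classes are filtered out
]
print(class_distribution(annotations))  # {'cat': 2, 'dog': 1}
```

With the real dataset, the resulting counts are what the distribution figure above visualizes.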

To assess the performance of our classifier, we examined correctly and wrongly classified samples from the test set. As the figures below show, if the animal is clearly visible and unoccluded, it is classified correctly. However, if there is occlusion, if the animal is far from the camera, or if the lighting conditions are poor, the probability of the classifier making a mistake increases.

Correctly Classified Samples
Wrongly Classified Samples

The Effect of Training Size

It is useful to know the minimum number of data instances needed for training, as well as how frequently we should retrain our model. For these reasons, the model was trained with various sample sizes and its performance measured on the same test data.

In the figure below, accuracy is plotted against the number of training samples to show the effect of training size. With the maximum sample size, an accuracy of 88.17% is obtained. As expected, accuracy decreases as the sample size decreases. However, with only 250 annotated objects, the classifier reached 74.26% accuracy, which is low but still satisfactory, and accuracy climbs to 83.02% when the sample size reaches 2,172. It is also worth noting that our classifiers are still early-stage and open to improvement, so these numbers can be regarded as a lower bound on classifier performance in this setup.
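The experiment above is a standard learning-curve measurement: train on subsets of increasing size, evaluate on a fixed test set. The following sketch reproduces the shape of that experiment on synthetic data with a scikit-learn linear classifier; the real experiment used COCO animal crops and an image classifier, so the dataset, model, and exact sizes here are stand-ins.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the labeled crops.
X, y = make_classification(n_samples=3000, n_features=20,
                           n_informative=10, n_classes=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Train on subsets of increasing size; evaluate on the same test set.
results = {}
for n in (250, 1000, len(X_train)):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])
    results[n] = accuracy_score(y_test, model.predict(X_test))
    print(f"{n:>5} training samples -> accuracy {results[n]:.3f}")
```

Plotting `results` yields the accuracy-vs-sample-size curve discussed above: accuracy generally rises with training size and flattens once the model has seen enough examples.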

Sensitivity to Bounding Box Size

Until now, we have used COCO bounding-box annotations directly as annotator input. In this section, we discuss the effect of bounding-box tightness on accuracy. In the figure below, the classifier was tested with various tightness levels. Unsurprisingly, with extremely tight or extremely broad bounding boxes, the classifier begins to make mistakes. On the other hand, the classifier is tolerant of bounding-box broadness up to a certain level.

Sensitivity to Bounding Box Size
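To vary tightness in an experiment like this, one can scale each ground-truth box around its center and clip the result to the image. This is a minimal sketch of that manipulation; `scale_box` is a hypothetical helper, not a function from our codebase.

```python
def scale_box(box, factor, img_w, img_h):
    """Expand or shrink a bounding box around its center.

    factor < 1 simulates an overly tight box, factor > 1 an overly
    broad one; the result is clipped to the image boundaries.
    `box` is (x, y, w, h) in pixel coordinates.
    """
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2
    nw, nh = w * factor, h * factor
    nx = max(0.0, cx - nw / 2)
    ny = max(0.0, cy - nh / 2)
    nw = min(nw, img_w - nx)
    nh = min(nh, img_h - ny)
    return (nx, ny, nw, nh)

print(scale_box((40, 40, 20, 20), 2.0, 100, 100))  # (30.0, 30.0, 40.0, 40.0)
print(scale_box((0, 0, 50, 50), 2.0, 100, 100))    # clipped to the image
```

Cropping with the scaled boxes and re-running the classifier at each `factor` produces the tightness-vs-accuracy curve shown in the figure.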

Object Detection vs. Magic Prediction

After seeing Magic Prediction, you may wonder: rather than a class predictor, could an object detection model be used directly?

In general, object detection models are more complex than classifiers, which makes them more data-hungry: you need more annotated data to reach a given level of performance. In addition, the runtime complexity of object detection models is higher.

What’s Next?

We are still working on improving our image classifiers to reach the best possible accuracy. As a next feature, we plan to add the ability to detect out-of-distribution cases. In addition, we plan to combine the class predictor with our other AI assistance tools.

Author: Onur Aydın
Editor: Lorenzo Gravina
Technical Editor: Balaj Saleem

Originally published on January 28, 2022.



