Medical Image Segmentation: A Complete Guide

Ango AI
7 min read · Jul 29, 2022

Object segmentation is the process of extracting a pixel-wise mask of an object or region of interest from an image. These masks are typically used to train models to detect and localize similar objects. Essentially, segmentation assigns a label to each pixel in an image, thereby sorting the pixels into classes. Medical image segmentation is simply this concept applied to the medical domain.

Commonly, medical image segmentation entails segmenting regions or objects of medical interest, such as organs, bones, and tumors.

Medical Image Segmentation of an Abdomen CT Scan.
From Top Left to Bottom Right: Axial, Sagittal, Coronal, and 3D Projections
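To make "a label for each pixel" concrete, a segmentation mask can be pictured as a 2D grid of class IDs, one per pixel. A minimal sketch (the class IDs and the 5×5 grid are invented for illustration):

```python
# A segmentation mask is simply a 2D grid of class labels, one per pixel.
# The class IDs here are hypothetical: 0 = background, 1 = organ, 2 = tumor.
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 2, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

# Counting pixels per class is useful for, e.g., estimating tumor area.
from collections import Counter
counts = Counter(pixel for row in mask for pixel in row)
print(counts[1], counts[2])  # prints: 8 1
```

In practice masks are stored as image arrays (e.g. NumPy arrays), but the idea is the same: each pixel carries exactly one class label.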

For most machine learning applications, segmentation is first performed by humans, who label the pixels of an image and assign them to categories. This data can then be used to train models that segment objects they have not encountered before.

In this article, we will go through the complete process of segmenting medical data. If you need a more foundational explanation of what labeling medical data means, you can check our complete guide on labeling medical images; if you need data, see our guide on creating synthetic medical data.

And if you are just looking for software to label your medical data with, Ango Hub is free and fully supports all medical workflows.

How to Perform Medical Image Segmentation

Polygons and Segmentation Masks

Labeling each pixel one by one is an incredibly tedious task, and for practical purposes, not how humans annotate (segment) objects in a medical image.

A more efficient way to segment objects is through the use of contour (edge) points, in so-called 'polygonal annotation'. The annotator draws a polygon that encloses the object, and the pixels inside this polygon are assigned to that class.
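Under the hood, a polygon annotation is converted ("rasterized") into a pixel mask by testing which pixel centers fall inside the contour. A minimal sketch using the classic ray-casting point-in-polygon test (the square contour and grid size are made-up examples):

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: is the point (x, y) inside the polygon?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross edge (x1,y1)-(x2,y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def polygon_to_mask(poly, width, height):
    """Rasterize a polygon's contour points into a binary mask
    by testing each pixel center."""
    return [[1 if point_in_polygon(x + 0.5, y + 0.5, poly) else 0
             for x in range(width)]
            for y in range(height)]

# A square polygon as an annotator might draw it (hypothetical contour).
mask = polygon_to_mask([(1, 1), (4, 1), (4, 4), (1, 4)], 6, 6)
```

Production tools use optimized scanline fills (e.g. Pillow's `ImageDraw.polygon`), but the principle is identical: a handful of contour points expands into a dense pixel-wise mask.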

Another way to tackle pixel-wise annotation manually is by drawing segmentation masks, similar to how one would paint with a brush. To segment the object of interest, the annotator 'paints' over all the pixels that belong to it.
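The brush approach can be pictured as stamping a filled circle onto the mask at each position along the stroke. A toy sketch (the grid size, brush radius, and stroke path are invented for illustration):

```python
def paint(mask, cx, cy, radius, label=1):
    """'Paint' a circular brush stamp of the given class label onto a mask."""
    for y in range(len(mask)):
        for x in range(len(mask[0])):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                mask[y][x] = label

# Start from an empty mask and drag the brush across the object.
mask = [[0] * 10 for _ in range(10)]
for cx, cy in [(3, 5), (4, 5), (5, 5)]:  # hypothetical brush path
    paint(mask, cx, cy, radius=2)
```

Each stamp overlaps the previous one, so a continuous drag produces a connected painted region, just as in an image editor.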

Automated Interactors for Medical Image Segmentation

Medical objects can be incredibly complex, and manually clicking each edge of a polygon or painting over each pixel can be tedious. For this reason, at Ango we employ the following two interactors to speed up the process manifold:


FrameCut

FrameCut segmenting a lung

The AI Assistance tool FrameCut is essentially a class-agnostic object segmentation tool: it takes a region of interest in the form of a bounding box, plus a couple of positive and negative clicks, and segments the object.

This considerably reduces the interaction needed to segment complex medical objects, allowing the annotator to label more data in less time. In the image above, you can observe that the organ is segmented with 4 clicks, where it would normally have taken up to 20.
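FrameCut itself is AI-based and proprietary, but the interaction model — a bounding box plus positive and negative clicks — can be illustrated with a classical stand-in such as intensity-based region growing: grow outward from the positive clicks, stay inside the box, and treat negative clicks as intensity barriers. A toy sketch (not Ango's actual algorithm; the image values and tolerance are invented):

```python
from collections import deque

def grow_region(image, bbox, pos_clicks, neg_clicks, tol=10):
    """Toy stand-in for click-guided segmentation: grow a region from the
    positive clicks, staying inside the bounding box, over pixels whose
    intensity is close to the current region, while excluding pixels whose
    intensity is close to any negative click."""
    x0, y0, x1, y1 = bbox
    neg_vals = {image[y][x] for x, y in neg_clicks}
    mask = [[0] * len(image[0]) for _ in image]
    queue = deque(pos_clicks)
    for x, y in pos_clicks:
        mask[y][x] = 1
    while queue:
        x, y = queue.popleft()
        seed_val = image[y][x]
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if x0 <= nx <= x1 and y0 <= ny <= y1 and not mask[ny][nx]:
                val = image[ny][nx]
                if (abs(val - seed_val) <= tol
                        and all(abs(val - nv) > tol for nv in neg_vals)):
                    mask[ny][nx] = 1
                    queue.append((nx, ny))
    return mask

# A bright "organ" on a dark background (hypothetical intensities).
image = [[0,   0,   0,   0,   0],
         [0, 100, 100, 100,   0],
         [0, 100, 100, 100,   0],
         [0, 100, 100, 100,   0],
         [0,   0,   0,   0,   0]]
organ = grow_region(image, (0, 0, 4, 4), [(2, 2)], [(0, 0)])
```

Real interactors replace the intensity heuristic with a learned model, but the user-facing contract — a box and a few corrective clicks in, a full mask out — is the same.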

Smart Scissors

Intelligent Scissors segmenting the heart

The Smart Scissors tool, like FrameCut, is an automated way of segmenting an object. It works by sticking to the edges of the salient object: given a few anchor points, it forms the boundary of the object as the user hovers their mouse over it.

This also considerably reduces the annotator's interaction with the image, saving time. In the image above, using just 4–5 anchor points, the organ is fully segmented; doing so manually would have required more than 40 points (edges).
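Tools of this kind are generally variants of the classical "intelligent scissors" / live-wire idea: between two anchor points, the boundary is the minimum-cost path over a cost map that is cheap along image edges, found with Dijkstra's algorithm. A minimal sketch (the small cost map is invented; real tools derive costs from image gradients):

```python
import heapq

def livewire(cost, start, goal):
    """Minimum-cost path between two anchor points over a per-pixel cost
    map (low cost on object edges), via Dijkstra -- the core idea behind
    intelligent-scissors / live-wire boundary tools."""
    h, w = len(cost), len(cost[0])
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, (x, y) = heapq.heappop(heap)
        if (x, y) == goal:
            break
        if d > dist.get((x, y), float("inf")):
            continue  # stale heap entry
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h:
                nd = d + cost[ny][nx]
                if nd < dist.get((nx, ny), float("inf")):
                    dist[(nx, ny)] = nd
                    prev[(nx, ny)] = (x, y)
                    heapq.heappush(heap, (nd, (nx, ny)))
    # Walk back from the goal to reconstruct the boundary segment.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Cheap (edge-like) pixels along the top row and right column; the path
# snaps to them instead of cutting through the expensive interior.
cost = [[1, 1, 1, 1],
        [9, 9, 9, 1],
        [9, 9, 9, 1]]
path = livewire(cost, (0, 0), (3, 2))
```

As the user hovers, the tool simply re-runs this shortest-path query from the last anchor to the cursor, which is why the boundary appears to "stick" to the object in real time.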


Segmentation Interpolation

Organs being tracked across multiple frames

Often, medical images come in the form of volumes. These volumes consist of multiple slices (i.e. multiple still frames), and these frames are often temporally and spatially interrelated. Labeling multiple slices is a tedious task, as a single volume can contain hundreds of slices. Segmentation interpolation is another technique for speeding up annotation, by up to 40 times, since intermediate slices are labeled automatically.

In the image above, you can see the results of this on the kidney and another organ within a CT scan, with pixel-perfect mask accuracy maintained during tracking.
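One common way to interpolate a mask between two labeled slices — not necessarily the exact method used in Ango Hub — is to blend their signed distance maps and re-threshold at zero. A toy sketch using a BFS (Manhattan) distance transform; the two overlapping squares stand in for the same organ on two nearby slices:

```python
from collections import deque

def dist_to(mask, target):
    """Multi-source BFS distance (4-connectivity) from every pixel to the
    nearest pixel whose value equals `target`."""
    h, w = len(mask), len(mask[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if mask[y][x] == target:
                dist[y][x] = 0
                q.append((x, y))
    while q:
        x, y = q.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((nx, ny))
    return dist

def signed_distance(mask):
    """Negative inside the object, positive outside."""
    d_bg, d_fg = dist_to(mask, 0), dist_to(mask, 1)
    return [[-d_bg[y][x] if mask[y][x] else d_fg[y][x]
             for x in range(len(mask[0]))] for y in range(len(mask))]

def interpolate_slice(mask_a, mask_b, t=0.5):
    """Estimate the mask on an unlabeled slice between two labeled ones by
    blending their signed distance maps and re-thresholding at zero."""
    sa, sb = signed_distance(mask_a), signed_distance(mask_b)
    return [[1 if (1 - t) * sa[y][x] + t * sb[y][x] <= 0 else 0
             for x in range(len(mask_a[0]))] for y in range(len(mask_a))]

# The same organ on two slices, shifted by one pixel between them.
mask_a = [[1 if y < 3 and x < 3 else 0 for x in range(6)] for y in range(6)]
mask_b = [[1 if 1 <= y <= 3 and 1 <= x <= 3 else 0 for x in range(6)]
          for y in range(6)]
mask_mid = interpolate_slice(mask_a, mask_b)
```

With labels on, say, every tenth slice, this kind of interpolation fills in the slices in between, which is where the large annotation speed-ups come from.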

Some Useful Applications of Medical Image Segmentation

COVID-19 Detection and Localization

During the recent pandemic, the detection of COVID-19 from X-rays using deep learning was a highly active research area, and some very promising results came out of it.

By segmenting the lungs in X-rays of COVID-19 patients, researchers and engineers were able to distinguish positive from negative cases with high accuracy.

The following is an image from COVID-Net, an open-source deep learning initiative aiming to tackle the detection of COVID-19. The convolutional neural network returns segmented regions of interest.

COVID-19 Segmentation

For a deeper dive into COVID-Net, I recommend reading Linda Wang and Alexander Wong’s paper on the subject.

Abdomen Organ Detection

Abdomen Segmentation from TransUNet

As the name suggests, this problem involves segmenting the various organs within the abdomen. It typically targets the CT modality, segmenting organs across multiple slices.

Brain Tumor Segmentation

Brain Tumor Detection (Source)

The region of interest for this problem is the area affected by the tumor in the brain. Once this area is identified and segmented, various further studies can be performed. This greatly alleviates the burden on radiologists.

Skin Lesion

Skin Lesion Masks (Source)

This is an application in dermatology where the goal is to segment the area of skin that contains a lesion; the lesion can then be further classified as benign or malignant.

Cell Nuclei Segmentation

Cell Nuclei Segmentation (Source)

This application belongs to the field of microscopy and can be used for lab-related applications. The goal is to segment out the region containing nuclei of various types of cells. This can assist in disease detection, cell counting, and various other medical use cases.
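For example, once nuclei are segmented into a binary mask, cell counting reduces to labeling connected components. A minimal sketch (the small mask is invented for illustration):

```python
def count_nuclei(mask):
    """Count nuclei by labeling connected components (4-connectivity)
    in a binary segmentation mask, via iterative flood fill."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                count += 1          # found a new, unvisited nucleus
                seen[sy][sx] = True
                stack = [(sx, sy)]
                while stack:        # flood-fill the whole component
                    x, y = stack.pop()
                    for nx, ny in ((x + 1, y), (x - 1, y),
                                   (x, y + 1), (x, y - 1)):
                        if (0 <= nx < w and 0 <= ny < h
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((nx, ny))
    return count

# Three separate nuclei in a toy binary mask.
mask = [[1, 1, 0, 0, 1],
        [1, 0, 0, 0, 1],
        [0, 0, 1, 1, 0]]
n = count_nuclei(mask)
```

Libraries such as `scipy.ndimage` or OpenCV provide the same operation (connected-component labeling) at scale, along with per-component statistics like area and centroid.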

Deep Learning Frameworks for Medical Image Segmentation


U-Net

An especially popular neural network model in the medical domain is the U-Net. It follows an encoder-decoder architecture built on CNNs, extracting the features of importance from an image and outputting them as segmentation masks.

An extremely wide variety of medical applications use U-Net directly or a derivation of it; the majority of teams working on medical image segmentation interact with this architecture in one form or another.

The architecture of U-Net
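The characteristic U shape can be summarized as shape arithmetic: each encoder level halves the spatial resolution and doubles the channel count, the bottleneck sits at the bottom of the U, and the decoder mirrors the encoder while concatenating skip connections from the matching level. A sketch assuming 'same'-padded convolutions (the original paper used unpadded convolutions, so its exact sizes differ slightly; the defaults below are common but illustrative choices):

```python
def unet_shapes(size=256, channels=64, depth=4):
    """Feature-map (spatial size, channel count) pairs through a
    U-Net-style encoder-decoder, assuming 'same'-padded convolutions
    and 2x2 max-pooling between levels."""
    encoder = []
    s, c = size, channels
    for _ in range(depth):
        encoder.append((s, c))  # two convs at this resolution, then pool
        s, c = s // 2, c * 2
    bottleneck = (s, c)
    # The decoder mirrors the encoder: upsample, concatenate the skip
    # connection from the same level, and halve the channel count again.
    decoder = encoder[::-1]
    return encoder, bottleneck, decoder

enc, mid, dec = unet_shapes()
# enc: (256,64) -> (128,128) -> (64,256) -> (32,512); bottleneck (16,1024)
```

The skip connections are what make U-Net effective for segmentation: they reinject high-resolution spatial detail that would otherwise be lost in the downsampling path.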


V-Net

V-Net's architecture is fairly similar to U-Net's, and may even be considered a derivation of it. The key addition in V-Net, however, is its ability to handle 3D medical volumes rather than 2D images; accordingly, most operations within the architecture cater to the 3D nature of the inputs.

V-Net Architecture as presented by the authors (source)
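Besides its 3D operations, the V-Net paper is also known for training directly on the Dice overlap coefficient rather than per-pixel cross-entropy. A minimal sketch of the Dice computation on binary masks (the example masks are invented):

```python
def dice_coefficient(pred, target, eps=1e-6):
    """Dice overlap between a predicted and a ground-truth binary mask:
    2*|A ∩ B| / (|A| + |B|). V-Net popularized optimizing 1 - Dice
    directly as a training loss for medical segmentation, which copes
    well with the heavy class imbalance of small structures."""
    flat_p = [p for row in pred for p in row]
    flat_t = [t for row in target for t in row]
    intersection = sum(p * t for p, t in zip(flat_p, flat_t))
    return (2 * intersection + eps) / (sum(flat_p) + sum(flat_t) + eps)

# Two of three predicted foreground pixels overlap the ground truth.
pred   = [[1, 1, 0],
          [0, 1, 0]]
target = [[1, 1, 0],
          [0, 0, 1]]
score = dice_coefficient(pred, target)  # 2*2 / (3+3) = 2/3
```

In a real training loop the same formula is applied to soft (probability) outputs so it stays differentiable, and the loss is `1 - dice`.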


TransUNet

TransUNet Architecture as presented by the authors

This is another popular medical image segmentation architecture, combining the merits of Transformers and U-Net: the encoder of the U-Net is replaced by a transformer for feature extraction, while the decoder remains the same. The results on abdominal CT scans reported by the authors are promising.


Segmenting a CT Scan with Ango Hub

Medical image segmentation is at the core of AI and computer vision efforts in healthcare, and the research and applications of segmentation are sure to revolutionize the field in the near future. However, the fuel to get there is well-segmented (annotated) medical data.

To help teams achieve their turnkey medical AI projects, we ensure that their training data needs are met as efficiently and with the highest quality possible. For this, we utilize our array of AI assistance tools for medical data labeling, an extremely skilled and capable workforce, and a state-of-the-art labeling platform: Ango Hub.

If you’d like to see how your organization can get its data segmented to start its cutting-edge medical AI project, get in touch with us to talk about how to solve your data labeling needs.

Originally published on July 29, 2022.