Machine Learning is disrupting the world of medicine and healthcare, allowing professionals to diagnose patients better and faster than before. But to train ML models, we need high-quality annotated medical images. This is where medical image annotation comes into play.
ML has the potential to revolutionize the entire medical pipeline, from the moment the patient enters the institution to the moment they leave.
That said, training neural networks and ML models is no easy feat: it requires large quantities of high-quality labeled data. This is where medical data labeling comes in. In this article, we’ll tell you everything you need to know about it.
And if you already know all about labeling medical images, then you can sign up to Ango Hub and start labeling your medical images right away, completely for free, with no time limit.
What is Medical Image Annotation?
We have an entire blog post dedicated to explaining what medical data labeling is. But if you’re short on time, here’s the gist.
Medical data labeling is the process of annotating medical data, most commonly imaging data such as CT scans, X-rays, MRIs, ultrasounds, and retinal fundus photographs.
The healthcare industry also needs other types of data labeled, such as document data (medical records in PDF or image formats like PNG and JPG). Occasionally, medical data labeling extends to audio, such as patient conversations and cough recordings. This article, however, will focus on medical imaging.
AI teams then use this labeled data to train their ML models, which, once trained, can automatically detect the kinds of structures and findings they were shown during labeling.
Getting Medical Images Ready for Labeling
To give reliable results, a machine learning model needs to be trained on a substantial amount of data labeled to the highest quality. Often, the data, even in its unlabeled state, can be hard to come by. And even when you do have the data at hand, there are a few things to keep in mind.
Variety of Datasets
It’s important that your data does not all come from the same source, and that it does not all look the same. This is because we want the model to be as reliable as possible for all the different cases reality will throw at it.
If the model were trained on only a subset of the data, or on data that all looks very similar, it will not know what to do when shown data that looks different.

In short, use data that comes from different sources: different institutions, imaging devices, and settings.
The Dataset Vetting Process
We recommend splitting your dataset into three parts: training, validation, and testing. The training set will make up about 80% of your total data, with the validation and test sets sharing the remaining 20% (commonly 10% each).
First, train your model on the majority of the data, the training set. Once trained, evaluate the results on the smaller validation set.

Look at the results on the validation set. Are they to your satisfaction? Most likely, the model will need some tuning. Tweak its hyperparameters, then train and validate again. Repeat until you are satisfied with the validation results.
Once you’re happy with the validation results, evaluate your model on the test set. This will be your final model benchmark, so use it only once: repeatedly tuning against the test set would turn it into a second validation set.
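The splitting procedure above can be sketched in a few lines of Python. This is a minimal illustration using only the standard library; the file names and the 80/10/10 ratios are assumptions you would adapt to your own dataset.

```python
import random

def split_dataset(items, train_frac=0.8, val_frac=0.1, seed=42):
    """Shuffle a list of items and split it into train/validation/test subsets.

    The 80/10/10 ratios follow the common convention described above;
    adjust them to fit your own dataset.
    """
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

# Hypothetical example: 100 image identifiers
images = [f"scan_{i:03d}.dcm" for i in range(100)]
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))  # 80 10 10
```

Keeping the shuffle seeded means every rerun produces the same split, which makes your validation and test benchmarks comparable across experiments.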
Size of your Dataset
Recent developments in the world of ML have shown that quality is as important as quantity when it comes to training models. This means that a smaller but high-quality set will usually perform as well as, or better than, a larger set of lower quality.
That said, if you have the option to enlarge your dataset, we highly recommend doing so: more data at the same level of quality generally improves model results.
Format of your Dataset
The two most common medical imaging formats are DICOM and TIFF. DICOM in particular is the industry standard in radiology.
Both DICOM and TIFF files can optionally contain multiple images, or “slices,” and metadata regarding the patient and the image itself.
Good medical image annotation platforms will support both formats, and Ango Hub also has the option to automatically remove identifying information from both metadata and the image itself on upload.
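To make the de-identification idea concrete, here is a deliberately simplified sketch that blanks identifying fields in a metadata dictionary. Real DICOM headers contain hundreds of tags and should be handled with a dedicated library such as pydicom, following the DICOM standard's confidentiality profiles; the tag names and header below are illustrative only.

```python
# Simplified illustration of metadata de-identification.
# A production pipeline would use a dedicated DICOM library and the
# standard's de-identification profiles; tag names here are illustrative.
SENSITIVE_TAGS = {"PatientName", "PatientID", "PatientBirthDate", "InstitutionName"}

def deidentify(metadata: dict) -> dict:
    """Return a copy of the metadata with identifying tags blanked out."""
    return {
        tag: ("" if tag in SENSITIVE_TAGS else value)
        for tag, value in metadata.items()
    }

# Hypothetical example header
header = {
    "PatientName": "DOE^JANE",
    "PatientID": "123456",
    "Modality": "CT",
    "SliceThickness": "1.25",
}
clean = deidentify(header)
# identifying fields are blanked; clinical fields are kept
```

Note that blanking header tags is only half the job: identifying text can also be burned into the pixel data itself, which is why platforms that scrub both metadata and the image are preferable.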
What makes medical image annotation different from normal labeling?
Labeling images for healthcare is an altogether different endeavor compared to regular image annotation. Here are some of the key differences:
While ‘regular’ images are often freely available, or behind a standard NDA, medical imaging is usually protected by strict data processing agreements. This is mainly to protect the privacy of the patient. Obtaining medical imaging data is usually a longer process compared to other data types.
Regular images typically have a single layer, smaller file sizes, and a low bit depth (usually 8 bits per channel). Medical images often have multiple layers (slices), are much larger, and use a higher bit depth (often 12 or 16 bits).
Further, the labeler profiles for both will be different. While regular images are labeled by generalist annotators, medical imaging requires specialized healthcare experts. These experts are used to certain UI and UX paradigms. Therefore, when choosing a data labeling platform, it’s important to note whether medical professionals can easily use its keyboard controls and UI.
Picking the Right Medical Image Annotation Tool for You
DICOM viewers with annotation capabilities abound in the market. One notable open-source option, for example, is 3D Slicer.
DICOM viewing tools, however, are not built with ML model training in mind. Labels exported from these viewers often cannot be used for machine learning at all, because they lack instance IDs and structured, machine-readable export formats.
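For reference, a machine-learning-friendly export might look like the JSON structure below, loosely modeled on COCO-style formats. The exact schema varies from platform to platform, and the file names, IDs, and fields here are assumptions for illustration.

```python
import json

# Hypothetical structured annotation export, loosely modeled on COCO-style
# formats: each annotation carries a unique instance ID and a reference to
# its image, which is what makes it directly usable for model training.
export = {
    "images": [
        {"id": 1, "file_name": "chest_001.dcm", "width": 512, "height": 512}
    ],
    "annotations": [
        {
            "id": 101,                    # unique instance ID
            "image_id": 1,                # links the label to its image
            "category_id": 1,             # e.g. "lung"
            "bbox": [120, 80, 260, 300],  # x, y, width, height
        }
    ],
    "categories": [{"id": 1, "name": "lung"}],
}

# A structured export round-trips cleanly through serialization,
# so training pipelines can consume it without manual cleanup.
parsed = json.loads(json.dumps(export))
```

Compare this with a viewer's screenshot or flat overlay export: without the instance ID and the image reference, a training pipeline has no way to tell two adjacent lungs apart or to match a label back to its source scan.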
Automated image segmentation of lungs in a frontal thoracic X-Ray on Ango Hub.
To train and develop a neural network, you will need to use a professional medical imaging labeling tool.
The image annotation tool you choose will have to satisfy certain requirements:
- Does the tool support medical formats such as DICOM and TIFF?
- Does the tool support the labeling tools you are looking for?
- Is the tool’s UX easy to use and suitable for medical use?
- Is the tool’s export format easy to use in ML model training?
- Does the tool come with a medical data labeling service to supplement your own workforce?