Yinghuan Shi, Qian Wang*
1 State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
2 Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
Key words: medical imaging; artificial intelligence; deep learning; image segmentation; image registration; image detection; image recognition
Abstract Medical imaging is being reshaped by artificial intelligence (AI) and is progressing rapidly toward the future. In this article, we review the recent progress of AI-enabled medical imaging. First, we briefly review the background and evolution of AI. Then, we discuss the recent successes of AI in different medical imaging tasks, especially image segmentation, registration, detection, and recognition. We also illustrate several representative applications of AI-enabled medical imaging that show its advantages in real scenarios, including lung nodule analysis in chest CT, neuroimaging, mammography, etc. Finally, we discuss human-machine interaction. We believe that, in the future, AI will not only change the traditional way of medical imaging but also improve the clinical routines of medical care and enable many aspects of the medical society.
ARTIFICIAL intelligence (AI) has grown rapidly in recent years and has pushed many applications into clinical practice. AI has demonstrated a powerful capability to assist clinicians in numerous scenarios that cover the entire pipeline of the current healthcare system. While AI is still developing, the combination of bigger data, stronger hardware, and more intelligent algorithms will eventually lead to mature commercial products, which may substantially influence all aspects of healthcare, including radiology, pathology, clinical decision making, etc.
AI-enabled medical imaging has already become a focus under the spotlight. A major engine underlying this wave of AI is computer vision, and medical imaging sits at a natural confluence of computer vision, image processing, pattern recognition, and the great interest of the medical society. Many governments around the world have announced blueprints, in collaboration with industry giants and prestigious research institutes, to promote AI-enabled medical imaging and image analysis. It is widely believed that a huge market will emerge with the rapid development and maturation of next-generation medical imaging firmware and software enabled by the powerful tool of AI.
AI-enabled products will eventually change the way daily diagnosis and treatment are conducted in hospitals and clinics. For example, radiologists will be able to examine and quantify image data from an unprecedented perspective with the aid of AI. In addition, the involvement and commitment of clinicians in AI-enabled healthcare is not restricted to the role of users. Many clinical institutes, as well as medical practitioners, have devoted themselves to the planning, development, validation, and application of AI-enabled solutions, and intelligent medical imaging is currently a leading force in this ongoing revolution.
The birth of AI can be dated back to the Dartmouth meeting in 1956. However, the latest wave of AI that has amazed the entire world can be mostly attributed to the introduction of deep learning. Nowadays, deep learning algorithms and their numerous variants have been applied to many scenarios. For example, one may apply a convolutional neural network (CNN) to natural language processing (NLP), such that computers can understand and generate written or verbal texts. In hospitals, doctors can benefit significantly from such NLP intelligence: there is no need to input the medical record by physically writing down the words or typing on a keyboard; instead, one may dictate while the computer records by converting speech to text automatically.
Other applications can also be easily noticed in clinical practice. With a huge amount of clinical data coming from highly diverse sources, it is typically challenging for human experts to fuse the data effectively and efficiently. AI, however, provides a data-driven way to mine the data and to reveal intrinsic patterns associated with individual diseases and their subtypes. New ways are thus paved by AI for precise diagnosis and personalized treatment. Patients also benefit from the better service presented to them: one may find it easier and more comfortable to streamline a remedy plan that is initially proposed by an AI agent with high precision and much reduced cost.
In the field of medical imaging particularly, AI has reshaped many aspects of algorithm design and application implementation. An ultimate goal of computer vision is to understand visual data (e.g., images, videos) automatically. In the past several decades, image understanding could only be conducted within a relatively small and less intelligent scope. Although tremendous efforts were made to extract and manipulate object contours or silhouettes in 2D/3D images, the revolution did not happen until machine learning became ready. Nowadays, researchers routinely apply machine learning, and in particular deep learning, to solve individual computer vision and medical image problems.
Image segmentation is a major battlefield where deep learning has achieved great successes.1,2 The task of segmentation is often perceived as pixel-level (or voxel-level in the 3D case) classification, assigning individual pixels in an image to different categories. With precise segmentation, one may quantify the appearance information rendered in a specific region of interest (ROI), which also facilitates many subsequent treatments, e.g., radiotherapy planning and image-based guidance in interventional therapy. The initial version of the CNN is well known for its classification capability. However, that architecture may not be a proper choice for image segmentation. To this end, the fully convolutional network (FCN) has become a state-of-the-art solution, in which the input image generates its corresponding label map in an end-to-end style. Nowadays, many researchers adopt U-Net or V-Net for medical image segmentation; these architectures have proven their merits especially on the relatively small datasets typical of medical images.
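The pixel-level classification view of segmentation can be made concrete with a minimal NumPy sketch (the function names here are illustrative, not from any cited work): per-pixel class scores are collapsed into a label map by taking the highest-scoring class, and the result is compared to a reference mask with the Dice coefficient, a standard overlap measure in medical image segmentation.

```python
import numpy as np

def scores_to_label_map(scores):
    """Collapse per-pixel class scores of shape (H, W, C) into a
    label map of shape (H, W) by picking the best class per pixel."""
    return np.argmax(scores, axis=-1)

def dice_score(pred, target, label=1):
    """Dice overlap between predicted and reference masks for one label:
    2 * |P ∩ T| / (|P| + |T|), ranging from 0 (no overlap) to 1."""
    p = (pred == label)
    t = (target == label)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(p, t).sum() / denom
```

Real segmentation networks such as U-Net produce exactly such score maps; Dice (or a differentiable soft variant) is also widely used directly as a training loss for medical images.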
While segmentation and registration both belong to low-to-middle level processing of visual data, the latter faces more challenges.3,4 A typical registration algorithm takes two (or more) input images, and the output is an optimized spatial transformation. One of the two input images, the moving one, is warped to the space defined by the second input image (the fixed one). The two images thus establish anatomical correspondences and become quantitatively comparable in a unified space. The major difficulty of image registration is that no ground-truth supervision can be acquired for learning-based registration. The registration task, meanwhile, suffers from the curse of very high dimensionality when optimizing the spatial transformation. To this end, unsupervised learning frameworks are becoming more and more popular. By embedding the registration quality metric as the loss function, deep networks have demonstrated their power in encoding large yet complex spatial transformations. That is, the network learns to minimize the loss function, while the transformation is generated through the network.
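The idea of using the registration quality metric itself as the objective, with no ground-truth transformation, can be illustrated with a deliberately simplified sketch. Real unsupervised methods train a network to predict dense deformations; here, as a stand-in assumption, the transformation space is reduced to integer translations searched exhaustively, with mean squared error as the similarity loss.

```python
import numpy as np

def warp(moving, shift):
    """Warp the moving image by an integer translation (toy transform;
    real methods predict dense, non-rigid deformation fields)."""
    return np.roll(moving, shift, axis=(0, 1))

def register_translation(fixed, moving, max_shift=3):
    """Find the translation minimizing the mean-squared-error loss,
    i.e. the image similarity metric embedded as the objective."""
    best, best_loss = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            loss = np.mean((warp(moving, (dy, dx)) - fixed) ** 2)
            if loss < best_loss:
                best, best_loss = (dy, dx), loss
    return best, best_loss
```

The exhaustive loop stands in for what a deep network does by gradient descent: no ground-truth transformation is ever supplied, only a loss measuring how well the warped moving image matches the fixed one.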
Besides low- and middle-level processing, high-level vision tasks, such as detection and recognition, are always attractive to researchers and applications.5,6 A detection model is required to identify the location of a lesion while keeping the false positive rate moderate. If a lesion has been successfully identified, a preliminary diagnosis can be attained for the patient. Detection may not always be necessary for diagnosis: an AI system is capable of end-to-end learning, encoding and decoding disease-related visual cues from the images directly. It is worth noting that, although the designs for detection and recognition are similar to traditional computer-assisted diagnosis systems, AI has reshaped the inside of the architecture. In particular, an AI system can work without relying on arbitrarily designed image features, since the network is able to optimize its kernel parameters spontaneously. Meanwhile, many studies have also highlighted the importance of clinical domain knowledge. One may translate such priors into feature representations. As deep networks provide flexible ways to fuse external features as inputs, it is often found that an AI system improves in detection and recognition after incorporating expert knowledge.
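The interplay of localization and false-positive control can be sketched with classic template matching rather than a deep detector (all names and the scoring scheme below are illustrative assumptions): candidate locations are scored against a small template, and the score threshold trades sensitivity against the false-positive rate, just as the confidence threshold does in a learned detector.

```python
import numpy as np

def detect_candidates(image, template, threshold=0.8):
    """Slide a small template over the image and return (y, x, score)
    for locations whose normalized correlation exceeds the threshold.
    Raising the threshold lowers the false-positive rate at the cost
    of sensitivity; lowering it does the opposite."""
    th, tw = template.shape
    t = template / np.linalg.norm(template)
    hits = []
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            patch = image[y:y + th, x:x + tw]
            norm = np.linalg.norm(patch)
            score = float((patch * t).sum() / norm) if norm > 0 else 0.0
            if score > threshold:
                hits.append((y, x, score))
    return hits
```

A deep detector replaces the hand-specified template with learned convolution kernels, but the output still takes this form: candidate locations plus confidence scores to be thresholded.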
Lung nodule detection, segmentation, and classification are at the frontline of AI entering the field of medical imaging.7,8 The anatomical structures pose great challenges to image reading. The thin slice thickness usually required incurs a huge amount of image data, making it hard for radiologists to screen for lung cancer and make diagnoses easily. AI provides a low-cost yet efficient alternative to human experts for diagnosing lung cancer, as well as several other diseases that can be captured by chest CT. There are many reports in the literature proposing models that demonstrate superior performance in identifying nodules of different sizes and severity in CT images. However, most of these methods are still pending approval from regulators. From a technical perspective, the robustness of the methods, including sensitivity and specificity, may be challenged, especially given the high variation of clinical data in acquisition and preparation. Meanwhile, the methods and products have to be seamlessly combined with the clinical pipeline, posing high demands on their adaptive capability.
Neuroimaging is a major sub-field of medical imaging.9 Current AI tools targeting neuroimages mostly focus on image analysis and diagnosis. In particular, precise ROI segmentation in reference to subtle neural structures and functions is still a popular direction. Deep learning has shown the capability of fast full-brain parcellation, labeling ROIs of different scales automatically. Based on the ROI parcellation, clinicians are able to quantify brain structures and functions in individual regions. The measures can be further handled by an AI system that enables multi-source data fusion, such that different imaging modalities, as well as disease symptoms and lab test reports, can be combined in clinical routines. From a disease perspective, much attention has been devoted to psychiatric diseases and progressive degenerative diseases. Studies of stroke, trauma, and brain tumors are also progressing rapidly.
Breast cancer is a leading fatal cancer in females.10 It has been verified that screening through mammography can significantly reduce the mortality of breast cancer. While ultrasound is still the dominant tool for breast cancer screening and diagnosis in China, mammography is developing fast, and its role has been well recognized around the world. Interpretation of mammography is highly dependent on the expertise of radiologists, who suffer from heavy workloads. To this end, AI tools that can help read mammography data are essentially important. Several methods have been developed for this purpose, including detection of potential lesions through augmented feature representation, category determination in the Breast Imaging Reporting and Data System (BI-RADS), etc. An AI system can significantly improve efficiency in the radiology department by reducing the time cost of mammography reading. It can also make it possible to deploy high-quality healthcare service to remote regions, where training radiologists may be very costly.
Bone diseases such as osteoarthritis11 are drawing more and more attention, since they pose heavy burdens on the lives of the elderly. Current studies target disease diagnosis, which relies on many measures acquired from multi-modal images. To this end, sophisticated image segmentation and detection are often required. Moreover, concerning osteoarthritis, it is necessary to stage the disease, while the clinically adopted criteria are often challenging for local hospitals. An AI-enabled solution can obviously help to screen patients with early osteoarthritis, who can then benefit from proper treatment. On the other hand, AI techniques can also bridge the gap between medical images and surgery. Whereas surgery is a major therapeutic way to handle symptoms of bones and joints, with AI one can conveniently establish anatomical models from images. The resulting models can help surgeons plan the treatment and guide the intervention.
AI technology has penetrated multiple service scenarios, and healthcare is no exception. However, how to put "medical + AI" into real use is still a problem that the whole society is currently exploring. The goal of AI is not to replace human beings. Medical AI therefore cannot replace doctors; rather, it assists doctors in clinical treatment, helps doctors learn new knowledge, and provides tools for image handling and intervention. Meanwhile, although AI does not aim to replace doctors, those doctors who are unwilling or unable to use AI effectively will most likely be replaced by those who are expert at leveraging AI to improve their professional level.
Regarding the changes brought by AI, in addition to improving the accuracy of the doctor's judgment, the most important effect is to improve the doctor's confidence. Assuming the AI system can reach a professional level equivalent to a senior doctor, a human expert will be more confident in delivering a diagnosis report, especially when the judgement is supported by the AI system. Conversely, when human and machine disagree (for example, when the machine believes there is a pulmonary nodule on chest CT and the doctor does not), the doctor will gather more clinical data about the patient and make a decision after careful deliberation.
The interaction between human experts and machines needs to be tuned over a long period of time. The only and ultimate goal is to deliver high-quality healthcare service to patients. Therefore, the introduction of AI should not alter or impair the effectiveness of the ongoing clinical pipeline. Meanwhile, doctors act as teachers to the AI system, supervising the intelligent agent and correcting errors whenever needed. The AI system, which can learn non-stop, improves its capability, accuracy, and robustness through this human-machine interaction.
Medical imaging is being reshaped by AI and is progressing rapidly toward the future. On one hand, AI is changing the traditional way of imaging, making it more accurate and convenient to acquire big image data at reduced cost for the healthcare system. On the other hand, AI is improving clinical routines, as physicians are expected to work with assistance from AI and deliver better healthcare service. In the future, AI will enable and reshape many aspects of the medical society, including radiology. Patients will receive higher-quality healthcare service from doctors, who also benefit from AI in their professional careers.
The authors declared no conflicts of interest.
REFERENCES
1. Wang L, Nie D, Li GN, et al. Benchmark on automatic 6-month-old infant brain segmentation algorithms: the iSeg-2017 challenge. IEEE Trans Med Imaging 2019; Epub 2019 Feb 27. doi: 10.1109/TMI.2019.2901712.
2. Dolz J, Gopinath K, Yuan J, et al. HyperDense-Net: a hyper-densely connected CNN for multi-modal image segmentation. IEEE Trans Med Imaging 2018; 38(5):1116-26. doi: 10.1109/TMI.2018.2878669.
3. Cao XH, Yang JH, Zhang J, et al. Deformable image registration using cue-aware deep regression network. IEEE Trans Biomed Eng 2018; 65(9):1900-11. doi: 10.1109/TBME.2018.2822826.
4. Wang Q, Lu L, Wu DJ, et al. Automatic segmentation of spinal canals in CT images via iterative topology refinement. IEEE Trans Med Imaging 2015; 34(8):1694-704. doi: 10.1109/TMI.2015.2436693.
5. Shi YH, Suk HI, Gao Y, et al. Leveraging coupled interaction for multi-modal Alzheimer's disease diagnosis. IEEE Trans Neural Networks Learning Syst 2019; Epub 2019 Mar 20. doi: 10.1109/TNNLS.2019.2900077.
6. Lian CF, Liu MX, Zhang J, et al. Hierarchical fully convolutional network for joint atrophy localization and Alzheimer's disease diagnosis using structural MRI. IEEE Trans Pattern Anal Machine Intell 2018; Epub 2018 Dec 21. doi: 10.1109/TPAMI.2018.2889096.
7. Xie YT, Xia Y, Zhang JP, et al. Knowledge-based collaborative deep learning for benign-malignant lung nodule classification on chest CT. IEEE Trans Med Imaging 2018; 38(4):991-1004. doi: 10.1109/TMI.2018.2876510.
8. Farag AA, Munim HEAE, Graham JH. A novel approach for lung nodules segmentation in chest CT using level sets. IEEE Trans Image Processing 2013; 22(12):5202-13. doi: 10.1109/TIP.2013.2282899.
9. Torre LA, Islami F, Siegel RL, et al. Global cancer in women: burden and trends. Cancer Epidemiol Biomarkers Prev 2017; 26(4):444-57. doi: 10.1158/1055-9965.
10. Larobina M, Murino L. Medical image file formats. J Digit Imaging 2014; 27(2):200-6. doi: 10.1007/s10278-013-9657-9.
11. Moskowitz RW. The burden of osteoarthritis: clinical and quality-of-life issues. Am J Manag Care 2009; 15(8 Suppl):S223-9.
Chinese Medical Sciences Journal, 2019, Issue 2