Learning to Read Chest X-Rays: Recurrent Neural Cascade Model for Automated Image Annotation #21

Open wanghaisheng opened 7 years ago

wanghaisheng commented 7 years ago

Learning to Read Chest X-Rays: Recurrent Neural Cascade Model for Automated Image Annotation
Paper: https://arxiv.org/pdf/1603.08486v1.pdf
Code:

Interleaved text/image deep mining on a very large-scale radiology database http://www.cs.jhu.edu/~lelu/publication/cvpr15_0371.pdf

Interleaved Text/Image Deep Mining on a Large-Scale Radiology Database for Automated Image Interpretation https://arxiv.org/abs/1505.00670

Learning the Correlation Between Images and Disease Labels Using Ambiguous Learning http://link.springer.com/chapter/10.1007%2F978-3-319-24571-3_23

High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks http://link.springer.com/article/10.1007/s10278-016-9914-9

Diseño de técnicas de inteligencia artificial aplicadas a imágenes médicas de rayos X para la detección de estructuras anatómicas de los pulmones y sus alteraciones (Design of artificial intelligence techniques applied to X-ray medical images for detecting anatomical structures of the lungs and their alterations) https://riunet.upv.es/handle/10251/70103

Open-i: imaging, informatics, natural language processing, and multi-modal information retrieval – research and development https://pdfs.semanticscholar.org/117c/58682be513a5dfe9fe36f638a3f796207076.pdf

wanghaisheng commented 7 years ago

Abstract

Despite the recent advances in automatically describing image contents, their applications have been mostly limited to image caption datasets containing natural images (e.g., Flickr30k, MSCOCO). In this paper, we present a deep learning model to efficiently detect a disease from an image and annotate its contexts (e.g., location, severity, and the affected organs). We employ a publicly available radiology dataset of chest x-rays and their reports, and use its image annotations to mine disease names to train convolutional neural networks (CNNs). In doing so, we adopt various regularization techniques to circumvent the large normal-vs-diseased cases bias. Recurrent neural networks (RNNs) are then trained to describe the contexts of a detected disease, based on the deep CNN features. Moreover, we introduce a novel approach to use the weights of the already trained pair of CNN/RNN on the domain-specific image/text dataset to infer the joint image/text contexts for composite image labeling. Significantly improved image annotation results are demonstrated using the recurrent neural cascade model by taking the joint image/text contexts into account.

wanghaisheng commented 7 years ago

Background

Comprehensive image understanding requires more than single object classification. There have been many advances in the automatic generation of image captions to describe image contents, which is closer to a complete image understanding than classifying an image into a single object class. Our work is inspired by much of the recent progress in image caption generation [44, 54, 36, 14, 61, 15, 6, 62, 31], as well as some of the earlier pioneering work [39, 17, 16]. The former has substantially improved performance, largely due to the introduction of the ImageNet database [13] and to advances in deep convolutional neural networks (CNNs), which effectively learn to recognize images with a large pool of hierarchical representations. Most recent work also adapts recurrent neural networks (RNNs), using the rich deep CNN features to generate image captions. However, the applications of the previous studies were limited to natural image caption datasets such as Flickr8k [25], Flickr30k [65], or MSCOCO [42], which can be generalized from ImageNet. Likewise, there have been continuous efforts and progress in the automatic recognition and localization of specific diseases and organs, mostly on datasets where target objects are explicitly annotated [50, 26, 55, 46, 40, 49]. Yet, learning from medical image text reports and generating annotations that describe diseases and their contexts has been very limited. Nonetheless, providing a description of a medical image's content, similar to what a radiologist would describe, could have a great impact. A person can better understand a disease in an image if it is presented with its context, e.g., where the disease is, how severe it is, and which organ is affected. Furthermore, a large collection of medical images can be automatically annotated with the disease context, and the images can be retrieved based on their context, with queries such as "find me images with pulmonary disease in the upper right lobe".

In this work, we demonstrate how to automatically annotate chest x-rays with diseases along with descriptions of a disease's context, e.g., location, severity, and the affected organs. A publicly available radiology dataset is exploited, which contains chest x-ray images and reports published on the Web as a part of the OpenI [2] open source literature and biomedical image collections. An example of a chest x-ray image, report, and annotations available on OpenI is shown in Figure 1.

A common challenge in medical image analysis is data bias. When considering the whole population, diseased cases are much rarer than healthy cases, which is also true in the chest x-ray dataset used. Normal cases account for 37% (2,696 images) of the entire dataset (7,284 images), compared to the most frequent disease case, "opacity", which accounts for 12% (840 images), and the next most frequent, "cardiomegaly", which accounts for 9% (655 images). In order to circumvent the normal-vs-diseased cases bias, we adopt various regularization techniques in CNN training.

In analogy to the previous works using ImageNet-trained CNN features for image encoding and RNNs to generate image captions, we first train CNN models with one disease label per chest x-ray inferred from the image annotations, e.g., "calcified granuloma" or "cardiomegaly". However, such single disease labels do not fully account for the context of a disease. For instance, "calcified granuloma in right upper lobe" would be labeled the same as "small calcified granuloma in left lung base" or "multiple calcified granuloma".
Inspired by the ideas introduced in [28, 64, 27, 62, 60], we employ the already trained RNNs to obtain the context of annotations, and recurrently use this to infer the image labels with contexts as attributes. Then we re-train CNNs with the obtained joint image/text contexts and generate annotations based on the new CNN features. With this recurrent cascade model, image/text contexts are taken into account for CNN training (images with "calcified granuloma in right upper lobe" and "small calcified granuloma in left lung base" will be assigned different labels), to ultimately generate better and more accurate image annotations.

wanghaisheng commented 7 years ago

Related Work

This work was initially inspired by the early work in image caption generation [39, 17, 16], to which we bring more recently introduced ideas of using CNNs and RNNs [44, 54, 36, 14, 61, 15, 6, 62, 31] to combine recent advances in computer vision and machine translation. We also exploit the concept of leveraging mid-level RNN representations to infer image labels from the annotations [28, 64, 27, 62, 60]. Methods for mining and predicting labels from radiology images and reports were investigated in [51, 52, 57]. However, the image labels were mostly limited to disease names and did not contain much contextual information. Furthermore, the majority of cases in those datasets were diseased cases. In reality, most cases are normal, so it is a challenge to detect the relatively rarer diseased cases within such unbalanced data. Mining images and image labels from large collections of photo streams and blog posts on the Web was demonstrated in [34, 33, 35], where images could be searched with natural language queries. Associating neural word embeddings and deep image representations was explored in [37], but generating descriptions from such image/text pairs or image/word embeddings has not yet been demonstrated. Detecting diseases from x-rays was demonstrated in [3, 45, 29], classifying chest x-ray image views in [63], and segmenting body parts in chest x-rays and computed tomography in [5, 21]. However, learning image contexts from text and re-generating image descriptions similar to what a human would describe has not yet been studied. To the best of our knowledge, this is the first study mining a radiology image and report dataset, not only to classify and detect images but also to describe their context.

wanghaisheng commented 7 years ago

Data

We use a publicly available radiology dataset of chest x-rays and reports that is a subset of the OpenI [2] open source literature and biomedical image collections. It contains 3,955 radiology reports from the Indiana Network for Patient Care, and 7,470 associated chest x-rays from the hospitals' picture archiving systems. The entire dataset has been fully anonymized via an aggressive anonymization scheme, which achieved 100% precision in de-identification; however, a few findings have been rendered uninterpretable. More details about the dataset and the anonymization procedure can be found in [11], and an example case of the dataset is shown in Figure 1.

Each report is structured into comparison, indication, findings, and impression sections, in line with a common radiology reporting format for diagnostic chest x-rays. In the example shown in Figure 1, we observe an error resulting from the aggressive automated de-identification scheme: a word possibly indicating a disease was falsely detected as personal information and was thereby "anonymized" as "XXXX". While radiology reports contain comprehensive information about the image and the patient, they may also contain information that cannot be inferred from the image content. For instance, in the example shown in Figure 1, it is probably impossible to determine that the image is of a Burmese male.

On the other hand, manual annotation of MEDLINE® citations with controlled vocabulary terms (Medical Subject Headings, MeSH® [1]) is known to significantly improve the quality of image retrieval results [20, 22, 10]. MeSH terms for each radiology report in OpenI (available for public use) are annotated according to the process described in [12]. We use these to train our model. Nonetheless, it is impossible to assign a single image label based on MeSH and train a CNN to reproduce them, because MeSH terms seldom appear individually when describing an image. The twenty most frequent MeSH terms appear together with other terms in more than 60% of the cases. Normal cases (term "normal"), on the contrary, do not have any overlap, and account for 37% of the entire dataset. The thirteen most frequent MeSH terms appearing more than 180 times are provided in Table 1, along with the total number of cases in which they appear, the number of cases in which they overlap with other terms, and the overlap percentages.

The x-ray images are provided in Portable Network Graphics (PNG) format, with sizes varying from 512×420 to 512×624. We rescale all CNN input training and testing images to a size of 256×256.
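
A minimal preprocessing sketch (not from the paper) of the rescaling step described above: resize the OpenI chest x-ray PNGs to 256×256. The directory layout, grayscale conversion, and interpolation choice are illustrative assumptions.

```python
# Rescale OpenI chest x-ray PNGs to 256x256, as described above.
from pathlib import Path
from PIL import Image

def rescale_xrays(src_dir: str, dst_dir: str, size: int = 256) -> None:
    """Resize every PNG under src_dir to size x size and save it under dst_dir."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for png in Path(src_dir).glob("*.png"):
        img = Image.open(png).convert("L")              # x-rays are single channel
        img = img.resize((size, size), Image.BILINEAR)  # e.g. 512x420 -> 256x256
        img.save(dst / png.name)

# rescale_xrays("openi/png", "openi/png_256")   # paths are illustrative
```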

wanghaisheng commented 7 years ago

Disease Labels

The CNN-RNN based image caption generation approaches [44, 54, 36, 14, 61, 15, 6, 62, 31] require a well-trained CNN to encode input images effectively. Unlike natural images, which can simply be encoded by ImageNet-trained CNNs, chest x-rays differ significantly from ImageNet images. In order to train CNNs with chest x-ray images, we sample frequent annotation patterns with less overlap for each image, in order to assign an image label to each chest x-ray and train with the cross-entropy criterion. This is similar to the previous works [51, 52, 57], which mine disease labels of images from their annotation text (radiology reports). We find 17 unique patterns of MeSH term combinations appearing in 30 or more cases. This allows us to split the dataset into training/validation/testing cases as 80%/10%/10% and place at least 10 cases each in the validation and testing sets. They include the terms shown in Table 1, as well as scoliosis, osteophyte, spondylosis, and fractures/bone. MeSH terms appearing frequently but without unique appearance patterns include pulmonary atelectasis, aorta/tortuous, pleural effusion, cicatrix, etc.; they often appear with other disease terms (e.g., consolidation, airspace disease, atherosclerosis). We retain about 40% of the full dataset with this disease image label mining; the annotations for the remaining 60% of images are more complex (and it is therefore difficult to assign them a single disease label).
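
A sketch of the label-mining and splitting step described above: keep MeSH term combinations occurring in 30 or more cases and split those cases roughly 80%/10%/10%. The `cases` mapping (image id to a normalized tuple of MeSH terms) is a hypothetical input format, not the paper's data structure.

```python
# Mine frequent MeSH combination patterns (>= 30 cases) and split 80/10/10.
import random
from collections import Counter, defaultdict

def mine_labels(cases, min_count=30, seed=0):
    pattern_counts = Counter(cases.values())
    frequent = {p for p, n in pattern_counts.items() if n >= min_count}

    by_pattern = defaultdict(list)
    for image_id, pattern in cases.items():
        if pattern in frequent:
            by_pattern[pattern].append(image_id)

    rng = random.Random(seed)
    splits = {"train": [], "val": [], "test": []}
    for ids in by_pattern.values():
        rng.shuffle(ids)
        n_hold = max(1, len(ids) // 10)              # ~10% each for val and test
        splits["val"] += ids[:n_hold]
        splits["test"] += ids[n_hold:2 * n_hold]
        splits["train"] += ids[2 * n_hold:]
    return frequent, splits
```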

wanghaisheng commented 7 years ago

Image Classification with CNNs

We use the aforementioned 17 unique disease annotation patterns (those in Table 1, plus scoliosis, osteophyte, spondylosis, and fractures/bone) to label the images and train CNNs. For this purpose, we adopt various regularization techniques to deal with the normal-vs-diseased cases bias. For our default CNN model we choose the simple yet effective Network-In-Network (NIN) [41] model, because it is small in size, fast to train, and achieves similar or better performance compared to the most commonly used AlexNet model [38]. We then test whether our data can benefit from a more complex state-of-the-art CNN model, i.e., GoogLeNet [58]. Among the 17 chosen disease annotation patterns, normal cases account for 71% of all images, well above the number of cases for the remaining 16 disease annotation patterns. We balance the number of samples for each case by augmenting the training images of the smaller classes, randomly cropping 224×224 patches from the original 256×256 images.
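
A sketch of the crop-based balancing described above: minority disease classes are oversampled with random 224×224 crops taken from the 256×256 images. The plain array representation and helper names are illustrative assumptions.

```python
# Oversample minority disease classes with random 224x224 crops from 256x256 images.
import random

def random_crop(image, crop=224):
    """image: a 256x256 (H, W) array; returns a random crop of size crop x crop."""
    h, w = image.shape[:2]
    top, left = random.randint(0, h - crop), random.randint(0, w - crop)
    return image[top:top + crop, left:left + crop]

def oversample_to(images, target):
    """Augment a minority class with random crops until it reaches `target` samples."""
    out = [random_crop(img) for img in images]
    while len(out) < target:
        out.append(random_crop(random.choice(images)))
    return out
```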

wanghaisheng commented 7 years ago

5.1. Regularization by Batch Normalization and Data Dropout

Even when we balance the dataset by augmenting many diseased samples, it is difficult for a CNN to learn a good model to distinguish the many diseased cases from normal cases, which have many variations among their original samples. It was shown in [27] that normalizing via mini-batch statistics during training can serve as an effective regularization technique to improve the performance of a CNN model. By normalizing via mini-batch statistics, the training network was shown not to produce deterministic values for a given training example, thereby regularizing the model to generalize better. Inspired by this and by the concept of Dropout [23], we regularize the normal-vs-diseased cases bias by randomly dropping out an excessive proportion of normal cases, relative to the frequent diseased patterns, when sampling mini-batches. We then normalize according to the mini-batch statistics, where each mini-batch consists of a balanced number of samples per disease case and a random selection of normal-case samples. The number of samples for disease cases is balanced by random cropping during training, where each image of a diseased case is augmented at least four times. We test both regularization techniques to assess their effectiveness on our dataset. The training and validation accuracies of the NIN model with batch-normalization, data-dropout, and both are provided in Table 2. While batch-normalization and data-dropout alone do not significantly improve performance, combining both increases the validation accuracy by about 2%.
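
A sketch of the "data dropout" sampling described above: each mini-batch holds a balanced number of samples per disease label plus a random subset of the much larger normal class, so batch-normalization statistics are computed on balanced, randomly varying batches. The `indices_by_label` mapping (label to image indices) and the per-label count are hypothetical.

```python
# Balanced mini-batch sampling with random "data dropout" of normal cases.
import random

def sample_minibatch(indices_by_label, per_label=4, normal_label="normal", rng=None):
    rng = rng or random.Random()
    batch = []
    for label, indices in indices_by_label.items():
        if label == normal_label:
            continue
        batch += rng.choices(indices, k=per_label)          # oversample small disease classes
    # "data dropout": keep only a small random subset of the normal cases
    batch += rng.sample(indices_by_label[normal_label], k=per_label)
    rng.shuffle(batch)
    return batch
```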

wanghaisheng commented 7 years ago

5.2. Effect of Model Complexity

We also validate whether the dataset can benefit from the more complex GoogLeNet [58], which is arguably the current state-of-the-art CNN architecture. We apply both batch-normalization and data-dropout, and follow the recommendations suggested in [27] (where human accuracy on the ImageNet dataset is surpassed): increase the learning rate, remove dropout, and remove local response normalization. The final training and validation accuracies using the GoogLeNet model are provided in Table 3, where we achieve a higher (4%) accuracy. We also observe a further 3% increase in accuracy when the images are no longer cropped, but merely duplicated to balance the dataset.
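
A hedged sketch of the adjustments recommended in [27] for a batch-normalized network, as listed above: strip dropout and local response normalization layers and raise the learning rate. PyTorch is used for illustration; the model object `my_googlenet` and the exact learning-rate value are assumptions, not the paper's setup.

```python
# Remove Dropout / LocalResponseNorm layers and raise the learning rate for a BN network.
import torch.nn as nn
import torch.optim as optim

def strip_dropout_and_lrn(model: nn.Module) -> nn.Module:
    """Replace Dropout and LocalResponseNorm submodules with identity ops, in place."""
    for name, child in model.named_children():
        if isinstance(child, (nn.Dropout, nn.LocalResponseNorm)):
            setattr(model, name, nn.Identity())
        else:
            strip_dropout_and_lrn(child)
    return model

# model = strip_dropout_and_lrn(my_googlenet)                        # assumed model object
# optimizer = optim.SGD(model.parameters(), lr=0.05, momentum=0.9)   # raised learning rate
```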

wanghaisheng commented 7 years ago

6. Annotation Generation with RNN

We use recurrent neural networks (RNNs) to learn the annotation sequence given input image CNN embeddings. We test both Long Short-Term Memory (LSTM) [24] and Gated Recurrent Unit (GRU) [7] implementations of RNNs. Simplified illustrations of LSTM and GRU are shown in Figure 2, and the details of both RNN implementations are briefly introduced below.

6.1. Recurrent Neural Network Implementations
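
The implementation details of this subsection are not reproduced in this thread; below is a minimal PyTorch sketch of the two cell types being compared. The 1024-dimensional state matches the CNN embedding size used later; the batch, embedding, and vocabulary sizes are illustrative assumptions.

```python
# Minimal comparison of the LSTM and GRU recurrent cells.
import torch
import torch.nn as nn

HIDDEN = EMBED = 1024                          # state size matches the CNN embedding
lstm_cell = nn.LSTMCell(input_size=EMBED, hidden_size=HIDDEN)
gru_cell = nn.GRUCell(input_size=EMBED, hidden_size=HIDDEN)

x = torch.randn(8, EMBED)                      # a batch of 8 word embeddings
h0 = torch.randn(8, HIDDEN)                    # e.g. initialized from CNN(I)

h_lstm, c_lstm = lstm_cell(x, (h0, torch.zeros_like(h0)))   # LSTM keeps an extra cell state
h_gru = gru_cell(x, h0)                                     # GRU carries a single state
```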

6.2. Training

The number of MeSH terms describing diseases ranges from 1 to 8 (except normal, which is a single word), with a mean of 2.56 and a standard deviation of 1.36. The majority of descriptions contain up to five words. Since only 9 cases have images with descriptions longer than 6 words, we ignore these by constraining the RNNs to unroll for up to 5 time steps. We zero-pad annotations with fewer than five words with the end-of-sentence token to fill the five-word space. The parameters of the gates in the LSTM and GRU decide whether to update their current state h to the new candidate state h̃, where these are learned from the previous input sequences. Further details about the LSTM can be found in [24, 15, 14, 61], and about the GRU and its comparisons to the LSTM in [7, 30, 9, 8, 32].

We set the initial state of the RNNs to the CNN image embedding (CNN(I)), and the first annotation word as the initial input. The outputs of the RNNs are the following annotation word sequences, and we train the RNNs by minimizing the negative log likelihood of the output sequences against the true sequences (Equation 11):

L(I, S) = − Σ_{t=1}^{N} log P(y_t = s_t | CNN(I), s_1, …, s_{t−1}),     (11)

where y_t is the output word of the RNN at time step t, s_t the correct word, CNN(I) the CNN embedding of the input image I, and N the number of words in the annotation (N = 5 with the end-of-sequence zero-padding). Equation 11 is not a true conditional probability (because we only initialize the RNNs' state vector to be CNN(I)) but a convenient way to describe the training procedure.

Unlike the previous work [31, 15, 14], which uses the last (FC-8) or second-to-last (FC-7) fully-connected layer of the AlexNet [38] or VGG-Net [53] models, the NIN and GoogLeNet models replace the fully-connected layers with average-pooling layers [41, 58]. We therefore use the output of the last spatial average-pooling layer as the image embedding to initialize the RNN state vectors. The size of our RNNs' state vectors is ℝ^{1×1024}, which is identical to the output size of the average-pooling layers of NIN and GoogLeNet.
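
A hedged sketch of the training objective above, using the GRU variant: the state is initialized with the 1024-d CNN embedding, the network is unrolled over the five-word (end-token padded) annotation, and the negative log likelihood of the true words is minimized. The word-embedding layer, vocabulary size, and teacher forcing are illustrative assumptions, not the paper's exact recipe.

```python
# Train the RNN to predict annotation words, with the state initialized by CNN(I).
import torch
import torch.nn as nn

VOCAB, HIDDEN, STEPS = 1000, 1024, 5
embed = nn.Embedding(VOCAB, HIDDEN)
gru = nn.GRUCell(HIDDEN, HIDDEN)
to_vocab = nn.Linear(HIDDEN, VOCAB)
nll = nn.CrossEntropyLoss()                    # cross-entropy on logits == negative log likelihood

def annotation_loss(cnn_embedding, words):
    """cnn_embedding: (B, 1024) average-pooling output; words: (B, 5) padded word ids."""
    h = cnn_embedding                          # state initialized with CNN(I)
    x = embed(words[:, 0])                     # first annotation word as first input
    loss = 0.0
    for t in range(1, STEPS):
        h = gru(x, h)
        loss = loss + nll(to_vocab(h), words[:, t])   # predict the next true word
        x = embed(words[:, t])                 # feed the true word back in
    return loss / (STEPS - 1)
```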

6.3. Sampling

In sampling, we again initialize the RNN state vectors with the CNN image embedding (h_{t=1} = CNN(I)). We then use the CNN prediction of the input image as the first word, given as input to the RNN, to sample the following sequence of up to five words. As previously, images are normalized by the batch statistics before being fed to the CNN. We use GoogLeNet as our default CNN model since it performs better than the NIN model, as shown in Sec. 5.2.
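
A hedged sketch of the sampling step: the state starts from the CNN image embedding, the CNN-predicted disease word is fed as the first input, and following words are decoded greedily for up to five steps. The modules mirror the training sketch above; the end-of-sequence token id and greedy decoding are assumptions.

```python
# Greedy sampling of an annotation, starting from the CNN image embedding and prediction.
import torch
import torch.nn as nn

VOCAB, HIDDEN, EOS = 1000, 1024, 0             # EOS id is an assumption
embed = nn.Embedding(VOCAB, HIDDEN)
gru = nn.GRUCell(HIDDEN, HIDDEN)
to_vocab = nn.Linear(HIDDEN, VOCAB)

@torch.no_grad()
def generate_annotation(cnn_embedding, first_word, steps=5):
    """cnn_embedding: (1, 1024); first_word: id of the CNN-predicted disease word."""
    h = cnn_embedding                          # h_(t=1) = CNN(I)
    word = torch.tensor([first_word])
    out = [first_word]
    for _ in range(steps - 1):
        h = gru(embed(word), h)
        word = to_vocab(h).argmax(dim=1)       # greedy choice of the next word
        if word.item() == EOS:
            break
        out.append(word.item())
    return out
```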

6.4. Evaluation

We evaluate the annotation generation with the BLEU [47] score averaged over all of the images and their annotations in the training, validation, and test sets. The BLEU score is a metric measuring a modified form of precision between the n-grams of generated and reference sentences. The evaluated BLEU scores are provided in Table 4. The BLEU-N scores are evaluated for cases with N words in the annotations, using the implementation of [4]. We noticed that the LSTM is easier to train, while the GRU model yields better results with more carefully selected hyper-parameters. While we find it difficult to conclude which model is better, the GRU model seems to achieve higher scores on average.
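
A sketch of the BLEU evaluation: average BLEU over (generated, reference) annotation pairs. NLTK's sentence_bleu is used here only as a stand-in for the implementation of [4]; the `pairs` input format and the smoothing choice are assumptions.

```python
# Average BLEU-N over generated vs. reference annotations.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def average_bleu(pairs, max_n=1):
    """pairs: iterable of (generated_tokens, reference_tokens); returns mean BLEU-N."""
    weights = tuple([1.0 / max_n] * max_n)     # uniform n-gram weights up to N
    smooth = SmoothingFunction().method1
    scores = [sentence_bleu([ref], gen, weights=weights, smoothing_function=smooth)
              for gen, ref in pairs]
    return sum(scores) / len(scores)

# average_bleu([(["calcified", "granuloma"], ["calcified", "granuloma"])])  # -> 1.0
```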

wanghaisheng commented 7 years ago

7. Recurrent Cascade Model for Image Labeling with Joint Image/Text Context

In Section 5, our CNN models are trained with disease labels only, where the context of a disease is not considered. For instance, the same calcified granuloma label is assigned to all image cases that may actually describe the disease differently at a finer semantic level, such as "calcified granuloma in right upper lobe", "small calcified granuloma in left lung base", and "multiple calcified granuloma". Meanwhile, the RNNs trained in Section 6 encode the text annotation sequences given the CNN embedding of the image the annotation is describing. We therefore use the already trained CNN and RNN to infer better image labels, integrating the contexts of the image annotations beyond just the name of the disease.

This is achieved by generating joint image/text context vectors, computed by applying mean-pooling to the state vectors (h) of the RNN at each step over the annotation sequence. Note that the state vector of the RNN is initialized with the CNN image embedding (CNN(I)), and the RNN is unrolled over the annotation sequence, taking each word of the annotation as input. The procedure is illustrated in Figure 3, and the RNNs share the same parameters. The obtained joint image/text context vector (h_{im:text}) encodes the image context as well as the text context describing the image. Using a notation similar to Equation 11, the joint image/text context vector can be written as:

h_{im:text} = (1/N) Σ_{t=1}^{N} h_t,   with h_t = RNN(x_t, h_{t−1}) and h_0 = CNN(I),

where x_t is the t-th input word in the annotation sequence with N words. Different annotations describing a disease are thereby separated into different categories by h_{im:text}, as shown in Figure 4. In Figure 4, the h_{im:text} vectors of about fifty annotations describing calcified granuloma are projected onto a two-dimensional plane via dimensionality reduction (ℝ^{1×1024} → ℝ^{1×2}), using the t-SNE [59] implementation of [48]. We use the GRU implementation of the RNN because it showed better overall BLEU scores in Table 4. A visualization example for the annotations describing opacity can be found in the supplementary material.

From the h_{im:text} generated for each image/annotation pair in the training and validation sets, we obtain new image labels that take the disease context into account. In addition, we are no longer limited to disease annotations mostly describing a single disease. The joint image/text context vector h_{im:text} summarizes both the image context and the word sequence, so that annotations such as "calcified granuloma in right upper lobe", "small calcified granuloma in left lung base", and "multiple calcified granuloma" have different vectors based on their contexts. Additionally, the disease labels used in Section 5 with unique annotation patterns now have more cases, because cases with a disease described by different annotation words are no longer filtered out. For example, calcified granuloma previously had only 139 cases, because cases mentioning multiple diseases or with long description sequences were filtered out; now 414 cases are associated with calcified granuloma. Likewise, opacity now has 207 cases, as opposed to the previous 65. The average number of cases per first-mentioned disease label is 83.89, with a standard deviation of 86.07, a maximum of 414 (calcified granuloma), and a minimum of 18 (emphysema). For a disease label with more than 170 cases (n ≥ 170 ≈ average + standard deviation), we divide the cases into sub-groups of more than 50 cases by applying k-means clustering to the h_{im:text} vectors with k = Round(n/50).
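
A hedged sketch of the joint image/text context vector and the re-labeling step described above: h_{im:text} is the mean of the GRU states over the annotation words (state initialized with CNN(I)), and a frequent label's cases are split into sub-labels with k-means, k = Round(n/50). The modules mirror the earlier sketches; scikit-learn's KMeans is an illustrative choice, not necessarily the paper's implementation.

```python
# Compute h_im:text by mean-pooling GRU states, then split a label by k-means.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

VOCAB, HIDDEN = 1000, 1024
embed = nn.Embedding(VOCAB, HIDDEN)
gru = nn.GRUCell(HIDDEN, HIDDEN)

@torch.no_grad()
def joint_context(cnn_embedding, words):
    """Mean-pool the GRU states over the annotation; cnn_embedding: (1, 1024), words: (1, N)."""
    h, states = cnn_embedding, []
    for t in range(words.shape[1]):
        h = gru(embed(words[:, t]), h)
        states.append(h)
    return torch.stack(states).mean(dim=0)     # h_im:text, shape (1, 1024)

def split_label(context_vectors):
    """Cluster one label's h_im:text vectors (as a NumPy array) into sub-labels."""
    k = max(1, round(context_vectors.shape[0] / 50))
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(context_vectors)
```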
We train the CNN once more with the additional labels (57, compared to 17 in Section 5), train the RNN with the new CNN image embedding, and finally generate image annotations. The new RNN training cost function (compared to Equation 11) can be expressed as:

L(I, S) = − Σ_{t=1}^{N} log P(y_t = s_t | CNN_{iter=1}(I; h_{im:text}^{iter=0}), s_1, …, s_{t−1}),

where h_{im:text}^{iter=0} denotes the joint image/text context vector obtained from the first round (with limited cases and image labels, the 0th iteration) of CNN and RNN training. In the second CNN training round (1st iteration), we fine-tune from the previous CNN_{iter=0} by replacing the last classification layer with the new set of labels (17 → 57) and training with a lower learning rate (0.1), except for the classification layer. The overall workflow is illustrated in Figure 5.
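
A hedged sketch of the second CNN training round described above: swap the final classifier for the new 57-way label set and fine-tune with a reduced learning rate for the pre-trained layers (full rate for the new classifier). The `classifier` attribute name, feature size, and base learning rate are assumptions.

```python
# Replace the classification layer (17 -> 57 labels) and build a fine-tuning optimizer.
import torch.nn as nn
import torch.optim as optim

def make_finetune_optimizer(cnn, in_features=1024, num_new_labels=57, base_lr=0.01):
    cnn.classifier = nn.Linear(in_features, num_new_labels)    # new 57-way classifier
    pretrained = [p for name, p in cnn.named_parameters()
                  if not name.startswith("classifier")]
    return optim.SGD(
        [{"params": pretrained, "lr": 0.1 * base_lr},           # lowered rate elsewhere
         {"params": cnn.classifier.parameters(), "lr": base_lr}],
        lr=base_lr, momentum=0.9)
```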

7.1. Evaluation

The final evaluated BLEU scores are provided in Table 5. We achieve better overall BLEU scores than those in Table 4, which were obtained before using the joint image/text context. It is noticeable that higher BLEU-N (N > 1) scores are achieved compared to Table 4, indicating that more comprehensive image contexts are taken into account in the CNN/RNN training. Also, slightly better BLEU scores are obtained on average using the GRU, while higher BLEU-1 scores are obtained using the LSTM, although the comparison is empirical. Examples of annotations generated on chest x-ray images are shown in Figure 6. These are generated using the GRU model, and more examples can be found in the supplementary material.

wanghaisheng commented 7 years ago

8. Conclusion

We present an effective framework to learn, detect diseases, and describe their contexts from patient chest x-rays and their accompanying radiology reports with Medical Subject Headings (MeSH) annotations. Furthermore, we introduce an approach to mine joint contexts from a collection of images and their accompanying text, by summarizing the CNN/RNN outputs and their states on each of the image/text instances. Higher performance in text generation is achieved on the test set when the joint image/text contexts are exploited to re-label the images and to subsequently train the proposed CNN/RNN framework. To the best of our knowledge, this is the first study that mines a publicly available radiology image and report dataset, not only to classify and detect disease in images but also to describe their context similarly to how a human observer would read them. While we demonstrate our approach only on a medical dataset, it could also be applied to other application scenarios with datasets containing co-existing pairs of images and text annotations, where the domain-specific images differ from those in ImageNet.