Table of Contents
Chapter 1. Overview
Chapter 2. ImageJ Training Course
Chapter 3. 3D Imaging Analysis
Chapter 1. Overview
Computer Vision
Imaging is an effective way to digitally capture the state of a biological system; it is therefore also a natural way to characterize changes in that system. For example, by comparing the change in cell count between a cancer cell line and a normal cell line after the same chemical perturbation, we can quantify the potential differential anti-tumor activity of the compound. Similarly, morphological changes within a stem cell population give hints about the effectiveness of a compound in a cell regeneration process.
Human vision is superb at recognizing objects. For example, given a cell image (Figure 1.a), we have little trouble outlining the contours of individual nuclei and (for most of them) their corresponding cytoplasm boundaries (Figure 1.b). This boundary-outlining process is called segmentation. In modern drug discovery facilities, a high-throughput high-content screen may involve a million compounds and one thousand cells per compound treatment. Repeating this process with human labor to outline one billion cells is clearly not an option. Besides being slow, humans are poor at outlining these cellular compartments precisely and consistently throughout the whole segmentation process. In addition, after the segmentation step we need to take measurements, such as the area, perimeter, and average intensity within each cytoplasm region; computers are clearly more efficient at that (see the sketch after Figure 2). Without question we must rely on computer vision technologies to analyze digital images automatically. Here we focus on the topic of segmentation alone.
Figure 1.a
Figure 1.b The cell image is taken from a demo image named "\Widefield Images\Segmentation\Cell.tif" used by "Fiji Training Notes".
Figure 2.a
Figure 2.b
Figure 2.c
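As a taste of the measurement step mentioned above, here is a minimal sketch of extracting area, perimeter, and mean intensity per cell with scikit-image; the file names, and the existence of a pre-made label mask, are assumptions for illustration only:

```python
from skimage import io, measure

# Hypothetical inputs: a grayscale cell image and a label image in which
# each segmented cell carries a distinct integer label (0 = background).
image = io.imread("cell.tif")
labels = io.imread("cell_mask.tif")

# Measure area, perimeter, and mean intensity for every segmented region.
for region in measure.regionprops(labels, intensity_image=image):
    print(region.label, region.area, region.perimeter, region.mean_intensity)
```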
To learn computer vision techniques, I recommend the well-written book "Digital Image Processing Using MATLAB" by Gonzalez et al. It is a hands-on companion to a more algorithm-focused book by the same authors, "Digital Image Processing". Both are highly recommended.
I do not plan to write about computer vision algorithms in this blog series; instead, I will write about a powerful open-source program called ImageJ. ImageJ allows one to carry out many computer vision analyses without programming; nearly all the figures used in this blog were generated with ImageJ, which should give you a peek at its capabilities. ImageJ offers an enormous number of functions, so it can feel overwhelming for new users. Based on my own learning experience, I have designed a few exercises to help new users quickly grasp the core features that, in my opinion, are most relevant to biological image analysis. ImageJ is the topic of Chapter 2.
Machine Learning
An alternative approach is to rely on machine learning. In the computer vision approach, we conceive segmentation rules based on our observations, test these hypotheses on some sample images, and then implement the rules in computer code. The computer simply executes the rules we provide on images in batch. This is basically a "human learning" process: the human does all the intelligent part of the work, while the computer solves the engineering scale-up issue. In the machine learning paradigm, by contrast, the computer conceives the segmentation rules from training data provided by humans. The most difficult and intelligent step, the rule construction task, is offloaded onto the machine, in the hope that artificial intelligence can surpass human intelligence in biological image analysis, as is happening in many other fields.
One very successful implementation of this approach is found in a software package called Ilastik. The concept is well illustrated in this YouTube video. Given an input image, Ilastik first computes a set of predefined features $\{f_i(x)\}$. Each feature $f_i$ is generally defined by a filter matrix; the filter is applied to the input image $x$, performing pixel-level computations within a given neighborhood. This process is called convolution. It results in a new image of the same dimensions, called a feature map $f_i(x)$. For example, an edge filter applied to Figure 1.a generates a new feature map image outlining the edges of nuclei (Figure 3.c). Ilastik predefines multiple features ranging from intensity and edge to texture characteristics (row-wise in Figure 3.a), and each feature can be computed at multiple scales (column-wise in Figure 3.a) to capture structures at different geometrical distances. Figure 3.b-d shows some example feature maps of Figure 1.a (see the sketch after Figure 3). These powerful features are the result of decades of hard research by computer vision scientists.
Figure 3.a Ilastik features.
Figure 3.b Example intensity feature map.
Figure 3.c Example edge feature map.
Figure 3.d Example texture feature map.
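To make the feature map idea concrete, here is a small sketch computing an intensity, an edge, and a crude texture map at one scale with scikit-image. This is not Ilastik's actual filter bank, only an analogous construction, and the input file name is hypothetical:

```python
from skimage import io, filters

# Hypothetical input file; these three maps merely mirror the
# intensity/edge/texture feature families of Figure 3.
image = io.imread("cell.tif").astype(float)

sigma = 2.0  # feature scale, cf. the columns of Figure 3.a

intensity_map = filters.gaussian(image, sigma=sigma)   # smoothed intensity
edge_map = filters.sobel(intensity_map)                # gradient magnitude (edges)
# Crude texture proxy: local variance from Gaussian-smoothed moments.
texture_map = filters.gaussian(image ** 2, sigma=sigma) - intensity_map ** 2
```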
With a collection of dozens of feature maps $\{f_i\}$ calculated, we provide training pixels by drawing a few example scribbles, some labelled as nuclei (red in Figure 4) and others as background (yellow in Figure 4; here background actually means non-nuclei). These correctly labelled training pixels enable Ilastik to train a random forest engine, which can then automatically predict the labels for the remaining pixels. The transparent red and yellow masks overlaid in Figure 4 are the result of such a prediction: just a few training strokes are sufficient for Ilastik to come up with a random forest model that recognizes nuclei quite well.
Figure 4. Ilastik training.
Consider this as a binary classification problem: we provide true labels $y_j$ (nucleus or non-nucleus, encoded as 1 or 0) for the pixels $j$ under the scribbles, and the random forest aims to combine the precomputed $\{f_i\}$ into a probability function $\mathcal{P}$ that optimally predicts the probability of a training pixel belonging to the nucleus class.
The probability function is:
$$\begin{equation} \mathcal{P}(x) = \mathcal{P}(f_1(x), f_2(x), \cdots, f_i(x), \cdots). \end{equation}$$
The loss function to be minimized is the cross-entropy score:
$$\begin{equation}\mathcal{L} = -\sum_{j} \left[ y_j \log(\mathcal{P}(x_j)) + (1-y_j)\log(1-\mathcal{P}(x_j)) \right]. \end{equation}$$
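A minimal sketch of this training step, using scikit-learn's random forest in place of Ilastik's internal engine: the feature maps are assumed to come from the earlier sketch, and `scribble_labels` is a hypothetical array encoding the user's strokes.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stack the feature maps f_i(x) into an (n_pixels, n_features) matrix.
# scribble_labels is assumed to hold 1 under nucleus scribbles, 0 under
# background scribbles, and -1 for the vast majority of unlabeled pixels.
feature_maps = [intensity_map, edge_map, texture_map]
X = np.stack([f.ravel() for f in feature_maps], axis=1)
y = scribble_labels.ravel()
train = y >= 0  # only the scribbled pixels enter the training set

rf = RandomForestClassifier(n_estimators=100)
rf.fit(X[train], y[train])

# Predict the nucleus probability P(x) for every pixel; reshape to an image.
prob_map = rf.predict_proba(X)[:, 1].reshape(feature_maps[0].shape)
```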
Since the individual feature maps (as shown in Figure 3.b-d) are generally rather smooth functions, $\mathcal{P}$, as their non-linear superposition, also tends to be smooth. Given that the target probability function (1 on nuclei and 0 elsewhere) is a reasonably smooth function, only a few training data points $x_j$ are required to tune the model parameters within $\mathcal{P}$. This is why the random forest model inside Ilastik works rather effectively and efficiently for this relatively easy example; very few training strokes are needed.
When we examine the output probability prediction $\mathcal{P}(x)$ in Figure 4.a, we can see that all the pixels within nuclei are white and the rest are brown/black. In the 3D surface view of Figure 4.b, all nuclei are colored red, meaning they all reach a similar height of about probability 1.0, while non-nuclei pixels have probability values around 0.0 and are colored blue.
Comparing the probability prediction in Figure 4.b to the original intensities in Figure 2.a, we can see that the machine learning process essentially performs a transformation that largely eliminates the original intensity variations within the nuclei and turns the original image into a new one in which each pixel's intensity represents the probability of its belonging to a nucleus. This way all nuclei are well separated from each other and have about the same height.
Figure 4.b Probability surface function $\mathcal{P}$ constructed by Ilastik.
Similarly, the same strategy can be applied to segment cell boundaries (Figure 5).
Figure 5. Example of using Ilastik to segment cells. The image named "Widefield Images\Segmentation\Cytoplasm.tif" is taken from the demo image package used by "Fiji Training Notes".
The above machine learning approach relies on two inputs: (1) a set of predefined feature maps, and (2) some user-supplied labels for training pixels. If the segmentation works well, its output probability function tends to have many desirable normalization properties: nuclei are already declumped, the background is corrected, and all nuclei are of about the same intensity (probability approaching 1.0). With all the complexities and variations in the original cell image absorbed by the random forest model, we can now fall back on human-crafted segmentation rules, using the probability image as our new input. A simple threshold cutoff of 0.5 does a decent job of segmenting out nuclei.
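A sketch of this final thresholding step, assuming `prob_map` from the random forest sketch above:

```python
from skimage import measure

# Threshold the probability map at 0.5 and label the connected components.
nuclei_mask = prob_map > 0.5
nuclei_labels = measure.label(nuclei_mask)
print("nuclei found:", nuclei_labels.max())
```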
Compared to the computer vision approach, although we introduce an additional machine learning step, we only need to spend time annotating some training pixels; the rest is done by the machine. The whole training process takes only a minute or two, leading to a drastic improvement in efficiency and quality. By offloading human learning onto machine learning, we are liberated from the headache of constructing complex segmentation rules ourselves, shifting the bottleneck instead to computational resources.
Deep Learning
If we think the machine learning approach has addressed all the issues, we are certainly far too optimistic. Even for some images that seem trivial to segment by eye, Ilastik can still fail miserably. One such example is provided in Figure 6. The original image (Figure 6.a) appears rather clear to our eyes, and the boundaries of the erythrocytes become even clearer in the corresponding edge feature map (Figure 6.b). However, despite considerable effort, neither image can be segmented appropriately using Ilastik (Figure 6.c).
Figure 6.a Erythrocyte image. The example is taken from the U-Net web site. The file named "\DIC-Erythrocyte\6hr-002-DIC.tif" can be found in the "sampledata.zip" file.
Figure 6.b Edge feature map.
Figure 6.c Ilastik training on the edge map.
If we resort to the U-Net deep learning solution published by Falk et al. in Nature Methods, this image can be easily segmented with as few as 50 refinement steps. We first need to draw example erythrocyte masks (Figure 7.a), ideally in at least two separate images, one for training and another for validation to avoid over-fitting. A successful training run produces a nice probability prediction (Figure 7.b-c), where all erythrocytes are correctly recognized.
Figure 7.a Deep-learning training requires cell masks to be provided. Here the outlined masks are provided by Falk et al.
Figure 7.b Probability map predicted by U-Net.
Figure 7.c Probability surface plot, where all erythrocytes are nicely recognized, with the internal cell regions showing probabilities near 1.0.
As in the machine learning case, U-Net constructs a probability function, but from learned feature transformations $\mathcal{G}_i$ rather than predefined ones:
$$\begin{equation} \mathcal{P}(x) = \mathcal{P}(\mathcal{G}_1(x), \mathcal{G}_2(x), \cdots, \mathcal{G}_i(x), \cdots). \end{equation}$$
Unlike $f$ in traditional machine learning, the $\mathcal{G}$ here are not predefined feature transformations; they are custom features constructed through deep learning training. In fact, $\mathcal{G}$ is drastically different from $f$: $f$ mostly relies on a transformation filter matrix defined at the pixel level, whereas $\mathcal{G}$ is a series of hierarchically nested transformations, which enables it to capture "concepts" (see the last paragraph of "the rationale for CNN" in a previous blog post). In this example, U-Net does not construct $\mathcal{G}$ from scratch; it refines features learned from many previous cell images in other projects. It is therefore likely that a ring concept, objects forming a circular shape, already existed in the U-Net before the refinement process. With the ring feature, it is rather effortless for the U-Net to fit the desired probability output. Readers who have never seen a cell image before would have no trouble recognizing the erythrocytes in Figure 6.a. Why? Because they have seen plenty of circular and ring-shaped objects elsewhere, such features already exist in their vision system. This is exactly the situation of a pre-trained U-Net model.
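A toy illustration of "hierarchically nested transformations", in no way U-Net itself: two stacked convolutions with ReLU non-linearities, where the second layer operates on the first layer's output rather than on the raw pixels.

```python
import numpy as np
from scipy.ndimage import convolve

# Two tiny random 3x3 kernels stand in for learned convolution filters.
rng = np.random.default_rng(0)
k1 = rng.normal(size=(3, 3))
k2 = rng.normal(size=(3, 3))

def G(x):
    # Hierarchically nested transformation: the second convolution acts on
    # the output of the first, so deeper layers see a wider context and can
    # respond to compound shapes ("concepts") such as rings.
    layer1 = np.maximum(convolve(x, k1), 0.0)  # convolution + ReLU
    layer2 = np.maximum(convolve(layer1, k2), 0.0)
    return layer2
```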
The power of a deep learning network means it can construct very sensitive features, so its feature maps may no longer be as smooth as the $\{f_i\}$ we saw in Ilastik. Overfitting is a real concern now. To prevent overfitting, we need to provide many more training labels for U-Net than are required for Ilastik, and hand-drawing dozens of cell masks can be tedious. Fortunately, in this relatively easy example, one or two dozen example cell masks seem to be sufficient. For trickier cases, however, many more training data could be required, and that could easily become a new bottleneck.
Discussion
From the above examples, we can see that it is very challenging to rely on traditional computer vision skills to segment biological images, due to the inherent variations in biological systems. In many circumstances the cellular system is heterogeneous, meaning there are multiple cell populations of distinct morphology, and it would be too challenging to manually come up with a set of segmentation rules that accounts for this heterogeneity. The classical machine learning approach can be a speedy alternative; it relies on a set of predefined features engineered over decades of research. The result can be of high quality if the phenotype of biological interest can be recapitulated by such features. However, when the phenotype is beyond the reach of existing features, we need to resort to deep learning systems. Given a sufficient amount of training examples, the deep learning architecture theoretically allows models such as U-Net to construct custom features particularly suited to the problem at hand.
Since the learning capacity of U-Net surpasses that of classical machine learning, would it always make sense to use U-Net instead of Ilastik? As mentioned above, U-Net is much more demanding on its training data than Ilastik: not only does it require more labels, it requires complete cell masks rather than strokes. We apply U-Net to the simple nuclei segmentation example in Figure 1 and obtain the results in Figure 8.
Figure 8. With a few dozen nuclei masks provided, U-Net nicely predicts the probability map for individual nuclei, with no clumping.
Figure 9. Probability surfaces predicted by Ilastik (left) and U-Net (right).
Experienced image analysts may argue that the edge map in Figure 6.b could have been segmented using computer vision techniques: those circular edges look like giant water lilies (Figure 10), so the watershed technique could be used to segment individual erythrocytes. The result based on this watershed idea is shown in Figure 11 (see the sketch after Figure 11).
Figure 11. Result of watershed based on the water lily idea.
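For the curious, a sketch of this seeded watershed using scikit-image; `edge_map` stands for the edge feature map of Figure 6.b, and the `min_distance` seed spacing is a made-up parameter:

```python
import numpy as np
from skimage import feature, segmentation

# Seeds are the local minima of the edge map, i.e., the calm center of each
# "water lily"; the watershed then floods outward until the basins meet.
coords = feature.peak_local_max(-edge_map, min_distance=20)
markers = np.zeros(edge_map.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
labels = segmentation.watershed(edge_map, markers)
```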
What if all attempts fail and the image is still not segmentable despite all our efforts? We need to go back to the biological question we are asking. Maybe for the erythrocyte image, what we really need is a cell count. We can take the edge image, do a simple threshold segmentation, then remove small particles to produce a mask image like Figure 12 (see the sketch after Figure 12). The area of the white pixels can act as a proxy for the cell count and allow us to sort images.
Figure 12. Mask created by thresholding Figure 6.b and then removing particles with areas less than 400 pixels.
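A sketch of this counting shortcut, again assuming `edge_map` holds the edge image; Otsu's method is used here as one simple threshold choice, which the text above leaves unspecified:

```python
from skimage import filters, morphology

# Threshold the edge map, then drop specks below 400 pixels,
# mirroring the mask of Figure 12.
mask = edge_map > filters.threshold_otsu(edge_map)
mask = morphology.remove_small_objects(mask, min_size=400)

# The total white area acts as a proxy for cell count when ranking images.
print("white pixels:", mask.sum())
```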
Conclusion
Altogether, a sensible approach is first to determine whether an image is segmentable by eye. If not, we need to discuss with biologists how to extract image-level signals. If yes, we should first apply Ilastik for segmentation, as its training barrier is so low. If the quality is not satisfactory, we should then consider U-Net or other deep learning models. Both Ilastik and U-Net produce a probability map as output, which can then be further segmented using traditional computer vision tools such as ImageJ or CellProfiler (which excels at batch processing), where one can apply simple thresholding and watershed techniques to turn the pixel-level probabilities into individual cell object masks. Once we have individual masks, measurements can be carried out, and the results enable population analyses to characterize perturbagen effects at the cell level.