Optica Publishing Group

Automated method for the segmentation and morphometry of nerve fibers in large-scale CARS images of spinal cord tissue

Open Access

Abstract

A fully automated method for large-scale segmentation of nerve fibers from coherent anti-Stokes Raman scattering (CARS) microscopy images is presented. The method is specifically designed for CARS images of transverse cross sections of nervous tissue but is also suitable for use with standard light microscopy images. After a detailed description of the two-part segmentation algorithm, its accuracy is quantified by comparing the resulting binary images to manually segmented images. We then demonstrate the ability of our method to retrieve morphological data from CARS images of nerve tissue. Finally, we present the segmentation of a large mosaic of CARS images covering more than half the area of a mouse spinal cord cross section and show evidence of clusters of neurons with similar g-ratios throughout the spinal cord.

© 2014 Optical Society of America

1. Introduction

To function properly, the nervous system requires fast and efficient transmission of electrical signals between neurons. In vertebrates, this is aided by the myelin sheath, a cellular membrane formed by specialized glial cells that is wrapped around axons to increase the propagation speed of the action potentials along these axons [1]. There are a number of pathologies that affect myelin and lead to poor conduction as well as irreversible damage to axons [2], the most widespread being multiple sclerosis, where there can be thinning, blebbing or swelling of the myelin. Physical disability results from the accumulation of several types of damage along a given neurological pathway, and consequently, disabilities are apparent only when the disease has progressed significantly. Since local myelin damage is the origin of this macroscopic damage, the investigation of myelinated fiber morphology (e.g. axon diameter and area, myelin thickness) is of particular interest, as it characterizes damage early on in the evolution of the pathology [3].

Nerve fiber morphometry is typically obtained by imaging cross sections of nerve tissue with a microscope and then identifying and measuring the structures of interest. Traditionally, morphometry was measured manually by an expert using prints or a projection of the images along with a ruler [4] or digitizing tablet [5,6]. Nowadays, the extraction of morphometric information from the myelinated axons that populate an image can be partially or fully automated by transforming the images into something simpler and more meaningful. This is accomplished through the process of image segmentation, i.e. the partitioning of an image into the components of interest. In the case presented here, every pixel of the image must be classified either as axon, myelin, or background.

The application of image segmentation techniques across all imaging modalities in biology and medicine has become a very active field of research in recent years [7, 8]. Over the last few decades, there have been numerous studies proposing nerve fiber segmentation techniques for optical microscopy images. Semi-automated methods (where human intervention is required at some point throughout the process) are faster than a human expert and have traditionally had the advantage of accuracy over fully-automated methods [9–11]. The latter are very attractive since they are usually faster than semi-automated methods and are not user-dependent. While most of these methods are based on typical segmentation techniques such as template matching [12], edge detection [13–15], zonal graph [16], thresholding [17,18], neural networks [19] and region growing [20, 21], other contributions rely on multiple stage methods using a combination of techniques: elliptical Hough transform followed by an active contour model [22], multi-level gradient watershed and fuzzy systems [23]. Li et al. [24] use a classification algorithm (spectral angle mapper) to segment nerve fibers in hyperspectral images.

While the majority of these methods were developed for standard light microscopy images using staining such as toluidine blue or osmium tetroxide, some were intended to work with transmission electron microscope images [19, 20] and scanning electron microscope images [11]. To the best of our knowledge, there are only a few studies whereby semi-automated or automated segmentation was developed for nonlinear optical microscopy images to find cell nuclei [25–29] and none to extract nerve fiber morphology. Moreover, the issue that is central to segmentation of microscopy images is that optical modalities, especially nonlinear techniques, are very sensitive to fine spatial and molecular details in samples. This is a double-edged sword: it is the strength that justifies their development, but it is also the Achilles heel of the resulting images: inhomogeneities in intensities on the scale of microns are quite common, especially in vivo, and render many standard image segmentation strategies used in surveillance, magnetic resonance imaging, positron emission tomography, and other fields very difficult to apply.

Over the last decade, a microscopy technique particularly well suited to myelin imaging has gained wide acceptance. Coherent anti-Stokes Raman scattering (CARS) microscopy is a nonlinear optical technique [30–32] that uses the endogenous contrast provided by molecules already present in the sample of interest: i.e. the contrast can be tuned to the myelin lipid content [33–36]. It has since been used to visualize demyelination [37–41], and we have proposed several techniques to characterize myelin morphology [42–44].

The present work sets out to develop strategies for segmentation of CARS microscopy images to be used for myelin characterization. Therefore, the primary objective of this article is to present a fully automated nerve fiber segmentation method designed specifically for CARS microscopy images of transverse sections of nervous tissue. After a brief outline of the imaging method, we present the details of the segmentation strategy. Then, the accuracy of the proposed segmentation method is quantified. Finally, we conclude by presenting how this method can be used to successfully extract nerve fiber morphology information from large-scale CARS images.

The MATLAB code implementing this analysis, along with all of the data used, is made available on the group web site at http://www.dcclab.ca.

2. Materials and methods

2.1. Tissue preparation

C57BL/6 adult mice, 25 to 30 g of body weight, were intracardially perfused with 0.1 M phosphate buffer solution (PBS) followed by 4% paraformaldehyde (PFA). The whole spinal cord was dissected from each mouse and fixed flat in 4% PFA overnight. The spinal segment of interest was isolated from the spinal cord, embedded in low gelling temperature agarose, and 350 μm thick transverse sections were made with a vibratome (Leica, VT 1000). Slices were rinsed several times with 0.1 M PBS solution and then mounted inside a spacer on a slide. The remaining space was filled with 0.1 M PBS and a coverslip was mounted on top of the spacer. All experimental procedures have been performed in accordance with guidelines from the Canadian Council on Animal Care.

2.2. CARS microscopy

Image acquisition is performed using a custom video-rate laser scanning microscope based on a fast rotating polygonal mirror. Our microscope allows for a maximum image acquisition rate of 30 frames per second at 752 × 500 pixels for a nominal pixel dwell time of 30 ns. The field of view (FOV) is 169.2 μm by 112.5 μm (225 nm/pixel) with a 60× objective lens (UPLSAPO 1.2 NA w, Olympus). CARS is used to image the myelin sheaths surrounding the axons by probing the CH2 symmetric stretch vibrations of lipids at 2845 cm−1. This is achieved using a 1064 nm beam from a Nd:Vanadate mode-locked laser overlapped in space and time with a second beam from an optical parametric oscillator tuned to 816.8 nm. The average power of the 7 ps pump and Stokes beams at the sample was limited to a few tens of mW. In order to reduce the acquisition noise, images are typically averaged over 15 to 30 individual frames, but more importantly, to avoid polarization-dependent and orientation-dependent signals, we illuminate the tissue with circularly polarized beams [42]. More details about the system can be found in a previous article by Veilleux et al. [45].

2.2.1. Large-scale mosaic acquisitions of spinal cord cross sections

Imaging an entire cross section from a mouse spinal cord, which typically spans a few mm in diameter, can require as many as 800 images to cover the whole area. In order to do so, we use our custom video-rate microscope and acquisition system. Our strategy involves scanning the surface by moving the sample over an xy grid sequence with an overlap of 20% of the image field of view (FOV). For each position, the sample is imaged at various depths (40 slices spaced by 3 μm) to account for the curvature of the tissue surface while scanning over the whole spinal cord cross section. The z-stacks are later stitched together [46] to generate a 3D mosaic. However, because of the high scattering coefficient of myelin, there is usually only one optimal plane extracted from the 3D mosaic to form the 2D mosaic [44] suitable for the segmentation analysis. A complete mosaic (typically 5000 × 7000 pixels) is acquired in about 3.5 hours.
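The tiling geometry described above can be sketched as follows. This is a hypothetical helper (the article does not specify the scan ordering; a serpentine order is assumed here to limit stage travel), using the FOV and 20% overlap quoted in the text:

```python
import numpy as np

def mosaic_grid(n_cols, n_rows, fov_um=(169.2, 112.5), overlap=0.20):
    """Stage positions (in micrometres) for an xy tile scan where
    adjacent tiles share `overlap` (20%) of the field of view.
    Serpentine row ordering is an assumption, not from the article."""
    step_x = fov_um[0] * (1.0 - overlap)   # centre-to-centre spacing in x
    step_y = fov_um[1] * (1.0 - overlap)   # centre-to-centre spacing in y
    positions = []
    for row in range(n_rows):
        cols = range(n_cols) if row % 2 == 0 else reversed(range(n_cols))
        for col in cols:                    # alternate direction on each row
            positions.append((col * step_x, row * step_y))
    return np.array(positions)
```

With the stated parameters, neighboring tiles are spaced 135.36 μm apart in x and 90 μm in y, so roughly 800 tiles indeed cover a few square millimetres.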

2.2.2. Image processing

The relatively wide FOV of the microscope brings undesired illumination variations across the image mainly because of chromatic aberrations of pump and Stokes beams and widening of the point spread function at the edges of the FOV. For this reason, the images are processed using contrast-limited adaptive histogram equalization [47] to minimize the effect of inhomogeneous illumination on the segmentation results. This method is used on small regions of the image to enhance the contrast such that the histogram of the region matches that of a specified distribution. We found that a uniform target distribution within 16 × 16 pixel regions improves the segmentation results significantly. It should be noted that this procedure changes the intensity values and hence, the concentration information contained in the CARS images. If needed, one could go back to the original images once the segmentation is complete.
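The tile-wise equalization idea can be illustrated with a deliberately simplified sketch. Real CLAHE [47] additionally clips each tile histogram and bilinearly interpolates the mappings between neighbouring tiles; both refinements are omitted here, so this is an approximation of the preprocessing, not the article's implementation:

```python
import numpy as np

def tilewise_equalize(img, tile=16):
    """Simplified stand-in for contrast-limited adaptive histogram
    equalization: each tile x tile region is remapped toward a uniform
    intensity distribution via its own cumulative histogram.
    (No clip limit, no inter-tile interpolation.)"""
    img = np.asarray(img, dtype=np.uint8)
    out = np.empty_like(img)
    for y in range(0, img.shape[0], tile):
        for x in range(0, img.shape[1], tile):
            block = img[y:y + tile, x:x + tile]
            hist = np.bincount(block.ravel(), minlength=256)
            cdf = np.cumsum(hist) / block.size        # empirical CDF of the tile
            out[y:y + tile, x:x + tile] = (cdf[block] * 255).astype(np.uint8)
    return out
```

Because each tile is normalized independently, slowly varying illumination across the FOV no longer dominates the local contrast, which is the property the segmentation relies on.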

An example of a myelin CARS image of a transverse spinal cord section from a healthy mouse is shown in Fig. 1. The image is split in two to show a section from the raw unprocessed image (Fig. 1(a)) and a section preprocessed with contrast-limited adaptive histogram equalization (Fig. 1(b)).


Fig. 1 Myelin CARS image of a transverse spinal cord section from a healthy mouse split in two to show a raw unprocessed image section (a) and a section preprocessed with contrast-limited adaptive histogram equalization (b). The image is 752 × 500 pixels in size and an average of 30 frames.


3. Segmentation strategy

Segmentation, i.e., the task of classifying each pixel and assigning it to a meaningful object, may range from trivial to challenging. Pixel classification is highly dependent on how pixels of a different nature appear differently in an image (contrast). In addition, assignment of the classified pixels to distinct objects depends on how well separated the objects are in an image (i.e., if their pixels are touching or not). In the case where high-contrast objects are isolated, segmentation is often trivial. In optical microscopy, however, images contain many low-contrast objects that are touching. Information about the system under study (expected morphology, detection noise, spatial resolution, etc.) is therefore used to inform pixel classification and assignment. In the case of CARS images of nerve fibers in transverse tissue sections, we make use of the following information: 1) the signal is produced only by lipids, which originate almost exclusively from myelin, 2) a myelinated axon does not touch other axons since it is wrapped inside a myelin sheath, 3) the shape of the exterior myelin boundary is similar to the axon boundary, and 4) adjacent myelinated fibers are in contact but do not overlap. With those facts in mind, we devised a two-part strategy whereby the axon candidates are first segmented and the collected information is then used to determine the myelin outer boundary, which in turn serves to unambiguously identify axons.

The challenge with axon segmentation lies in the fact that axons are defined by an absence of signal in the images. Therefore, finding dark regions will invariably lead to true axons as well as inter-nerve-fiber background labeled as axon candidates. The axon segmentation is divided into three steps: 1) groups of pixels corresponding to a local minimum of at least a certain depth (extended-minima algorithm) are identified as axon candidates regardless of their shape, 2) their shape is refined through an iterative deformation process (active contour algorithm), and 3) the axon candidates are subjected to a first validation test that aims to identify and remove inter-nerve-fiber background based on morphological properties. The result is a binary image of the remaining axon candidates and another image of the background.

The challenge with the myelin segmentation is to accurately identify the outer myelin boundary in images where most myelinated axons are touching each other. To achieve this goal, we probe the image around each axon candidate and use the information contained in the binary images of the other axons as well as the background to limit the search space. The myelin segmentation strategy comprises three steps: 1) the myelin outer boundary of axon candidates is detected in the straightened subspace image when the intensity changes from high to low, 2) the candidates are subjected to a second validation test based on the area overlap between neighboring nerve fibers, and 3) all unique pairs of touching segmented myelin are pairwise separated using a watershed algorithm. After completing these steps, we have a new binary image representing myelin sheaths around axon objects with no connectivity between them, as well as updated binary images of the axons and the background from the validation step.

A block diagram of the previously outlined method is shown in Fig. 2. Specific details describing individual steps are in the sections to follow.


Fig. 2 Flow chart of the two-part algorithm. (Left) Axon segmentation: 1) detection of axon candidates by extended-minima transform, 2) shape refinement with an active contour method and 3) candidate validation based on their shape. This part produces two binary images of the axon candidates and the background. (Right) Myelin segmentation: 1) segmentation of the myelin outer boundary in the straightened subspace images of the axon candidates, 2) candidate validation based on area overlap and 3) separation of touching myelin pairs by watershed technique. The final output is a binary image representing the myelin sheaths.


3.1. Axon segmentation

3.1.1. Axon detection with extended-minima transform

Initial axon segmentation is obtained by looking for regional minima of at least a certain depth (intensity) in the image. This is accomplished by computing the H-minima transform of the contrast enhanced image and then finding the regional minima. The H-minima transform removes all minima with a depth of less than a certain value (h). The value of h can be set empirically by trial and error on a typical image portion and we found that a value of h on the order of the image intensity standard deviation offers a good starting point. Following the H-minima transform, all pixels with a uniform intensity that are surrounded by higher intensity pixels are extracted as connected components to form the objects populating the initial segmentation.
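The extended-minima transform can be sketched in a few lines using morphological reconstruction by erosion (here a naive iterative implementation; production code would use an optimized reconstruction routine). Integer-valued images are assumed, and only NumPy/SciPy are used:

```python
import numpy as np
from scipy import ndimage as ndi

def _reconstruct_erosion(marker, mask):
    """Morphological reconstruction by erosion (marker >= mask),
    computed naively by iterating geodesic erosions to stability."""
    fp = np.ones((3, 3))                       # 8-connected neighbourhood
    while True:
        eroded = np.maximum(ndi.grey_erosion(marker, footprint=fp), mask)
        if np.array_equal(eroded, marker):
            return marker
        marker = eroded

def h_minima_transform(img, h):
    """Suppress all regional minima shallower than h levels."""
    return _reconstruct_erosion(img + h, img)

def extended_minima(img, h):
    """Binary mask of the regional minima of the H-minima transform,
    i.e. minima at least h levels deep (the axon seed detection step).
    Uses the identity RMIN(f) = (HMIN_1(f) > f) for integer images."""
    img = np.asarray(img, dtype=int)
    hmin = h_minima_transform(img, h)
    return h_minima_transform(hmin, 1) > hmin
```

For example, a pit of depth 8 survives `extended_minima(img, 5)` while a neighbouring pit of depth 1 is suppressed, which is exactly the behaviour used to reject shallow intensity dips that are not axons.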

As shown in Fig. 3(a), the output of this step is a binary image containing seeds for all of the axons as well as possible false positives. The objects in the binary image are called seeds because they only approximate the shape of the underlying object.


Fig. 3 Axon segmentation in a transverse CARS image of mouse spinal cord. (a) Axon detection with extended-minima transform. (b) Segmentation refinement using an active contour method. (c) Object validation separates the axons (green) from the background (red).


3.1.2. Axon segmentation refinement with active contour method

In this step, the shapes of the seed objects resulting from the initial segmentation are refined through a deformation process known as “active contour” [48]. Active contour methods are used to better separate the foreground from the background by allowing an initial curve to deform iteratively so as to minimize a function defined in terms of the contour internal and external energies. The external energy often comprises image forces such as the intensity or gradient while the internal energy usually relates to curve elongation or bending.

Classically, the energy functional is defined in terms of the image gradient, and the curve evolution relies on an edge detector to find the object boundary. In this work, we use an active contour method whereby the energy describes the foreground and background in terms of their mean intensity [49]. This so-called Chan–Vese method is better suited to detecting objects with smooth or discontinuous boundaries, with or without a gradient.

Our segmentation algorithm was developed to analyze very large images containing many thousands of objects. While computational speed is not the most crucial factor, computation time has to be kept in check. This is especially true in this step where all the objects found by the initial segmentation are refined individually. Furthermore, when using the typical framework (level set [50]), active contour methods are notoriously slow to compute. Fortunately, a much more efficient framework called the sparse field method [51] exists which decreases computation time by about a factor of ten. The main drawback of this framework is that it is not possible for a new curve to appear spontaneously. In our workflow however, this is not a limitation since the goal of this step is to improve the shape of a previously found object.

While there exist many implementations of the Chan-Vese active contour algorithm, we settled on an implementation that also uses local image statistics to define the energy functional [52] which helps to segment objects that cannot easily be distinguished using global statistics. For every point along the curve, the foreground and background are described in terms of smaller local regions of radius r. A radius value of 10 pixels was used throughout this work.

The number of iterations is set to 100, even though shape refinement usually converges to the correct solution in fewer iterations. The regularization term controlling curve smoothness is set to 0.1 (on a scale from 0 to 1, where 0 indicates no penalty on the arc length of the curve).
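The region-based energy at the heart of the Chan–Vese method can be illustrated with a bare-bones numerical sketch. This omits the contour evolution, the length regularization, the sparse-field machinery [51], and the local-statistics variant [52] actually used; it keeps only the alternating mean-estimation / pixel-reassignment idea:

```python
import numpy as np

def chan_vese_sketch(img, init_mask, n_iter=100):
    """Illustrative core of the Chan-Vese region energy: alternately
    estimate foreground/background mean intensities, then reassign each
    pixel to the class whose mean is closer. Not the article's
    implementation (no contour, no curvature penalty)."""
    mask = init_mask.astype(bool)
    img = np.asarray(img, dtype=float)
    for _ in range(n_iter):
        c_in = img[mask].mean() if mask.any() else 0.0       # foreground mean
        c_out = img[~mask].mean() if (~mask).any() else 0.0  # background mean
        new_mask = (img - c_in) ** 2 < (img - c_out) ** 2    # pointwise energy test
        if np.array_equal(new_mask, mask):
            break                                            # converged
        mask = new_mask
    return mask
```

Starting from a seed covering only part of a dark axon, the iteration expands the mask to the full dark region, which is precisely the refinement role this step plays in the pipeline.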

All objects touching the image border or containing less than 10 pixels are removed because their morphological properties cannot be accurately measured. The lower size limit was found with a simulation where the image of a circle of known radius was reduced in size until the error on the measured radius became significant. Ideally, the FOV and spatial resolution would be adapted so that the morphological properties of even the smallest axons can be measured accurately. While the current iteration of our custom microscope does not allow such changes, this will be improved in the near future so that smaller axons will be included in subsequent studies. A recent resolution enhancement strategy could also offer further benefits [53].

At the completion of this stage of the analysis, we now have a binary image containing axon candidates whose shapes were refined to better represent the underlying image properties (Fig. 3(b)). This image contains true axons as well as a large number of false positives in areas between the myelinated axons.

3.1.3. Axon validation using morphological properties

The main objective of the axon validation stage is to eliminate false positives from the binary image of axon candidates. The validation is based on morphological properties that were chosen based on their ability to separate true axons from false positives.

To determine an optimal set of properties, we measured a total of ten morphological parameters for over 10,000 axons identified manually in eight CARS images. Next, we examined all possible combinations of parameters taken one to ten at a time. The data was standardized (zero mean and unit variance) to account for the difference in range of the parameters. Using the squared Euclidean distance of an object to the origin of the parameter set space as a common metric, the optimal set was chosen based on its ability to discern false positives from true axons.

The optimal set is composed of four parameters: the circularity, the perimeter solidity, the area solidity, and the concave perimeter fraction. The circularity is defined as 4πA/P² and its value is at most one, with equality only for a perfect circle. The perimeter and area solidity are defined in terms of the object's convex hull (CH) as PCH/P and A/ACH, respectively. Both values are less than one for a concave polygon. The concave perimeter fraction, defined as the ratio of the length of the concave perimeter sections to the object perimeter, Lconcave/P, is greater than zero for a concave object.
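The circularity metric is straightforward to compute; here is a sketch for an object represented as a polygon (the article computes these metrics on segmented pixel objects, so this vertex-based helper is illustrative only):

```python
import numpy as np

def circularity(poly):
    """Circularity 4*pi*A / P**2 of a closed polygon given as an
    (N, 2) array of vertices. Equals 1 for a circle, less otherwise."""
    x, y = poly[:, 0], poly[:, 1]
    # Shoelace formula for the enclosed area
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # Perimeter: sum of edge lengths, closing the polygon
    perim = np.sum(np.hypot(np.diff(x, append=x[0]), np.diff(y, append=y[0])))
    return 4 * np.pi * area / perim ** 2
```

A unit square gives π/4 ≈ 0.785, while a finely sampled circle approaches 1, showing how the metric separates round axons from ragged background holes.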

Unfortunately, we found that even the optimal choice of parameters showed an overlap in their distributions for the true and false axons. For this reason, the cutoff (in terms of the square Euclidean distance to the origin of the parameter set space) was chosen conservatively so as to preserve most of the true axons, and a second validation test is introduced later in the analysis. Any object rejected through the validation stage is added to the binary image of the background that is used in the myelin segmentation step. Figure 3(c) shows the axons (green) as well as the background (red) superimposed on the original image.

3.2. Myelin segmentation

3.2.1. Myelin segmentation in the straightened subspace image

Myelin segmentation begins with the creation of a straightened subspace [54] image around every axon. Because the shape of the myelin outer boundary is in essence a scaled version of the axon shape, this image transformation aims to simplify the segmentation by reshaping the myelin outer boundary in more or less a straight line.

It requires a prior shape which is given by the binary image of the axon. For a given axon, starting from its contour, the image intensity is interpolated along 72 lines radiating outward (∼ 5° step) and perpendicular to the axon boundary. Figure 4(a) shows the probing lines (black) around an axon (green) as well as the binary images of the other axons (blue) and the background (red) overlaid on the processed CARS image. The radial probing is performed on the processed CARS image as well as on the combined binary image of the axons and the background to produce two straightened subspace images. The intensity image is used to find the myelin boundary and the binary image (blue for axons and red for background) is used to restrict its position.
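The radial probing can be sketched with bilinear interpolation along rays. For simplicity this sketch casts rays from a single centre point, whereas the article probes perpendicular to the actual axon contour; the 72-ray count (∼5° step) follows the text:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def straightened_subspace(img, center, radii, n_angles=72):
    """Unwrap the neighbourhood of a point into a 'straightened' image
    by sampling the intensity along n_angles radial lines. Simplified:
    rays from one centre rather than normals to the axon boundary.
    Returns an array of shape (len(radii), n_angles), one column per ray."""
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)  # ~5 degree step
    r = np.asarray(radii, dtype=float)[:, None]
    rows = center[0] + r * np.sin(angles)[None, :]
    cols = center[1] + r * np.cos(angles)[None, :]
    # Bilinear interpolation of the image at the sample points
    return map_coordinates(np.asarray(img, dtype=float), [rows, cols], order=1)
```

In the unwrapped image a circular boundary becomes a nearly horizontal line, which is what makes the subsequent edge search one-dimensional per column.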


Fig. 4 Myelin segmentation in a transverse CARS image of mouse spinal cord. (a) The space around an axon (green) is probed along 72 radial lines to produce a straightened subspace image. (b) Sobel filter of the straightened subspace image with the myelin boundary (green). In both (a) and (b), the other axons are shown in blue and the background in red. (c) Segmented myelin. (d) The myelin validation stage uses the area overlap (yellow) as a metric to separate false (red) from true (green) candidates. (e) Connected objects are separated in pairs using a (f) marker-controlled watershed algorithm. (g) Final binary image with separated nerve fibers.


The myelin boundary is expected to coincide with regions where the intensity changes from high to low. Since the boundary is more or less a straight line in the straightened subspace image, the Sobel filter is used to produce a cost map that enhances the edges along the direction perpendicular to the boundary. However, because the myelinated axons are so tightly packed, clear edges are not always present along a line of sight linking two axons. For this reason, the search space for the boundary is restricted using prior knowledge collected during the axon segmentation, i.e., it is prohibited from going through pixels previously labeled as background or axons.

Once this is done, the boundary is found with a combination of three approaches. The first uses a minimal-path algorithm [55] to extract a low-cost path made continuous by extrapolating over fragmented edge sections, and has only one parameter controlling the linearity of the path. Although it usually extracts the expected solution, it favors low-cost solutions without considering their distances from the axon surfaces. Because of the way in which the straightened subspace images are created, higher cost solutions closer to the axon surface will sometimes generate more accurate solutions. Thus, either we look for the first minimum along all the probed lines starting from the axon surface, or we look for the first minimum lower than a specified cost threshold. The best of these three solutions is selected based on linearity by computing the standard deviation of the distances from the axons along the paths and choosing the solution with the smallest variation. The minimal-path algorithm is favored about 60% of the time. An example of the solution for the myelin outer boundary is shown in Fig. 4(b) as an overlay (green line) on the Sobel-filtered intensity image of the straightened subspace with the other axons and the background in blue and red.
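The two first-minimum candidates and the linearity-based selection can be sketched as follows (hypothetical helpers; the minimal-path candidate itself is not reproduced here):

```python
import numpy as np

def first_minimum_boundary(straightened, threshold=None):
    """For each probed line (column) of the straightened cost image,
    take the first local minimum moving away from the axon surface
    (row 0), optionally requiring its cost to fall below `threshold`.
    Returns one boundary row index per column (0 if none found)."""
    rows = []
    for col in straightened.T:
        idx = 0
        for i in range(1, len(col) - 1):
            if col[i] <= col[i - 1] and col[i] <= col[i + 1]:
                if threshold is None or col[i] < threshold:
                    idx = i
                    break
        rows.append(idx)
    return np.array(rows)

def most_linear(paths):
    """Pick the candidate boundary whose distance-from-axon profile
    varies least (smallest standard deviation), as the article does."""
    return min(paths, key=lambda p: np.std(p))
```

Because the straightened boundary should sit at a roughly constant distance from the axon surface, the smallest-standard-deviation rule is a natural tie-breaker between the three candidates.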

Finally, once the myelin boundary is found, it is projected back to the original image space to form a polygon centered on the axon. The polygon is then smoothed using a spline function and is used to create a binary mask (Fig. 4(c)) that is stored in a list.

3.2.2. Myelin validation using area overlap

The guiding principle for this stage of the validation is that a properly segmented myelin outer boundary should have little to no overlap with neighboring myelinated axons. Therefore, the degree of overlap can be useful to reject false positives that could not be differentiated from true axons based on their morphology during the axon validation stage. The overlap fraction (Λ) is defined for a given candidate as the ratio of the overlapping myelin area to the total myelin area.

The first step in the myelin validation procedure is to compute the overlap fraction for all objects and place those with a value of Λ above a certain threshold in a candidate buffer. Starting with the objects having the highest overlap fraction, we re-compute the value of Λ to account for objects previously removed from the buffer and dismiss the object if Λ is still above the threshold. The threshold value was determined empirically (ΛT = 40%) so as to maximize the gain in precision while minimizing the loss of sensitivity. An example is shown in Fig. 4(d) where the overlapping area is shown in yellow. In this case, the newly identified false candidate (red) has an overlap fraction well above the threshold and is easily separated from the true candidates (green).
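The overlap fraction itself reduces to a ratio of pixel counts between binary masks. A minimal sketch (the iterative buffer re-computation described above is omitted):

```python
import numpy as np

def overlap_fraction(myelin_mask, other_masks):
    """Overlap fraction: ratio of the candidate's myelin area shared
    with any neighbouring myelin object to its total myelin area."""
    myelin_mask = myelin_mask.astype(bool)
    neighbours = np.zeros_like(myelin_mask)
    for m in other_masks:                       # union of all neighbours
        neighbours |= m.astype(bool)
    return np.count_nonzero(myelin_mask & neighbours) / np.count_nonzero(myelin_mask)
```

A candidate whose fraction exceeds the empirical threshold (ΛT = 40% in the article) would be dismissed.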

3.2.3. Separation of touching myelin pairs

Now that the myelin boundaries have been segmented and most of the non-axons removed, we need to make sure that none of the objects are connected in the final binary image. Separating touching objects is generally regarded as a difficult task, but here the problem is drastically simplified with the a priori knowledge of the number of true objects. The robust separation of touching myelin is obtained by processing the objects one pair at a time using a marker-controlled watershed segmentation algorithm.

The first step consists of finding all the pairs of connected myelin and processing them two at a time (Fig. 4(e)). The convex hull of each object is computed to help prevent issues associated with over-segmentation. The two convex hulls are combined to form a single object and the Euclidean distance transform of the resulting binary image is computed. The foreground is marked to zero using the binary images of the related axons and the watershed function is invoked to separate the image into two domains (Fig. 4(f)). Finally, once this procedure has been done for all connected pairs, the watershed lines are used to produce a binary image where none of the segmented myelin is connected (Fig. 4(g)). Although this slightly reduces the myelin area, the watershed lines could be made vanishingly small by oversampling the images.

3.3. Segmentation accuracy

The accuracy of the proposed segmentation method is quantified by comparing the binary images produced at different stages of the algorithm to a ground truth. The ground truth was created manually for the position and shape of the axons. Through this process, the true positives (TP) and false positives (FP) are identified and two important parameters can be computed: the sensitivity and the precision of the segmentation. The sensitivity, or the true positive rate (TPR), is given by the ratio of the number of true positives (TP) over the number of objects in the ground truth. The precision, or positive predictive value (PPV), is given by the ratio of the number of true positives over the number of true positives plus the number of false positives (FP), i.e., the total number of segmented objects.

From a binary image resulting from the segmentation step under investigation, the axon candidates are extracted using feature-based boolean logic AND, i.e., objects in the segmentation binary image overlapping objects in the ground truth are selected. Then, the true positives are determined using two criteria based on similarity measures between the object pairs: the modified Hausdorff distance (MHD) [56] and the Dice coefficient, also known as the quotient of similarity (QS) [57].

The modified Hausdorff distance is a robust measure of similarity that works well for the purpose of object matching. Given two binary objects 𝔸 and 𝔹 and their boundaries 𝔸′ = {a1,...,aNa} and 𝔹′ = {b1,...,bNb}, the MHD is defined as:

MHD(𝔸, 𝔹) = max(D(𝔸′, 𝔹′), D(𝔹′, 𝔸′))
D(𝔸′, 𝔹′) = (1/Na) Σa∈𝔸′ d(a, 𝔹′)
where d(a, 𝔹′) = minb∈𝔹′ ‖a − b‖ is the distance between point a and the set of points 𝔹′, and ‖ · ‖ indicates the Euclidean distance. D(𝔸′, 𝔹′) represents the average distance from 𝔸′ to 𝔹′ and vice versa. As the objects 𝔸 and 𝔹 become more similar, the value of the MHD becomes smaller.

The quotient of similarity is defined as:

QS(𝔸,𝔹)=2N(𝔸𝔹)N(𝔸)+N(𝔹)
where ∩ is a pixel-based boolean AND and N(·) denotes the number of pixels in a set. The QS value ranges from 0 to 1, where 1 denotes identical objects.

From the visual comparison of the results and the ground truth, it was decided that an object with an MHD value lower than 3 or a QS value higher than 0.85 was similar enough to the ground truth to be considered a true positive. Any object not passing the similarity test is labeled as a false positive.
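Both similarity measures, and the acceptance rule built from them, are easy to state in code. A sketch following the definitions above (boundaries as point arrays, objects as binary masks):

```python
import numpy as np

def modified_hausdorff(A, B):
    """Modified Hausdorff distance between two boundaries given as
    (N, 2) point arrays: the larger of the two directed average
    point-to-set distances."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

def dice(mask_a, mask_b):
    """Quotient of similarity (Dice coefficient) of two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2 * np.count_nonzero(a & b) / (np.count_nonzero(a) + np.count_nonzero(b))

def is_true_positive(boundary_a, boundary_b, mask_a, mask_b):
    """Acceptance rule from the text: MHD < 3 or QS > 0.85."""
    return modified_hausdorff(boundary_a, boundary_b) < 3 or dice(mask_a, mask_b) > 0.85
```

Note that the pairwise-distance matrix makes this O(Na·Nb) per object pair, which is acceptable for boundary point sets of a few hundred points.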

Figure 5 shows a CARS image section with an overlay of the segmentation result that is color-coded to illustrate its comparison with the corresponding ground truth. The true positive axons and the false positive objects are indicated in green and red, respectively. The two false positive objects represent holes in the space between fibers which passed the two validation tests. In addition, the false negative axons, i.e., true axons that were rejected by one of the validation stages, are indicated in yellow and axons that were missed entirely in blue. We should note that a drawback of the morphological validation method is that it might prevent the detection of axons with an irregular shape (see some of the false negative objects in Fig. 5) and therefore limit the application of our method to studies where the axon shape is expected to stay regular. This could be avoided with the use of a multi-factorial validation method that could consider any other information obtained separately such as fluorescence, molecular order, etc. Finally, the segmented myelin is shown in cyan. Any object smaller than 10 pixels was rejected prior to the first validation stage.


Fig. 5 Section of a CARS image with overlay for the true positive axons (green), the false positive objects (red), false negative axons (yellow), missed axons (blue) and segmented myelin (cyan). Objects smaller than 10 pixels were discarded prior to the initial validation stage.


4. Results

4.1. Segmentation accuracy

The performance of our segmentation strategy was evaluated using a set of eight CARS images and two toluidine blue stained images of transverse spinal cord sections from healthy mice. The accuracy of the proposed method is measured both at the end of the axon refinement stage and after the two validation stages. This is summarized in Table 1.


Table 1. Segmentation accuracy evaluated on a set of eight CARS images and two toluidine blue stained images of transverse spinal cord sections from healthy mice. The sensitivity, or true positive rate (TPR), is the ratio of the number of true positives (TP) to the number of objects in the ground truth. The precision, or positive predictive value (PPV), is the ratio of the number of true positives to the number of true positives plus false positives (FP), i.e., to the total number of segmented objects.

Results shown in the “Refinement” columns in Table 1 indicate that the segmentation is extremely sensitive, missing only a few percent of the axons present in the ground truth. The median similarity value for all true positive objects identified after the refinement stage is 0.85 for the QS and 0.50 for the MHD. This, in conjunction with the visual inspection of the segmentation results, indicates a very good agreement between the axon shapes from the automatic segmentation and the ground truth.

Next, the behavior of the two validation stages is quantified. The axon validation stage (axon validation columns in Table 1) was designed to remove as many of the false positives as possible without significantly altering the sensitivity. A normalized distance threshold of 15 leads to a gain of 20% to 30% in precision in exchange for a decrease of no more than 2% in sensitivity. Finally, with the myelin validation stage (myelin validation columns in Table 1), the segmentation precision is increased to about 95% while the final sensitivity is reduced by ∼10%.
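The sensitivity and precision figures above follow directly from the object counts. As a sketch (the counts below are illustrative, not taken from Table 1):

```python
def sensitivity_precision(tp, fp, n_ground_truth):
    """TPR = TP / (objects in ground truth); PPV = TP / (TP + FP)."""
    return tp / n_ground_truth, tp / (tp + fp)

# Hypothetical counts: 100 axons in the ground truth, 90 found, 5 spurious.
tpr, ppv = sensitivity_precision(tp=90, fp=5, n_ground_truth=100)
print(f"sensitivity = {tpr:.2f}, precision = {ppv:.2f}")  # 0.90, 0.95
```

Note that a validation stage can only remove objects, so it can raise the precision but never the sensitivity, which is the trade-off quantified above.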

4.2. Morphology

This section showcases the ability of our method to retrieve morphological data from large mosaic images of nerve tissue. In particular, we used a cross section from the cervical region of the spinal cord of a healthy mouse. The cross section, approximately 1.7 mm in diameter, required 324 images to cover approximately 60% of its surface (∼7500 × 7500 pixels). Our nerve fiber segmentation algorithm revealed around 32,000 myelinated axons in the spinal cord white matter. Total computation time for the mosaic was 4 hours (∼2.2 seconds/axon) on a modestly powerful MacBook Pro using parallel processing on its dual-core processor; this time could be reduced on a computer with more, or more powerful, processor cores.

For each nerve fiber, standard morphometric measurements are computed from the segmentation results. From the two primitive parameters measured directly on the binary image (axon and fiber area), three parameters are derived: axon and fiber equivalent diameter, and g-ratio. The equivalent diameter is defined as the diameter of a circle with the same area as the object. The g-ratio, defined as the ratio of the axon diameter to the fiber diameter (i.e., axon plus myelin sheath), is computed from the equivalent diameters. The result of our segmentation is presented in Fig. 6(a), where the 32,000 myelin sheaths are shown as an overlay color-coded for the g-ratio. Such an overview is useful for revealing large-scale organization, such as the concentration of higher g-ratio fibers around the anterior median fissure on the ventral side of the spinal cord (middle bottom) or the cluster of low g-ratio fibers in the middle of the spinal cord on the dorsal side (middle top). A zoomed-in view of a region of interest (blue rectangle) is presented in Fig. 6(b). The g-ratio follows a normal distribution with an average of 0.5 and a standard deviation of 0.1. In Figs. 6(c) and 6(d), we present typical parameter couples as 2D histograms.
A small sub-distribution is noticeable in both histograms: it comprises a very small fraction (< 1%) of fibers for which the myelin outer diameter is underestimated (see arrows in Fig. 6(b)). This double ring structure could be the result of a preparation artifact, but it is also consistent with the appearance of Schmidt-Lanterman incisures in transverse cuts [58]. The algorithm makes no attempt to handle these rare cases differently.
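The derived parameters reduce to a few lines of arithmetic. The sketch below is our own paraphrase of the definitions in the text (areas in pixels assumed), not the authors' code:

```python
import numpy as np

def morphometry(axon_area, fiber_area):
    """Equivalent diameters and g-ratio from the two primitive areas.

    The equivalent diameter is that of a circle with the same area;
    the g-ratio is the axon diameter over the fiber (axon + myelin) diameter.
    """
    d_axon = 2.0 * np.sqrt(axon_area / np.pi)
    d_fiber = 2.0 * np.sqrt(fiber_area / np.pi)
    return d_axon, d_fiber, d_axon / d_fiber

# A fiber whose total area is four times its axon's area has g = sqrt(1/4),
# i.e. the mean g-ratio of 0.5 reported for the mosaic.
print(morphometry(axon_area=100.0, fiber_area=400.0)[2])  # ≈ 0.5
```

Because both diameters share the factor 2/√π, the g-ratio equals √(axon area / fiber area), so it can also be computed without forming the diameters at all.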


Fig. 6 Nerve fiber segmentation in a CARS mosaic from a transverse section of healthy mouse spinal cord. (a) Myelin sheaths are shown as an overlay, color-coded to the value of the g-ratio. (b) Zoomed-in view of the blue region of interest around the anterior median fissure on the ventral side of the spinal cord. The white arrows show examples where the myelin outer diameter is underestimated. Morphometric parameter 2D histograms of (c) the g-ratio versus axon equivalent diameter and (d) axon equivalent diameter versus fiber equivalent diameter.


5. Conclusion

We have shown in this manuscript how nerve fiber morphometric information can be extracted from CARS images of nervous tissue. Valuable information is readily obtained from these images using an automatic segmentation algorithm developed specifically for the task of classifying each pixel as myelin, axon, or background, as well as assigning it to a given nerve fiber. This strategy could be applied to many different situations (demyelinating diseases, nerve injuries, spinal cord trauma, myelination during development, etc.). The critical insight was that pixel classification must be informed by the geometry before being considered final, and that the geometry is highly specific to the problem at hand. We proceeded by first identifying axon candidates, which are then filtered based on shape parameters to remove obvious false positives. In the second stage, where myelin pixels are identified, if the resulting myelin conflicts with the myelin of other axons, the myelin sheath with the most conflicts is assumed incorrect, while the others are confirmed as myelin. The algorithm was tested against manually segmented images with great success. Finally, it was used for the robust segmentation of large-scale CARS images in which local clusters can be recognized. In future work, we plan to use this method to quickly and accurately measure small changes in myelination in the earliest stages of demyelinating diseases such as multiple sclerosis, which would dramatically improve our understanding of how such debilitating diseases begin.

Acknowledgments

The authors would like to acknowledge the CIHR Emerging Team Program, Canada Research Chair program, CREATE Training Program, and Neurophysics Training Program. This investigation was supported in part by a postdoctoral fellowship from NSERC awarded to E. Bélanger.

References and links

1. W. A. H. Rushton, “A theory of the effects of fibre size in medullated nerve,” J. Physiol. 115, 101–122 (1951). [PubMed]  

2. K.-A. Nave, “Myelination and the trophic support of long axons,” Nature Rev. Neurosci. 11, 275–283 (2010). [CrossRef]  

3. R. F. Dunn, D. P. O’Leary, and W. E. Kumley, “Quantitative analysis of micrographs by computer graphics,” J. Microsc. 105, 205–213 (1975). [CrossRef]   [PubMed]  

4. M. A. Matthews, “An electron microscopic study of the relationship between axon diameter and the initiation of myelin production in the peripheral nervous system,” Anat. Rec. 161, 337–351 (1968). [CrossRef]   [PubMed]  

5. D. P. Ewart, W. M. Kuzon Jr., J. S. Fish, and N. H. McKee, “Nerve fibre morphometry: a comparison of techniques,” J. Neurosci. Meth. 29, 143–150 (1989). [CrossRef]  

6. R. L. Friede and W. Beuche, “Combined scatter diagrams of sheath thickness and fibre calibre in human sural nerves: changes with age and neuropathy,” J. Neurol. Neurosurg. Psychiatry 48, 749–756 (1985). [CrossRef]   [PubMed]  

7. E. Meijering, “Cell Segmentation: 50 Years Down the Road [Life Sciences],” IEEE Signal Processing Mag. 29, 140–145 (2012). [CrossRef]  

8. S. Uchida, “Image processing and recognition for biological images,” Dev. Growth. Differ. 55, 523–549 (2013). [CrossRef]   [PubMed]  

9. P. Mezin, C. Tenaud, J. L. Bosson, and P. Stoebner, “Morphometric analysis of the peripheral nerve: advantages of the semi-automated interactive method,” J. Neurosci. Meth. 51, 163–169 (1994). [CrossRef]  

10. D. A. Hunter, A. Moradzadeh, E. L. Whitlock, M. J. Brenner, T. M. Myckatyn, C. H. Wei, T. H. H. Tung, and S. E. Mackinnon, “Binary imaging analysis for comprehensive quantitative histomorphometry of peripheral nerve,” J. Neurosci. Meth. 166, 116–124 (2007). [CrossRef]  

11. H. L. More, J. Chen, E. Gibson, J. M. Donelan, and M. F. Beg, “A semi-automated method for identifying and measuring myelinated nerve fibers in scanning electron microscope images,” J. Neurosci. Meth. 201, 149–158 (2011). [CrossRef]  

12. G. K. Frykman, H. G. Rutherford, and I. R. Neilsen, “Automated nerve fiber counting using an array processor in a Multi-Mini Computer System,” J. Med. Syst. 3, 81–94 (1979). [CrossRef]  

13. T. J. Ellis, D. Rosen, and J. B. Cavanagh, “Automated measurement of peripheral nerve fibres in transverse section,” J. Biomed. Eng. 2, 272–280 (1980). [CrossRef]   [PubMed]  

14. I. R. Zimmerman, J. L. Karnes, P. C. O’Brien, and P. J. Dyck, “Imaging system for nerve and fiber tract morphometry: components, approaches, performance, and results,” J. Neuropath. Exp. Neur. 39, 409–419 (1980). [CrossRef]   [PubMed]  

15. Y. Usson, S. Torch, and R. Saxod, “Morphometry of human nerve biopsies by means of automated cytometry: assessment with reference to ultrastructural analysis,” Anal. Cell. Pathol. 3, 91–102 (1991). [PubMed]  

16. E. Romero, O. Cuisenaire, J. F. Denef, J. Delbeke, B. Macq, and C. Veraart, “Automatic morphometry of nerve histological sections,” J. Neurosci. Meth. 97, 111–122 (2000). [CrossRef]  

17. B. Weyn, M. van Remoortere, R. Nuydens, T. Meert, and G. van de Wouwer, “A multiparametric assay for quantitative nerve regeneration evaluation,” J. Microsc. 219, 95–101 (2005). [CrossRef]   [PubMed]  

18. F. Urso-Baiarda and A. O. Grobbelaar, “Practical nerve morphometry,” J. Neurosci. Meth. 156, 333–341 (2006). [CrossRef]  

19. E. Jurrus, A. R. C. Paiva, S. Watanabe, J. R. Anderson, B. W. Jones, R. T. Whitaker, E. M. Jorgensen, R. E. Marc, and T. Tasdizen, “Detection of neuron membranes in electron microscopy images using a serial neural network architecture,” Med. Image Anal. 14, 770–783 (2010). [CrossRef]   [PubMed]  

20. X. Zhao, Z. Pan, J. Wu, G. Zhou, and Y. Zeng, “Automatic identification and morphometry of optic nerve fibers in electron microscopy images,” Comput. Med. Imag. Grap. 34, 179–184 (2010). [CrossRef]  

21. M. Gierthmuehlen, T. M. Freiman, K. Haastert-Talini, A. Mueller, J. Kaminsky, T. Stieglitz, and D. T. T. Plachta, “Computational tissue volume reconstruction of a peripheral nerve using high-resolution light-microscopy and reconstruct,” PLOS ONE 8, e66191 (2013). [CrossRef]   [PubMed]  

22. Y. L. Fok, J. K. Chan, and R. T. Chin, “Automated analysis of nerve-cell images using active contour models,” IEEE Trans. Med. Imag. 15, 353–368 (1996). [CrossRef]  

23. Y.-Y. Wang, Y.-N. Sun, C.-C. K. Lin, and M.-S. Ju, “Segmentation of nerve fibers using multi-level gradient watershed and fuzzy systems,” Artif. Intell. Med. 54, 189–200 (2012). [CrossRef]   [PubMed]  

24. Q. Li, Z. Chen, X. He, Y. Wang, H. Liu, and Q. Xu, “Automatic identification and quantitative morphometry of unstained spinal nerve using molecular hyperspectral imaging technology,” Neurochem. Int. 61, 1375–1384 (2012). [CrossRef]   [PubMed]  

25. Y. Yang, F. Li, L. Gao, Z. Wang, M. J. Thrall, S. S. Shen, K. K. Wong, and S. T. C. Wong, “Differential diagnosis of breast cancer using quantitative, label-free and molecular vibrational imaging,” Biomed. Opt. Express 2, 2160–2174 (2011). [CrossRef]   [PubMed]  

26. L. Gao, H. Zhou, M. J. Thrall, F. Li, Y. Yang, Z. Wang, P. Luo, K. K. Wong, G. S. Palapattu, and S. T. C. Wong, “Label-free high-resolution imaging of prostate glands and cavernous nerves using coherent anti-Stokes Raman scattering microscopy,” Biomed. Opt. Express 2, 915–926 (2011). [CrossRef]   [PubMed]  

27. A. A. Hammoudi, F. Li, L. Gao, Z. Wang, M. J. Thrall, Y. Massoud, and S. T. C. Wong, “Automated Nuclear Segmentation of Coherent Anti-Stokes Raman Scattering Microscopy Images by Coupling Superpixel Context Information with Artificial Neural Networks,” in “Machine Learning in Medical Imaging,”, vol. 7009 of Lecture Notes in Computer Science, K. Suzuki, F. Wang, D. Shen, and P. Yan, eds. (Springer Berlin Heidelberg, Berlin, Heidelberg, 2011), pp. 317–325. [CrossRef]  

28. A. Medyukhina, T. Meyer, M. Schmitt, B. F. M. Romeike, B. Dietzek, and J. Popp, “Towards automated segmentation of cells and cell nuclei in nonlinear optical microscopy,” J. Biophotonics 5, 878–888 (2012). [CrossRef]   [PubMed]  

29. A. Medyukhina, T. Meyer, S. Heuke, N. Vogler, B. Dietzek, and J. Popp, “Automated seeding-based nuclei segmentation in nonlinear optical microscopy,” Appl. Opt. 52, 6979–6994 (2013). [CrossRef]   [PubMed]  

30. A. Zumbusch, G. R. Holtom, and X. S. Xie, “Three-dimensional vibrational imaging by coherent anti-Stokes Raman scattering,” Phys. Rev. Lett. 82, 4142–4145 (1999). [CrossRef]  

31. C. L. Evans and X. S. Xie, “Coherent anti-stokes Raman scattering microscopy: chemical imaging for biology and medicine,” Annu. Rev. Anal. Chem. 1, 883–909 (2008). [CrossRef]  

32. S. Bégin, E. Bélanger, S. Laffray, R. Vallée, and D. C. Côté, “In vivo optical monitoring of tissue pathologies and diseases with vibrational contrast,” J. Biophotonics 2, 632–642 (2009). [CrossRef]   [PubMed]  

33. H. Wang, Y. Fu, P. Zickmund, R. Shi, and J.-X. Cheng, “Coherent Anti-Stokes Raman Scattering Imaging of Axonal Myelin in Live Spinal Tissues,” Biophys. J. 89, 581–591 (2005). [CrossRef]   [PubMed]  

34. A. P. Kennedy, J. Sutcliffe, and J.-X. Cheng, “Molecular composition and orientation in myelin figures characterized by coherent anti-stokes Raman scattering microscopy,” Langmuir 21, 6478–6486 (2005). [CrossRef]   [PubMed]  

35. Y. Fu and J.-X. Cheng, “Imaging of Myelin by Coherent Anti-Stokes Raman Scattering Microscopy,” in Animal Models of Acute Neurological Injuries II, (Humana Press, 2012), pp. 281–291. [CrossRef]  

36. R. Galli, O. Uckermann, M. J. Winterhalder, K. H. Sitoci-Ficici, K. D. Geiger, E. Koch, G. Schackert, A. Zumbusch, G. Steiner, and M. Kirsch, “Vibrational spectroscopic imaging and multiphoton microscopy of spinal cord injury,” Anal. Chem. 84, 8707–8714 (2012). [CrossRef]   [PubMed]  

37. Y. Fu, H. Wang, T. B. Huff, R. Shi, and J.-X. Cheng, “Coherent anti-Stokes Raman scattering imaging of myelin degradation reveals a calcium-dependent pathway in lyso-PtdCho-induced demyelination,” J. Neurosci. Res. 85, 2870–2881 (2007). [CrossRef]   [PubMed]  

38. J. Imitola, D. C. Côté, S. Rasmussen, X. S. Xie, Y. Liu, T. Chitnis, R. L. Sidman, C. P. Lin, and S. J. Khoury, “Multimodal coherent anti-Stokes Raman scattering microscopy reveals microglia-associated myelin and axonal dysfunction in multiple sclerosis-like lesions in mice,” J. Biomed. Opt. 16, 021109 (2011). [CrossRef]   [PubMed]  

39. Y. Fu, T. J. Frederick, T. B. Huff, G. E. Goings, S. D. Miller, and J.-X. Cheng, “Paranodal myelin retraction in relapsing experimental autoimmune encephalomyelitis visualized by coherent anti-Stokes Raman scattering microscopy,” J. Biomed. Opt. 16, 106006 (2011). [CrossRef]   [PubMed]  

40. Y. Shi, D. Zhang, T. B. Huff, X. Wang, R. Shi, X.-M. Xu, and J.-X. Cheng, “Longitudinal in vivo coherent anti-Stokes Raman scattering imaging of demyelination and remyelination in injured spinal cord,” J. Biomed. Opt. 16, 106012 (2011). [CrossRef]   [PubMed]  

41. C. W. Freudiger, R. Pfannl, D. A. Orringer, B. G. Saar, M. Ji, Q. Zeng, L. Ottoboni, Y. Wei, W. Ying, C. Waeber, J. R. Sims, P. L. De Jager, O. Sagher, M. A. Philbert, X. Xu, S. Kesari, X. S. Xie, and G. S. Young, “Multicolored stain-free histopathology with coherent Raman imaging,” Lab. Invest. 92, 1492–1502 (2012). [CrossRef]   [PubMed]  

42. E. Bélanger, S. Bégin, S. Laffray, Y. de Koninck, R. Vallée, and D. C. Côté, “Quantitative myelin imaging with coherent anti-Stokes Raman scattering microscopy: alleviating the excitation polarization dependence with circularly polarized laser beams,” Opt. Express 17, 18419–18432 (2009). [CrossRef]  

43. E. Bélanger, F. P. Henry, R. Vallée, M. A. Randolph, I. E. Kochevar, J. M. Winograd, C. P. Lin, and D. C. Côté, “In vivo evaluation of demyelination and remyelination in a nerve crush injury model,” Biomed. Opt. Express 2, 2698–2708 (2011). [CrossRef]   [PubMed]  

44. S. Bégin, E. Bélanger, S. Laffray, B. Aubé, E. Chamma, J. Bélisle, S. Lacroix, Y. de Koninck, and D. Côté, “Local assessment of myelin health in a multiple sclerosis mouse model using a 2D Fourier transform approach,” Biomed. Opt. Express 4, 2003–2014 (2013). [CrossRef]   [PubMed]  

45. I. Veilleux, J. A. Spencer, D. P. Biss, D. C. Côté, and C. P. Lin, “In vivo cell tracking with video rate multimodality laser scanning microscopy,” IEEE J. Sel. Top. Quantum Electron. 14, 10–18 (2008). [CrossRef]  

46. S. Preibisch, S. Saalfeld, and P. Tomancak, “Globally optimal stitching of tiled 3D microscopic image acquisitions,” Bioinformatics 25, 1463–1465 (2009). [CrossRef]   [PubMed]  

47. K. Zuiderveld, “Contrast limited adaptive histogram equalization,” in Graphics Gems IV, P. S. Heckbert, ed. (Academic Press Professional, Inc., 1994), pp. 474–485. [CrossRef]  

48. M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: Active contour models,” Int. J. Comput. Vision 1, 321–331 (1988). [CrossRef]  

49. T. F. Chan and L. A. Vese, “Active contours without edges,” Trans. Img. Proc. 10, 266–277 (2001). [CrossRef]  

50. J. A. Sethian, Level Set Methods & Fast Marching Methods : Evolving Interfaces in Computational Geometry, Fluid Mechanics, Computer Vision, and Materials Science (Cambridge University Press, 1999).

51. R. T. Whitaker, “A Level-Set Approach to 3D Reconstruction from Range Data,” Int. J. Comput. Vision 29, 203–231 (1998). [CrossRef]  

52. S. Lankton and A. Tannenbaum, “Localizing region-based active contours,” IEEE Trans. Image Processing 17, 2029–2039 (2008). [CrossRef]  

53. A. Gasecka, A. Daradich, H. Dehez, M. Piché, and D. Côté, “Resolution and contrast enhancement in coherent anti-Stokes Raman-scattering microscopy,” Opt. Lett. 38, 4510–4513 (2013). [CrossRef]   [PubMed]  

54. R. Chav, T. Cresson, C. Kauffmann, and J. A. de Guise, “Method for fast and accurate segmentation processing from prior shape: application to femoral head segmentation on x-ray images,” in “SPIE Medical Imaging,” vol. 7259 (2009), vol. 7259, pp. 72594Y.

55. L. Vincent, “Minimal path algorithms for the robust detection of linear features in gray images,” in Proceedings of the Fourth International Symposium on Mathematical Morphology and Its Applications to Image and Signal Processing, (Kluwer Academic Publishers, 1998), ISMM ’98, pp. 331–338.

56. M. P. Dubuisson and A. K. Jain, “A modified Hausdorff distance for object matching,” in “Proceedings of the 12th IAPR International Conference on Pattern Recognition,”, vol. 1 (1994), vol. 1, pp. 566–568.

57. L. R. Dice, “Measures of the Amount of Ecologic Association Between Species,” Ecology 26, 297 (1945). [CrossRef]  

58. M. Ross and W. Pawlina, Histology (Lippincott Williams & Wilkins, 2006).



Figures (6)

Fig. 1 Myelin CARS image of a transverse spinal cord section from a healthy mouse, split in two to show a raw unprocessed image section (a) and a section preprocessed with contrast-limited adaptive histogram equalization (b). The image is 752 × 500 pixels in size and an average of 30 frames.

Fig. 2 Flow chart of the two-part algorithm. (Left) Axon segmentation: 1) detection of axon candidates by extended-minima transform, 2) shape refinement with an active contour method and 3) candidate validation based on their shape. This part produces two binary images of the axon candidates and the background. (Right) Myelin segmentation: 1) segmentation of the myelin outer boundary in the straightened subspace images of the axon candidates, 2) candidate validation based on area overlap and 3) separation of touching myelin pairs by watershed technique. The final output is a binary image representing the myelin sheaths.

Fig. 3 Axon segmentation in a transverse CARS image of mouse spinal cord. (a) Axon detection with extended-minima transform. (b) Segmentation refinement using an active contour method. (c) Object validation separates the axons (green) from the background (red).

Fig. 4 Myelin segmentation in a transverse CARS image of mouse spinal cord. (a) The space around an axon (green) is probed along 72 radial lines to produce a straightened subspace image. (b) Sobel filter of the straightened subspace image with the myelin boundary (green). In both (a) and (b), the other axons are shown in blue and the background in red. (c) Segmented myelin. (d) The myelin validation stage uses the area overlap (yellow) as a metric to separate false (red) from true (green) candidates. (e) Connected objects are separated in pairs using a (f) marker-controlled watershed algorithm. (g) Final binary image with separated nerve fibers.

