Optica Publishing Group

Deep learning-based vessel extraction in 3D confocal microscope images of cleared human glioma tissues

Open Access

Abstract

Comprehensive visualization and accurate extraction of tumor vasculature are essential to study the nature of glioma. Tissue clearing technology now enables 3D visualization of human glioma vasculature at micron resolution, but current vessel extraction schemes cannot adequately cope with the extraction of complex tumor vessels with high disruption and irregularity under realistic conditions. Here, we developed a framework, FineVess, based on deep learning to automatically extract glioma vessels from confocal microscope images of cleared human tumor tissues. In the framework, a customized deep learning network, named 3D ResCBAM nnU-Net, was designed to segment the vessels, and a novel pipeline based on preprocessing and post-processing was developed to refine the segmentation results automatically. Applied to a practical dataset, FineVess extracted variable and incomplete vessels with high accuracy in challenging 3D images, outperforming traditional and state-of-the-art schemes. For the extracted vessels, we calculated vascular morphological features, including fractal dimension and vascular wall integrity, across tumor grades, and verified the vascular heterogeneity through quantitative analysis.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Three-dimensional (3D) visualization and accurate extraction of glioma microvasculature can help to understand the mechanisms of tumor angiogenesis and tumor biological behavior. It is known that the survival and progression of gliomas highly depend on the tumor microenvironment (TME), which includes a variety of cells and components. Blood vessels are one of the main components of the TME and play a key role in tumor growth. Vascular heterogeneity is closely related to glioma cell invasion and resistance. During glioma development, the vessels show heterogeneity in 3D spatial configuration: the density, morphology, distribution, and wall structural integrity of vessels differ across glioma grades. Therefore, extracting human glioma vessels and analyzing their microanatomical structures at the 3D level can help to elucidate the mechanisms of the complex TME, angiogenesis, and tumor cell invasion. Tissue clearing technology renders tissue transparent and allows deep 3D imaging of thick specimens without physical sectioning, overcoming the shortcomings of traditional histological methods based on two-dimensional (2D) tissue sections [1,2]. It has become a classic technology for 3D exploration of whole brains at the micrometer scale, helping provide a complete understanding of cerebrovascular networks [3–7]. In recent years, tissue clearing, combined with vessel labeling and optical imaging techniques, has successfully revealed 3D microscopic information of glioma microvasculature [1,8,9], and the vascular morphology in the 3D images has provided key information for diagnosis and treatment.

Many studies have addressed the extraction of vessels from 3D images of cleared tissues. The extraction approaches can be grouped into three categories: software, simple image processing algorithms, and machine learning methods. Popular software packages include the Vascular Modelling Toolkit [10], Amira [11], ImageJ [10,11], and Imaris [8,10,12]. Although they are user-friendly, they often require considerable manual interaction. In the second category, image processing algorithms such as Otsu thresholding, morphological operations, and filter-based algorithms have been employed to build pipelines for segmenting the vasculature in cleared tissues [4,5,13]. For example, TubeMap developed a multi-path binarization pipeline for rapid vasculature segmentation of the whole brain [5]. These pipelines automated the extraction and can scale to large volumetric data, but most of the studies achieved only limited segmentation accuracy. In the third category, machine learning and deep learning algorithms have been used to build customized segmentation models and have boosted the performance of automatic blood vessel extraction in 3D images [14,15]. For example, random forests [16,17] and Markov random fields [3] have been used for vessel segmentation in whole mouse organs such as lung, ovary, stomach, liver, and brain, while deep convolutional neural networks have been used for vessel segmentation [6] and vascular lumen filling [5] in whole mouse brains. Most recently and notably, a 3D U-Net architecture was applied to segment the blood vessels in healthy mouse hearts and achieved high Dice scores [18]. However, deep learning has rarely been used for the extraction of glioma vessels; existing works either employed pre-trained deep learning models in the software package ZEN Intellesis [9] or trained a customized model based on 3D U-Net [19]. These methods are quite preliminary, and the performance still has room for improvement. Compared to vessels in other tissues, the vessels in gliomas are mostly endothelium-labeled and have severe morphological irregularity, so it is challenging for both manual and automated approaches to segment the complete vascular structures.

Here, we establish a novel framework, FineVess, based on deep learning for fine, automated extraction of glioma vessels from 3D confocal microscope images of cleared human tissues. FineVess consists of two main steps: (1) automatic and accurate segmentation of glioma blood vessels via the proposed 3D ResCBAM nnU-Net, which has high performance and generalization and low computational demand; and (2) refinement of the segmentation masks via designed preprocessing and post-processing methods, which cope with the incomplete and hollow structure of glioma vessels. FineVess was subsequently applied to quantify and analyze vascular morphological features of gliomas of different grades. Compared with manual segmentation and existing methods such as 3D U-Net and nnU-Net, given a limited amount of data with sparse annotations, our framework produces better segmentation results for variable and complex tumor vascular images and helps achieve reliable quantitative analysis.

2. Materials and methods

2.1 Data

In this work, we used 3D glioma vessel data obtained by optical clearing, immunolabeling, and confocal microscope imaging to develop and test the FineVess framework. The data acquisition and dataset construction methods are described as follows. In addition, two available vessel datasets from other works were employed to verify the generalizability of our segmentation methods.

2.1.1 Tissue sample preparation and imaging

Glioma samples from the right frontal cerebral region were obtained during surgeries at Zhujiang Hospital. For controls, normal brain tissues from the same region of non-tumorous brains were used. Ethical clearance for the study protocol was granted by the Medical Ethics Committee of the aforementioned hospital (Approval Numbers: 2018-SJWK-004 and 2020-YBK-001-02). Additionally, informed consent was secured from every participating patient. Samples were fixed in 10% neutral buffered formalin (Wexis, Guangzhou, China) for 5-7 days at 4°C. Post-fixation, samples were rinsed in phosphate-buffered saline (PBS; Solarbio, Beijing, China) and embedded in 4% agarose gel (Sigma, USA). The solidified agarose-embedded tissue blocks were sectioned into 500 µm slices using a vibratome (Koster, USA). These sections underwent delipidation in sodium borate clearing solution (SBC; 4% SDS in 0.2 M sodium borate, pH 8.5) for 3-5 days at 37-55°C, followed by a wash in PBS with Triton X-100 (PBST; 0.3% Triton X-100 (vol/vol) and 0.01% sodium azide (wt/vol)). For immunolabeling, sections were incubated with a primary antibody (rabbit anti-CD31 antibody; Abcam, Shanghai, China) in PBST for two days at 37°C, washed, and incubated with a secondary antibody (goat anti-rabbit antibody, Alexa Fluor 633 conjugate; Sigma, USA) overnight in darkness. Finally, sections were washed and immersed in OPTIClear solution for six hours.

After optical clearing, tissues were arranged on cell and tissue culture dishes coated with OPTIClear solution. For 3D imaging, a Leica SP8 confocal microscope was used, scanning along the z-axis to compile volumetric data with a z-step of 0.799 µm. The data were acquired with a ×20 objective (Plan-Apochromat CS; numerical aperture, 0.70; working distance, 0.59 mm), 8-bit depth, a voxel size of 0.568 µm, and 1024 voxels along the x and y axes. A spectral detection range of 570-700 nm was selected to enhance image clarity and precision, mitigating autofluorescence. For each sample, 3-7 regions of interest (ROIs) were imaged. Imaging parameters and depth range were adjusted to ensure an appropriate signal-to-noise ratio.

2.1.2 Dataset

Vascular volumes were obtained from 12 patient samples. Tumor cases were clinically diagnosed as low-grade (mainly grade II astroglioma) or high-grade (grade IV glioblastoma) gliomas according to the latest World Health Organization (WHO) classification of brain tumors (Table S1). As the training of deep learning models needs annotated data as ground truth, we first cut the volumes into subvolumes, each a quarter of a whole volume, and then annotated 14 subvolumes by manually labeling one of every ten slices to ease the annotation burden (Fig. 1, Table S2). The ground truth was screened and refined several times by multiple professionals and experts in tissue preparation and imaging. Vessels with lumens, incomplete walls, or staining issues were labeled as filled, intact tubes so that the deep learning model could learn the full vessel shape.


Fig. 1. Subvolumes for model development shown as top views of 3D images. The glioma vasculature is heterogeneous between different patient samples as well as between different regions of the same sample. The vessels have variable scale, morphology, and fluorescence intensity. The images have variable contrast and brightness, while some images have strong background interference. Scale bar = 50 µm.


Two-fold cross validation was used in our experiments, and each of the two partitioned sets consisted of 7 subvolumes (Fig. 1). The two sets came from different patients, and both contained subvolumes with various vascular appearances and grades. This data partitioning ensures the independence and diversity of the training and testing sets, which is important for small-scale training data. Details of the 14 subvolumes are summarized in Table S2 and visualized in Fig. 1.

2.1.3 Datasets from other works

Two public vessel datasets from other works [6,20] were employed to further verify the generalizability of our vessel extraction methods.

The first dataset consists of 70 annotated mouse cerebrovascular volumes from two-photon fluorescence microscopy [20]. Vasculature was visualized by creating a cranial window over the parietal bone and injecting Texas Red 70 kDa dextran. 512 × 512 images at different depths were collected with a ×25 water-immersion objective lens at a lateral resolution of 0.621-0.994 µm/pixel and a z-step of 1-10 µm. The ground truth was created with traditional image processing methods, a 2D U-Net, and manual corrections.

The second dataset has 11 annotated vascular volumes imaged with light-sheet microscopy for analyzing the cerebrovasculature of whole mouse brains [6]. It was acquired by staining the entire vasculature with wheat germ agglutinin (WGA) and Evans blue dyes, clearing samples with 3DISCO clearing solutions, and imaging with a ×12 LaVision objective. To match our single-channel network, we used only the first channel (WGA staining), which already contains all vessels present in the manual annotation. The image resolution is 2.83 µm × 2.83 µm × 4.99 µm and the size is 500 × 500 × 50 voxels.

2.2 Deep learning network for vessel segmentation

Overview of the FineVess framework is shown in Fig. 2. The main part of FineVess is a deep learning network, named 3D ResCBAM nnU-Net, for vessel segmentation. The network is based on the most popular segmentation model, 3D nnU-Net [21], which has achieved state-of-the-art performance and high feasibility in a variety of biomedical image segmentation tasks [22–25]. The 3D nnU-Net automatically configures a standardized and suitable segmentation pipeline for a given task with small memory consumption, making it the best baseline to segment 3D images from different imaging modalities and sites [26,27]. Considering our data characteristics, we improved the baseline 3D nnU-Net by adjusting the network architecture and designing training strategies to better fit our task, and built the 3D ResCBAM nnU-Net.


Fig. 2. Overview of FineVess framework proposed in this study. FineVess contains vessel segmentation with 3D ResCBAM nnU-Net and refinement of the segmentation masks with image preprocessing and post-processing. It is then applied to the feature quantification and analysis of vascular heterogeneity in different grades of gliomas.


2.2.1 Architecture of the 3D ResCBAM nnU-Net

Compared with the classical 3D nnU-Net, our network model introduced multiple convolutional block attention module (CBAM) blocks and added residual connections in the architecture to improve the segmentation performance (Fig. 3). The classical architecture of 3D nnU-Net has an encoder and a decoder, each of which has 6 resolution stages and 5 sampling stages, and in each resolution stage, two blocks containing convolution layers, instance normalization layer, and leaky ReLU layer are used. Below we describe the two modifications based on the classical 3D nnU-Net.


Fig. 3. Network architecture of 3D ResCBAM nnU-Net (a) and 3D CBAM block (b).


Introducing CBAM blocks. Considering that attention modules could help models focus on meaningful information in images [28,29], we tested multiple attention modules and chose CBAM to be incorporated in our model. The CBAM blocks followed the output of the convolution blocks in each resolution level of the decoder of 3D nnU-Net.

The CBAM module [30], different from the squeeze-and-excitation (SE) [31] and spatial and channel squeeze-and-excitation (scSE) [32] blocks, extracts both max-pooled and average-pooled features in the channel and spatial attention paths, respectively. Since vessels have larger gray values than the background in our images, both the average-pooled and max-pooled features may be effective in our task. The introduction of CBAM blocks into the 3D nnU-Net network can enhance meaningful features and suppress weak features by helping up-sampling recover the spatial information and recalibrating the directions of the learned feature maps.

Structure of the 3D CBAM block is shown in Fig. 3(b). It contains a channel attention module and a spatial attention module. Given a feature map $F \in {R^{C \times H \times W \times D}}$ as input, the CBAM first infers a one-dimensional (1D) channel attention map ${M_c} \in {R^{C \times 1 \times 1 \times 1}}$, and then a 3D spatial attention map ${M_s} \in {R^{1 \times H \times W \times D}}$. The overall process can be summarized as:

$$\begin{array}{l} {F^{\prime}} = {M_c}(F) \otimes F\\ {F^{\prime\prime}} = {M_s}({F^{\prime}}) \otimes {F^{\prime}}, \end{array}$$
where ${\otimes}$ denotes element-wise multiplication, ${F^{\prime}}$ represents channel-refined feature map, and ${F^{^{\prime\prime}}}$ represents the final refined feature map.

In the channel attention module, the spatial information of the inputted feature map is firstly aggregated through average-pooling and max-pooling operations. Both aggregated results are sequentially forwarded to a shared multi-layer perceptron (MLP), element-wise summation, and a sigmoid function to obtain 1D channel attention map. The process is briefly represented as:

$${M_c}(F)= \sigma ({MLP({AvgPool(F )} )+ MLP({MaxPool(F )} )} ),$$
where $\sigma$ denotes the sigmoid function.

In the spatial attention module, the channel information of channel-refined feature map is firstly aggregated through two pooling operations, and then the aggregated results are concatenated and convolved by a convolution layer to produce 3D spatial attention map:

$${M_s}(F^\prime )= \sigma ({Con{v^{7 \times 7 \times 7}}({[{AvgPool(F^\prime );MaxPool(F^\prime )} ]} )} ),$$
where $Con{v^{7 \times 7 \times 7}}$ represents a convolution operation with kernel size of $7 \times 7 \times 7$.

Adding residual connections. It has been reported that residual connections can enhance information flow, aid gradient propagation during training, and preserve original feature map information [33–36]. For this task, it is expected that the residual connection can convey rich visual representations and enable learning of complete semantic information of fine structures. Therefore, we added residual connections in the architecture, which combined the output of the CBAM blocks with the original feature maps.
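As an illustration, the attention computations of Eqs. (1)-(3) together with the residual connection can be sketched in PyTorch as follows. The module names, reduction ratio, and layer choices here are our own illustrative assumptions, not the exact implementation used in the network:

```python
# Minimal sketch of a 3D ResCBAM block; the reduction ratio and module
# names are illustrative assumptions, not the paper's exact code.
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        # shared MLP applied to both pooled descriptors, Eq. (2)
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c = x.shape[:2]
        avg = self.mlp(x.mean(dim=(2, 3, 4)))   # average-pooled path
        mx = self.mlp(x.amax(dim=(2, 3, 4)))    # max-pooled path
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1, 1)
        return x * scale                        # F' = Mc(F) (*) F, Eq. (1)

class SpatialAttention3D(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        # 7x7x7 convolution over the concatenated pooled maps, Eq. (3)
        self.conv = nn.Conv3d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)       # channel-wise average pooling
        mx = x.amax(dim=1, keepdim=True)        # channel-wise max pooling
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale                        # F'' = Ms(F') (*) F'

class ResCBAM3D(nn.Module):
    """CBAM followed by a residual connection to the original feature map."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention3D(channels)
        self.sa = SpatialAttention3D()

    def forward(self, x):
        return x + self.sa(self.ca(x))          # residual connection
```

In the decoder, such a block would follow the convolution blocks at each resolution level, leaving the feature map shape unchanged.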

2.2.2 Training strategies

Multiple training strategies, i.e., aggressive data augmentation, aggressive patch sampling, and a weighted objective function, were designed and used in the FineVess framework.

Aggressive data augmentation was performed to increase the model robustness. First, all the subvolumes in our dataset were resampled to a uniform voxel size of 0.568 µm × 0.568 µm × 0.799 µm and an image size of 512 × 512 × 126, and voxel intensities were z-score normalized. Then, as the input size of the 3D nnU-Net is 224 × 224 × 48, we cropped the subvolumes into small patches and conducted data augmentation. The data augmentation was more diverse and aggressive than that used in the baseline 3D nnU-Net; it was a dynamic augmentation strategy that included random rotation, scaling, gamma transformation, mirroring, brightness adjustment, and elastic deformation, with increased probabilities of applying the various transformations and widened ranges of their parameters [37]. Since aggressive data augmentation can only be effective in reducing the image domain gap between training and test images when used in conjunction with batch normalization, all the instance normalization layers were replaced with batch normalization layers.
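A simplified sketch of such dynamic, probabilistic augmentation follows. The probabilities and parameter ranges here are illustrative placeholders, not the values used in training, and elastic deformation and full 3D rotation are omitted for brevity:

```python
# Illustrative sketch of dynamic patch augmentation; probabilities and
# ranges are placeholder assumptions, not the experiments' settings.
import numpy as np

rng = np.random.default_rng(0)

def augment_patch(patch, p=0.5):
    """patch: float array of shape (z, y, x), z-score normalized."""
    if rng.random() < p:                       # mirroring along a random axis
        patch = np.flip(patch, axis=rng.integers(0, 3))
    if rng.random() < p:                       # in-plane 90-degree rotation
        patch = np.rot90(patch, k=rng.integers(1, 4), axes=(1, 2))
    if rng.random() < p:                       # brightness adjustment
        patch = patch + rng.uniform(-0.3, 0.3)
    if rng.random() < p:                       # gamma transform on [0, 1] range
        lo, hi = patch.min(), patch.max()
        norm = (patch - lo) / (hi - lo + 1e-8)
        patch = norm ** rng.uniform(0.5, 2.0) * (hi - lo) + lo
    if rng.random() < p:                       # intensity scaling
        patch = patch * rng.uniform(0.7, 1.3)
    return np.ascontiguousarray(patch)
```

Because transforms are drawn anew each time a patch is sampled, the network rarely sees the same patch twice, which is what makes the strategy "dynamic".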

The training of our network skipped blank patches, which have no labeled vessels. Following nnU-Net's configuration, the batch size was set to 2. Originally, in each iteration, one patch was sampled centered on the foreground, and another was randomly sampled to ensure high training efficiency while covering various types of patches. Since the vessels in our task were sparsely distributed and labeled, random sampling is prone to picking empty image patches, which would reduce the training efficiency. Therefore, to improve training efficiency and segmentation performance, we replaced the random sampling with a more aggressive sampling strategy that skips empty patches.
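The skip-blank-patches rule can be sketched as rejection sampling; the patch shape, retry limit, and foreground-centered fallback below are our own illustrative choices:

```python
# Sketch of blank-patch rejection sampling; retry limit and fallback
# behavior are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sample_nonblank_patch(volume, labels, patch_size, max_tries=50):
    """Randomly crop until the label patch contains at least one vessel voxel."""
    for _ in range(max_tries):
        starts = [rng.integers(0, s - p + 1)
                  for s, p in zip(volume.shape, patch_size)]
        sl = tuple(slice(st, st + p) for st, p in zip(starts, patch_size))
        if (labels[sl] == 1).any():            # skip blank (vessel-free) patches
            return volume[sl], labels[sl]
    # fall back: center a patch on a random labeled vessel voxel
    zyx = np.argwhere(labels == 1)[rng.integers(0, (labels == 1).sum())]
    starts = [min(max(c - p // 2, 0), s - p)
              for c, p, s in zip(zyx, patch_size, volume.shape)]
    sl = tuple(slice(st, st + p) for st, p in zip(starts, patch_size))
    return volume[sl], labels[sl]
```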

Due to the imbalance of vessel and background classes in our images, category weights were added in the cross-entropy loss function (setting vessels: background = 3:1). The loss function is represented as:

$${L_W} = {L_{WCE}} + {L_{Dice}},$$
where $L_{WCE}$ represents the weighted cross-entropy loss function and $L_{Dice}$ represents the soft Dice loss function. Additionally, unlabeled pixels were assigned the value 2 and were ignored in the calculation of the loss function during training.
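A minimal sketch of the objective in Eq. (4), with the 3:1 class weighting and label 2 ignored; the actual nnU-Net implementation differs in details such as deep supervision and batch handling:

```python
# Sketch of weighted cross-entropy + soft Dice with an ignore label;
# simplified relative to the real nnU-Net loss.
import torch
import torch.nn.functional as F

def hybrid_loss(logits, target, ignore_index=2, eps=1e-6):
    """logits: (B, 2, D, H, W); target: (B, D, H, W) with values {0, 1, 2}."""
    weights = torch.tensor([1.0, 3.0], device=logits.device)  # background, vessel
    l_wce = F.cross_entropy(logits, target, weight=weights,
                            ignore_index=ignore_index)
    # soft Dice on the vessel channel, masking out unlabeled voxels
    probs = torch.softmax(logits, dim=1)[:, 1]
    mask = (target != ignore_index).float()
    fg = (target == 1).float()
    inter = (probs * fg * mask).sum()
    denom = (probs * mask).sum() + (fg * mask).sum()
    l_dice = 1.0 - 2.0 * inter / (denom + eps)
    return l_wce + l_dice                       # Eq. (4): L_W = L_WCE + L_Dice
```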

During training of the networks, each experiment ran for 1000 epochs. A stochastic gradient descent optimizer with a high initial learning rate of 0.01 and a large Nesterov momentum of 0.99 was used. We conducted all the experiments with PyTorch on an NVIDIA RTX 3090 GPU with 24 GB of VRAM, following the open-source nnU-Net framework.

2.3 Segmentation refinement based on preprocessing and post-processing

A novel refinement scheme for segmentation masks was designed to further improve the segmentation performance. First, the segmentation masks obtained from the 3D ResCBAM nnU-Net were refined by referring to segmentation masks of preprocessed images, and then the refined masks were further post-processed to address the internal cavities of vessels.

2.3.1 Refinement by preprocessing

Different from other works that simply use preprocessed data as network input, we utilized data preprocessing in a more efficient way (Fig. 2). We first preprocessed the raw images with the contrast limited adaptive histogram equalization (CLAHE) algorithm, the rolling ball background subtraction (RBBS) algorithm, and a smoothing algorithm. These preprocessors weaken the background intensity, brighten the vessels, and smooth the vessel edges, but also amplify a few vessel-like noise structures. Then, we applied the 3D ResCBAM nnU-Net to the preprocessed images to obtain their segmentation masks, referred to as reference masks, in which the vessel shapes were more complete. Finally, the target segmentation masks were refined by replacing vessel segments with those at the corresponding positions of the reference masks. The replacement was performed only when the segments in the two masks had a small difference; if the difference was too large, the reference mask was assumed to be inaccurate and was discarded. The refinement by preprocessing enables more complete extraction of glioma vessels with broken patterns in challenging images. Notably, compared with directly using the reference masks as the refined results, it avoids introducing noise into the final masks.
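The fusion rule can be sketched as follows; matching segments via connected components of the reference mask, skipping reference segments with no overlap in the target mask, and the 50% tolerance are our own illustrative assumptions rather than the paper's exact criteria:

```python
# Sketch of reference-guided mask fusion; the segment matching and
# tolerance are illustrative assumptions.
import numpy as np
from scipy import ndimage

def refine_by_reference(target_mask, reference_mask, max_rel_diff=0.5):
    """Accept a reference segment only if it differs little from the target."""
    refined = target_mask.copy()
    labeled, n = ndimage.label(reference_mask)
    for i in range(1, n + 1):
        seg = labeled == i
        overlap = np.logical_and(seg, target_mask).sum()
        if overlap == 0:
            continue                            # no counterpart: likely noise, discard
        # fraction of reference-segment voxels absent from the target mask
        diff = seg.sum() - overlap
        if diff / seg.sum() <= max_rel_diff:    # small difference: accept reference
            refined[seg] = 1
    return refined
```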

2.3.2 Refinement by post-processing

As only vessel walls were labeled and some of the glioma vessel walls showed fragmentation, large vessels appeared as hollow tubes and small vessels showed holes, both with incomplete vessel walls. In addition, tiny false positive patches may exist in the segmentation masks due to background interference. These issues would affect vessel quantification and subsequent analysis. Therefore, we designed a pipeline to post-process the segmentation masks as a second refinement. The pipeline consists of filling small vessel holes and filling large vessel lumens.

For the small holes of vessels, we used deep learning and mathematical morphology to identify and fill the 3D holes. Firstly, each mask was fed into the tube filling network of TubeMap [5] and a morphological hole filling function, respectively, so each of the two methods output one initially filled mask. Secondly, we determined whether a cavity was a true hole or just a vascular gap by comparing the two initial masks: if both masks contained the hole, we regarded it as a real hole that needed to be filled. Thirdly, we filled the real holes by region growing based on the masks obtained by the morphological filling function, with the growing restricted by the TubeMap masks. In this way, the holes in vessels could be well filled.
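The consensus rule of the second step can be sketched as follows, with the two filled masks standing in for the TubeMap network output and the morphological fill (the subsequent restricted region growing is omitted for brevity):

```python
# Sketch of the two-method consensus rule for hole detection; a cavity
# counts as a real hole only if both filling methods agree on it.
import numpy as np
from scipy import ndimage

def consensus_fill(mask, fill_a, fill_b):
    """mask: original binary mask; fill_a, fill_b: two filled versions of it."""
    holes_a = fill_a & ~mask                    # voxels added by method A
    holes_b = fill_b & ~mask                    # voxels added by method B
    consensus = holes_a & holes_b               # both methods call it a hole
    return mask | consensus
```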

To fill the lumens of large vessels, we developed an automatic algorithm with extraction and filling steps. Because large vessels are very rare, we assume that one subvolume contains at most one large vessel. Firstly, to find the position and shape of the large vessel, we projected adjacent slices along the z-axis into one slice, and selected the slice with the largest connected domain as a representative to determine the approximate location and shape. Secondly, we separated the whole subvolume mask into multiple parts, and identified the large vessel in each part by comparing the locally projected slice with the representative in terms of radius and overlap area. Thirdly, the large vessel was skeletonized slice by slice, the endpoints of the skeleton segments were connected, and the closed loops were filled, which achieved closure of the vessel wall and filling of the lumens. The tube filling network of TubeMap was also applied to make the vessel coherent and smooth. Finally, connected components smaller than a certain volume were regarded as noise and removed by 3D connected component analysis. More details are shown in Fig. S1.
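The final cleanup step, removing components below a volume threshold by 3D connected component analysis, can be sketched as follows (the threshold value here is illustrative, not the one used in the paper):

```python
# Sketch of small-component removal via 3D connected component analysis;
# the voxel threshold is an illustrative placeholder.
import numpy as np
from scipy import ndimage

def remove_small_components(mask, min_voxels=100):
    labeled, n = ndimage.label(mask)            # 3D connectivity labeling
    sizes = ndimage.sum(mask, labeled, index=np.arange(1, n + 1))
    big_labels = np.arange(1, n + 1)[sizes >= min_voxels]
    return np.isin(labeled, big_labels)         # keep only large components
```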

2.4 Segmentation evaluation metrics

Dice similarity coefficient, widely adopted for evaluation in segmentation tasks [38,39], was used as the main metric in our work, while recall, precision, and average surface distance (ASD) were used as reference. These metrics were calculated only based on the labeled slices in the subvolumes. Dice similarity coefficient was calculated by:

$$Dice = \frac{{2|{P \cap G} |}}{{|P |+ |G |}},$$
where P represents the pixel set in the segmentation masks, and G represents the pixel set in the ground truth.
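Eq. (5) translates directly to code; restricting the computation to the labeled slices (here, z indices) reflects the evaluation protocol described above:

```python
# Direct implementation of the Dice similarity coefficient, Eq. (5),
# optionally restricted to the manually labeled slices.
import numpy as np

def dice_score(pred, gt, labeled_slices=None):
    """pred, gt: binary 3D arrays (z, y, x); labeled_slices: z indices to use."""
    if labeled_slices is not None:
        pred, gt = pred[labeled_slices], gt[labeled_slices]
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0
```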

ASD measures the average surface distance between the segmentation masks and the ground truth. It can be represented as:

$$ASD = \frac{1}{{|{S(P)} |+ |{S(G)} |}}\left( {\sum\limits_{p \in S(P)} {\mathop {\min }\limits_{g \in S(G)} ||{p - g} ||+ \sum\limits_{g \in S(G)} {\mathop {\min }\limits_{p \in S(P)} ||{g - p} ||} } } \right),$$
where S(P) and S(G) denote the sets of surface pixels of P and G, respectively, and $||. ||$ denotes the Euclidean distance of two pixels. The lower the ASD, the better the segmentation performance.
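A sketch of Eq. (6) using Euclidean distance transforms; extracting surface voxels by erosion is one common choice of surface definition, assumed here for illustration:

```python
# Sketch of average surface distance (ASD), Eq. (6): for each surface
# voxel of one mask, take the distance to the nearest surface voxel of
# the other mask, then average over both surfaces.
import numpy as np
from scipy import ndimage

def surface(mask):
    """Boundary voxels: mask minus its erosion (one surface definition)."""
    return mask & ~ndimage.binary_erosion(mask)

def asd(pred, gt):
    sp, sg = surface(pred.astype(bool)), surface(gt.astype(bool))
    # distance of every voxel to the nearest surface voxel of the other mask
    d_to_g = ndimage.distance_transform_edt(~sg)
    d_to_p = ndimage.distance_transform_edt(~sp)
    total = d_to_g[sp].sum() + d_to_p[sg].sum()
    return total / (sp.sum() + sg.sum())
```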

2.5 Vessel feature extraction and analysis

We applied our FineVess framework to all the acquired volumes to investigate the morphological complexity and the degree of vessel wall breakage in different grades of human glioma. It is clinically accepted that, compared with the vessels in normal brains, glioma vessels are denser, more unevenly distributed, more irregularly shaped, and poorer in vascular wall integrity, especially in high-grade gliomas [40,41]. Based on the final segmentation masks, we used the box-dimension method [42] to measure the fractal dimension, an indicator of the complexity of geometric structures; higher fractal dimension values represent higher complexity [43]. To describe the fenestration of the vessel wall, we proposed a new indicator based on the segmentation masks from FineVess. We reasoned that the proportion of vascular wall filled during the hole-filling stage of FineVess reflects, to some extent, the degree of incompleteness of the vessel wall. The proportion is the number of vessel pixels in the filled holes divided by that in the whole mask; larger values reflect a higher level of breakage. Statistical analysis of the two parameters, i.e., fractal dimension and fill proportion, was performed on three groups (normal, low-grade, and high-grade). Results are expressed as mean ± SEM. The significance of differences among the three groups was tested by one-way ANOVA.
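The box-counting estimate of fractal dimension can be sketched as follows; the box sizes are illustrative, and a real computation would choose them relative to the volume dimensions:

```python
# Sketch of the box-counting (box-dimension) method: count occupied boxes
# at several scales and fit log(count) against log(1/box size); the slope
# estimates the fractal dimension. Box sizes here are illustrative.
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16)):
    counts = []
    for s in sizes:
        # pad so each axis is divisible by the box size
        pad = [(0, (-d) % s) for d in mask.shape]
        m = np.pad(mask, pad)
        view = m.reshape(m.shape[0] // s, s,
                         m.shape[1] // s, s,
                         m.shape[2] // s, s)
        counts.append(view.any(axis=(1, 3, 5)).sum())  # occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

As a sanity check, a solid 3D block should yield a dimension close to 3, while a thin tubular network yields a value between 1 and 3.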

3. Results

3.1 Results of FineVess

FineVess used the 3D ResCBAM nnU-Net as the main model for vessel segmentation, and incorporated preprocessing and post-processing to refine the segmentation results. For the 3D ResCBAM nnU-Net, we performed extensive ablation experiments to determine its network architecture and training strategies, compared its performance with the baseline 3D nnU-Net, and verified its generalization performance. The utility of the image preprocessing and post-processing was also investigated.

3.1.1 Segmentation results of 3D ResCBAM nnU-Net

Segmentation results of the 3D ResCBAM nnU-Net are shown in Fig. 4(a), which shows example slices of the subvolumes. The Dice of our 3D ResCBAM nnU-Net is 2.27% higher than that of the baseline 3D nnU-Net. It can be seen that the 3D ResCBAM nnU-Net better recognizes weak, discontinuous, and large vessels, so the vessel masks achieve better continuity and completeness in 3D space.


Fig. 4. Glioma vessel segmentation results of 3D ResCBAM nnU-Net and its application in other microscope vessel data. The red arrows point out the main differences.


To verify the generalizability of the 3D ResCBAM nnU-Net, we applied it to the two datasets from other works [6,20]. The two types of volumes were acquired with different optical microscopes (two-photon fluorescence microscopy and light-sheet microscopy, respectively) and have different voxel sizes and image characteristics. The vessels in the two datasets were first segmented by directly applying the model trained on all 14 glioma vessel subvolumes (Fig. 4(b)). The 3D ResCBAM nnU-Net achieved more coherent and complete vessels in the segmentation masks than the ground truth. For the light-sheet microscope vasculature, due to the large differences in voxel spacing and vessel and image characteristics, the model treated small, weak vessels as background and could not extract them well. However, after fine-tuning the trained model with a small amount of training data (only 15 and 8 volumes, respectively), the Dice performance on the two datasets was greatly enhanced.

3.1.2 Ablation experiments of 3D ResCBAM nnU-Net

To study the role of individual parts of the 3D ResCBAM nnU-Net, we performed ablation experiments. The components for improvement include introducing the CBAM block with residual connection (ResCBAM), using more aggressive data augmentation, skipping blank patches when sampling training data, and introducing weights in the cross-entropy loss. All the models were built on the baseline 3D nnU-Net. The results in Table 1 show that the combination of all the components yields the best segmentation performance. The aggressive data augmentation achieved the largest improvement among all the components. This is consistent with a key characteristic of our task, the small amount of data, under which enriching the data can lead to larger improvements.


Table 1. Ablation experiments of 3D ResCBAM nnU-Net. Bold numbers indicate the best metric among all rows

In order to investigate the advantages of each component over other counterparts, we also performed comparative experiments by replacing the components with alternative methods (Fig. 5). The replacements were performed based on the final 3D ResCBAM nnU-Net.


Fig. 5. Comparative experiments of different components for improvement. We verified the advantages of our components by replacing the corresponding components individually with others in 3D ResCBAM nnU-Net.


For the network architecture, we tested the effects of adding attention blocks or transformer blocks for our task. Firstly, the attention block, CBAM, was compared with scSE, which was designed for segmentation and has no pooled features in its spatial attention path; this comparison verified the superiority of CBAM in our task. Meanwhile, the role of the residual connection in the CBAM block was verified, and we conclude that ResCBAM, by incorporating the information of the original feature map, can help recover image details. Secondly, considering that TransUNet [44] and TransBTS [45], which incorporate transformer blocks (self-attention layer and multi-layer perceptron) in the encoder and bottleneck layers of 3D U-Net, respectively, have achieved impressive performance in segmentation tasks, we attempted to introduce the transformer block into our 3D ResCBAM nnU-Net architecture. As shown in Fig. 5, the performance was poor in our task whether the transformer block was used alone or combined with the ResCBAM modules. The reason might be that transformer blocks have a large computational overhead and a high demand on the amount of training data.

Regarding the training strategies, the data augmentation results suggest that the aggressive data augmentation method is effective in our task only when used in conjunction with batch normalization, rather than the instance normalization usually used in the 3D nnU-Net. Besides, the patch sampling method of skipping blank patches was compared with an oversampling method, a more aggressive sampling strategy in which all patches are sampled centered on target object pixels. As shown in Fig. 5, the aggressive oversampling degraded the performance, demonstrating the importance of an appropriate degree of data sampling; too aggressive sampling makes the model unable to handle images with complex backgrounds. As for the hybrid loss function consisting of the weighted cross-entropy loss and the Dice loss, we first replaced the former with a variant [46] in which only pixels that are harder to predict are selected for the loss calculation, and then replaced the latter with a variant [47] in which a weight parameter penalizes incorrectly predicted pixels. Both changes led to worse performance. We speculate that the vessels in our dataset were so variable, and the pixels so hard to classify correctly, that fixed-direction reasoning and focusing only on specific types of pixels detract from grasping the big picture.

3.1.3 Refinement results of preprocessing and post-processing

The images preprocessed with CLAHE, RBBS, and smoothing algorithms were fed into the 3D ResCBAM nnU-Net in parallel, and the output reference masks were used to refine the segmentation masks. Examples of the segmentation masks and the average Dice are shown in Fig. 6. After the refinement, the Dice and recall values increased despite decreased precision and ASD. The improvement arose because, with the help of the reference masks, the extracted vessels were more complete than those from unprocessed images. However, because the glioma vasculature is morphologically irregular and broken, the refinement also introduced some wrong voxels when it tried to complete vessels with broken patterns, which is likely why precision and ASD worsened. Intrinsically, the preprocessing refinement purposefully fuses the segmentation maps and constitutes a more targeted ensemble.
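One simple way to realize such a targeted fusion is voxel voting: a voxel missing from the main mask is added only when enough reference masks agree on it. This is a sketch under assumptions; the vote threshold `min_votes` is hypothetical, and the paper's pipeline fuses the maps in a more purpose-built fashion.

```python
import numpy as np

def refine_with_references(main_mask, ref_masks, min_votes=2):
    """Complete broken vessels in the main mask using reference masks.

    main_mask: binary segmentation of the raw image
    ref_masks: list of binary masks predicted from preprocessed inputs
               (e.g. CLAHE, background-subtracted, smoothed versions)
    min_votes: hypothetical agreement threshold for adding a voxel
    """
    votes = np.sum(np.stack(ref_masks, axis=0), axis=0)
    # Keep everything in the main mask; add voxels with enough reference support.
    return (main_mask.astype(bool) | (votes >= min_votes)).astype(np.uint8)
```

Requiring agreement among several reference masks is what trades a little precision for completeness, matching the metric shifts reported above.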


Fig. 6. Refinement results of segmentation maps based on preprocessing. The red arrows point out the main differences.


The post-processing aimed to detect and fill the small holes and large lumens of vessels (Fig. 7). Some holes and broken walls were filled after the post-processing, and the designed pipeline for large vessel filling also worked well. The post-processing made the vessels more intact, which improved most of the metrics. Since large and small vessels differ greatly in size, and many of the small holes do not lie in the labeled slices, the improvement in the metrics is large for large vessel filling and relatively slight for small hole filling.


Fig. 7. Overall results of the post-processing pipeline, shown as top views of the 3D images. In addition to the obvious large lumen fillings, small hole fillings can be seen where the yellow arrows point.


3.2 Comparison of FineVess with other vessel extraction methods

We compared FineVess with other segmentation methods, including surface modeling in Imaris software, multi-path binarization in TubeMap, and 3D U-Net (Table 2 and Fig. 8). Specifically, the 3D U-Net experiments were performed in the framework of VesSAP [6] and in a customized framework from the previous work of Wang et al. [19], respectively. The former used a basic 3D U-Net architecture and simple training strategies, while the latter improved the 3D U-Net design in terms of network depth and sampling strides, large patch size, diverse dynamic data augmentation, and a sufficient number of training iterations. The results showed that our method was significantly superior to these commonly used vascular extraction protocols. Compared with the others, FineVess copes better with broken and variable tumor vessels in complex images, achieving intact extraction of the vessels while ignoring background noise.


Fig. 8. Comparison of our method and other vessel extraction schemes.



Table 2. Segmentation results of different schemes. Bold highlights the optimal scheme.

3.3 Application of FineVess on new data

We applied FineVess to new glioma vessel volumes to verify its performance and generalizability. The new volumes are 3D vascular images of low-grade and high-grade gliomas, as well as of normal brain tissue, and none of them was used or referenced in the design of FineVess. Here, the FineVess model was trained on all 14 subvolumes.

Some of the intermediate and final results of the vessel segmentation are presented in Fig. 9(a). Supplementary Visualization 1 and Visualization 2 present 3D renderings of two example raw images and their FineVess segmentation results. The 3D ResCBAM nnU-Net performed well, and the refinement further improved the unfilled and disrupted parts. Some of the improvements were quite slight, because for the low-grade and normal cases, where the vessels have regular shapes and an even distribution, 3D ResCBAM nnU-Net without refinement already gives satisfactory performance.


Fig. 9. Results of applying the FineVess framework to new glioma vessel data (a) and quantitative analysis of vascular morphological features (b). The yellow arrows point out some examples of differences. *p<0.05, **p<0.01, ***p<0.001, ****p<0.0001, one-way ANOVA test.


3.4 Morphological analysis of vessels in different glioma grades

We applied FineVess to characterize the morphological complexity and vessel wall integrity in 35 volumes from low-grade gliomas, high-grade gliomas, and normal brain tissues (Fig. 9(b)). Glioma vessels have a higher fractal dimension than normal vessels, indicating the complexity of their morphological structure. The evaluation of the proportion to be filled indicates that glioma vessels have more parts to fill than normal vessels, and high-grade vessels more than low-grade ones; the latter is also reflected in the increased Dice values through the hole filling stage. This demonstrates that glioma vessels are more broken, especially in high-grade gliomas, consistent with clinical conclusions [8,40].
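The fractal dimension used here is conventionally estimated by box counting [42,43]: cover the binary vessel mask with boxes of side s, count occupied boxes N(s), and fit the slope of log N(s) against log(1/s). A minimal numpy sketch follows; the box sizes are hypothetical choices, and the paper's exact estimator may differ.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate the box-counting (fractal) dimension of a binary 3D mask."""
    mask = mask.astype(bool)
    counts = []
    for s in sizes:
        # Crop each axis to a multiple of s, then tile into s^3 blocks.
        d, h, w = (np.array(mask.shape) // s) * s
        blocks = mask[:d, :h, :w].reshape(d // s, s, h // s, s, w // s, s)
        # Count boxes containing at least one vessel voxel.
        counts.append(blocks.any(axis=(1, 3, 5)).sum())
    # Slope of log N(s) vs. log(1/s) is the dimension estimate.
    slope, _ = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)
    return slope
```

A solid cube yields a dimension near 3 and a single plane near 2; irregular, space-filling tumor vasculature falls between, which is why a higher value reads as greater morphological complexity.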

4. Discussion

In this work, we developed FineVess, a novel framework for the extraction of complex glioma vessels in 3D microscope images. Below, we first illustrate the potential value of FineVess in practical applications and summarize its advantages over existing methods. Then, the design and construction of FineVess are presented. Finally, the limitations of the current study and possible future research directions are analyzed.

FineVess, as a fine vessel extraction tool, has potential research and clinical value. Accurate extraction of the tumor vasculature enables in-depth observation of complete vascular morphology and subtle changes, as well as of the spatial relationships with tumor cells, immune cells, and other components. Based on the extraction results, quantitative and statistical analysis of vascular features over large-scale data can help uncover the mechanism of angiogenesis, the patterns of vascular heterogeneity, and tumor biological behaviors. In this study, based on the intermediate and final results of FineVess, we proposed a novel metric to analyze the structural incompleteness of glioma vessels, a preliminary application of FineVess to the analysis of tumor vascular heterogeneity. Clinically, the extracted tumor vasculature can aid in the precise diagnosis and grading of tumors, in obtaining reliable prognostic indexes, in evaluating the efficacy of anti-vascular therapies, and in discovering new targets for vascular-targeted precision therapy. FineVess is expected to assist in building a glioma vascular analysis platform and to be used in practical applications.

Compared with existing methods, FineVess has three main advantages for glioma vessel segmentation. First, FineVess excels at handling complex vessels. Glioma vessels in images are irregular in distribution and morphology, have fuzzy edges and disrupted fluorescent signals, and are easily confused with the background. Most previous deep learning methods could extract normal vessels, but tumor vessels have rarely been considered and explored. In this study, FineVess employed multiple components, including CBAM blocks with residual connections, tailored training strategies, and refinement by preprocessing and post-processing, to improve the capture of vessel edge details and the segmentation of fine structures; these components are key to its superior performance. Second, FineVess was developed with limited training data and sparse labels. Most deep learning models need large datasets for training and tuning, whereas FineVess achieves good results with small data and sparse labels, providing a successful example and a useful scheme for the thorny issues of obtaining and labeling 3D data. Third, FineVess has good generalizability. It can be applied to normal vessels (Fig. 9(a)) and to vessel images obtained with other optical imaging modalities and labeling methods (Fig. 4(b)). This ability is due to the high generalization of 3D ResCBAM nnU-Net and the flexibility of the refinement pipeline.

The construction process of FineVess provides a design reference for the development of vascular segmentation methods. In the development of 3D ResCBAM nnU-Net, the ablation, comparison, and exploratory experiments on network architectures and training strategies can provide valuable references for related tasks. To the best of our knowledge, we are the first to apply nnU-Net to 3D microscopy vessel images, demonstrating its good potential on limited and complex microscope vascular data. We also show the feasibility of using very sparse manual annotations for vessel segmentation in the 3D U-Net architecture. Our explorations further validate the importance of training design. For example, in the early exploratory and final ablation experiments, we found that the improvements to the network architecture were effective only when we performed aggressive data augmentation on our small-scale training data. In addition, using raw data for training worked better than preprocessed data, owing to the powerful data augmentation in nnU-Net. We also found that overly aggressive data sampling and the weighted loss function are not suitable for use together, as both play a similar role in helping the model focus on vessels. For the final 3D ResCBAM nnU-Net, we experimentally verified that the decoder is the optimal location for the CBAM blocks. Transfer learning by pretraining the model on other data failed to improve the results, due to differences between the datasets, the limited diversity of the external data, and method bottlenecks. For filling the holes and lumens of complex vessels, considering the difficulty of the task, we had also created new slice-by-slice ground truth based on the segmentation results of nnU-Net and tried to train a deep learning network for the vessel filling task. However, the filling effect was not significant because discontinuities, lumens, and holes are far less frequent than filled vessels in the training data. Thus, we chose the suboptimal scheme, i.e., we designed an image processing pipeline. The refinement pipeline of FineVess combines the advantages of traditional image processing and deep learning algorithms, and provides additional, creative ideas for further refining results and realizing difficult filling tasks.

Our work is still preliminary, and there is room for improvement. First, the performance of our framework could be further improved if more labeled data were added to the training set; we have demonstrated that the 3D ResCBAM nnU-Net of FineVess achieves higher Dice values when more training data is fed into the network (Fig. 9(a)). Also, the refinement method of FineVess is less robust due to methodological limitations and the limited amount of data, especially in the large vessel filling stage, where fully automated large vessel detection is a very tricky problem; our automated pipeline is still preliminary and needs to be refined on larger datasets to avoid overfitting. In the future, as advances in tissue clearing and volumetric imaging make it possible to visualize larger and more numerous samples, FineVess can be improved with more labeled data, and the refinement pipeline can be adjusted based on experiments with large datasets. Second, as a framework containing multiple procedures, FineVess is more time-consuming than single methods. At the current stage of development, we valued extraction accuracy over extraction speed; extraction speed is thus an important aspect for future improvement.

5. Conclusion

Driven by the task of glioma vessel segmentation in confocal microscope images of cleared human glioma tissues, we presented the FineVess framework. Its main part, 3D ResCBAM nnU-Net, was developed for robust and accurate image segmentation, and the subsequent refinement pipeline further improved the segmentation performance. FineVess can cope with glioma vessels with high disruption and irregularity and is expected to be applied in future glioma vessel analysis. Its development should provide inspiration and reference for tasks in the field of microscopic tumor vessel segmentation.

Funding

Scientific Research Startup Fund for Shenzhen High-Caliber Personnel of SZPU (602333000*K); Natural Science Foundation of Guangdong Province (2022A1515011436, 2023A1515030045); Guangzhou Municipal Science and Technology Project (202102021087); Presidential Foundation of Zhujiang Hospital (yzjj2022ms4); Special Fund Project for Science and Technology Innovation Strategy of Guangdong Province (pdjh2023b0106); National College Students Innovation and Entrepreneurship Training Program (202112121004).

Acknowledgments

The authors would like to thank all the technical staff of the Clinical Biobank Centre for their kind assistance. They would also like to thank all the tissue donors and their families who kindly donated their resected specimens to the Clinical Biobank Centre.

Disclosures

The authors declare no conflicts of interest.

Data availability

The code used in this paper is available in Ref. [48]. Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. R. Yang, J. Guo, Z. Lin, et al., “The combination of two-dimensional and three-dimensional analysis methods contributes to the understanding of glioblastoma spatial heterogeneity,” J. Biophotonics 13(2), e201900196 (2020). [CrossRef]  

2. J. Zhu, X. Liu, Y. Deng, et al., “Tissue optical clearing for 3D visualization of vascular networks: A review,” Vasc. Pharmacol. 141, 106905 (2021). [CrossRef]  

3. A. P. Di Giovanna, A. Tibo, L. Silvestri, et al., “Whole-Brain Vasculature Reconstruction at the Single Capillary Level,” Sci. Rep. 8(1), 12573 (2018). [CrossRef]  

4. L. Y. Zhang, P. Lin, J. Pan, et al., “CLARITY for High-resolution Imaging and Quantification of Vasculature in the Whole Mouse Brain,” Aging and disease 9(2), 262–272 (2018). [CrossRef]  

5. C. Kirst, S. Skriabine, A. Vieites-Prado, et al., “Mapping the Fine-Scale Organization and Plasticity of the Brain Vasculature,” Cell 180(4), 780–795.e25 (2020). [CrossRef]  

6. M. I. Todorov, J. C. Paetzold, O. Schoppe, et al., “Machine learning analysis of whole mouse brain vasculature,” Nat. Methods 17(4), 442–449 (2020). [CrossRef]  

7. T. Miyawaki, S. Morikawa, E. A. Susaki, et al., “Visualization and molecular characterization of whole-brain vascular networks with capillary resolution,” Nat. Commun. 11(1), 1104 (2020). [CrossRef]  

8. T. Lagerweij, S. A. Dusoswa, A. Negrean, et al., “Optical clearing and fluorescence deep-tissue imaging for 3D quantitative analysis of the brain tumor microenvironment,” Angiogenesis 20(4), 533–546 (2017). [CrossRef]  

9. S. Kostrikov, K. B. Johnsen, T. H. Braunstein, et al., “Optical tissue clearing and machine learning can precisely characterize extravasation and blood vessel architecture in brain tumors,” Commun. Biol. 4(1), 815 (2021). [CrossRef]  

10. E. Lugo-Hernandez, A. Squire, N. Hagemann, et al., “3D visualization and quantification of microvessels in the whole ischemic mouse brain using solvent-based clearing and light sheet microscopy,” J. Cereb. Blood Flow Metab. 37(10), 3355–3367 (2017). [CrossRef]  

11. M. C. Müllenbroich, L. Silvestri, A. P. Di Giovanna, et al., “High-Fidelity Imaging in Brain-Wide Structural Studies Using Light-Sheet Microscopy,” eneuro 5(6), ENEURO.0124-18.2018 (2018). [CrossRef]  

12. T. Liebmann, N. Renier, K. Bettayeb, et al., “Three-dimensional study of Alzheimer’s disease hallmarks using the iDISCO clearing method,” Cell Rep. 16(4), 1138–1152 (2016). [CrossRef]  

13. P. Kennel, L. Teyssedre, J. Colombelli, et al., “Toward quantitative three-dimensional microvascular networks segmentation with multiview light-sheet fluorescence microscopy,” J. Biomed. Opt. 23(08), 1–086002 (2018). [CrossRef]  

14. W. Tahir, S. Kura, J. Zhu, et al., “Anatomical Modeling of Brain Vasculature in Two-Photon Microscopy by Generalizable Deep Learning,” BME Front. 2020, 8620932 (2020). [CrossRef]  

15. N. Holroyd, Z. Li, C. Walsh, et al., “tUbe net: a generalisable deep learning tool for 3D vessel segmentation,” bioRxiv:2023.07.24.550334 (2023). [CrossRef]  

16. R. Oren, L. Fellus-Alyagor, Y. Addadi, et al., “Whole organ blood and lymphatic vessels imaging (WOBLI),” Sci. Rep. 8(1), 1412 (2018). [CrossRef]  

17. K. Takahashi, K. Abe, S. I. Kubota, et al., “An analysis modality for vascular structures combining tissue-clearing technology and topological data analysis,” Nat. Commun. 13(1), 5239 (2022). [CrossRef]  

18. M. Lapierre-Landry, Y. Liu, M. Bayat, et al., “Digital labeling for 3D histology: segmenting blood vessels without a vascular contrast agent using deep learning,” Biomed. Opt. Express 14(6), 2416–2431 (2023). [CrossRef]  

19. X. Wang, X. Yang, D. He, et al., “Three-dimensional visualization of blood vessels in human gliomas based on tissue clearing and deep learning,” bioRxiv:2023.10.31.564955 (2023).

20. C. Poon, P. Teikari, M. F. Rachmadi, et al., “A dataset of rodent cerebrovasculature from in vivo multiphoton fluorescence microscopy imaging,” Sci. Data 10(1), 141 (2023). [CrossRef]  

21. F. Isensee, P. F. Jaeger, S. A. A. Kohl, et al., “nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation,” Nat. Methods 18(2), 203–211 (2021). [CrossRef]  

22. Y. Choi, J. Bang, S.-Y. Kim, et al., “Deep learning–based multimodal segmentation of oropharyngeal squamous cell carcinoma on CT and MRI using self-configuring nnU-Net,” Eur. Radiol. (2024).

23. G. Dot, T. Schouman, G. Dubois, et al., “Fully automatic segmentation of craniomaxillofacial CT scans for computer-assisted orthognathic surgery planning using the nnU-Net framework,” Eur. Radiol. 32(6), 3639–3648 (2022). [CrossRef]  

24. S. Roy, G. Koehler, C. Ulrich, et al., “MedNeXt: Transformer-Driven Scaling of ConvNets for Medical Image Segmentation,” in Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 (Springer Nature Switzerland, 2023), pp. 405–415.

25. F. Isensee, C. Ulrich, T. Wald, et al., “Extending nnU-Net Is All You Need,” in Bildverarbeitung für die Medizin 2023 (Springer Fachmedien Wiesbaden, Wiesbaden, 2023), pp. 12–17.

26. R. Wang, T. Lei, R. Cui, et al., “Medical image segmentation using deep learning: A survey,” IET Image Processing 16(5), 1243–1267 (2022). [CrossRef]  

27. P. H. Conze, G. Andrade-Miranda, V. K. Singh, et al., “Current and Emerging Trends in Medical Image Segmentation With Deep Learning,” IEEE Trans. Radiat. Plasma Med. Sci. 7(6), 545–569 (2023). [CrossRef]  

28. J. Xie and Y. Peng, “The Head and Neck Tumor Segmentation Using nnU-Net with Spatial and Channel ‘Squeeze & Excitation’ Blocks,” in Head and Neck Tumor Segmentation, V. Andrearczyk, eds. (Springer International Publishing, Cham, 2021), pp. 28–36.

29. H. M. Luu and S.-H. Park, “Extending nn-UNet for Brain Tumor Segmentation,” in Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, A. Crimi and S. Bakas, eds. (Springer International Publishing, Cham, 2022), pp. 173–186.

30. S. Woo, J. Park, J.-Y. Lee, et al., “Cbam: Convolutional block attention module,” in Proceedings of the European Conference on Computer Vision (ECCV) (2018), pp. 3–19.

31. J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 7132–7141.

32. A. G. Roy, N. Navab, C. Wachinger, et al., “Concurrent Spatial and Channel ‘Squeeze and Excitation’ in Fully Convolutional Networks,” in Medical Image Computing and Computer Assisted Intervention – MICCAI 2018 (Springer International Publishing, 2018), pp. 421–429.

33. A. Nazir, M. N. Cheema, B. Sheng, et al., “OFF-eNET: an optimally fused fully end-to-end network for automatic dense volumetric 3D intracranial blood vessels segmentation,” IEEE Trans. on Image Process. 29, 7192–7202 (2020). [CrossRef]  

34. K. He, X. Zhang, S. Ren, et al., “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 770–778.

35. F. Isensee, P. Kickingereder, W. Wick, et al., “Brain Tumor Segmentation and Radiomics Survival Prediction: Contribution to the BRATS 2017 Challenge,” in Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, A. Crimi, eds. (Springer International Publishing, 2018), pp. 287–297.

36. B. Kayalibay, G. Jensen, and P. van der Smagt, “CNN-based segmentation of medical imaging data,” arXiv, arXiv:1701.03056 (2017). [CrossRef]  

37. F. Isensee, P. F. Jäger, P. M. Full, et al., “nnU-Net for Brain Tumor Segmentation,” in Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, A. Crimi and S. Bakas, eds. (Springer International Publishing, 2021), pp. 118–132.

38. S. Moccia, E. De Momi, S. El Hadji, et al., “Blood vessel segmentation algorithms — Review of methods, datasets and evaluation metrics,” Computer Methods and Programs in Biomedicine 158, 71–91 (2018). [CrossRef]  

39. D. Jia and X. Zhuang, “Learning-based algorithms for vessel tracking: A review,” Computerized Medical Imaging and Graphics 89, 101840 (2021). [CrossRef]  

40. G. P. Cribaro, E. Saavedra-Lopez, L. Romarate, et al., “Three-dimensional vascular microenvironment landscape in human glioblastoma,” Acta Neuropathol. Commun. 9(1), 24 (2021). [CrossRef]  

41. X. Li, Q. Tang, J. Yu, et al., “Microvascularity detection and quantification in glioma: a novel deep-learning-based framework,” Lab. Invest. 99(10), 1515–1526 (2019). [CrossRef]  

42. S. P. Lalley and D. Gatzouras, “Hausdorff and Box Dimensions of Certain Self–Affine Fractals,” Indiana University Mathematics Journal 41, 533–568 (1992).

43. J. W. Baish and R. K. Jain, “Cancer, angiogenesis and fractals,” Nat. Med. 4(9), 984 (1998). [CrossRef]  

44. J. Chen, Y. Lu, Q. Yu, et al., “Transunet: Transformers make strong encoders for medical image segmentation,” arXiv, arXiv:2102.04306 (2021). [CrossRef]  

45. W. Wang, C. Chen, M. Ding, et al., “TransBTS: Multimodal Brain Tumor Segmentation Using Transformer,” in Medical Image Computing and Computer Assisted Intervention – MICCAI 2021 (Springer International Publishing, Cham, 2021), pp. 109–119.

46. K. Hu, Z. Zhang, X. Niu, et al., “Retinal vessel segmentation of color fundus images using multiscale convolutional neural network with an improved cross-entropy loss function,” Neurocomputing 309, 179–191 (2018). [CrossRef]  

47. H. Qing, S. Jinfeng, D. Hui, et al., “Robust liver vessel extraction using 3D U-Net with variant dice loss function,” Comput. Biol. Med. 101, 153–162 (2018). [CrossRef]  

48. PRBioimages, “FineVess: a deep-learning-based framework for fine and automated extraction of tumor vessels from 3D light microscope images,” Github, 2023, https://github.com/PRBioimages/FineVess.

Supplementary Material (3)

Supplement 1       Data details and large vessel lumen filling method
Visualization 1       One example 3D image and its segmented result of FineVess.
Visualization 2       One example 3D image and its segmented result of FineVess.




Figures (9)

Fig. 1. Subvolumes for model development, shown as top views of 3D images. The glioma vasculature is heterogeneous between different patient samples as well as between different regions of the same sample. The vessels vary in scale, morphology, and fluorescence intensity. The images have variable contrast and brightness, and some have strong background interference. Scale bar = 50 µm.
Fig. 2. Overview of the FineVess framework proposed in this study. FineVess comprises vessel segmentation with 3D ResCBAM nnU-Net and refinement of the segmentation masks with image preprocessing and post-processing. It is then applied to feature quantification and the analysis of vascular heterogeneity in different grades of gliomas.
Fig. 3. Network architecture of 3D ResCBAM nnU-Net (a) and the 3D CBAM block (b).
Fig. 4. Glioma vessel segmentation results of 3D ResCBAM nnU-Net and its application to other microscope vessel data. The red arrows point out the main differences.
Fig. 5. Comparative experiments on different components for improvement. We verified the advantages of our components by individually replacing the corresponding components of 3D ResCBAM nnU-Net with alternatives.
Fig. 6. Refinement results of segmentation maps based on preprocessing. The red arrows point out the main differences.
Fig. 7. Overall results of the post-processing pipeline, shown as top views of the 3D images. In addition to the obvious large lumen fillings, small hole fillings can be seen where the yellow arrows point.
Fig. 8. Comparison of our method and other vessel extraction schemes.
Fig. 9. Results of applying the FineVess framework to new glioma vessel data (a) and quantitative analysis of vascular morphological features (b). The yellow arrows point out some examples of differences. *p<0.05, **p<0.01, ***p<0.001, ****p<0.0001, one-way ANOVA test.

Tables (2)


Table 1. Ablation experiments of 3D ResCBAM nnU-Net. Bold numbers indicate the best metric among all rows


Table 2. Segmentation results of different schemes. Bold highlights the optimal scheme.

Equations (6)


$$F' = M_c(F) \otimes F, \qquad F'' = M_s(F') \otimes F',$$
$$M_c(F) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(F)) + \mathrm{MLP}(\mathrm{MaxPool}(F))\big),$$
$$M_s(F) = \sigma\big(\mathrm{Conv}^{7\times7\times7}([\mathrm{AvgPool}(F);\,\mathrm{MaxPool}(F)])\big),$$
$$L_W = L_{WCE} + L_{Dice},$$
$$\mathrm{Dice} = \frac{2\,|P \cap G|}{|P| + |G|},$$
$$\mathrm{ASD} = \frac{1}{|S(P)| + |S(G)|}\left(\sum_{p \in S(P)} \min_{g \in S(G)} \|p - g\| + \sum_{g \in S(G)} \min_{p \in S(P)} \|g - p\|\right),$$
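The Dice and ASD definitions can be evaluated directly on binary volumes with scipy's distance transform. This sketch makes one assumption not stated in the equations: the surface S(·) of a mask is taken as the voxels removed by one binary erosion.

```python
import numpy as np
from scipy import ndimage

def dice_and_asd(P, G):
    """Dice coefficient and average surface distance for binary 3D masks P, G."""
    P, G = P.astype(bool), G.astype(bool)
    dice = 2 * np.logical_and(P, G).sum() / (P.sum() + G.sum())
    # Surface = mask voxels removed by one erosion (assumed surface definition).
    surf = lambda M: M & ~ndimage.binary_erosion(M)
    sp, sg = surf(P), surf(G)
    # EDT of the complement gives, at each voxel, the distance to that surface.
    d_to_g = ndimage.distance_transform_edt(~sg)
    d_to_p = ndimage.distance_transform_edt(~sp)
    asd = (d_to_g[sp].sum() + d_to_p[sg].sum()) / (sp.sum() + sg.sum())
    return dice, asd
```

Identical masks give Dice = 1 and ASD = 0; Dice rewards volumetric overlap while ASD penalizes boundary displacement, which is why the two can move in opposite directions after refinement, as observed in Section 3.1.3.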