Automatic protocol for quantifying the vasoconstriction in blood vessel images

Open Access

Abstract

Vascular targeted photodynamic therapy (V-PDT) has been successfully utilized for various vascular-related diseases. To optimize the PDT dose and treatment protocols for clinical treatments and to elucidate the biological mechanisms of V-PDT, blood vessels in the dorsal skin-fold window chamber (DSWC) of nude mice are often chosen for in vivo studies. In this study, a new automatic protocol to quantify the vasoconstriction of blood vessels in the DSWC model is proposed, which focuses on tracking the pixels of blood vessels in pre- V-PDT images that disappear after V-PDT. The disappearing pixels indicate that the blood vessels have constricted, and thus a vasoconstriction image of the pixel distribution can be constructed. For this, the image of the circular region of interest was automatically extracted using the Hough transform. In addition, the U-Net model was employed to segment the images, and the speeded-up robust features algorithm was used to automatically register the segmented pre- and post- V-PDT images. The vasoconstriction of blood vessels in the DSWC model after V-PDT is directly quantified, which avoids the interference of new capillaries potentially generated after V-PDT. The accuracy, sensitivity and specificity of the U-Net model for image segmentation are 90.64%, 80.12% and 92.83%, respectively. A significant difference in vasoconstriction between a control and a V-PDT group was observed. This new automatic protocol is well suited for quantifying vasoconstriction in blood vessel images and holds potential for application in V-PDT studies.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Photodynamic therapy (PDT) is based on light activation of a photosensitizer localized in target tissue to produce reactive oxygen species (ROS), which selectively eradicate target cells through direct cytotoxicity, vascular shutdown and indirect activation of an immune response. For vascular targeted PDT (V-PDT), a photosensitizer is delivered site-specifically to the vasculature, and highly toxic ROS are produced in the vascular lumen upon irradiation with a laser at a particular wavelength, leading to irreversible vessel constriction (i.e. vasoconstriction) and thus to damage of the target lesion [1,2]. To date, V-PDT has been successfully utilized to treat prostate cancer [3,4] and several vascular-related diseases such as age-related macular degeneration (AMD) [5,6], port-wine stain (PWS) [7,8], bleeding gastrointestinal mucosal vascular lesions [9] and gastric antral vascular ectasia [10]. However, the treatment outcomes of V-PDT remain variable, not only between different patients but also within the same lesion. Given that there are a number of technical and conceptual reasons why measuring blood vessel vasoconstriction may fail in the complex and heterogeneous milieu of solid tumors and the PWS model (i.e. the chicken cockscomb model), blood vessels in the dorsal skin-fold window chamber (DSWC) [11,12] of nude mice and the chorioallantoic membrane (CAM) [13,14] of chicken eggs are widely selected for in vivo studies, both to optimize the dose and protocol for clinical V-PDT and to further elucidate the biological mechanisms.

The characteristic indexes of blood vessels include morphology, branching pattern, density and diameter. Previous studies demonstrated that a change in blood vessel diameter could be used to quantify the biological response after V-PDT treatment [12–15], where the diameters of single or multiple vessels were measured at multiple positions as the basis for determining the V-PDT response. The accuracy of this method needs to be further improved because of the complexity of the vascular architecture, the randomness of the constriction positions and the constriction disorder of each blood vessel. In particular, describing the V-PDT response by the diameter of a single selected blood vessel is insufficiently accurate. To overcome this limitation, the vessel density was proposed as an index in optical coherence angiography (OCA) images to predict the V-PDT outcome, and it was shown that the peri-tumour microvasculature could be used to determine the long-term V-PDT response [16]. The vascular density was also used as an index to evaluate OCA and optical coherence tomography (OCT) images [16–18]. Unfortunately, only the changes in vascular density were analyzed, while the changes in vascular location were neglected. To address this, Buzzá et al. used the vessel area of the CAM model to analyze the V-PDT response by counting the total vascular pixels in the images both pre- and post-PDT [14]. However, this method becomes invalid when blood vessels that were not previously detected in the vascular network become visible after V-PDT as a consequence of the vasoconstriction. The additional pixels contributed by these newly appearing thin vessels reduce the apparent vasoconstriction, leading to larger errors in the analysis.

The objective of this study was to quantify the vasoconstriction of well-controlled blood vessels in the DSWC model of nude mice. For this, a new automatic protocol to quantify blood vessel vasoconstriction in the DSWC model was proposed, which focuses on tracking the pixels of blood vessels in pre- V-PDT images that disappear after V-PDT. The disappearing pixels indicate that the blood vessels have constricted, and thus a vasoconstriction image of the pixel distribution can be constructed. To this end, images of the circular region of interest were automatically extracted using the Hough transform. The U-Net model was then employed to segment the images, and the speeded-up robust features algorithm was used to automatically register the segmented pre- and post- V-PDT images. The vasoconstriction of blood vessels in the DSWC model after V-PDT was then directly quantified to evaluate the biological response.

2. Materials and methods

2.1 Blood vessel model for V-PDT

The blood vessels in the DSWC model were prepared with a well-established protocol [11,19]. Rose Bengal (RB) (Sigma-Aldrich, St. Louis, MO, USA) was chosen as the model photosensitizer for this study. For V-PDT, each BALB/c nude mouse (male, 25∼30 g), provided by Shanghai SLAC Laboratory Animal Co. Ltd., Shanghai, China, was intravenously injected with an RB solution (25 mg/kg body weight). Shortly after RB administration, the blood vessels in the DSWC model were irradiated with a 532 nm semiconductor laser at an irradiance of 50 mW/cm2 and a total radiant exposure of 30 J/cm2. The blood vessel images of the DSWC model were collected before and immediately after V-PDT with a Leica MZ16FA microscope (equipped with a DFC300FX camera, Germany) at 16× magnification. The resolution of the images in RGB mode is 1040 × 1392 pixels. All animal experiments and procedures were approved by the Institutional Animal Care and Use Committee at Fujian Normal University.

2.2 Image processing and data analysis

The image processing procedure includes the following three parts: 1) circular region of interest (ROI) images of pre- and post- V-PDT were automatically selected using the Hough transform; 2) the pre- and post- V-PDT images were segmented; 3) the segmented pre- and post- V-PDT images were automatically registered with the speeded-up robust features (SURF) algorithm, followed by extraction of the overlapping region.

2.2.1 Auto-selected circular ROI using the Hough transform

The flow chart for the auto-selected circular ROI is shown in Fig. 1. Selecting the circular ROI is necessary since all vessels lie within the circular window. Since the metal frame appears much darker and less uniform in colour than the skin, the image preprocessing includes enhancing the circular edge of the image, correcting the non-uniform appearance [20], detecting the image edges and area filtering. A max filter was used to enhance the circular edge of the metal frame. The non-uniform appearance was corrected by a top-hat transformation. Next, the binary image was obtained through the segmentation approach introduced by Otsu [21]. The edges of the binary image were detected by a Canny edge detector, and an area filter was applied to remove wide and short areas that do not belong to the edge of the circular metal frame [22].

Fig. 1. Flow chart for auto-selected circular ROI using Hough Transform.

The radius and center of the circle can be tracked by the Hough transform [23], as each pixel’s Cartesian coordinates $(x,y)$ can be converted into polar coordinates $(r,\theta )$ [24,25], as shown in Eq. (1). The two-dimensional $(x,y)$ space is converted into a three-dimensional space $(a,b,r)$. Four parameters, the max radius, min radius, radius search step and angle search step, are used for the transform. When the total number of points accumulated at a location in the $(a,b,r)$ space is higher than a threshold value, $(a,b,r)$ is recorded. The minimum radius $(r_\textrm{min})$ is selected because the inner radius of the DSWC is close to the skin. The mean of the center coordinates $(a,b)$ corresponding to $r_\textrm{min}$ is calculated as $(a_{average},b_{average})$, and the circular ROI of the original image is then selected by $(a_{average},b_{average},r_\textrm{min})$.

$$\begin{array}{c} {x = a + r\cos \theta }\\ {y = b + r\sin \theta } \end{array}$$
where $(x,y)$ and $(r,\theta )$ are the Cartesian and polar coordinates of an image pixel, respectively, r is the circular radius, and $(a,b)$ denotes the circle’s center coordinates. Since the diameter of the DSWC model is known, the expected number of radial pixels can be estimated with Eq. (2) to set the max and min radius parameters [26].
$$D = \frac{{D^{\prime}}}{\beta } = \frac{{({N_2} - {N_1}){L_0}}}{\beta }$$
where $D$ and $D^{\prime}$ are the real and measured dimensions of the object in the image, respectively. $N_2$ and $N_1$ are the locations of the pixels crossing the edge of the object in the image. $L_0$ is the size of an image pixel. $\beta$ is the transverse magnification of the objective lens used for imaging. For this calculation, D in Eq. (2) is twice the length of r in Eq. (1).
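The authors state in Section 3.2 that this step was implemented in MATLAB; the sketch below is only an illustrative Python/OpenCV reconstruction of the preprocessing and circle search described above, and the function name, kernel sizes and Hough thresholds are assumptions rather than the published settings.

```python
# Illustrative sketch of the circular-ROI extraction; not the authors' MATLAB code.
import cv2
import numpy as np

def extract_circular_roi(img_bgr, r_min=350, r_max=500):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    # Max filter (grayscale dilation) to enhance the bright edge of the metal frame
    enhanced = cv2.dilate(gray, np.ones((3, 3), np.uint8))
    # Top-hat transformation to correct the non-uniform background
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (51, 51))
    tophat = cv2.morphologyEx(enhanced, cv2.MORPH_TOPHAT, se)
    # Otsu binarization; cv2.HoughCircles applies a Canny edge detector internally
    # (param1 is its upper threshold), so the explicit Canny and area-filter steps
    # of Fig. 1 are folded into the call below.
    _, binary = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                               param1=100, param2=30,
                               minRadius=r_min, maxRadius=r_max)
    if circles is None:
        return None
    # Keep the minimum radius (inner frame edge) and average the matching centers
    circles = circles[0]
    r = circles[:, 2].min()
    close = circles[np.abs(circles[:, 2] - r) < 2]
    a, b = close[:, 0].mean(), close[:, 1].mean()
    mask = np.zeros_like(gray)
    cv2.circle(mask, (int(round(a)), int(round(b))), int(round(r)), 255, -1)
    return cv2.bitwise_and(img_bgr, img_bgr, mask=mask), (a, b, r)
```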

2.2.2 Segmentation for blood vascular images

Blood vessels can be extracted from the images by segmentation. The blood vessels in the DSWC model lie at different depths in the skin, so their gray values differ. Hence, the dynamic threshold method is used to segment the blood vessel images [27], using the algorithm shown in Eq. (3).

$$B(x,y) = \begin{cases} 255, & I(x,y) - I^{\prime}(x,y) \ge C\\ 0, & I(x,y) - I^{\prime}(x,y) < C \end{cases}$$
where $I(x,y)$ is the original image and $I^{\prime}(x,y)$ is the result of a mean filter of size $N{ \times }N$. The binary image $B(x,y)$ is 255 or 0 depending on whether the difference $I(x,y) - I^{\prime}(x,y)$ is greater than or equal to, or less than, the constant C. Hence, the segmentation accuracy is determined by the constant C.
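As a concrete reading of Eq. (3), a minimal Python/NumPy sketch is given below (an assumed implementation, not the authors' code); it follows the sign convention of Eq. (3) exactly, so it labels pixels brighter than their local mean and may need to be inverted if the vessels are darker than the surrounding skin.

```python
import cv2
import numpy as np

def dynamic_threshold(gray, N=35, C=3):
    """Binarize per Eq. (3): B = 255 where I - I' >= C, else 0 (C = 3, N = 35 in this study)."""
    mean = cv2.blur(gray.astype(np.float32), (N, N))   # I'(x,y): N x N mean filter
    diff = gray.astype(np.float32) - mean               # I(x,y) - I'(x,y)
    return np.where(diff >= C, 255, 0).astype(np.uint8)
```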

As seen from Eq. (3), vessel segmentation with the dynamic threshold algorithm is affected by noise, especially for the small blood vessels in the images. For this reason, the dynamic threshold algorithm is combined with deep learning to improve the accuracy and sensitivity of the image segmentation. In this study, the U-Net model was employed for vessel segmentation since satisfactory results have been obtained with it in processing retinal images [28]. Furthermore, since the diameter of the blood vessels in the retinal images (DRIVE dataset [29], Digital Retinal Images for Vessel Extraction) is similar to that of the small blood vessels in the DSWC model, the mixed training samples include both DSWC images with large vessels labeled by dynamic threshold segmentation and retinal images from the DRIVE dataset. The flow chart for training the mixed samples with the U-Net model is shown in Fig. 2.

Fig. 2. Flow chart for training mixed sample by the U-Net model.
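The sketch below is a deliberately shrunken Keras U-Net, meant only to illustrate the encoder-decoder-with-skip-connection idea and the mixed DRIVE/DSWC training described above; the model actually used follows Ronneberger et al. [28] and is deeper, and all layer sizes, names and training settings here are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def tiny_unet(size=128):
    inp = layers.Input((size, size, 1))
    c1 = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, activation="relu", padding="same")(p1)
    u1 = layers.UpSampling2D()(c2)
    m1 = layers.concatenate([u1, c1])                     # skip connection
    c3 = layers.Conv2D(16, 3, activation="relu", padding="same")(m1)
    out = layers.Conv2D(1, 1, activation="sigmoid")(c3)   # per-pixel vessel probability
    return Model(inp, out)

model = tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
# Mixed training set: DRIVE retinal patches with their manual labels, plus DSWC
# patches whose large-vessel labels come from the dynamic threshold algorithm.
# model.fit(mixed_patches, mixed_labels, epochs=..., batch_size=...)
```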

2.2.3 Image registration

To compare the blood vessel images of pre- and post- V-PDT, the speeded-up robust features (SURF) algorithm was used for automatic image co-registration [30]. The SURF algorithm locates feature points based on the Hessian matrix computed at each image pixel across a Gaussian pyramid of scales, and its speed is improved by employing the integral image method to evaluate the Hessian matrix. After training, the image segmented by the U-Net model becomes a reconstructed vessel map, which can be used as the pre-registration image. In this way, feature points are not picked up from the circular ROI edge of the DSWC model when the SURF algorithm is applied, and the registration can succeed. In this study, to obtain acceptable feature points before and after V-PDT, the image is divided into four regions around the extracted circle center, as shown in Fig. 3. Feature point pairs are accepted only if the matched points fall in the same region at similar positions.

Fig. 3. Feature points extracted by the SURF algorithm are constrained based on the position and direction within the circular region. O and O’ are the registration centers of the circles. A and A’ form one group of registration points.
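A sketch of the feature matching with the quadrant constraint of Fig. 3 is shown below (Python/OpenCV, an illustrative reconstruction rather than the authors' code; function names are hypothetical). Note that SURF is only available in opencv-contrib builds with the non-free modules enabled; cv2.ORB_create() can be substituted if SURF is unavailable.

```python
import cv2

def quadrant(pt, center):
    # Which of the four regions of Fig. 3 the point falls in, relative to the circle center
    return (pt[0] >= center[0], pt[1] >= center[1])

def match_surf_features(seg_pre, seg_post, center_pre, center_post):
    detector = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # requires opencv-contrib (non-free)
    kp1, des1 = detector.detectAndCompute(seg_pre, None)
    kp2, des2 = detector.detectAndCompute(seg_post, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    # Keep only pairs whose points lie in the same quadrant of their respective circles
    good = [m for m in matches
            if quadrant(kp1[m.queryIdx].pt, center_pre) ==
               quadrant(kp2[m.trainIdx].pt, center_post)]
    return kp1, kp2, sorted(good, key=lambda m: m.distance)
```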

The flow chart for image registration is shown in Fig. 4. For image registration, an affine transformation was utilized to accomplish the resampling for image rotation and translation, since distortion of the blood vessel shapes after V-PDT was not observed [31]. The ROI centers of the pre- and post- V-PDT images extracted by the Hough transform are not always fully consistent; consequently, the circular regions do not overlap completely after registration. Therefore, the overlapping region of the registered images was extracted as the vascular area for quantitative analysis.

Fig. 4. Flow chart for image registration.
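Continuing the matching sketch above, an affine transform can be estimated from the constrained matches and the overlapping circular region extracted as follows; this is an illustrative sketch, and the use of cv2.estimateAffinePartial2D (rotation, translation and scale only) is an assumption consistent with the absence of shape distortion noted above.

```python
import cv2
import numpy as np

def register_and_overlap(seg_pre, seg_post, kp1, kp2, matches, roi_pre, roi_post):
    src = np.float32([kp2[m.trainIdx].pt for m in matches])   # points in the post-V-PDT image
    dst = np.float32([kp1[m.queryIdx].pt for m in matches])   # corresponding points pre-V-PDT
    A, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    h, w = seg_pre.shape[:2]
    post_reg = cv2.warpAffine(seg_post, A, (w, h))             # resample post image onto pre image
    # Overlap of the two circular ROIs (a, b, r) found by the Hough step
    mask_pre = np.zeros((h, w), np.uint8)
    mask_post = np.zeros((h, w), np.uint8)
    cv2.circle(mask_pre, (int(roi_pre[0]), int(roi_pre[1])), int(roi_pre[2]), 255, -1)
    cv2.circle(mask_post, (int(roi_post[0]), int(roi_post[1])), int(roi_post[2]), 255, -1)
    overlap = cv2.bitwise_and(mask_pre, cv2.warpAffine(mask_post, A, (w, h)))
    return post_reg, overlap
```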

2.3 Vasoconstriction of blood vessels after V-PDT

After the segmentation and registration of the pre- and post- V-PDT vascular images, a new method was proposed for calculating the vasoconstriction. Briefly, this method takes the vascular pixels of the pre- V-PDT image as the standard points and checks whether these standard points still exist in the vascular image after V-PDT. If a standard point disappears after V-PDT, the blood vessel has constricted at this position, and the constriction position is recorded. All the recorded constriction positions are assembled into a map of the vasoconstriction, and the vasoconstriction can be calculated with Eq. (4).

$$Vasoconstriction = {T_{constriction}}/{T_{before}}$$
where $T_{before}$ is the total number of vascular pixels in the pre- V-PDT image and $T_{constriction}$ is the number of those pixels that disappear after V-PDT, i.e. the constricted pixels.
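A minimal sketch of this calculation, assuming binary (0/255) segmentation masks that have already been registered and restricted to the overlap region, might read:

```python
import numpy as np

def vasoconstriction(seg_pre, seg_post_reg, overlap_mask):
    pre = (seg_pre > 0) & (overlap_mask > 0)          # vascular pixels before V-PDT
    post = (seg_post_reg > 0) & (overlap_mask > 0)    # vascular pixels after V-PDT
    constricted = pre & ~post                         # pre-V-PDT pixels that disappeared
    T_before = pre.sum()                              # total vascular pixels pre-V-PDT
    T_constriction = constricted.sum()                # disappeared (constricted) pixels
    return T_constriction / T_before, constricted     # Eq. (4) and the vasoconstriction map
```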

3. Results

The blood vessels in the DSWC model pre- and post- V-PDT were compared to quantify the vasoconstriction. In total, 20 groups were collected, each consisting of one pre- and one post- V-PDT image. Among these, 15 groups were used to train the U-Net model, and the remaining 5 groups were selected for data analysis.

3.1 Auto-selected circular ROI using the Hough transform

The process of circular ROI extraction is shown in Fig. 5. First, a max filter was used to enhance the circular edge of the metal frame (Fig. 5(b)), and the binary image (Fig. 5(c)) was then obtained through the segmentation method introduced by Otsu [21]. After this, the contour image (Fig. 5(d)) was extracted by the Canny edge detector and area filter. The image of the detected circles with different centers and radii (Fig. 5(e)) could then be obtained by the Hough transform. Four parameters, the maximum radius, minimum radius, radius search step and angle search step, were introduced for this transform. According to Eq. (1), all of the circular contour points can be mapped from Cartesian coordinates to polar coordinates, and the number of pixel points sharing the same center and radius is counted, as shown by the green circle in Fig. 5(e). Finally, the red circular ROI in Fig. 5(f) is extracted using the minimum radius and the averaged center coordinates of the detected circles.

Fig. 5. (a) Original and (b) enhanced images of the DSWC model. (c) Binary image; (d) contour image; (e) image of the detected circles with different centers and radii; (f) image of the detected circle overlapped with the original image.

3.2 Segmentation for blood vascular images

Representative original images of a DSWC model, pre- and post- V-PDT, are shown in Fig. 6(a) and 6(d), respectively. MATLAB software (The MathWorks Inc., MA) running on a 2.6 GHz CPU with 8.00 GB RAM was used to extract the circular ROIs, and the auto-selected circular ROIs of the DSWC model were extracted with the same radius of 400 pixels, as shown in Fig. 6(b) and 6(e). For each image, the processing time was around 20-60 seconds. Figures 6(c) and 6(f) show the corresponding segmentation images used to train the U-Net model.

Fig. 6. Images of DSWC model for (a) pre- and (d) post- V-PDT. Auto-selected circular ROI images for (b) pre- and (e) post- V-PDT using the Hough transform. Segmentation images of (c) pre- and (f) post- V-PDT. The binarized circle boundary was removed by reducing the ROI radius.

In this study, the labeled blood vessel images of the DSWC model were segmented by the dynamic threshold algorithm. Figure 7(a) shows that a lot of background noise is segmented when a constant C of 1 is chosen, whereas the small blood vessels cannot be segmented with $C = 7$ (Fig. 7(c)). Therefore, $C = 3$ was chosen for segmentation since the boundary gradient value of the observed small blood vessels is around 3 (Fig. 7(b)). As shown in Fig. 7(d), the blood vessels cannot be extracted exactly with $N = 7$ because the mean-filter area is too small. With an N value of 90, the resolution of the segmented image is lower, i.e. small blood vessels close to the large blood vessels are segmented as part of the large vessels, as indicated in Fig. 7(f). For the dynamic threshold algorithm, the segmentation accuracy for large blood vessels is high, while it is lower for small blood vessels due to the noise. After this optimization of the dynamic threshold algorithm, $C = 3$ and $N = 35$ were used for segmentation.

Fig. 7. Segmentation images of blood vessels achieved by dynamic threshold algorithm: (a) $N = 35$, $C = 1$; (b) $N = 35$, $C = 3$; (c) $N = 35$, $C = 7$; (d) $C = 3$, $N = 7$; (e) $C = 3$, $N = 35$; (f) $C = 3$, $N = 90$.

To overcome the influence of the selected constants C and N in the dynamic threshold algorithm, the U-Net model with deep learning was utilized for vascular segmentation. However, the image samples used to train the U-Net model must first be labeled. For U-Net training, the mixed samples consisted of retinal images and blood vessel images of the DSWC model. The labeled retinal blood vessel images were taken directly from DRIVE, since the diameter of the small blood vessels in the DSWC model is consistent with the diameter of the blood vessels in the retina in the standard DRIVE dataset [29]. The labeled images of large blood vessels in the DSWC model were segmented by the dynamic threshold algorithm. Figure 8 shows typical segmented blood vessels in the DSWC model processed by the artificial method and by the U-Net model after image binarization, respectively. The images in Fig. 8(b) and 8(c) are very similar to that in Fig. 8(a), while the segmentation sensitivity of the U-Net model is higher than that of the artificial method.

Fig. 8. Segmentation images of blood vessels processed by (a) the artificial method, and by the U-Net model (b) without and (c) with the area filter.

To verify the accuracy of determining the diameter of a blood vessel, a standard syringe needle with a diameter of 600 µm (Kangyou Medical Instrument Co., Ltd., Jiangsu, China) was imaged and segmented. The original image and the segmented black-and-white image of the syringe needle are shown in Fig. 9(a) and (b), respectively. The diameter of the standard syringe needle after image segmentation can be calculated using Eq. (2). As illustrated in Fig. 9(b), the endpoints of the marked red line crossing the needle were (626, 344) and (626, 385). The diameter of the needle is thus 41 pixels, which is equal to 564 µm, an error of 6.0%. These data suggest that the segmentation error for the diameter is acceptable, and the vasoconstriction can be accurately determined from the vascular area.
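Assuming that the 0.022 cm display-pixel dimension and the 16× magnification quoted in Section 4.1 apply to this measurement (the needle line runs along the corresponding axis), Eq. (2) reproduces the reported value:
$$D = \frac{(385 - 344)\times 0.022\ \textrm{cm}}{16} \approx 0.0564\ \textrm{cm} = 564\ \mathrm{\mu m}, \qquad \textrm{error} = \frac{600 - 564}{600} = 6.0\%.$$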

Fig. 9. (a) Original image and (b) segmented black-white image of syringe needle.

3.3 Image registration

The images without and with registration are shown in Fig. 10(a) and 10(b), respectively. Since the centers and radii of the circular ROIs auto-selected by the Hough transform for pre- and post- V-PDT are not always the same, the overlapping part of the circular ROIs is further extracted, as illustrated by the white area in Fig. 10(c). The image inside the white overlap area is then used to calculate the vasoconstriction. Compared to the image in Fig. 6(c), the vascular pixels that disappeared in Fig. 6(f) are recorded, as shown in Fig. 10(d). The sum of the vascular pixels in the vasoconstriction image in Fig. 10(d) is defined as $T_{constriction}$, while the sum of the vascular pixels in Fig. 6(c) is defined as $T_{before}$. The vasoconstriction can then be calculated by Eq. (4).

Fig. 10. (a) Non-registered and (b) registered images of the DSWC model for pre- and post- V-PDT (blue and red indicate pre- and post- V-PDT, respectively); (c) the white region is the overlapping mask; (d) vasoconstriction image (the blue line is the overlap mask edge).

3.4 Vasoconstriction of V-PDT

For demonstration, control studies without V-PDT were performed first, and the vasoconstriction for three control cases of the DSWC model is shown in Fig. 11. The first and second columns show the original images for pre- and post-treatment, respectively. The third column shows the overlap images after image registration, and the fourth column shows the vasoconstriction image (the blue line is the overlapping mask edge). The vasoconstriction for each case is indicated in the fifth column. The vasoconstriction for the control group (n=3) is 3.3${ \pm }$1.3%, which suggests that, as expected, there was no obvious vasoconstriction of the blood vessels in the DSWC model without V-PDT treatment.

Fig. 11. Vasoconstriction of blood vessels in the DSWC model for post-treatment. Case 1 was treated without RB and light; case 2 was treated with RB but without light; case 3 was treated with light but without RB.

Figure 12 shows the vasoconstriction of vessels in the DSWC model pre- and post- V-PDT. A decrease in vascularization, due to blood vessel shutdown or a reduction in vessel diameter, is clearly observed. The vasoconstriction after V-PDT is 31.9${ \pm }$6.7%, and a statistically significant difference was observed between the V-PDT group and the control group described above.

Fig. 12. Vasoconstriction of blood vessels in the DSWC model post- V-PDT. BALB/c nude mice were intravenously injected with RB solution (25 mg/kg body weight), and V-PDT was performed with laser irradiance of 50 mW/cm2 and a total light dose of 30 J/cm2.

4. Discussion

4.1 Auto-selected circular ROI using the Hough transform

To accurately extract the circular ROI with the Hough transform, it is essential to detect the edge of the metal frame. Hence, a max filter of size 3${ \times }$3 is used to enhance the image, because the metal frame of the DSWC model contains bright points where the metal reflects light. Figure 13(a) and 13(c) show the binary images processed with and without the max filter, respectively, and the edge in Fig. 13(b) is more obvious than that in Fig. 13(d). The diameter of the DSWC model is 10.0 mm, and a 15.6-inch screen was used for display. With an aspect ratio of 16:9 for the image display, the length and width of the screen are 31.7 cm and 23.77 cm, respectively. The resolution of the screen is 1920${ \times }$1080, so the length and width of each pixel are 0.0165 cm and 0.022 cm. The magnification of the optical microscope is 16${ \times }$, and the radius of the DSWC model is therefore about 485 pixels, as calculated theoretically from Eq. (2). Accordingly, the minimum and maximum radii were set to 350 and 500 pixels in this study; the selected radius range affects the cyclic search speed.
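As a check, rearranging Eq. (2) with the values above (0.5 cm radius, 16× magnification, 0.0165 cm pixel length) reproduces the quoted estimate:
$$N_2 - N_1 = \frac{D\beta}{L_0} = \frac{0.5\ \textrm{cm}\times 16}{0.0165\ \textrm{cm}} \approx 485\ \textrm{pixels},$$
which is why the search range of 350 to 500 pixels brackets this value.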

Fig. 13. Enhanced binary images processed (a) with and (c) without Max filter, and the edge of a metal frame in the DSWC model processed by Canny detector (b) with and (d) without the Max filter.

4.2 Segmentation for blood vascular image

The dynamic threshold algorithm is one method for segmentation, in which the constant C and the mean-filter block size N must be selected. The value of N is chosen according to the size of the blood vessels in the image. In this study, the maximum diameter of the blood vessels in the DSWC model is around 35 pixels, so 35 pixels was used as the block size N in order to highlight the filtered blood vessels. In order to segment small blood vessels, a constant C of 3 was chosen because the edge gradient of the small blood vessels is about 3.

To examine the accuracy, sensitivity and specificity of the dynamic threshold algorithm and the U-Net model [32], the retinal DRIVE dataset was selected as the standard reference for comparison. In addition, three different training sets, i.e. the retinal images, the vascular images of the DSWC model, and the mixed samples containing both DSWC vascular and retinal images, were used for training. As shown in Table 1, for the test on the vascular images of the DSWC model, training on the mixed samples with both DSWC and retinal vascular images provided satisfactory results. The accuracy, sensitivity and specificity for the U-Net model are 90.64%, 80.12% and 92.83%, respectively. Compared to the dynamic threshold algorithm, the sensitivity of the U-Net model increased from 62.78% to 80.12%, while the accuracy and specificity remain acceptable. The lower sensitivity of the dynamic threshold algorithm implies that this algorithm is easily affected by noise and other factors.

Table 1. Performance determined from the retinal and DSWC model vascular image datasets
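The accuracy, sensitivity and specificity can be computed by comparing each segmented mask with its labeled ground truth; a minimal sketch of this bookkeeping (an assumption, not the authors' code) is:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    p, t = pred.astype(bool), truth.astype(bool)
    tp = np.sum(p & t);  tn = np.sum(~p & ~t)
    fp = np.sum(p & ~t); fn = np.sum(~p & t)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # fraction of true vessel pixels detected
    specificity = tn / (tn + fp)   # fraction of background pixels correctly rejected
    return accuracy, sensitivity, specificity
```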

As shown in Fig. 14, the sensitivity of the dynamic threshold algorithm for segmenting small blood vessels is lower than that of the U-Net model, as indicated in Fig. 14(c) and 14(d), respectively. This finding agrees well with the data obtained above.

Fig. 14. (a) Original image; (b) ground truth image; segmentation images of the retina with (c) the dynamic threshold algorithm and (d) the U-Net model.

4.3 Image registration

As shown in Fig. 15(a), the feature points of the SURF algorithm often fall on the circular edge if the images are registered before segmentation, which easily leads to registration failure. In contrast, when the segmented images are used directly for registration, the feature points are located inside the circular ROI, as indicated in Fig. 15(b). This shows that image registration using feature points in the ROI image after segmentation is better than registration prior to segmentation.

Fig. 15. Image registration with feature points in ROI image (a) before segmentation and (b) after segmentation.

4.4 Image processing error

The image processing error predominantly includes (1) the radius of the circular ROI, (2) segmentation and (3) registration. Since only the overlapping image after registration is selected, as shown in Fig. 10(c), the error in the circular ROI radius can be ignored.

Image registration can also be achieved by a manual method [33], in which the feature points for manual registration are selected using MATLAB software. The image registration of blood vessels in the DSWC model implemented manually and with the SURF algorithm is shown in Fig. 16(a) and 16(b), respectively. The difference between Fig. 16(a) and 16(b) is indicated in Fig. 16(c). Compared to the manual registration, the error of the image registration with the SURF algorithm is 4.9% (i.e. the ratio of Fig. 16(c) to Fig. 16(a)).

Fig. 16. Image registration implemented by (a) the manual method and (b) the SURF algorithm; (c) difference between images (a) and (b).

As shown in Fig. 17, vasoconstriction maps were obtained using the artificial method, the dynamic threshold algorithm and the U-Net model for image segmentation, combined with either the SURF algorithm or the manual method for image registration. With the SURF algorithm, the vasoconstrictions are 64.29%, 45.68% and 14.5% for the artificial method, the dynamic threshold algorithm and the U-Net model, respectively, while with manual registration they are 61.02%, 42.57% and 14.39%. There is no significant difference in vasoconstriction between the SURF algorithm and the manual method, which suggests that the errors of the two registration approaches are comparable. In addition, the difference in vasoconstriction for the U-Net model is minimal, implying that more suitable feature points can be found for registration owing to the high segmentation accuracy of the U-Net model. However, the manual method for image registration is time consuming.

Fig. 17. Images of DSWC model for (a) pre- and (d) post- V-PDT. (c)-(h) vasoconstriction maps obtained by different methods for image segmentation and registration.

A significant difference in vasoconstriction was found among the image segmentation methods. The artificial method had the lowest sensitivity since many small blood vessels could not be accurately segmented, as indicated by the white arrow in Fig. 7. As a result, both the segmentation error and the vasoconstriction error increase. On the other hand, the segmentation sensitivity and accuracy of the dynamic threshold algorithm are strongly influenced by the selected constants C and N, as illustrated in Fig. 7. Therefore, the U-Net model combined with the SURF algorithm was chosen for image segmentation and registration in this study.

A previous study showed that a large number of capillaries can be regenerated and become visible after V-PDT [14], as shown in Fig. 6(a) and 6(c). If a direct subtraction after image registration were applied, the vasoconstriction could be significantly increased, resulting in overestimation; this is successfully avoided by the method proposed in this study.

5. Conclusion

In conclusion, a new automatic protocol to quantify the vasoconstriction of blood vessels in the DSWC model was proposed, which focuses on tracking the pixels of blood vessels in pre- V-PDT images that disappear after V-PDT. Compared with existing methods for calculating vasoconstriction, the proposed protocol successfully avoids the influence of new capillaries that may be generated after V-PDT. Furthermore, images can be processed automatically and efficiently without manual labelling. Further work will address the statistical analysis of the vasoconstriction for V-PDT with various photosensitizers and light doses.

Funding

National Natural Science Foundation of China (61520106015, 61635014, 61935004); Natural Science Foundation of Fujian Province (2017J01743, 2019Y4004).

Acknowledgments

We gratefully acknowledge Dr. Lothar Lilge from the University of Toronto for his insightful discussion and assistance in proofreading.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. S. Cavin, T. Riedel, P. Rosskopfova, M. Gonzalez, G. Baldini, M. Zellweger, G. Wagnieres, P. J. Dyson, H. B. Ris, T. Krueger, and J. Y. Perentes, “Vascular-targeted low dose photodynamic therapy stabilizes tumor vessels by modulating pericyte contractility,” Lasers Surg. Med. 51(6), 550–561 (2019). [CrossRef]  

2. B. Li, L. Lin, H. Lin, and B. C. Wilson, “Photosensitized singlet oxygen generation and detection: Recent advances and future perspectives in cancer photodynamic therapy,” J. Biophotonics 9(11-12), 1314–1325 (2016). [CrossRef]  

3. A. Kawczyk-Krupka, K. Wawrzyniec, S. Musiol, M. Potempa, A. Bugaj, and A. Sieroń, “Treatment of localized prostate cancer using WST-09 and WST-11 mediated vascular targeted photodynamic therapy-A review,” Photodiagn. Photodyn. Ther. 12(4), 567–574 (2015). [CrossRef]  

4. A. Noweski, A. Roosen, S. Lebdai, E. Barret, M. Emberton, F. Benzaghou, M. Apfelbeck, B. Gaillac, C. Gratzke, C. Stief, and A. R. Azzouzi, “Medium-term follow-up of vascular-targeted photodynamic therapy of localized prostate cancer using TOOKAD soluble WST-11 (Phase II Trials),” Eur. Urol. Focus 5(6), 1022–1028 (2019). [CrossRef]  

5. Z. Li, P. Agharkar, and B. Chen, “Therapeutic enhancement of vascular-targeted photodynamic therapy by inhibiting proteasomal function,” Cancer Lett. 339(1), 128–134 (2013). [CrossRef]  

6. A. Kawczyk-Krupka, A. Bugaj, M. Potempa, K. Wasilewska, W. Latos, and A. Sieroń, “Vascular-targeted photodynamic therapy in the treatment of neovascular age-related macular degeneration: Clinical perspectives,” Photodiagn. Photodyn. Ther. 12(2), 161–175 (2015). [CrossRef]  

7. D. Chen, J. Ren, Y. Wang, H. Zhao, B. Li, and Y. Gu, “Relationship between the blood perfusion values determined by laser speckle imaging and laser Doppler imaging in normal skin and port wine stains,” Photodiagn. Photodyn. Ther. 13, 1–9 (2016). [CrossRef]  

8. D. Chen, J. Ren, Y. Wang, B. Li, and Y. Gu, “Intraoperative monitoring of blood perfusion in port wine stains by laser Doppler imaging during vascular targeted photodynamic therapy: A preliminary study,” Photodiagn. Photodyn. Ther. 14, 142–151 (2016). [CrossRef]  

9. H. Qiu, Y. Mao, Y. Gu, Y. Wang, J. Zhu, and J. Zeng, “Vascular targeted photodynamic therapy for bleeding gastrointestinal mucosal vascular lesions: A preliminary study,” Photodiagn. Photodyn. Ther. 9(2), 109–117 (2012). [CrossRef]  

10. H. Qiu, Y. Mao, J. Zeng, Y. Wang, J. Zhang, N. Huang, Q. Liu, Y. Yang, E. Linghu, and Y. Gu, “Vascular-targeted photodynamic therapy of gastric antral vascular ectasia (GAVE),” J. Photochem. Photobiol., B 166, 58–62 (2017). [CrossRef]  

11. L. Lin, Y. Li, J. Zhang, Z. Tan, D. Chen, S. Xie, Y. Gu, and B. Li, “Vessel constriction correlated with local singlet oxygen generation during vascular targeted photodynamic therapy,” Proc. SPIE 9268, 92680T (2014). [CrossRef]  

12. H. S. de Bruijn, S. Brooks, T. L. ten Hagen, E. R. de Haas, and D. J. Robinson, “Light fractionation significantly increases the efficacy of photodynamic therapy using BF-200 ALA in normal mouse skin,” PLoS One 11(2), e0148850 (2016). [CrossRef]  

13. K. S. Samkoe, A. A. Clancy, A. Karotki, B. C. Wilson, and D. T. Cramb, “Complete blood vessel occlusion in the chick chorioallantoic membrane using two-photon excitation photodynamic therapy: implications for treatment of wet age-related macular degeneration,” J. Biomed. Opt. 12(3), 034025 (2007). [CrossRef]  

14. H. Buzzá, L. Silva, L. Moriyama, V. Bagnato, and C. Kurachi, “Evaluation of vascular effect of photodynamic therapy in chorioallantoic membrane using different photosensitizers,” J. Photochem. Photobiol., B 138, 1–7 (2014). [CrossRef]  

15. T. A. Middelburg, H. S. de Bruijn, L. Tettero, A. van der Ploeg van den Heuvel, H. A. Neumann, E. R. de Haas, and D. J. Robinson, “Topical hexylaminolevulinate and aminolevulinic acid photodynamic therapy: complete arteriole vasoconstriction occurs frequently and depends on protoporphyrin IX concentration in vessel wall,” J. Photochem. Photobiol., B 126, 26–32 (2013). [CrossRef]  

16. M. A. Sirotkina, A. A. Moiseev, L. A. Matveev, V. Y. Zaitsev, V. V. Elagin, S. S. Kuznetsov, G. V. Gelikonov, S. Y. Ksenofontov, E. V. Zagaynova, F. I. Feldchtein, N. D. Gladkova, and A. Vitkin, “Accurate early prediction of tumour response to PDT using optical coherence angiography,” Sci. Rep. 9(1), 6492 (2019). [CrossRef]  

17. S. Tzaridis, M. W. M. Wintergerst, C. Mai, T. F. C. Heeren, F. G. Holz, P. Charbel Issa, and P. Herrmann, “Quantification of retinal and choriocapillaris perfusion in different stages of macular telangiectasia type 2,” Invest. Ophthalmol. Visual Sci. 60(10), 3556–3562 (2019). [CrossRef]  

18. M. Pellegrini, M. Cozzi, G. Staurenghi, and F. Corvi, “Comparison of wide field optical coherence tomography angiography with extended field imaging and fluorescein angiography in retinal vascular disorders,” PLoS One 14(4), e0214892 (2019). [CrossRef]  

19. G. M. Palmer, A. N. Fontanella, S. Shan, G. Hanna, G. Zhang, C. L. Fraser, and M. W. Dewhirst, “In vivo optical molecular imaging and analysis in mice using dorsal window chamber models applied to hypoxia, vasculature and fluorescent reporters,” Nat. Protoc. 6(9), 1355–1366 (2011). [CrossRef]  

20. T. Tasdizen, E. Jurrus, and R. T. Whitaker, “Non-uniform illumination correction in transmission electron microscopy,” in Proceedings of MICCAI Workshop on Microscopic Image Analysis with Applications in Biology (2008), pp. 5–6.

21. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Syst., Man, Cybern. 9(1), 62–66 (1979). [CrossRef]  

22. T. Chanwimaluang and G. Fan, “An efficient blood vessel detection algorithm for retinal images using local entropy thresholding,” in Proceedings of IEEE International Symposium on Circuits and Systems (IEEE, 2003), pp. V-21–V-24.

23. H. Yuen, J. Princen, J. Illingworth, and J. Kittler, “Comparative study of Hough transform methods for circle finding,” Image Vision Comput. 8(1), 71–77 (1990). [CrossRef]  

24. D. H. Ballard, “Generalizing the Hough transform to detect arbitrary shapes,” Pattern Recogn. 13(2), 111–122 (1981). [CrossRef]  

25. A. Goel, R. McColl, K. S. King, A. Whittemore, and R. M. Peshock, “Fully automated tool to identify the aorta and compute flow using phase-contrast MRI: Validation and application in a large population based study,” J. Magn. Reson. Imaging 40(1), 221–228 (2014). [CrossRef]  

26. X. Ni and P. Wu, “Study on target dimension measurement using linear CCD,” Opt. Instruments 33(6), 11–13 (2011).

27. R. C. Gonzalez and R. E. Woods, “Digital image processing,” (Prentice hall, Upper Saddle River, 2002).

28. O. Ronneberger, P. Fischer, and T. Brox, “U-Net : Convolutional networks for biomedical image segmentation,” in Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention, N. Naval, ed (Springer International Publishing, Switzerland, 2015), pp. 234–241.

29. DRIVE, Digital Retinal Images for Vessel Extraction. http://www.isi.uu.nl/Research/Databases/DRIVE/, 2004.

30. H. Bay, T. Tuytelaars, and L. V. Gool, “SURF: speeded up robust features,” Comput. Vis. Image Und. 110(3), 346–359 (2008). [CrossRef]  

31. C. Wang, A. Stefanidis, A. Croitoru, and P. Agouris, “Map registration of image sequences using linear features,” Photogramm. Eng. Remote Sens. 74(1), 25–38 (2008). [CrossRef]  

32. M. M. Fraz, S. A. Barman, P. Remagnino, A. Hoppe, A. Basit, B. Uyyanonvara, A. R. Rudnicka, and C. G. Owen, “An approach to localize the retinal blood vessels using bit planes and centerline detection,” Comput. Meth. Prog. Bio. 108(2), 600–616 (2012). [CrossRef]  

33. G. Balakrishnan, A. Zhao, M. R. Sabuncu, J. Guttag, and A. V. Dalca, “VoxelMorph: A learning framework for deformable medical image registration,” IEEE Trans. Med. Imaging 38(8), 1788–1800 (2019). [CrossRef]  
