
Fiber pattern removal and image reconstruction method for snapshot mosaic hyperspectral endoscopic images

Open Access

Abstract

Hyperspectral endoscopic imaging has the potential to enhance clinical diagnostics and outcomes. Most commercial endoscopes use imaging fiber bundles to transmit the collected signal from the patient to the medical operator. These bundles consist of several fiber cores surrounded by a cladding layer, creating comb-like artifacts that complicate further analysis both spatially and spectrally. Here we present an optical fiber pattern removal algorithm, applied to hyperspectral bronchoscopic images, that works robustly and quantitatively without the need for specific optical or electrical hardware. We validate the performance of the pattern removal using a novel hyperspectral phasor approach. The algorithm can be generalized to all forms of fiber bundle hyperspectral endoscopy.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Flexible endoscopes (FE) have become commonly used in medical practice as a means to look inside cavities of the body (from the Greek endon, inside, and skopeo, examine). Most flexible endoscopes include an optical fiber bundle that channels the light reflected from the inner body tissues at the distal end and conducts it to the proximal end, where the detector lies. FE are powerful tools to examine inner body tissues with reduced invasiveness.

FE have been used in medical practice in different specialized forms to investigate specific body systems. For example, colonoscopy, enteroscopy and sigmoidoscopy are aimed at the gastrointestinal (GI) tract; laryngoscopy and rhinoscopy at the ear, nose and throat; while ureteroscopy and falloposcopy are designed for urological and gynaecological applications, respectively. In our case, we focus on bronchoscopy, which is used to examine the respiratory tract.

Recent technological advancements in detection sensors [1] have simplified the acquisition of Hyperspectral Imaging (HSI) data and bridged the gap between HSI and multiple applications in the biomedical field. HSI exploits the spectral dimension to differentiate tissues based on their intrinsic optical properties. Many groups, including ours, seek to take advantage of HSI by combining it with FE. However, there is one issue with combining these two imaging modalities. The design of these optical fiber bundles can be generalized as a multitude of fiber cores surrounded by cladding. One common artifact of this bundle approach is a comb pattern on the images, which complicates image analysis [2]. This is especially problematic for HSI, where direct and accurate spectral analysis is difficult when the image is subdivided by cladding.

Different approaches have been taken for fiber-based hyperspectral and multispectral imaging [3], with a predominance of scanning techniques. Rotational scanning mechanisms have been applied to single-mode optical fibers [4] and enhanced to acquire multidimensional data, including hyperspectral endoscopic imaging [5]. Other approaches use different geometries to scan a fiber bundle [6–9], or physically scan the fiber to obtain an image [10].

Such approaches, however, require precise scanning hardware and relatively long hyperspectral cube acquisition times, making it challenging to image non-static samples. The recent development of snapshot hyperspectral imaging sensors [11] has enabled the generation of high-resolution hyperspectral images in real time. However, this is a hardware-based solution and does not account for the influence of the fiber pattern. An interpolation method has been proposed [12] for a custom-made imaging system.

Motivated by the increasingly broad application of HSI using snapshot mosaic hyperspectral cameras with endoscopic techniques, and the need for a fast and robust solution to overcome the fiber comb pattern limitations in such applications, we present a novel fiber pattern removal algorithm designed for hyperspectral imaging.

2. Materials and methods

2.1 Imaging setup

Our imaging configuration (Fig. 1(a)) comprises a commercial bronchoscope (PortaView LF-TP, Olympus, Tokyo, Japan) and a hyperspectral mosaic camera (MV1-D2014x1088-HS03, Photonfocus, Lachen, Switzerland). The bronchoscope has a field of view of 90 degrees and a depth of field of 3-50 mm. The visible portion of the fiber bundle consists of 3134 fibers with a measured diameter of 10 μm. Illumination is provided by a halogen light source (CLK-4, Olympus, Tokyo, Japan). The super-Bayer filter presents a layout of 16 physical channels (Fig. 1(b)). The peak wavelengths of these channels are not evenly distributed in the active wavelength range (VIS 470-630 nm) and several channels exhibit a second-order response (Fig. 1(c)). To overcome this issue, the sensor's manufacturer calibrates the quantum efficiency of the sensor and provides a spectrum-correction matrix that transforms the 16 physical channels into N virtual channels (N = 13 in this paper). This mathematical operation suppresses the second-order response and reduces the crosstalk between different bands.


Fig. 1 System setup. (a) Imaging system configuration. (1) Snapshot CMOS mosaic hyperspectral camera. (2) 1:2.5 matched pair BBAR 400-700nm, 35mm bronchoscope adapter (Thorlabs, Newton, NJ, USA), longpass filter (OD4 – 475nm/25mm, Edmund, Barrington, NJ, USA) and shortpass filter (OD4 – 650nm/25mm, Edmund, Barrington, NJ, USA). (3)(4) Bronchoscope and fiber bundle. (5) Halogen light source (Olympus CLK-4). (6) Light-guide cable (4.25mm, 3m, CF type, Olympus, Tokyo, Japan). (7) Sample. (8) Work station (PC). (9) Ethernet cable. (b) Snapshot mosaic hyperspectral CMOS sensor Bayer filter layout and peak wavelengths of each channel (SM4x4-460-630, IMEC, Leuven, Belgium). (c) Quantum Efficiency (QE) of 16 physical channels.


2.2 Fiber pattern removal algorithm

Our HSI pattern removal strategy consists of a two-step process: a fiber pattern removal algorithm and a spectrum correction for fiber bundles. The fiber pattern removal algorithm further comprises three steps: determination of the fiber core pixel locations (Fig. 2(b), 2(g)), Delaunay triangulation (Fig. 2(c), 2(h)) and barycentric interpolation reconstruction (Fig. 2(d), 2(i)). The fiber core locations are determined using white reference images. The positions of the fiber core pixels remain constant in each channel of the sample image provided no movement occurs at the camera-bronchoscope interface. Delaunay triangulation and barycentric interpolation are applied to both the white reference image and the sample image (16-channel hypercube).


Fig. 2 Fiber pattern removal algorithm diagram. (a) Pattern masked white reference image and (f) target image. (b)(g) Localization of fiber cores based on the white reference image; the same locations are applied to the target image. (c)(h) Application of Delaunay triangulation to generate Voronoi diagrams. (d)(i) Barycentric interpolation applied to reconstruct the pattern removed image. (e)(j) Resulting pattern removed image (channel 6 of 13 shown).


2.2.1 Determination of fiber core pixels based on white reference

Our fiber pattern removal algorithm is based on an interpolation method. We use the fiber core pixels, which have the highest intensity value within each fiber, as the interpolation source and model the light coming out of the optical fiber as a 2D Gaussian function [2]. The hyperspectral mosaic CMOS sensor in our setup utilizes 4x4 Bayer filters, hence the fiber core pixels should have approximately the same location in each channel. However, improved precision is achieved by locating the fiber core pixels for each of the 16 channels. A white reference image is required for fiber core pixel localization and spectrum calibration. The light source intensity was optimized to avoid saturation while filling 80-85% of the 10-bit pixel depth. These settings, acquired on a 99% reflectance standard target (SRS-99-010, Labsphere, North Sutton, USA), ensured ideal acquisition with our samples. The uniformity of the reflectance target allows localization of all fiber cores. Such a process would be challenging on a real sample, where uneven intensity values might fall below the sensor's Gaussian noise (maximum noise value measured at an intensity of 70 in 10 bits). Our fiber core pixel localization method can be summarized in three parts.

Initially, we choose a threshold $T_i$, $i = 1, 2, \ldots, 16$ for each channel to remove the cladding pixels. The value of $T$ depends on the illumination settings. We then perform the first round of fiber core pixel candidate screening (Fig. 2(b)). The fiber core pixel candidates of each channel are chosen using an algorithm that finds 2D local maxima. In our optical configuration, each fiber illuminates approximately a 3x3 pixel area. We utilize a 3x3 mask to scan the 16-channel images (border-extended images) in raster order. If the center pixel of the mask is higher than all of its 8-connected neighbor pixels, it is chosen as a fiber core pixel candidate $P_k$, $k = 1, \ldots, N$.
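As an illustration of this first screening pass, a minimal MATLAB sketch is given below. This is not the authors' code; the variable names (whiteRef, T) are assumptions, and the Image Processing Toolbox function ordfilt2 is used here as one way to take the maximum over the 8-connected neighborhood.

```matlab
% First-round screening sketch: keep pixels that exceed the cladding
% threshold T and are strict local maxima within their 3x3 neighborhood.
% whiteRef : one channel of the white reference image (2D array)
% T        : cladding threshold for this channel (illumination dependent)
function [rows, cols] = findCoreCandidates(whiteRef, T)
    img = double(whiteRef);
    nhood = true(3);  nhood(2,2) = false;                 % 8-connected neighbors only
    neighborMax = ordfilt2(img, 8, nhood, 'symmetric');   % max of the 8 neighbors (border extended)
    isCandidate = (img > T) & (img > neighborMax);        % strict 2D local maximum above threshold
    [rows, cols] = find(isCandidate);                     % candidate fiber core pixel coordinates
end
```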

Finally, we perform a second round of fiber core pixel screening to remove incorrect core pixels (Fig. 3). We sort the N fiber core pixel candidates $P_k$, $k = 1, \ldots, N$, in descending order of intensity. We then define a mask of size $L \times L$, where $L$ depends on the distance between nearby fiber core pixels. In our case, the horizontal and vertical distance is 4 pixels, hence the ideal value for $L$ is 3. Following the sorting order, we use the current candidate as the center of the mask and check whether any of its 8-connected neighbor pixels was previously identified as a candidate. If other candidate pixels are present, we discard the candidate with lower intensity. A flood fill algorithm was considered for implementing the second round of screening [12]; however, we opted for the mask-based approach for time efficiency.
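A sketch of this second screening round, under the same assumptions as above, is shown below; candidates are visited from brightest to dimmest and discarded whenever a brighter, already accepted candidate lies within the L×L mask.

```matlab
% Second-round screening sketch: remove candidates that fall inside the
% LxL neighborhood of a brighter, already accepted candidate (L = 3 here).
% rows, cols : candidate coordinates from the first screening round
% img        : the same white-reference channel used for screening
function keep = screenCoreCandidates(rows, cols, img, L)
    half = floor(L/2);
    idx = sub2ind(size(img), rows, cols);
    [~, order] = sort(double(img(idx)), 'descend');   % brightest candidates first
    occupied = false(size(img));                      % map of accepted core pixels
    keep = false(numel(rows), 1);
    for k = order(:)'
        r = rows(k);  c = cols(k);
        r1 = max(1, r-half);  r2 = min(size(img,1), r+half);
        c1 = max(1, c-half);  c2 = min(size(img,2), c+half);
        if ~any(any(occupied(r1:r2, c1:c2)))          % no brighter core accepted nearby
            keep(k) = true;
            occupied(r, c) = true;
        end
    end
end
```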


Fig. 3 Fiber core pixel screening process. (a) Fiber core pixel candidates (magenta circles) after the first round of screening. Note the presence of false positive locations due to local maxima. (b) The second screening identifies false positive coordinates (cyan circles) using relative distance analysis. (c) Fiber core pixels after two rounds of screening and removal of incorrect fiber core locations.


2.2.2 Fiber core pixels-based Delaunay triangulation

The fiber core pixel positions for each channel serve as the input for generating a Voronoi diagram using the Delaunay triangulation method [13]. As a result, every pixel of each channel is enclosed in one triangle whose vertices correspond to the three closest fiber core centers. Representative Delaunay triangulation results are reported in Fig. 2 for the ROI of one channel of the white reference image (Fig. 2(c)) and of the sample image (Fig. 2(h)).
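A possible MATLAB sketch of this step is shown below, assuming coreX and coreY hold the fiber core coordinates found above (illustrative names); the built-in delaunayTriangulation and pointLocation functions return, for every image pixel, the enclosing triangle and its barycentric coordinates.

```matlab
% Triangulation sketch: build the Delaunay triangulation of the fiber core
% centers and assign every pixel of the channel to its enclosing triangle.
DT = delaunayTriangulation(coreX, coreY);              % cores as triangle vertices

[px, py] = meshgrid(1:imgWidth, 1:imgHeight);          % all pixel coordinates
queryPts = [px(:), py(:)];
[triID, baryCoords] = pointLocation(DT, queryPts);     % enclosing triangle index and barycentric
                                                       % coordinates; NaN for pixels outside the hull
```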

2.2.3 Barycentric interpolation to every channel

In the following step, we use an established barycentric interpolation method [14] to remove the fiber pattern and reconstruct the images. Barycentric interpolation is a linear interpolation method based on three source points $v_1$, $v_2$ and $v_3$ (Fig. 4), which in this case correspond to the fiber core centers. $A_1$, $A_2$ and $A_3$ are the areas of the three triangles $v_2 v_3 p$, $v_1 v_3 p$ and $v_1 v_2 p$, where $p$ is the pixel under consideration. These areas also serve as barycentric coordinates [15]. We can estimate the intensity value $I(p)$ of any pixel $p$ located inside each Delaunay triangle from the three fiber core pixels using barycentric interpolation (Eq. (1)) [16].


Fig. 4 Barycentric coordinates and interpolation diagram. The vertices $v_i$, $i = 1, 2, 3$ represent fiber cores. Each pixel within the triangle is defined as $p$. The resulting areas for each pixel $p$ and the points $v_i$ are defined as $A_i$.


$$ I(p) = \frac{A_1}{A_1+A_2+A_3}\, I(v_1) + \frac{A_2}{A_1+A_2+A_3}\, I(v_2) + \frac{A_3}{A_1+A_2+A_3}\, I(v_3) \tag{1} $$
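Continuing the sketch above, the interpolation of Eq. (1) can be applied to every pixel of a channel in a vectorized way. The names (img, coreX, coreY, DT, triID, baryCoords) carry over from the previous snippet and remain illustrative assumptions.

```matlab
% Barycentric interpolation sketch (Eq. (1)) for one channel of the hypercube.
coreI = double(img(sub2ind(size(img), coreY, coreX)));   % I(v_i) at the detected core pixels

inHull    = ~isnan(triID);                               % pixels enclosed by a triangle
vertexIdx = DT.ConnectivityList(triID(inHull), :);       % the three cores of each pixel's triangle
weights   = baryCoords(inHull, :);                       % A_i / (A_1 + A_2 + A_3)

recon = nan(imgHeight, imgWidth);
recon(inHull) = sum(weights .* coreI(vertexIdx), 2);     % weighted sum of the three core intensities
```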

2.3 Improved spectral correction for hyperspectral camera sensor

The method includes two spectral correction steps. The first recovers same-bandwidth spectra from the snapshot mosaic hyperspectral camera sensor. The second improves the performance of the sensor by correcting the spectral distortions introduced by the bronchoscope optical fiber. The snapshot mosaic hyperspectral camera has 16 physical channels, physically consisting of 512x256 4x4 Bayer filters on the sensor. Several channels exhibit a second-order response and there is crosstalk among channels. The CMOS sensor manufacturer provides a correction matrix transforming the 16-channel images into N virtual channels (N = 13 for our camera), which can be used to recover the sample spectrum more precisely. The original 16 channels are linearly transformed into 13 channels with the same spatial size of 512×256 per channel as follows:

$$ R_{original} = \frac{E_{whiteref}}{E_{sample}} \cdot \frac{I_{sample} - I_{darkref,\,sample}}{I_{whiteref} - I_{darkref,\,whiteref}} \tag{2} $$
$$ R_{corrected} = R_{original} \times T \tag{3} $$
where $R_{original}$ is a 2D reflectance matrix of size $(512 \times 256) \times 16$. $E_{whiteref}$ and $E_{sample}$ are the exposure times of the white reference image and the sample image, respectively. $I_{sample}$, $I_{whiteref}$, $I_{darkref,\,sample}$ and $I_{darkref,\,whiteref}$ are the reshaped 2D 16-channel hypercubes of the sample, the white reference and the two dark references, respectively. $I_{darkref,\,sample}$ and $I_{darkref,\,whiteref}$ are acquired using the same exposure times as the sample image and the white reference image, respectively, with the sensor blacked out. $T$ is a $16 \times 13$ correction matrix. $R_{corrected}$ is the 13-channel, linearly transformed and spectrum-corrected 2D reflectance matrix of size $(512 \times 256) \times 13$, which we then reshape into a 3D reflectance matrix of size $512 \times 256 \times 13$. These steps can vary depending on the hyperspectral camera in use.
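A compact MATLAB sketch of these two corrections is given below, assuming the four hypercubes have already been reshaped into (512·256)×16 matrices and that T is the manufacturer's 16×13 correction matrix (variable names are illustrative, not from the authors' code).

```matlab
% Reflectance computation (Eq. (2)) and channel correction (Eq. (3)) sketch.
% Isample, Iwhite, IdarkSample, IdarkWhite : (512*256) x 16 matrices
% Ewhite, Esample                          : exposure times of white reference and sample
% T                                        : 16 x 13 manufacturer correction matrix
Roriginal  = (Ewhite / Esample) .* ...
             (Isample - IdarkSample) ./ (Iwhite - IdarkWhite);  % per-pixel, per-channel reflectance
Rcorrected = Roriginal * T;                                     % 16 physical -> 13 virtual channels
Rcube      = reshape(Rcorrected, 512, 256, 13);                 % back to a 3D hypercube
```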

2.3.1 Spectral calibration for fiber bundles

Light intensities at different wavelengths are likely to be distorted differently during transmission through a fiber, resulting in a distorted recovered spectrum. For this reason, we use a mathematical method to minimize the distortion.

Under the same lighting conditions, we use our imaging configuration to acquire images of six standard color targets (red: SCS-RD-010, green: SCS-GN-010, blue: SCS-BL-010, yellow: SCS-YW-010, grey 20% reflectance: SRS-20-010, grey 50% reflectance: SRS-50-010, Labsphere, North Sutton, USA) and apply the fiber pattern removal steps (2.2.1 to 2.2.3) to each of the six images. We account for spectral distortions caused by bending of the bronchoscope by acquiring images at the highest expected bronchoscope bend (90 degrees) with a setup similar to Fig. 1(a). Under these conditions, the variations in spectral transmission along the fiber bundle should be maximized, simulating the extreme scope operation scenario. We obtain a reshaped reflectance matrix $R_{corrected}$ of size $512 \times 256 \times 13$ for each of the six targets. We calculate the average reflectance spectrum within each color target, obtaining a matrix of reflectance spectra $r_{ij}$, in which $i = 1, 2, \ldots, 13$ is the channel index and $j = 1, 2, \ldots, 6$ is the color target index. The color target manufacturer provides the calibrated reference reflectance spectra, here called $s_{ij}$. For each of the 13 channels, we search for a weight factor $w_i$ that minimizes the mean square error (Eq. (4)) with respect to the reference target. Because the second-order derivative of (4) is a positive real number, (4) is a convex function of $w_i$. We differentiate (4) with respect to $w_i$ and set the derivative to zero, obtaining the optimized solution (Eq. (5)) for each channel.

$$ MSE_i = (\underline{s_i} - w_i \underline{r_i})^T (\underline{s_i} - w_i \underline{r_i}), \quad i = 1, 2, \ldots, 13 \tag{4} $$
$$ w_i = \frac{\underline{s_i}^T \underline{r_i}}{\underline{r_i}^T \underline{r_i}}, \quad i = 1, 2, \ldots, 13 \tag{5} $$
The resulting scalar weight factor $w_i$ is applied to the corrected reflectance matrix (Eq. (6)),
$$ R_{final,\,i} = w_i R_{corrected,\,i} \tag{6} $$
thus obtaining the final reflectance matrix $R_{final}$.
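The per-channel weight of Eqs. (4)-(6) reduces to a scalar least-squares fit. A minimal sketch is given below, assuming refSpec and measSpec are 13×6 matrices holding the manufacturer reference and the measured mean reflectances of the six targets, and Rcube is the corrected hypercube from the previous sketch (all names are illustrative).

```matlab
% Fiber spectral calibration sketch (Eqs. (4)-(6)).
w = zeros(13, 1);
for i = 1:13
    si = refSpec(i, :)';                  % s_i: reference reflectances over the 6 targets
    ri = measSpec(i, :)';                 % r_i: measured mean reflectances over the 6 targets
    w(i) = (si' * ri) / (ri' * ri);       % Eq. (5): w_i minimizing the MSE of Eq. (4)
end
Rfinal = Rcube .* reshape(w, 1, 1, 13);   % Eq. (6): scale each of the 13 channels
```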

2.4 Animals

All experiments performed on mice were approved by the IACUC of the Children's Hospital of Los Angeles. Experimental research on vertebrates complied with institutional, national and international ethical guidelines. Adult, 8-week-old C57Bl mice were euthanized with Euthasol. The throat was cut longitudinally to expose the trachea. Two small incisions (below the larynx and above the bronchi) were made on the ventral side of the trachea. To expose the epithelium, the trachea was opened by cutting lengthwise along its ventral side between the two incisions.

2.5 Software

Algorithms were implemented using MATLAB 2017a on a Windows machine (Intel i7, 2.60 GHz, 16 GB DDR4 RAM at 2133 MHz). Computation times for the fiber pattern correction were obtained using a region of interest of 238x198 pixels. These numbers will depend on the number of fibers and the number of channels present in the system. Fiber core pixel localization is performed once on a white reference image and took 280 ms/channel. These positions are reused on each channel for triangulation (70 ms/channel), followed by interpolation (560 ms/channel). In our case, using our non-optimized prototype software, the fiber pattern correction algorithm for 16 channels took 10.08 s.

3. Results

The results of the spectral calibration for fiber bundles are reported in Fig. 5. The method improves the quality of the spectra in the majority of channels, using the reference spectra as a comparison. To further test this improvement, we compare the spectra recovered from three images of color targets (red: SCS-RD-010, green: SCS-GN-010, blue: SCS-BL-010) and from three pattern masked images of color checkers (ColorChecker Classic, X-Rite, Grand Rapids, Michigan, USA) (Fig. 6). The calibration was applied to both sets of target images, before and after the fiber pattern removal algorithm. We use hyperspectral phasor plots [16] as a powerful tool for comparative analysis, to visually show the change in distribution of the recovered spectra with respect to the reference spectra. The reference spectra are recovered from images acquired using a 35 mm Edmund objective lens instead of the bronchoscope, under appropriate lighting conditions. The fiber pattern in the original endoscopic images (Fig. 6, fiber pattern masked column) introduces undesired spectra into the hyperspectral matrix. This effect is well represented by the spreading of the phasor plot. After applying the fiber pattern removal algorithm to the images, we effectively suppress the majority of the undesired mixed spectra (Fig. 6, fiber pattern removed column), reducing the spreading on the plot. The improvement is evident when comparing these phasors to the reference one from the 35 mm lens (Fig. 6, reference column).


Fig. 5 Reflectance spectra of six standard color targets at 13 peak wavelengths. Average spectra calculated on calibrated color targets. (Blue crosses) Standard reflectance spectra from manufacturer. (Cyan circles) Recovered reflectance without fiber spectrum calibration. (Magenta crosses) Recovered reflectance with fiber spectrum calibration applied.



Fig. 6 Hyperspectral phasor analysis on color targets and color checkers. (column 1) Phasor plots of reference images (35 mm lens). (column 2) Phasor plots of fiber pattern masked images. (column 3) Phasor plots of fiber pattern removed images. (column 4) Color targets and color checkers. (row a) Blue color target, SCS-BL-010, Labsphere. (row b) Green color target, SCS-GN-010, Labsphere. (row c) Red color target, SCS-RD-010, Labsphere. (row d) Brown color checker, ColorChecker Classic Card, X-Rite. (row e) Yellow color checker, ColorChecker Classic Card, X-Rite. (row f) Neutral 3.5 (grey) color checker, ColorChecker Classic Card, X-Rite.


We use two error measurements on the phasor plot, scatter error and shifted-mean error [16], to quantify and compare the spectrum recovery performance. While a more thorough description of these two quantities is reported elsewhere [17], briefly, the scatter error represents how widely the spectra spread from the mean spectrum (mean phasor coordinates) due to instrumental and Poissonian noise in the multiple channels. The shifted-mean error, instead, represents the displacement of the mean spectrum of the observed sample image, in this case the ROI, from the mean spectrum of the reference image, here the 35 mm lens. The two error measures for the reference images, fiber pattern masked images, and fiber pattern removed images (Fig. 6) are reported in Table 1.
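As an indication of how such quantities can be computed, the sketch below assumes the phasor coordinates of [16] (first-harmonic Fourier coefficients of each normalized spectrum) and illustrative variable names; the exact definitions used for Table 1 may differ in detail.

```matlab
% Phasor-based error sketch. cube is an H x W x C reflectance ROI (C = 13 here).
[H, W, C] = size(cube);
spec = reshape(cube, [], C);                       % one spectrum per row
k = (0:C-1)';                                      % channel index
G = (spec * cos(2*pi*k/C)) ./ sum(spec, 2);        % first-harmonic phasor coordinates
S = (spec * sin(2*pi*k/C)) ./ sum(spec, 2);

% Scatter error: spread of the ROI phasor cloud around its own mean.
scatterErr = mean(hypot(G - mean(G), S - mean(S)));

% Shifted-mean error: displacement of the ROI mean phasor from the mean phasor
% of the 35 mm lens reference image (Gref, Sref computed the same way on it).
shiftErr = hypot(mean(G) - mean(Gref), mean(S) - mean(Sref));
```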


Table 1. Scatter error and shifted mean error

The scatter errors on fiber pattern removed images of color targets and checkers are consistently smaller than those on pattern masked images. This result quantitatively highlights the capability of our fiber pattern removal algorithm to suppress undesired spectra. Furthermore, the fiber pattern removal algorithm improves the recovery of the original spectra, as reflected by the reduced shifted-mean error.

Finally, we apply the algorithm to a real biological specimen (Fig. 7): a wild-type mouse was prepared for imaging. In this experiment, we image the inner epithelium of the animal's trachea using our hyperspectral bronchoscope setup (Fig. 1(a)). A "fly over" along the longitudinal axis of the airway produced a series of hyperspectral images. Stitching was performed using XuvTools [17] (www.xuvtools.org). The results reported in Fig. 7 show a striking improvement from the pattern masked image (Fig. 7(a)) to the pattern removed image (Fig. 7(b)).


Fig. 7 Application example of the fiber pattern removal algorithm on an exposed mouse trachea. Images were acquired as a "fly-over" with our hyperspectral bronchoscope setup and stitched. Images represent a true-color visualization of the hyperspectral data sets for comparison purposes. (a) Pattern masked stitched mouse trachea image. The visible honeycomb pattern affects the quality of the image. (b) Pattern removed image. The hyperspectral pattern removal algorithm improves the visualization of the image. Color tones are unchanged, suggesting a reduced presence of spectral artifacts.


4. Discussion and conclusions

We presented a fast, robust and versatile fiber pattern removal method for snapshot hyperspectral images. The novel fiber spectrum correction further enhances the recovered spectra with reference to the true spectrum (Fig. 6). Using a phasor representation, we can visualize all the spectra at once and quantify their displacement from the original. Our results show a consistent decrease of scatter in the phasor plots (Fig. 6) across multiple calibration and test samples. The amount of scatter corresponds to the quantity of artifacts introduced into the hyperspectral data set, indicating a strong improvement. While we quantitatively characterize only calibration and checker targets, the intra-surgical application on mice (Fig. 7) shows considerable visual improvement compared to the fiber-pattern masked data.

However, there are limitations to this method. Foremost, the interpolation method essentially performs an estimation: the pixel intensities corresponding to cladding are estimated linearly from the values at the fiber core pixels. As a consequence, any abrupt change in intensity is smoothed. The interpolation thus acts as an added low-pass spatial filter applied to each image in the hyperspectral cube. If the sample we are observing is spectrally smooth within a small area, the most frequent case in biological samples (Fig. 7), our method is able to recover the spectra accurately. Since the interpolation affects image resolution, this approach should be applied to samples with structural details larger than the diameter of the fiber cores to avoid smoothing features of interest. Another consideration is that our method synthesizes the useful spectral information from the original fiber pattern masked images, so the quality of the original endoscopic images influences the result of our method. The presence of physical distortions, for example fiber bending, can affect the accuracy of the final reconstructed hyperspectral cube by introducing distorted spectra at the fiber cores.

The advantages of our method still outweigh its limitations. First, it does not require any scanning opto-mechanics or special controllers, simplifying the process of endoscopic image acquisition and effectively decreasing the initial instrumental cost. In addition, while our method has been applied here as a post-processing step, it has the potential to be implemented at high frame rates in semi-real time. The algorithm's versatility makes it applicable to a variety of snapshot mosaic hyperspectral cameras working with different fiber-bundle endoscopes. The reliable removal of the fiber pattern can greatly simplify the application of image analysis tools as well as machine learning. With the recent development of fast snapshot hyperspectral camera sensors and the dramatic improvements in imaging quality, hyperspectral imaging has been gaining attention and potential for applications in multiple fields. In our case we focus on quantitative imaging of the airways, where complex observations are often paired with sample collection to aid in the diagnosis. The implementation of hyperspectral image analysis after pattern removal has the potential to simplify the patient screening process. Flexibility enables access of hyperspectral imaging to the inner hollow cavities of the animal and human body with minimal invasiveness, opening up multiple potential fields of application such as enteroscopy, colonoscopy or cystoscopy. Our method serves as an enhancement of these snapshot hyperspectral endoscopy techniques, providing a simplified application of image processing algorithms for tackling important clinical problems.

Funding

United States Department of Defense (PR150666), University of Southern California, Wallace H. Coulter Foundation and American Heart Association

Acknowledgments

The authors would like to thank Min Hubbard and Greg Bearman for useful discussions.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References and links

1. J. Kaluzny, H. Li, W. Liu, P. Nesper, J. Park, H. F. Zhang, and A. A. Fawzi, "Bayer Filter Snapshot Hyperspectral Fundus Camera for Human Retinal Imaging," Curr. Eye Res. 42(4), 629–635 (2017).

2. M. Eiter, S. Rupp, and C. Winter, "Physically motivated reconstruction of fiberscopic images," in Proceedings of the International Conference on Pattern Recognition (2006), vol. 3, pp. 599–602.

3. B. A. Flusberg, E. D. Cocker, W. Piyawattanametha, J. C. Jung, E. L. M. Cheung, and M. J. Schnitzer, "Fiber-optic fluorescence imaging," Nat. Methods 2(12), 941–950 (2005).

4. G. J. Tearney, S. A. Boppart, B. E. Bouma, M. E. Brezinski, N. J. Weissman, J. F. Southern, and J. G. Fujimoto, "Scanning single-mode fiber optic catheter-endoscope for optical coherence tomography," Opt. Lett. 21(7), 543–545 (1996).

5. J. Bec, J. E. Phipps, D. Gorpas, D. Ma, H. Fatakdawala, K. B. Margulies, J. A. Southard, and L. Marcu, "In vivo label-free structural and biochemical imaging of coronary arteries using an integrated ultrasound and multispectral fluorescence lifetime catheter system," Sci. Rep. 7(1), 8960 (2017).

6. K. Carlson, M. Chidley, K.-B. Sung, M. Descour, A. Gillenwater, M. Follen, and R. Richards-Kortum, "In vivo fiber-optic confocal reflectance microscope with an injection-molded plastic miniature objective lens," Appl. Opt. 44(10), 1792–1797 (2005).

7. B. E. Sherlock, J. E. Phipps, J. Bec, and L. Marcu, "Simultaneous, label-free, multispectral fluorescence lifetime imaging and optical coherence tomography using a double-clad fiber," Opt. Lett. 42(19), 3753–3756 (2017).

8. D. L. Dickensheets and G. S. Kino, "Micromachined scanning confocal optical microscope," Opt. Lett. 21(10), 764–766 (1996).

9. D. Ma, J. Bec, D. Gorpas, D. Yankelevich, and L. Marcu, "Technique for real-time tissue characterization based on scanning multispectral fluorescence lifetime spectroscopy (ms-TRFS)," Biomed. Opt. Express 6(3), 987–1002 (2015).

10. R. T. Kester, N. Bedard, L. Gao, and T. S. Tkaczyk, "Real-time snapshot hyperspectral imaging endoscope," J. Biomed. Opt. 16(5), 056005 (2011).

11. N. Bedard and T. S. Tkaczyk, "Snapshot spectrally encoded fluorescence imaging through a fiber bundle," J. Biomed. Opt. 17(8), 080508 (2012).

12. W. G. M. Geraets, A. N. van Daatselaar, and J. G. C. Verheij, "An efficient filling algorithm for counting regions," Comput. Methods Programs Biomed. 76(1), 1–11 (2004).

13. D. T. Lee and B. J. Schachter, "Two algorithms for constructing a Delaunay triangulation," Int. J. Comput. Inf. Sci. 9(3), 219–242 (1980).

14. S. Rupp, C. Winter, and M. Elter, "Evaluation of spatial interpolation strategies for the removal of comb-structure in fiber-optic images," in Proceedings of the 31st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2009), pp. 3677–3680.

15. J. A. Scott, "83.48 Some Examples of the Use of Areal Coordinates in Triangle Geometry," Math. Gaz. 83(498), 472–477 (1999).

16. F. Cutrale, V. Trivedi, L. A. Trinh, C.-L. Chiu, J. M. Choi, M. S. Artiga, and S. E. Fraser, "Hyperspectral phasor analysis enables multiplexed 5D in vivo imaging," Nat. Methods 14(2), 149–152 (2017).

17. M. Emmenlauer, O. Ronneberger, A. Ponti, P. Schwarb, A. Griffa, A. Filippi, R. Nitschke, W. Driever, and H. Burkhardt, "XuvTools: Free, fast and reliable stitching of large 3D datasets," J. Microsc. 233(1), 42–60 (2009).
