
High resolution, programmable aperture light field laparoscope for quantitative depth mapping

Open Access

Abstract

Recent applications have shown that light field imaging can be useful for developing uniaxial three-dimensional (3D) endoscopes. The immediate challenges in implementation are a tradeoff in lateral resolution and acquiring enough depth information in the physically limited environment of minimally invasive surgery. Here we propose using programmable aperture light field imaging in laparoscopy to capture 3D information without sacrificing the camera sensor’s native, high spatial resolution. This hybrid design utilizes a programmable aperture to preserve the conventional laparoscope’s functionality and, upon demand, to compute a depth map for surgical guidance. A working prototype is demonstrated.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Conventional laparoscopic systems provide surgeons with a two-dimensional (2D) view of the operative field; the lack of binocular vision limits depth cues, impairing depth perception and complicating eye-hand coordination [1]. In contrast, 3D laparoscopes that offer more accurate depth cues, such as binocular vision, have gained significant popularity, especially when integrated with robotic surgery. Studies have reported less fatigue and more accurate, faster surgical performance with 3D laparoscopy [2,3].

Several methods exist for implementing 3D laparoscopy, including dual-sensor stereo, single-sensor stereo, single-sensor 3D imaging via structured light, and uniaxial 3D imaging [4]. Each faces unique design limitations. Dual-sensor stereo systems integrate a pair of imaging optics and sensors into the constrained, standardized endoscope housing to acquire binocular images of the surgical field; they are among the most popular approaches and have been successfully adopted in commercial systems. However, matching the 4K ultra-high-definition image quality of state-of-the-art 2D laparoscopes remains challenging in these systems because of their constrained dimensions and effective numerical aperture (NA). Single-sensor stereo systems capture stereo images on a single sensor by means of split-channel optics, at the cost of resolution. Single-sensor structured-light systems record distorted structured illumination to determine the 3D surface profile, but require an extra projection path. Uniaxial 3D acquisition techniques extract depth information through a monocular endoscope with a single optical channel and thus retain a form factor similar to that of monocular endoscopes; strategies such as time-of-flight measurement, shape from defocus or shading, and various active illumination methods have been actively investigated [4].

Recently, another uniaxial 3D acquisition method, light field (LF) imaging, has been applied to minimally invasive surgery in the form of an LF otoscope [5], laryngoscope [6], and endoscope [7]. Capturing the LF of a surgical field requires recording both the spatial and angular information of the light rays from a 3D object, which enables an imaging system to digitally refocus after image capture, extend the depth of field, and acquire depth information [8]. In addition, LF capture can be implemented by simply adding a microlens array (MLA) to the original monocular imaging optics. The existing LF endoscopes [5–7], however, are subject to several major limitations. The most important is their substantially reduced spatial resolution and limited angular resolution, a consequence of the tradeoff between ray position and angular sampling. Furthermore, they are often dedicated 3D systems incapable of acquiring high-resolution 2D images, require splitting the imaging path, or are limited to acquiring depth information for a specific environment.

Here we propose a design of a high resolution, programmable aperture light field laparoscope (PALFL) and demonstrate its utility for quantitative depth mapping. By adopting a programmable aperture (PA) instead of an MLA for capturing light fields in a time-multiplexing fashion [9,10], the proposed laparoscope design is able to address the above-mentioned limitations of existing LF endoscopes. In this case, the spatial resolution of the acquired light fields is only subject to the limit of the image sensor and the undesirable tradeoff between spatial and angular resolution is removed.

2. Optical approach

Figure 1 illustrates the schematic layout of our proposed PALFL design, which consists of an objective lens, a 1:1 relay lens group, an eyepiece, a programmable aperture, a focusing lens, and a sensor. The objective lens with a focal length of fobj images the entire field of view (FOV) of an object and forms intermediate image 1 (II1). The 1:1 relay lens is necessary for rigid laparoscopes to extend the insertion length of the imaging probe within the limit of the housing tube diameter and relay the image to outside of the patient's body at intermediate image 2 (II2). To fit the objective lens and relay lens within the standard 10 mm diameter housing of laparoscopes, the objective lens is designed to be image-space telecentric with its entrance pupil (EP) placed at its front focal point, while the relay lens group is designed to be double telecentric. The eyepiece with a focal length of feye projects the image toward optical infinity for direct viewing or further imaging. At the same time, the eyepiece forms a conjugate image of the objective EP, labeled as "stop", at which the programmable aperture is placed. Opening a given region of the PA allows the focusing lens, with a focal length of ffl, to image different bundles of rays from the object onto the sensor.

Fig. 1. Conceptual model of the PALFL.

By sequentially opening different sub-apertures (e.g., the three instantaneous sub-apertures highlighted by the red, green, and blue pixels in Fig. 1), the sensor captures different light ray angles incident upon the EP from the same object point. As illustrated by the zoomed view at the sensor, depending on the depth of the object of interest, the rays through the different sub-apertures may be imaged at the same pixel, when the object depth is optically conjugate to the sensor, or at different pixels, when the object is nearer or farther than the conjugate depth. The disparity information recorded by the sub-aperture images is then used to reconstruct the depth map of the object field or to refocus the image at different depths.

One significant advantage of a PALFL design over existing MLA-based LF endoscopes is that the spatial and angular resolutions of the captured LF images are subject only to the limits of the sensor resolution and the pitch of the PA, respectively, whereas existing LF endoscopes must trade the spatial resolution of the images against the angular resolution of ray direction samples. Another noteworthy feature of a PALFL system is its hybrid capability. The system's instantaneous aperture can be switched between a sub-aperture LF capture state and a normal capture state, in which a centered, regular-sized aperture captures a conventional full-resolution, full-FOV 2D image identical to that of a conventional laparoscope. This capability gives the surgeon the option, on demand, to receive guidance through the visualization of depth information.

Another interesting aspect of the PA approach is that the size and pattern of the sub-apertures can be customized as needed. To match the throughput of existing LF endoscopes, sub-apertures can span multiple adjacent pixels of the PA while sensor pixels are binned. Under insufficient illumination, the span can be extended further at the cost of depth of field or depth mapping range, and high angular resolution can still be maintained by allowing sequential sub-aperture regions to overlap. The drawback of sequential capture is the cost in speed, but the ever-increasing frame rates of imaging sensors can largely overcome this limitation. Multiplexed light field acquisition [9,10], which uses patterns spanning multiple regions of the PA per frame, can be implemented to increase the signal-to-noise ratio and allow faster effective frame rates, as sketched below.
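To illustrate, here is a minimal sketch of multiplexed acquisition in the spirit of [9,10]; the three-sub-aperture S-matrix, toy sensor size, and noise level are our illustrative assumptions, not parameters of the prototype:

```python
import numpy as np

# Each displayed PA pattern opens several sub-apertures at once (rows of S);
# the individual sub-aperture images are recovered by inverting S.
S = np.array([[1.0, 0.0, 1.0],      # frame 1: sub-apertures 1 and 3 open
              [1.0, 1.0, 0.0],      # frame 2: sub-apertures 1 and 2 open
              [0.0, 1.0, 1.0]])     # frame 3: sub-apertures 2 and 3 open

rng = np.random.default_rng(0)
h, w = 96, 128                              # toy sensor resolution
L = rng.random((3, h, w))                   # ground-truth sub-aperture images

# Each frame sums the open sub-apertures, doubling the collected light,
# which improves SNR against additive sensor noise.
frames = np.tensordot(S, L, axes=1) + rng.normal(scale=1e-3, size=(3, h, w))

L_hat = np.tensordot(np.linalg.inv(S), frames, axes=1)  # demultiplex
print("max reconstruction error:", np.abs(L_hat - L).max())
```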

3. Depth mapping resolution

A key aspect of designing a PALFL system is achieving adequate depth mapping resolution. This depends mainly on the maximal angular separation of the rays through the centers of the sub-apertures, which establishes a maximal baseline equivalent to that of a stereo system, and on the minimally detectable ray separation of the imaging system. To conveniently quantify the depth resolution of different systems, we use the numerical aperture at the nominal working distance, NAWD, in the object space to characterize the maximal angular separation of the sub-apertures, and the equivalent sensor spatial resolution in the object space, Bobj, to quantify the minimally detectable ray separation. We assume that distinguishing the three separated rays in Fig. 1, and thus confidently detecting a depth offset from the sensor conjugate depth, LWD, minimally requires a 2-pixel separation (2B at the sensor, or 2Bobj at LWD) between the red and blue rays on the sensor. A higher depth resolution could be achieved by digitally interpolating pixel data to refine the location of rays landing between two pixels, but this possibility is not demonstrated here. Using similar triangles with bases located at LWD and the EP, and a Taylor series expansion for simplification, the depth resolution of a PALFL design is derived as:

$$d_\pm \approx \frac{B_{obj}}{NA_{WD}}\left[ 1 \pm \frac{2B_{obj}}{D_{EP}} \right],$$
where d+ and d- represent the absolute distances from the sensor conjugate depth, LWD, to the closest resolvable depths away from and towards the EP, respectively, and DEP is the EP diameter. Given the pixel resolution, B, of the sensor and first-order optics specifications, without considering the effects of diffraction and aberration, Bobj and NAWD are defined as:
$$B_{obj} = \frac{(L_{WD} - f_{obj})\,f_{eye}}{f_{obj}\,f_{fl}}B, \qquad NA_{WD} \approx \frac{D_{EP}/2}{L_{WD} - f_{obj}}.$$
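For implementation, these formulas translate directly into code; a minimal sketch follows (the function names are ours, and diffraction and aberration are neglected, as stated above):

```python
def depth_resolution(B_obj, NA_WD, D_EP):
    """d+ (away from the EP) and d- (toward it) per Eq. (1), in the units of
    B_obj and D_EP; their average is approximately B_obj / NA_WD."""
    base = B_obj / NA_WD
    return base * (1 + 2 * B_obj / D_EP), base * (1 - 2 * B_obj / D_EP)

def first_order_specs(L_WD, f_obj, f_eye, f_fl, B, D_EP):
    """Object-space pixel resolution B_obj and numerical aperture NA_WD per
    Eq. (2), from first-order specs (no diffraction or aberration)."""
    B_obj = (L_WD - f_obj) * f_eye / (f_obj * f_fl) * B
    NA_WD = (D_EP / 2) / (L_WD - f_obj)
    return B_obj, NA_WD
```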
Figure 2 plots the average depth resolution, d, of d+ and d- as a function of NAWD for systems of different spatial resolutions in the object space. At a nominal working distance of 50 mm, the NAWD of a standard monocular laparoscope is ∼0.003, while the 5 mm baseline of a state-of-the-art stereo laparoscope (with a 12 mm diameter rod) by Intuitive Surgical produces an equivalent NAWD of ∼0.05. The object-space spatial resolution here is quantified by the minimally discernible number of line pairs per unit distance (lp/mm), equivalent to 1/(2Bobj). The object-space resolution of a commercial laparoscope is 2-6 lp/mm, and the diffraction-limited resolution of the multi-resolution foveated laparoscope reported in [11] is ∼12 lp/mm.

Fig. 2. Plot of achievable depth resolution in the laparoscopic environment for different NAWD and 1/(2Bobj).

Figure 2 suggests that implementing an LF laparoscope using standard laparoscope optics, with a spatial resolution of 4 lp/mm and an NAWD below 0.01, yields a depth resolution worse than 12 mm. The combination of an NAWD of 0.015 and a resolution of 6 lp/mm provides a depth resolution of ∼5.5 mm, which can help surgeons judge the proximity of their surgical tools but is inadequate for accurately visualizing anatomical structures. Achieving sub-mm depth resolution with the light field method requires substantial improvements in both the object-space resolution and the numerical aperture of standard 2D laparoscopes. On the other hand, achieving this resolution in an LF laparoscope with dimensions similar to those of a stereo laparoscope appears feasible, as the quick check below illustrates.
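These quoted values can be checked with a two-line calculation (a sketch; the average of d+ and d- reduces to Bobj/NAWD because the bracketed terms of Eq. (1) cancel):

```python
# Average depth resolution d ~ B_obj / NA_WD.
for lp_mm, NA_WD in [(4, 0.010), (6, 0.015)]:
    B_obj = 1 / (2 * lp_mm)                      # mm, since 1/(2*B_obj) = lp/mm
    print(f"{lp_mm} lp/mm, NA_WD={NA_WD}: d ~ {B_obj / NA_WD:.1f} mm")
# -> 12.5 mm and 5.6 mm, matching the 'worse than 12 mm' and ~5.5 mm figures
```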

4. Prototype and experimental setup

Figure 3(a) shows the optical layout of a bench-top PALFL prototype built for proof of concept. An f/2.5 objective lens with a focal length of 1.8 mm from the laparoscope developed in [11] was repurposed for this prototype. The diameter of this objective lens group is 5.7 mm, small enough to leave room for fiber illumination and lens housing in a standard 10 mm diameter rigid laparoscope, as demonstrated in [11]. The optical system inside the rigid laparoscopic tube in [11] also contains several groups of relay optics that relay the image of the objective to the distal end of the tube for further imaging. Because the objective and relay were optimized and custom-made independently, either can be used without the other, and relay stages can be added or removed without affecting optical performance. When building the PALFL bench prototype, we therefore omitted the relay optics to simplify the lens design and optical alignment, and used only the objective together with a newly added eyepiece, a PA, and a focusing lens; the relay does not add to or change the imaging function of the system. The objective lens was originally optimized for an LWD of 120 mm, a DEP of 0.8 mm, and lens diameters < 6 mm, resulting in an effective NAWD of ∼0.003. For this PALFL prototype, however, the objective was used at an LWD of 20 mm. Although this distance is short for laparoscopy, it yields an NAWD as large as ∼0.022 if the full EP is sampled, which is more comparable to that of stereo laparoscopes. A 10 mm focal length eyepiece built with stock lenses was optimized to provide sufficient performance over a 60° full FOV and expanded the 0.8 mm EP of the objective lens to a 4.4 mm stop, where a PA could be inserted. Note that the eyepiece diameter can be much larger than that of the objective and relay because it resides outside the patient's body. These modules were aligned to a commercial focusing lens with a focal length of 25 mm and a 1/3" color CCD sensor (1.3 MP Dragonfly2 from Point Grey). The pixel resolution of the sensor is 1280 × 960, and the color pixel size is 3.75 × 3.75 µm². Using Eq. (2), the theoretical spatial resolvability of the system in the object space is ∼33 lp/mm. Using Eq. (1), the depth resolution of the prototype can potentially reach ∼0.69 mm if the sub-aperture images are sampled across the full aperture and the optics perform at their full resolution, as checked below.
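As a check, plugging the quoted first-order specifications into Eqs. (1) and (2) reproduces these figures (a sketch using only values stated above):

```python
f_obj, f_eye, f_fl = 1.8, 10.0, 25.0     # mm: objective, eyepiece, focusing lens
L_WD, D_EP, B = 20.0, 0.8, 3.75e-3       # mm: working distance, EP, pixel size

B_obj = (L_WD - f_obj) * f_eye / (f_obj * f_fl) * B    # Eq. (2)
NA_WD = (D_EP / 2) / (L_WD - f_obj)                    # Eq. (2)
print(1 / (2 * B_obj))   # ~33 lp/mm object-space resolvability
print(B_obj / NA_WD)     # ~0.69 mm average depth resolution, Eq. (1)
```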

Fig. 3. (a) Optical layout of prototype, (b) construction of benchtop system, and (c) manual PA sampling scheme.

Figure 3(b) shows the prototype. The objective lens and eyepiece were assembled in a 3D-printed opto-mechanical housing, shown as the grey cylinder. Instead of a digital PA, a physical iris mounted on a two-axis linear stage was employed. Figure 3(c) illustrates the angular sampling scheme bounded by the stop. The grid of black dots represents the locations sampled sequentially; the pitch of the sub-apertures determines the angular resolution. The iris, indicated by the red circle with arrows, moves to each sampling location. An illuminated bladder model was placed near an LWD of 20 mm, as shown in Fig. 3(b). On the image side, the sensor was adjusted to the new conjugate image position.

Since the preexisting objective lens was not optimized for this short LWD, aberrations and vignetting were introduced. To minimize the resulting degradation of the data, an effective LF calibration based on the aberration correction theory presented in [8] was developed and applied after data capture. The same calibration can likewise minimize the impact of aberrations from relay lenses. Since the focus here is the PALFL concept, the calibration is only briefly summarized; it will be discussed thoroughly in a future paper. The process began by calibrating the vignetting across the field of view using the LF data of a flat Lambertian source extending across the full FOV. By comparing the peripheral sub-aperture images to the center one, the vignetting was quantified and then removed from subsequent LF data sets by a multiplicative gain correction. With vignetting removed, residual aberrations were minimized by an analogous process: LF data of a checkerboard extending across the full FOV were captured, the aberrations were quantified by comparing the peripheral sub-aperture images to the center one, and they were minimized by laterally shifting pixels in subsequent LF data sets, as sketched below.
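The two steps can be sketched as follows; this is our illustrative reconstruction (the full procedure is deferred to the future paper), with a single global shift standing in for the field-dependent aberration correction:

```python
import numpy as np
from scipy.ndimage import shift

def vignetting_gain(flat_center, flat_peripheral, eps=1e-6):
    """Per-pixel multiplicative gain from flat Lambertian-source captures:
    the center sub-aperture flat field over a peripheral one."""
    return flat_center / np.maximum(flat_peripheral, eps)

def calibrate_sub_aperture(sub_img, gain, dxdy):
    """Apply the vignetting correction, then laterally shift pixels so that
    checkerboard features align with the center sub-aperture view.
    dxdy is a (dy, dx) pixel shift; the real correction varies with field."""
    return shift(sub_img * gain, dxdy, order=1, mode="nearest")
```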

5. Data capture

Figures 4(a) through 4(e) show the captured LF data organized into sub-aperture images, bordered in green according to the sampling scheme shown in Fig. 3(c). The captured scene consists of part of the bladder model and a screwdriver placed in front of it within the FOV to simulate a laparoscopic surgical tool. For scale, the width of the screwdriver is 3 mm, while the background bladder model appears minified because it is farther away. The center sub-aperture image, Fig. 4(a), is shown uncalibrated and in color, and was used as the reference for LF calibration. The peripheral sub-aperture images, Figs. 4(b) through 4(e), are shown as calibrated greyscale results extracted from the green color channel, with white grid lines marking matching sensor locations for reference. Each of the original sub-aperture images retains the high pixel resolution of the native sensor, 1280 × 960 pixels. Due to the LF calibration, the FOV of the peripheral sub-apertures is cropped, as seen in these images. Figures 4(f) through 4(i) show magnified images of a small region, marked by a red box on the corresponding sub-aperture images 4(b) through 4(e), respectively. The small, slightly different displacements of the screwdriver relative to the white reference grids in the different sub-aperture images visualize the ray separations described in Fig. 1 and confirm that the screwdriver lies in front of the nominal working distance, LWD.

Fig. 4. LF data: (a) uncalibrated center and (b-e) calibrated peripheral sub-aperture images, and (f-i) magnified views of ray separations.

The optical performance of the prototype was limited by the quality of the stock lenses in the eyepiece and the use of a generic focusing lens. We therefore used only the greyscale images converted from the green color channel for further data processing, to eliminate the effects of chromatic aberration, and only the center five angular samples, to avoid the severe vignetting and aberration blurring that increase significantly for sub-apertures farther from the optical axis. These five sub-aperture images, however, are adequate to demonstrate the minimum data needed for maximum data processing speed and for depth sensitivity to both x- and y-oriented image features in a PALFL system.

The angular sampling dimensions for the data in Fig. 4 were determined experimentally. A 1 mm diameter iris was found to produce sufficient sub-aperture image quality and depth of field for the object distances of interest. A 0.91 mm pitch between the sub-apertures at the stop balanced sufficient light ray separation at different object depths, acceptable sub-aperture image aberration, and aliasing during digital refocusing; the resulting sampling positions are sketched below.
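For concreteness, the five sampling positions implied by this pitch can be written out as coordinates in the 4.4 mm stop plane; a minimal sketch, where the cross-shaped layout (center plus one pitch along ±x and ±y) is our inference from the "center five angular samples" described above:

```python
# Hypothetical coordinates (mm) of the five sub-aperture centers in the
# 4.4 mm stop plane: the on-axis sample plus one 0.91 mm pitch along
# each of +/-x and +/-y. The cross layout is inferred, not specified.
pitch = 0.91
centers = [(0.0, 0.0), (pitch, 0.0), (-pitch, 0.0), (0.0, pitch), (0.0, -pitch)]
```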

The diffraction-limited spatial resolution of the sub-apertures was measured using a 1951 USAF resolution target (groups 0-3) placed at an LWD of 20 mm. Figure 5 shows the center sub-aperture image, a zoomed-in view of groups 2 and 3, and green-channel intensity profiles along group 3, elements 3 and 4. The bars in element 3 are clear, while in element 4 they begin to diminish, indicating that the diffraction-limited spatial resolution lies between these two elements, at ∼10.7 lp/mm. Although the sub-aperture spatial resolution is limited by diffraction, the higher sensor pixel resolution is not wasted: it enables more precise measurement of disparity between sub-aperture images and also serves the conventional laparoscope mode, in which the PA is fully opened and the optical resolution is higher.

Fig. 5. The center sub-aperture image and intensity profiles of a 1951 USAF resolution target placed at an LWD of 20 mm.

6. Data processing

A modified open-source code [12] was used to process the calibrated LF data for digital refocusing and to generate depth maps. Figure 6 demonstrates digital refocusing for three image planes corresponding to near, medium, and far object distances. At near focus, the screwdriver is identifiable while the background is blurry. At medium focus, the white protrusion on the bladder model becomes clear. At far focus, the screwdriver and white protrusion are defocused while the pink line features on the right side begin to defocus. Because of the minimal angular sampling in this experiment, refocusing to one extreme depth produces some aliasing at the opposite depth, as seen at the edges of the defocused screwdriver when the focus is far. The underlying shift-and-add principle is sketched below.
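Digital refocusing follows the standard LF shift-and-add formulation [8]. The sketch below is our simplified illustration, not the code actually used in [12]; `subs` would hold the five calibrated sub-aperture images, `offsets` their stop-plane center coordinates (mm, e.g. the `centers` list above), and `slope` the chosen refocus depth expressed as sensor-pixel disparity per mm of aperture offset:

```python
import numpy as np
from scipy.ndimage import shift  # linear-interpolation resampling

def refocus(subs, offsets, slope):
    """Shift-and-add refocusing: align each sub-aperture view for the chosen
    depth, then average. Features at that depth add coherently; others blur."""
    acc = np.zeros(subs[0].shape, dtype=float)
    for img, (u, v) in zip(subs, offsets):
        # Disparity is proportional to the sub-aperture offset; shift each
        # view by (slope * offset) pixels before averaging.
        acc += shift(img.astype(float), (slope * v, slope * u), order=1)
    return acc / len(subs)
```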

Fig. 6. Digital refocusing to depths: (a) near, (b) medium, and (c) far (see also Visualization 1).

Figure 7(a) was constructed by applying an intensity gradient threshold to Fig. 4(a) to highlight pixels containing strong image features suitable for confident depth estimation. The depth was estimated at those pixels while the remaining pixels were nullified; the null regions were then interpolated from the nearest confident depth estimates to construct a full depth map. This strategy reduced noisy depth estimates. Figures 7(b) and 7(c) show full depth maps generated by algorithms based on focus contrast and on correspondence feature matching, respectively. For each object point, the depth estimate is obtained by measuring at the sensor the separation between light rays captured through adjacent sub-apertures (in units of sensor pixels). A negative pixel value indicates separation in the opposite direction, as shown in the zoomed view of Fig. 1 when comparing the ray separations of the near and far images. In the greyscale maps, darker indicates closer and brighter indicates farther, allowing relative depth to be read off. A toy version of the correspondence measurement is sketched below.
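To illustrate how the ray separation is measured, here is a toy correspondence estimator along one horizontal baseline; this is our sketch, whereas the actual algorithm in [12] combines defocus and correspondence cues with per-pixel confidence:

```python
import numpy as np

def disparity_1d(center, side, y, x, win=7, search=3):
    """Signed ray separation (sensor px) at pixel (y, x): match a window from
    the center sub-aperture against a horizontally adjacent one. Assumes
    (y, x) lies at least win//2 + search pixels from the image border."""
    h = win // 2
    patch = center[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    best_cost, best_d = np.inf, 0
    for d in range(-search, search + 1):       # candidate separations
        cand = side[y - h:y + h + 1, x - h + d:x + h + 1 + d].astype(float)
        cost = np.sum((patch - cand) ** 2)     # sum of squared differences
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d  # negative: separation in the opposite direction (Fig. 1)
```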

Fig. 7. (a) Intensity gradient thresholding of Fig. 4(a) for depth mapping noise reduction. Relative depth reconstruction maps based on (b) depth from focus contrast and (c) depth from multi-view correspondence feature matching.

Both depth maps identify the correct objects at three different depths, consistent with Fig. 6. However, depending on the image feature characteristics [12] and the error from defocus aliasing, the algorithms perform differently. In the focus contrast map shown in Fig. 7(b), aliasing produced inconsistent depth estimates between the screwdriver's edges and body. Aliasing also likely caused the slight inconsistency between the two algorithms in estimating the farthest depth layer. The feature matching algorithm therefore performs better over larger depth ranges. In contrast, for the grey valley and surrounding white region on the left side of the FOV, where aliasing is absent, the focus contrast map provides a smoother depth reconstruction.

7. Quantitative depth mapping

A lookup table method was created to convert depth maps from measured ray separations (in sensor pixels) to absolute, quantitative depth values and to validate the depth resolution predicted by the system design. Figure 8(a) shows the center sub-aperture view of a 45° tilted ruler providing 0.7 mm depth intervals across the vertical FOV. After applying the same LF calibrations as in the bladder model experiment, a smooth focus contrast depth map was generated, shown in Fig. 8(b). Based on the measured ray separations, Fig. 8(b) highlights the pixels corresponding to d± and to the LWD of 20 mm. The corresponding pixels were located in Fig. 8(a), and, knowing the ruler dimensions, the units were converted to physical depth, as sketched below.
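A minimal sketch of the lookup-table conversion, built from the ±1 px depths measured below (17.2 mm and 23.5 mm around the 20 mm LWD); additional calibrated pairs would extend the table in the same way:

```python
import numpy as np

# Calibrated (disparity, depth) pairs from the tilted-ruler measurement:
# -1 px and +1 px separations correspond to -4 and +5 ruler intervals of
# 0.7 mm around the 20 mm LWD (see the comparison below).
disparity_px = np.array([-1.0, 0.0, 1.0])
depth_mm = np.array([20.0 - 4 * 0.7, 20.0, 20.0 + 5 * 0.7])  # 17.2, 20.0, 23.5

def to_absolute_depth(d):
    # Piecewise-linear interpolation between the calibrated pairs.
    return np.interp(d, disparity_px, depth_mm)
```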

Fig. 8. A (a) tilted ruler object and its (b) measured depth map create a lookup table for converting ray separations to absolute depth values.

These results were compared against the depth resolution analysis derived in Sec. 3. Because of the optical performance limitations discussed earlier, the following prototype specifications were determined experimentally. From the measured image-to-object magnification and the manufacturer pixel size (B), the sensor spatial resolution in the object space of the current prototype corresponds to 21.3 lp/mm, i.e., 1/(2Bobj), for the center angular samples. Calculating Bobj from the dimensions known in Fig. 5 yields a similar result. We measured the equivalent DEP from the angular samples shown in Fig. 4 to be 0.345 mm and the equivalent NAWD of the sampled data to be 0.0074 at an LWD of 20 mm. From Eq. (1), d+ and d- are 3.6 and 2.7 mm, respectively. Measured from the labeled data points in Figs. 8(a) and 8(b), the ±1 sensor pixel depths corresponding to d+ and d- are separated from the 0 sensor pixel depth on the ruler by +5 and -4 intervals, respectively. Since each interval on the tilted ruler represents 0.7 mm of depth, these correspond to measured depth resolutions of 3.5 mm and 2.8 mm, a maximum error of 3.7% relative to the theoretical values, as verified below. Because depth estimation can be nonuniform depending on the algorithm used and the variation of an object's texture density, this error can fluctuate for different objects across the FOV. Nevertheless, the results demonstrate the potential of the PALFL as depth estimation algorithms continue to improve.
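The quoted 3.7% maximum error follows directly from these numbers (using the rounded theoretical values, as in the text):

```python
d_plus, d_minus = 3.6, 2.7                  # theoretical values from Eq. (1)
meas_plus, meas_minus = 5 * 0.7, 4 * 0.7    # +5 / -4 ruler intervals -> 3.5, 2.8 mm
err = max(abs(meas_plus - d_plus) / d_plus,
          abs(meas_minus - d_minus) / d_minus)
print(f"max percent error: {err:.1%}")      # -> 3.7%
```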

8. Conclusion

In conclusion, a PALFL was conceptualized to obtain LF data at spatial resolutions up to that of the camera sensor for refocusing and quantitative depth mapping, without trading off angular resolution. By taking advantage of the PA's flexibility, this hybrid system integrates the high performance of existing 2D endoscopes with 3D LF imaging. Theory was developed to analyze, compare, and design laparoscopes for adequate depth resolution. A bench-top prototype using an existing laparoscope objective demonstrated proof of concept by performing quantitative depth mapping in agreement with the depth resolution theory. Informed by this prototype, the next-generation PALFL will incorporate several improvements: an optical system optimized for high performance at its full aperture, a liquid crystal array in either transmissive or reflective mode with multiplexed LF acquisition capability to acquire data up to the sensor frame rate, relay lenses to extend the optical system, and a redesign with a working distance and maximum baseline similar to current commercial stereo endoscopes.

Funding

National Institute of Biomedical Imaging and Bioengineering (1R01EB18921, T32EB000809).

Disclosures

Dr. Hong Hua has a disclosed financial interest in Magic Leap Inc. that was not involved in the work reported here. The terms of this arrangement have been properly disclosed to The University of Arizona and reviewed by the Institutional Review Committee in accordance with its conflict of interest policies.

References

1. R. Sinha, M. Sundaram, S. Raje, G. Rao, M. Sinha, and R. Sinha, “3D laparoscopy: Technique and initial experience in 451 cases,” Gynecol. Surg. 10(2), 123–128 (2013).

2. B. Alaraimi, W. El Bakbak, S. Sarker, S. Makkiyah, A. Al-Marzouq, R. Goriparthi, A. Bouhelal, V. Quan, and B. Patel, “A randomized prospective study comparing acquisition of laparoscopic skills in three-dimensional (3D) vs. two-dimensional (2D) laparoscopy,” World J. Surg. 38(11), 2746–2752 (2014).

3. G. Currò, G. La Malfa, S. Lazzara, A. Caizzone, A. Fortugno, and G. Navarra, “Three-Dimensional Versus Two-Dimensional Laparoscopic Cholecystectomy: Is Surgeon Experience Relevant?” J. Laparoendosc. Adv. Surg. Tech. 25(7), 566–570 (2015).

4. J. Geng and J. Xie, “Review of 3-D endoscopic surface imaging techniques,” IEEE Sens. J. 14(4), 945–960 (2014).

5. N. Bedard, T. Shope, A. Hoberman, M. A. Haralam, N. Shaikh, J. Kovacevic, N. Balram, and I. Tosic, “Light field otoscope design for 3D in vivo imaging of the middle ear,” Biomed. Opt. Express 8(1), 260–272 (2017).

6. S. Zhu, P. Jin, R. Liang, and L. Gao, “Optical design and development of a snapshot light-field laryngoscope,” Opt. Eng. 57(02), 1 (2018).

7. J. Liu, D. Claus, T. Xu, T. Keßner, A. Herkommer, and W. Osten, “Light field endoscopy and its parametric description,” Opt. Lett. 42(9), 1804 (2017).

8. R. Ng, “Digital light field photography,” Ph.D. thesis, Stanford University (2006).

9. C.-K. Liang, T.-H. Lin, B.-Y. Wong, C. Liu, and H. H. Chen, “Programmable aperture photography,” ACM Trans. Graph. 27(3), 1 (2008).

10. C. Zuo, J. Sun, S. Feng, M. Zhang, and Q. Chen, “Programmable aperture microscopy: A computational method for multi-modal phase contrast and light field imaging,” Opt. Lasers Eng. 80, 24–31 (2016).

11. Y. Qin, H. Hua, and M. Nguyen, “Characterization and in-vivo evaluation of a multi-resolution foveated laparoscope for minimally invasive surgery,” Biomed. Opt. Express 5(8), 2548 (2014).

12. M. W. Tao, S. Hadap, J. Malik, and R. Ramamoorthi, “Depth from combining defocus and correspondence using light-field cameras,” Proc. IEEE Int. Conf. Comput. Vis., 673–680 (2013).

Supplementary Material (1)

Visualization 1: The video shows the digital refocusing of a phantom scene based on the light field information captured through our prototype.
