
Developing an optical design pipeline for correcting lens aberrations and vignetting in light field cameras


Abstract

Light field cameras have been employed in myriad applications thanks to their 3D imaging capability. By placing a microlens array in front of a conventional camera, one can measure both the spatial and angular information of incoming light rays and reconstruct a depth map. The unique optical architecture of light field cameras poses new challenges in controlling aberrations and vignetting during the lens design process. The results of our study show that field curvature can be numerically corrected for by digital refocusing, whereas vignetting must be minimized because it reduces the depth reconstruction accuracy. To address this unmet need, we herein present an optical design pipeline for light field cameras and demonstrate its implementation in a light field endoscope.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The light rays captured by an imaging system contain abundant information, which is described by a 7D plenoptic function P(θ, φ, λ, t, x, y, z) (θ, φ, angular coordinates; λ, wavelength; t, time; x, y, z, spatial coordinates) [1]. A conventional camera acquires only the 2D spatial information (x, y) of an input scene. By contrast, the light field camera measures both spatial (x, y) and angular information (θ, φ) [2], where the angular information can be further used to reconstruct a depth map (x, y, z). Due to its superior 3D imaging capability, the light field camera has been employed in various applications such as biomedical imaging [3,4], object recognition [5–7], and machine vision [8,9].

There are two types of light field cameras: the unfocused light field (ULF) camera [2,10] and the focused light field (FLF) camera [11]. Figure 1 shows the corresponding schematics. As shown in Fig. 1(a), in a ULF camera, three point objects ${S_1}$, ${S_2}$, and ${S_3}$ are first imaged by the main lens, forming intermediate image points $S_1^{\prime}$, $S_2^{\prime}$, and $S_3^{\prime}$. These intermediate image points are then reimaged by the microlens array (MLA) onto a detector array. Because the distance from the MLA to the detector array is equal to the focal length of the MLA, the ULF camera essentially images the pupil associated with each microlens. We use (u, v) and (x, y) to denote the Cartesian coordinates at the pupil plane and the MLA, respectively. The captured raw images (${M_1}$, ${M_2}$, and ${M_3}$ in Fig. 1(a)) can be rearranged as a 4D datacube (x, y, u, v), which is also referred to as a light field (LF) [12]. A 2D x-u slice of the LF is termed an epipolar plane image (EPI). As an example, Fig. 1(b) shows three EPIs associated with points ${S_1}$, ${S_2}$, and ${S_3}$, respectively. The corresponding depths can then be deduced by estimating the slope of the lines in the EPIs. A refocused image at a given depth can be reconstructed from an integral projection of the 4D LF along a trajectory in the EPIs [2]. Reconstructing images at all depths creates a focal stack, and an extended depth of field (DOF) image can be rendered by fusing all the reconstructed images [13].
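
For readers who want to experiment with these operations, the following is a minimal NumPy sketch (our own illustration, not code from the paper) of extracting an EPI from a 4D light field stored as an array lf[v, u, y, x] and of shift-and-add digital refocusing; the array layout, function names, and the refocus parameter alpha are assumptions made for this example.

```python
import numpy as np

def extract_epi(lf, y0=0, v0=0):
    """Return the 2D x-u slice (an EPI) of a 4D light field lf[v, u, y, x]."""
    return lf[v0, :, y0, :]                      # shape: (num_u, num_x)

def refocus(lf, alpha=1.0):
    """Shift-and-add refocusing: shift each (u, v) sub-image in proportion to
    its pupil coordinate and average. alpha = 1 keeps the nominal focal plane;
    other values synthesize focus at a different depth."""
    num_v, num_u, num_y, num_x = lf.shape
    u_c, v_c = (num_u - 1) / 2.0, (num_v - 1) / 2.0
    out = np.zeros((num_y, num_x), dtype=float)
    for v in range(num_v):
        for u in range(num_u):
            dx = int(round((1.0 - 1.0 / alpha) * (u - u_c)))
            dy = int(round((1.0 - 1.0 / alpha) * (v - v_c)))
            out += np.roll(lf[v, u], shift=(dy, dx), axis=(0, 1))
    return out / (num_u * num_v)
```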

Fig. 1. Ray models of light field cameras. (a) ULF camera. (b) EPIs associated with point ${S_1}$, ${S_2}$ and ${S_3}$ in (a). (c) FLF camera. (d) Perspective images imaged by microlens ${L_1}$ and ${L_2}$ in (c).

Unlike the ULF camera, the FLF camera directly images the object, rather than pupils, onto the detector array. There are two types of FLF cameras: the Keplerian and Galilean [14]. Figure 1(c) shows the schematic of a Galilean FLF camera. The spacing (B) between the MLA and the detector is smaller than the focal length of the MLA. In contrast, B is larger than the focal length of the MLA in the Keplerian configuration. The depth information can be derived from the disparities between adjacent perspective images (Fig. 1(d)), and an all-in-focus image can be reconstructed by projecting all the pixels in the raw image back to the intermediate image plane.
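
As a rough illustration of how disparity can be estimated between adjacent perspective images (a hedged sketch under our own assumptions, not the authors' reconstruction code), the snippet below uses brute-force block matching with a sum-of-absolute-differences cost on two grayscale views; the window size and search range are arbitrary.

```python
import numpy as np

def disparity_map(view_left, view_right, block=7, max_disp=8):
    """Estimate per-pixel horizontal disparity between two adjacent
    perspective images by brute-force block matching (SAD cost)."""
    view_left = view_left.astype(np.float32)
    view_right = view_right.astype(np.float32)
    h, w = view_left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = view_left[y - r:y + r + 1, x - r:x + r + 1]
            best_cost, best_d = np.inf, 0
            for d in range(-max_disp, max_disp + 1):
                xs = x + d
                if xs - r < 0 or xs + r + 1 > w:
                    continue
                cand = view_right[y - r:y + r + 1, xs - r:xs + r + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```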

Although the depth calibration method and ray tracing model of light field cameras have been extensively studied [15–22], the optical design of the main lens has yet to be fully explored. Because of the unique optical architecture of light field cameras, the handling of lens aberrations and vignetting differs significantly from conventional lens design methods [23,24]. To address this unmet need, we systematically analyzed the effect of aberrations and vignetting on the fidelity of reconstructed images and developed a design pipeline for the main lens of light field cameras. While the proposed lens design pipeline is generally applicable to all light field cameras, we focus on a niche application in endoscopy (Section 4: Design example).

2. Aberrations and vignetting in light field cameras

When designing an imaging lens, although aberrations and vignetting are both unwanted, they are not equally weighted in the tolerancing budget. Here we limit our discussion to third-order Seidel aberrations and ignore defocus and wavefront tilt. Conventional optical design prioritizes the correction of aberrations that increase the spot size at the image plane (i.e., spherical aberration, coma, astigmatism, and field curvature). In particular, when field curvature ${W_{220}}$ exists, a flat object plane is imaged onto a curved surface. Because the detector plane is flat, a field-dependent defocus is introduced into the final image. In the peripheral field, the induced blur is often so severe that it overshadows other aberrations. More problematically, field curvature is harder to correct for than the other Seidel aberrations: common approaches such as lens bending/splitting and stop shifting cannot be applied because, in a system free of astigmatism, field curvature depends only on the power and refractive index of the lenses. Therefore, in conventional optical design, field curvature is considered one of the toughest aberrations, and correcting for it normally leads to a bulky setup. By contrast, vignetting reduces the irradiance of the image but not the resolution, and it can be numerically corrected for in postprocessing. For this reason, vignetting is of less concern than the Seidel aberrations.

Unlike conventional cameras that capture only the 2D (x, y) information of a scene, light field cameras measure a 4D (x, y, u, v) datacube and derive the depth from light ray angles. Therefore, designing the main lens requires a new set of criteria. In particular, field curvature and vignetting must be assessed in 3D (x, y, z) rather than in 2D (x, y). Figure 2 shows a light field camera with field curvature. The object is imaged by the main lens onto a curved surface, as indicated by the black dashed line. The depth of field of the microlens array (MLA), denoted by DRM, determines the depth range of the main lens, while DRM itself depends on the detector pixel size and the numerical aperture (NA) of the MLA [25]. Provided that the entire curved intermediate image lies within the DRM, the shape of the surface can be recovered through calibration [16]. As a result, the field curvature can be numerically corrected for by digital refocusing, and it can be loosely tolerated in light field cameras.
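
As a rough orientation only (our own back-of-envelope geometric estimate, not the exact expression derived in [25]): if the geometric blur of a microlens, approximately $2\,\mathrm{NA_{MLA}}\,|\delta z|$ for a defocus $\delta z$, is required to stay below one detector pixel of size $p$, then
$$\mathrm{DR_M} \approx \frac{p}{\mathrm{NA_{MLA}}},$$
which makes explicit why a smaller MLA numerical aperture or a larger pixel relaxes the constraint on where the curved intermediate image may lie.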

Fig. 2. Field curvature in a light field camera. DRM, depth range of the microlens array; MLA, microlens array.

By contrast, in light field cameras, vignetting must be minimized. Because light field cameras estimate depths using the light ray angles, the loss of angular information due to vignetting reduces the number of views in the EPIs. To elaborate on this effect, we performed a simulation using Zemax (Zemax, LLC). Figure 3 shows the shaded model of a ULF camera. The object is a point source. We use a 4F system as the main lens, which consists of two paraxial lenses (f = 15 mm) and a physical stop. The stop is placed at the Fourier plane of the first lens (i.e., its back focal plane). To match the NA of the main lens and the MLA, we set the stop diameter to 1.38 mm. To introduce vignetting, we place another aperture of the same diameter 10 mm behind the stop. An MLA (f = 0.65 mm, lens pitch = 60 µm) is located at the back focal plane of the second lens, and a detector array is placed at the back focal plane of the MLA. The pixel size of the detector array is 4 µm.

Fig. 3. Shaded model of an unfocused light field (ULF) camera. MLA, microlens array.

We define the vignetting factor $\mathrm{\eta }$ as:

$$\mathrm{\eta } = 1 - \; \frac{E}{{{E_u}}},$$
where E and ${E_u}$ denote the total irradiance received by the detector array with and without vignetting, respectively, and $\mathrm{\eta }$ is zero if the image is unvignetted. In the simulation, the point source was placed at the front focal plane of the first lens, and we scanned it along the x-axis over 13 locations from 0 mm to 1.2 mm with a step size of 0.1 mm. At each step, we traced 100,000 light rays to form a raw image and rendered an EPI at v = 0 and y = 0. Figure 4(a) shows three representative raw images at x = 0 mm, 0.6 mm, and 1.2 mm, and their corresponding EPIs. The results indicate that although the slope of the line feature in the EPIs does not change, the number of pixels that form the line (i.e., the number of views) decreases as vignetting increases. The relation between the vignetting factor and the number of views is shown in Fig. 4(b). We calculated the number of views by enumerating the non-zero pixels in the EPI after image binarization. The light field camera reconstructs depth by estimating the slope of line features in EPIs through linear regression. The standard error of the fit can be computed as:
$$SE = \sqrt {\frac{{\sum {{({{b_i} - {{\hat{b}}_i}} )}^2}}}{{\sum {{({{a_i} - \bar{a}} )}^2}}}} \cdot \sqrt {\frac{1}{{n - 2}}} ,$$
where SE is the standard error, n is the number of observations, ${a_i}$ is an independent variable for the ith observation, $\bar{a}$ is the mean, ${b_i}$ is a dependent variable for the ith observation, and ${\hat{b}_i}$ is the estimated value of ${b_i}$. Equation 2 implies that the standard error decreases as the number of observations increases. In light field cameras, vignetting reduces the number of views in EPIs, resulting in a larger regression error and, therefore, a reduced depth accuracy. Particularly, when the number of detector pixels associated with a microlens is small, vignetting dramatically increases the regression error.
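
The quantities discussed above map directly to a few lines of code. The following NumPy sketch (illustrative only; the binarization threshold and array conventions are our assumptions) evaluates the vignetting factor of Eq. (1), counts the views by binarizing an EPI, and returns the regression slope together with its standard error from Eq. (2).

```python
import numpy as np

def vignetting_factor(E, E_u):
    """Eq. (1): eta = 1 - E / E_u, where E and E_u are the total detector
    irradiance with and without vignetting."""
    return 1.0 - E / E_u

def count_views(epi, rel_thresh=0.05):
    """Number of views = non-zero pixels of the EPI after binarization
    (threshold chosen relative to the EPI maximum; the value is illustrative)."""
    return int(np.count_nonzero(epi > rel_thresh * epi.max()))

def slope_and_se(a, b):
    """Least-squares slope of b versus a and its standard error per Eq. (2)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n = a.size
    slope, intercept = np.polyfit(a, b, 1)
    residual = b - (slope * a + intercept)
    se = np.sqrt(np.sum(residual ** 2) / np.sum((a - a.mean()) ** 2)) \
         * np.sqrt(1.0 / (n - 2))
    return slope, se
```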

Fig. 4. Vignetting and number of views in epipolar plane images (EPIs). (a) Three representative raw images and corresponding EPIs at x = 0 mm, 0.6 mm, 1.2 mm. (b) Number of views in an EPI vs. vignetting factor.

To further illustrate the effect of vignetting on depth accuracy, we defocused the point source by 6 mm towards the first lens, and we scanned it under the same conditions. Because the depth of the point source has changed, the line in the EPI is tilted with respect to the vertical axis, and it is not aligned with the detector pixels. As a result, ambiguities are introduced by sampling. Three representative raw images and corresponding EPIs at x = 0 mm, 0.6 mm, 1.2 mm are shown in Fig. 5(a). At each step, we computed the slope of the line in the EPI. The relation between the slope regression error and the vignetting factor is shown in Fig. 5(b).

Fig. 5. Vignetting and slope regression error of the line feature in epipolar plane images (EPIs). (a) Three representative raw images and corresponding EPIs at x = 0 mm, 0.6 mm, 1.2 mm. (b) Slope regression error vs. vignetting factor.

It is worth mentioning that the slope regression error also depends on aberrations and noise. When aberrations exist, the image of a point source is no longer a sharp point, and the shape of the line in the EPI may be distorted. Noise, on the other hand, affects the intensity of the views and the background pixels. In both cases, a sufficient number of views is critical for faithful depth reconstruction. Therefore, vignetting must be minimized in light field cameras.

Finally, we validated the effect of vignetting in a real experiment. The optical setup of an unfocused light field camera is shown in Fig. 6(a). We used a 4F system as the main lens, which consists of two 50 mm focal length achromatic doublets (Thorlabs, AC254-050-A-ML). A 4.8 mm diameter stop was placed at the Fourier plane to match the NA of the main lens and the MLA. An MLA with a 50 µm pitch was placed at the back focal plane of the second lens, and the spacing between the MLA and the camera (Lumenera, Lt965R) is equal to the MLA focal length. A flat printed grid pattern was used as the object and was located near the front focal plane of the main lens. An adjustable aperture was positioned 12 mm before the camera, and its diameter was set to 2.8 mm, 4 mm, and 5 mm to create different levels of vignetting. We captured a raw image for each aperture diameter and a baseline image with the aperture removed (i.e., no vignetting). A representative raw image at an aperture diameter of 4 mm and the baseline image are shown in Fig. 6(b), each including two magnified subfields. Compared to the baseline, Area 2 of the raw image at the 4 mm aperture diameter shows vignetted pupils. Next, we calculated the vignetting factor and generated a disparity map for each image, followed by computing the root-mean-squared error (RMSE) for each disparity map. Note that a depth map can be further rendered based on disparity-to-depth calibration [16]. The resultant disparity maps are shown in Fig. 7. The experimental results indicate that the disparity RMSE increases as the vignetting factor increases. Therefore, depth accuracy is jeopardized when vignetting is present.
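
The two figures of merit used in this experiment can be computed as follows. This is a minimal sketch under our own assumptions: the vignetting factor is evaluated from the summed raw-image intensities (Eq. (1)), and each disparity map is compared against the unvignetted baseline map (the paper states only that an RMSE was computed per map).

```python
import numpy as np

def vignetting_factor_from_images(raw, raw_baseline):
    """Eq. (1) evaluated from measured raw images: eta = 1 - E / E_u."""
    return 1.0 - raw.astype(float).sum() / raw_baseline.astype(float).sum()

def disparity_rmse(disp, disp_ref):
    """Root-mean-squared error of a disparity map against a reference map."""
    diff = disp.astype(float) - disp_ref.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))
```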

Fig. 6. Experimental setup and raw images of a flat printed grid pattern object. (a) Optical setup. (b) A raw image when the aperture diameter = 4 mm and the baseline image with two magnified subfields.

Fig. 7. Disparity maps for each aperture diameter. RMSE, root-mean-squared error.

3. Lens design for light field cameras

Compared to conventional cameras, light field cameras can tolerate field curvature but are sensitive to vignetting. The field curvature coefficient ${W_{220}}$ can be separated into two terms:

$${W_{220}} = \; \frac{1}{2}\; W_{222}^{\prime} + \; {W_{220p}},$$
where $W_{222}^{\prime}$ is proportional to astigmatism and ${W_{220p}}$ is the Petzval curvature. Without astigmatism, the field curvature reduces to the Petzval curvature. Because the Petzval curvature depends only on the power and refractive index of the lenses, it is insensitive to most aberration correction methods (e.g., lens bending/splitting, stop shifting). The primary method to flatten the Petzval surface is to add negative-power lenses and create air spaces between them; however, this makes the system bulky and expensive. Therefore, relaxing the tolerance on field curvature can greatly reduce the system complexity and design constraints. For example, if we use a single ball lens as the main lens of a light field camera, all off-axis aberrations except field curvature would be eliminated [26]. Digitally correcting for the remaining field curvature then provides an ideal solution to achieve a large field of view with high resolution.
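
For reference, in the standard thin-lens approximation the Petzval term scales as
$${W_{220p}} \propto \sum\limits_j \frac{{{\phi _j}}}{{{n_j}}},$$
where ${\phi _j}$ and ${n_j}$ are the power and refractive index of the j-th element. Because neither lens bending nor the stop position enters this sum, it can be reduced only by adding negative-power elements or by exploiting ray-height differences in thick, air-spaced groups, which is exactly why correcting it makes a system bulky.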

To minimize vignetting in a light field camera, we put a constraint on the lens aperture:

$$a\; \ge \; |{\bar{y}} |+ \; |y |,$$
where a is the radius of the aperture, and $\bar{y}$ and y are the chief ray height and marginal ray height at the aperture position, respectively. In addition, we enforce telecentricity of the main lens in image space.
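
In an automated design loop, the constraint of Eq. (4) can be checked at every surface from a paraxial ray trace. The helper below is a hypothetical sketch; the tuple layout is our own convention, not a Zemax interface.

```python
def unvignetted(surfaces):
    """Check Eq. (4) at each surface.

    surfaces: iterable of (semi_diameter, chief_ray_height, marginal_ray_height)
    tuples obtained from a paraxial ray trace. Returns True if every surface
    aperture covers |chief| + |marginal| ray heights, i.e., no vignetting."""
    return all(a >= abs(y_bar) + abs(y_m) for a, y_bar, y_m in surfaces)
```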

Figure 8 illustrates the proposed optical design pipeline, which differs from the conventional process in two respects. First, field curvature is not a primary design constraint and can be loosely tolerated, while vignetting must be strictly minimized. Second, optimization must be performed in 3D (x, y, z) rather than in 2D (x, y); that is, we must account for all object points within both the depth range (z) and the FOV (x, y). In practice, given radial symmetry, it is justified to sample object points only in the y-z plane. During optimization, we assign each (y, z) object point to a system configuration. We then perform ray tracing in each configuration and calculate the corresponding vignetting factor. Lastly, we construct a y-z vignetting factor map and compute its mean, which we use as the metric to evaluate the vignetting of the system.
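
A schematic implementation of this vignetting metric follows. Here trace_fraction is a hypothetical callback that wraps the ray trace of one (y, z) configuration and returns the fraction of launched rays reaching the detector, so the details of the optical model are deliberately left out of this sketch.

```python
import numpy as np

def mean_vignetting(trace_fraction, y_samples, z_samples):
    """Build the y-z vignetting-factor map and return its mean, which serves
    as the single scalar metric for vignetting during optimization."""
    eta_map = np.empty((len(z_samples), len(y_samples)))
    for i, z in enumerate(z_samples):
        for j, y in enumerate(y_samples):
            eta_map[i, j] = 1.0 - trace_fraction(y, z)   # Eq. (1) per object point
    return eta_map, float(eta_map.mean())
```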

Fig. 8. Optical design pipeline for light field cameras. Due to correction of aberrations/vignetting in a 3D space, our design pipeline yields optimized optical performance for computational refocusing and parallax-based depth estimation.

4. Design example

To demonstrate the implementation of the proposed pipeline, we designed the main lens for a light field endoscope using Zemax. The desired specifications are listed in Table 1.


Table 1. Specifications of the light field endoscope

We selected a double Gauss lens as the initial configuration to reduce the odd aberrations, and then scaled the lens down to the required diameter. Next, nine object points within the depth range (z) and the FOV (x, y) were chosen to build the multi-configuration system, as summarized in Fig. 9. The working distance (WD) is defined as the distance between an object point and the first surface of the main lens. In each configuration, we inserted a dummy surface after the nominal image plane (where the marginal ray height = 0 mm), which serves as the real image plane. Due to field curvature, defocus is introduced for off-axis object points. During optimization, the location of the dummy surface was set as a variable, and each configuration was optimized independently to compensate for the field-dependent defocus. In this way, the effect of field curvature is excluded from the merit function for image quality optimization.
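
The per-configuration focus compensation can be emulated outside the lens design software by a simple one-dimensional search. The sketch below assumes a hypothetical rms_spot(offset) function that ray-traces one (y, z) configuration with the dummy image surface shifted by offset and returns the RMS spot radius; it is an illustration of the idea, not the Zemax optimization itself.

```python
import numpy as np

def best_focus_offset(rms_spot, offsets):
    """Pick the dummy-surface shift that minimizes the RMS spot radius for one
    configuration, so that field-curvature defocus is removed before the
    remaining aberrations are scored by the merit function."""
    costs = np.array([rms_spot(dz) for dz in offsets])
    k = int(np.argmin(costs))
    return offsets[k], float(costs[k])

# Illustrative usage (search range in mm is arbitrary):
# offsets = np.linspace(-0.5, 0.5, 101)
# dz_best, rms_best = best_focus_offset(my_rms_spot, offsets)
```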

Fig. 9. Multi-configuration in lens optimization.

Next, we built the merit function based on the design specifications. The activated operands are summarized in Table 2. The variables consist of the radii of curvature of the surfaces and the central thicknesses between adjacent surfaces. Only spherical surfaces are used for each lens element. The optimization process is divided into two steps: local optimization and global optimization. During the local optimization, the paraxial magnification is defined using operands PMAG, RECI, ABLT, and ABGT. The desired magnification of the main lens is −0.2. We used operand AXCL to minimize the axial color, while the other aberrations (spherical aberration, coma, astigmatism, distortion, and lateral color) are optimized together to minimize the root-mean-squared (RMS) spot size using the default operand TRAC. In particular, we limited vignetting by enforcing image-space telecentricity. The operand RAID was used to confine the chief ray angle (CRA) at the last surface of the lens. In addition, the semi-diameter of the lens group was limited by operand MXSD, and the air and glass thicknesses were constrained by operands MNCA, MXCA, MNEA, MNCG, MXCG, and MNEG. During the global optimization, we made two changes: first, we replaced operand TRAC with operand OPDX to minimize the RMS wavefront error; second, the glass type of each element was set as “substitute” for better performance.


Table 2. Activated operands in the merit function

To meet the length requirement, we further used Hopkins rod lenses as the relay lens. The desired magnification of the relay lens is 1. We started with two thick doublets that are symmetric about the stop. As a result, the lens does not introduce coma, distortion, or lateral color. We used the same merit function as that for the main lens, except that object-space telecentricity was enforced to match the pupil. The variables consist of the radius of curvature of each surface and the spacing between adjacent surfaces. After optimization, we duplicated the lenses to extend the relay optics to the required length.

The schematic of the final endoscope is shown in Fig. 10. The original lens design file is provided in Dataset 1 (Ref. [31]). The effective focal length (EFFL) of the system is 14.6 mm, and the total length (TOTR) is 212 mm. The back focal length is 3 mm, and the paraxial magnification is −0.206. Figure 11 shows the spot diagrams of three configurations at a working distance of 65 mm and object heights of 0 mm, 7 mm, and 10 mm, respectively, and the corresponding modulation transfer functions (MTFs) are shown in Fig. 12. Finally, we performed ray tracing to calculate the vignetting factors for all object points within the depth range and the FOV; the result is shown in Fig. 13, where the pixel value represents the normalized fraction of unvignetted rays. The mean of this map is 0.99, implying that only one percent of the total rays are vignetted. The resultant design, therefore, maximizes the depth reconstruction fidelity.

Fig. 10. Optical setup of the endoscope.

Fig. 11. Spot diagrams corresponding to three configurations in which working distance = 65 mm, object height = 0 mm, 7 mm, 10 mm, respectively.

Fig. 12. Modulation transfer functions (MTFs) corresponding to three configurations in which working distance = 65 mm, object height = 0 mm, 7 mm, 10 mm, respectively.

Fig. 13. Vignetting factor map within the depth range and the FOV.

5. Conclusion

In this paper, we systematically studied the effect of field curvature and vignetting on light field depth reconstruction accuracy. We show that field curvature in light field cameras can be loosely tolerated, while vignetting must be minimized to ensure high reconstruction fidelity. To incorporate this finding into the lens design process, we developed a pipeline that optimizes the optical performance of light field cameras in a 3D space, facilitating computational refocusing and parallax-based depth estimation. We expect this work to lay the foundation for future developments in light field camera lens design, particularly in biomedical applications where diagnosis and treatment rely heavily on the accuracy of 3D measurements [27–29].

Notably, our current optical design pipeline is applicable only to ray-optics models. This premise holds for light field cameras with a relatively small aperture, such as a light field endoscope. For large-NA imaging, to account for the diffraction effects that occur when recording the light field, we must adapt the design process to a wave optics model [30] instead. Such a study is beyond the scope of the current work and is left for future investigation.

Funding

National Institutes of Health (R01EY029397, R21EB028375, R35GM128761); National Science Foundation (1652150).

Acknowledgment

We thank Prof. Rongguang Liang for providing the initial design for the Hopkins rod lenses.

Disclosures

The authors declare no conflicts of interest.

References

1. E. H. Adelson and J. R. Bergen, “The plenoptic function and the elements of early vision,” in Computational Models of Visual Processing, M. S. Landy and J. A. Movshon, eds. (MIT Press, 1991), pp. 3–20.

2. R. Ng, “Digital light field photography,” Ph.D. dissertation (Stanford University, 2006).

3. R. Prevedel, Y. G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,” Nat. Methods 11(7), 727–730 (2014). [CrossRef]  

4. N. Bedard, T. Shope, A. Hoberman, M. A. Haralam, N. Shaikh, J. Kovačević, N. Balram, and I. Tošić, “Light field otoscope design for 3D in vivo imaging of the middle ear,” Biomed. Opt. Express 8(1), 260–272 (2017). [CrossRef]  

5. R. Raghavendra, K. B. Raja, and C. Busch, “Presentation attack detection for face recognition using light field camera,” IEEE Trans. Image Process. 24(3), 1060–1075 (2015). [CrossRef]  

6. K. Maeno, H. Nagahara, A. Shimada, and R. I. Taniguchi, “Light field distortion feature for transparent object recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2013), pp. 122–135.

7. S. Zhu, X. Lv, X. Feng, J. Lin, P. Jin, and L. Gao, “Plenoptic Face Presentation Attack Detection,” IEEE Access 8, 59007–59014 (2020). [CrossRef]  

8. K. Lynch, T. Fahringer, and B. Thurow, “Three-dimensional particle image velocimetry using a plenoptic camera,” in 50th AIAA Aerospace Sciences Meeting (AIAA, 2012), pp. 1–14.

9. M. Z. Alam and B. K. Gunturk, “Hybrid Light Field Imaging for Improved Spatial Resolution and Depth Range,” arXiv preprint arXiv:1611.05008 (2016).

10. E. H. Adelson and J. Y. A. Wang, “Single lens stereo with plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 99–106 (1992). [CrossRef]  

11. T. G. Georgiev and A. Lumsdaine, “Superresolution with plenoptic 2.0 cameras,” in Signal Recovery and Synthesis (OSA, 2009).

12. M. Levoy and P. Hanrahan, “Light field rendering,” in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (ACM, 1996), pp. 31–42.

13. A. Agarwala, M. Dontcheva, M. Agrawala, S. Drucker, A. Colburn, B. Curless, D. Salesin, and M. Cohen, “Interactive digital photomontage,” ACM Trans. Graphic 23(3), 294–302 (2004). [CrossRef]  

14. C. Perwass and L. Wietzke, “Single-lens 3D camera with extended depth-of-field,” in Human Vision and Electronic Imaging XVII, Proc. SPIE 8291, 829108 (2012).

15. I. Tosic and K. Berkner, “Light field scale-depth space transform for dense depth estimation,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 435–442.

16. L. Gao, N. Bedard, and I. Tosic, “Disparity-to-depth calibration in light field imaging,” in Imaging and Applied Optics, OSA Technical Digest, (Optical Society of America, 2016), paper CW3D.2.

17. E. J. Tremblay, D. L. Marks, D. J. Brady, and J. E. Ford, “Design and scaling of monocentric multiscale imagers,” Appl. Opt. 51(20), 4691–4702 (2012). [CrossRef]  

18. C. Hahne, A. Aggoun, S. Haxha, V. Velisavljevic, and J. C. J. Fernández, “Light field geometry of a standard plenoptic camera,” Opt. Express 22(22), 26659–26673 (2014). [CrossRef]  

19. C. Hahne, A. Aggoun, V. Velisavljevic, S. Fiebig, and M. Pesch, “Refocusing distance of a standard plenoptic camera,” Opt. Express 24(19), 21521–21540 (2016). [CrossRef]  

20. Y. Chen, X. Jin, and Q. Dai, “Distance measurement based on light field geometry and ray tracing,” Opt. Express 25(1), 59–76 (2017). [CrossRef]  

21. Y. Q. Chen, X. Jin, and Q. H. Dai, “Distance estimation based on light field geometric modeling,” in 2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW) (IEEE, 2017), pp. 43–48.

22. C. Hahne, A. Aggoun, V. Velisavljevic, S. Fiebig, and M. Pesch, “Baseline and triangulation geometry in a standard plenoptic camera,” Int. J. Comput. Vis. 126(1), 21–35 (2018). [CrossRef]  

23. R. E. Fisher and B. Tadic-Galeb, Optical System Design (McGraw-Hill, 2000).

24. R. Kingslake and R. B. Johnson, Lens design fundamentals (Academic, 2009).

25. S. Zhu, A. Lai, K. Eaton, P. Jin, and L. Gao, “On the fundamental comparison between unfocused and focused light field cameras,” Appl. Opt. 57(1), A1–A11 (2018). [CrossRef]  

26. D. G. Dansereau, G. Schuster, J. Ford, and G. Wetzstein, “A wide-field-of-view monocentric light field camera,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 5048–5057.

27. N. Bedard, T. Shope, A. Hoberman, M. A. Haralam, N. Shaikh, J. Kovačević, N. Balram, and I. Tošić, “Light field otoscope design for 3D in vivo imaging of the middle ear,” Biomed. Opt. Express 8(1), 260–272 (2017). [CrossRef]  

28. E. Kwan, Y. Qin, and H. Hua, “Development of a Light Field Laparoscope for Depth Reconstruction,” in Imaging and Applied Optics 2017 (3D, AIO, COSI, IS, MATH, pcAOP), OSA Technical Digest (online) (Optical Society of America, 2017), paper DW1F.2.

29. S. Zhu, P. Jin, R. Liang, and L. Gao, “Optical design and development of a snapshot light-field laryngoscope,” Opt. Eng. 57(2), 023110 (2018). [CrossRef]  

30. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21(21), 25418–25439 (2013). [CrossRef]  

31. Q. Cui, Z. Zhu, and L. Gao, “Original lens design file of the main lens of a light field endoscope,” figshare (2020), https://doi.org/10.6084/m9.figshare.12901013.

Supplementary Material (1)

Dataset 1: Original lens design file of the main lens of a light field endoscope.
