
Optimization of prism-based stereoscopic imaging systems at the optical design stage with respect to required 3D measurement accuracy


Abstract

We address the optical design procedure for prism-based stereoscopic imaging systems. The conventional approach includes two sequential stages: selection of the hardware and development of the proper digital image processing algorithms. At each of these stages, specific techniques are applied that are almost unrelated to each other. The main requirements imposed on the imaging system concern only its key parameters and image quality. Therefore, insufficient measurement accuracy may be revealed only after the prototype is assembled and tested. In this case, even applying complex, time-consuming image processing and calibration procedures does not ensure the necessary precision. A radical solution to this issue is to include measurement error estimation in the optical design stage. In this research, we discuss a simplified implementation of this approach and demonstrate the capabilities of optical design software for this purpose. We demonstrate the effectiveness of this approach by the analysis and optimization of a prism-based stereoscopic imager with respect to the required 3D measurement accuracy. The results are meaningful for the development of 3D imaging techniques for machine vision, endoscopic and measurement systems.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Computational imaging systems allow retrieving information that is difficult, costly or even impossible to obtain using conventional imaging techniques [1]. Both the imaging system and the digital image processing are essential parts of such systems. At the same time, the conventional methodology of optoelectronic system design implies that the optical system and the image processing pipeline are designed and optimized sequentially. This two-stage approach hampers the development process and complicates the search for the optimal design solution. The joint optimization of the optical system parameters and the parameters of the image processing algorithms leads to better results compared to the conventional approach [2,3].

Stereoscopic measurement systems are now widely used for machine vision in industrial and scientific applications to obtain the shape of objects and to measure their 3D geometric features. Being a kind of computational imaging system, they implement a number of image processing algorithms including image enhancement, rectification, stereo matching, calibration and 3D reconstruction [4,5].

In the conventional design approach, the optical system and the algorithms are optimized separately using different merit functions (MF), as shown in Fig. 1. MFs for the optical system are usually based on the RMS spot radius or wavefront aberrations [3,6]. The required 3D measurement errors are taken into account only indirectly, through the specified requirements for aberrations and for the main parameters of the optical system, such as the focal length and the base distance of the stereopair. Stereo correspondence search algorithms are usually tested on image databases such as the Middlebury Stereo Vision set [7] and the KITTI Vision Benchmark Suite [8], which contain indoor and outdoor scenes captured by off-the-shelf lenses and cameras. The optimization of these algorithms for particular applications requires image samples containing both the peculiarities of the measured objects and the specific image aberrations introduced by the designed optical system. Thus, measurement accuracy is not considered at the stage of optical system design, and its insufficiency may be revealed only after the prototype is assembled and tested. If the result is unsatisfactory, designers usually try to improve it by adding complex image processing and calibration procedures, because repeating the design and prototyping of the optical system is too costly. Moreover, the results of tests for a single prototype cannot ensure the appropriateness of these measures in mass production. To implement the joint design approach for stereoscopic systems, the choice of the algorithms and their parameters should be considered at the stage of optical system design, and the evaluation of the entire system performance should be based on 3D measurement errors (see Fig. 1). This is especially important for the development of mirror-based and prism-based stereoscopic systems, whose design parameters directly affect the measurement accuracy [9–11].

Fig. 1. Design stages and merit functions for conventional (a) and joint design (b) approaches.

A simple method for estimating the uncertainty of 3D measurements uses the key parameters (focal length, base distance and parallax angle) of the stereoscopic system and the uncertainty of the 2D coordinates of corresponding points in the image plane. The geometrical analysis of a prism-based stereoscopic system allows associating the parameters of the prism with the field of view (FOV) and the key parameters of the virtual stereopair [10–13]. The measurement uncertainty is estimated using a first-order (linearization) technique and explicit equations for the measured distance in a 2D representation of the biprism-based stereoscopic system, whereas the coordinate uncertainty in the image plane is set equal to the pixel size [10,11]. This technique may be applied for the initial choice of prism parameters before the optical system is designed using specialized software, or for the selection of a prism to combine with an off-the-shelf lens and camera.
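For orientation, consider the idealized case of an ordinary rectified stereopair rather than the exact biprism geometry of [10,11]: with effective focal length f, base distance b and disparity d, the measured distance and its linearized (first-order) uncertainty are

$$z = \frac{{fb}}{d},\;\;\;\;{\sigma _z} \approx \left|{\frac{{\partial z}}{{\partial d}}} \right|{\sigma _d} = \frac{{{z^2}}}{{fb}}{\sigma _d},$$
so a fixed image-plane uncertainty σd (for example, one pixel) translates into a depth uncertainty that grows quadratically with distance; the biprism counterparts of these expressions are derived in [10,11].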

Full implementation of the joint design approach requires a detailed simulation of the image acquisition and data processing pipeline. Proper simulation of image formation needs 3D rendering with respect to optical aberrations [2,14]. To analyze the impact of optical aberrations on the error in determining the 2D coordinates of corresponding points and the associated 3D measurement errors, Monte Carlo analysis should be applied by adding noise to simulated images and processing them using the implemented image rectification and stereo correspondence search algorithms. The computer simulation of the calibration procedure for a prism-based stereoscopic system has been implemented using Zemax optical design software and has been applied to find the optimal camera model and to optimize the calibration procedure, i.e. to reduce systematic and random 3D measurement errors [15,16]. However, all these procedures are time-consuming and, therefore, are better suited to the performance analysis of a designed optical system than to the optimization of the optical system parameters.

In this research, we present a simplified implementation of the joint design approach for a prism-based stereoscopic system and propose a technique for analyzing the 3D distribution of measurement errors at the optical system design stage. In contrast to existing methods, it allows calculating uncertainties for any 3D point within the analyzed volume, and it may be integrated into optical design software for the optimization of the optical system parameters.

2. Prism-based stereoscopic system

To illustrate the proposed approach, we consider the design of a prism-based stereoscopic system capable of obtaining two images of the object from different viewpoints on a single sensor (Fig. 2) [11,15,16]. To fit within the small diameter of a video endoscope probe, the system should be compact and have as few components as possible. The presence of the prism leads to strong image distortion and pupil aberrations that cannot be completely corrected by conventional lenses and should be properly considered in a camera model [15]. The optical layout of the prism-based stereoscopic system is shown in Fig. 2. The prototype has been designed for a 1920×1080 image sensor with 1.4×1.4 µm² pixels. It has an aperture of F/11, an effective focal length of 2.36 mm and a FOV of each channel of 40° × 45°. The range of working distances is 5–40 mm measured from the first optical surface. The material of the prism is BAK4 glass, and the angle θ is equal to 35°. Points 1–6 in the image plane were used for aberration analysis and the optical system optimization. In this study, we vary the refractive index n and the angle θ of the prism and analyze the uncertainty of 3D measurements with respect to image aberrations at the optical design stage.

Fig. 2. The optical layout of prism-based stereoscopic system (left) and the points in the image plane used for aberration analysis (right).

3. Geometrical analysis

The choice of the prism’s material and angle θ is the key task at the stage of stereoscopic system design. These parameters affect the FOV, the base distance of the stereopair, the distortion and other aberrations of the optical system. The equations for the calculation of the FOV and the base distance can be found in [10,12,13]. These equations were derived for a biprism oriented so that the apex faces the image side. In our case, the biprism backplane faces the image side, as shown in Fig. 3. For the simplified geometrical analysis, we assume that the prism is symmetric about the optical axis, and that the prism backplane, the principal plane of the main lens and the image plane are parallel. The angles ζ1 and ζ2 of the rays in the object space can be calculated as

$$\begin{array}{l} {\zeta _1} = \arcsin \left( {n\sin \left[ {\arcsin \left\{ {\frac{{\sin \omega }}{n}} \right\} - \theta } \right]} \right) + \theta ,\\ {\zeta _2} = \arcsin ({n\sin \theta } )- \theta , \end{array}$$
where ω is the half-angle of the main lens FOV.

Fig. 3. FOV of prism-based stereoscopic system.

The FOV of each channel is equal to (ζ1 + ζ2). If ζ1 ≥ ζ2, the common FOV is 2ζ2, and the system is classified as divergent [12]. If 0 < ζ1 < ζ2, the common FOV is equal to 2ζ2 for distances z less than some value z0 and equal to 2ζ1 otherwise (a semi-divergent system). If ζ1 < 0, the system is convergent and has a limited working volume. The configurations of the working volume in these cases are described in [12,13]. Thus, we should analyze the dependence of ζ1 and ζ2 on θ and n. The results for ω = 37.82° are shown in Fig. 4. This value has been chosen with respect to the FOV of the main lens in the prototype; nd = 1.568 corresponds to BAK4 glass at 587.6 nm.
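For reference, Eq. (1) is easy to evaluate numerically; the short sketch below (Python/NumPy, with function and variable names of our own choosing) reproduces the per-channel FOV of the prototype:

```python
import numpy as np

def prism_fov_angles(n, theta_deg, omega_deg):
    """Object-space boundary angles zeta1, zeta2 (degrees) of one channel, Eq. (1)."""
    theta, omega = np.radians(theta_deg), np.radians(omega_deg)
    zeta1 = np.arcsin(n * np.sin(np.arcsin(np.sin(omega) / n) - theta)) + theta
    zeta2 = np.arcsin(n * np.sin(theta)) - theta
    return np.degrees(zeta1), np.degrees(zeta2)

# BAK4 prism (nd = 1.568) and omega = 37.82 deg, as in the prototype
for theta in (26.6, 31.0, 35.0):
    z1, z2 = prism_fov_angles(1.568, theta, 37.82)
    print(f"theta = {theta:4.1f} deg: zeta1 = {z1:5.2f}, zeta2 = {z2:5.2f}, "
          f"channel FOV = {z1 + z2:5.2f} deg")
```

For θ = 35° this gives ζ1 ≈ 16°, ζ2 ≈ 29° and a channel FOV of about 45°, consistent with the prototype specification in Section 2.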

Fig. 4. The dependencies of angles ζ1 and ζ2 (a) and ζ1+ζ2 (b) on the angle θ for different values of prism refractive index n.

Since the choice of θ and n is determined mainly by the requirements for the common FOV, we can use the desired values of ζ1 and ζ2 as the main constraints for the optical system design. The optical design software allows calculating these angles, visualizing the working volume and keeping the required values of ζ1 and ζ2 during the optical system optimization. For real ray tracing, we have to take the distortion of the main lens into account when we calculate the FOV. For the simplified geometrical analysis, we can use the paraxial model of the main lens. After the initial optical system layout is designed, 3D measurement uncertainty estimation is possible.

4. Estimation of 3D measurement uncertainty

The estimation of the 3D point coordinates $\mathbf{x} = {({x,y,z} )^T}$ corresponding to the 2D coordinates ${\mathbf{p}_1} = {({{u_1},{v_1}} )^T}$ and ${\mathbf{p}_2} = {({{u_2},{v_2}} )^T}$ of its projections in the left and right parts of the image obtained by the prism-based stereoscopic system can be considered as an optimization problem [15]. The first step is the calculation of the rays l1 and l2 in the object space for the 2D points p1 and p2. This calculation requires a mathematical model of the optical system with parameters determined via calibration [15]. Instead of this, we can use the optical design software to perform the ray tracing, as discussed in the next section. The second step is the calculation of the 3D point coordinates $\mathbf{\hat{x}} = {({\hat{x},\hat{y},\hat{z}} )^T}$ for which the distances from $\mathbf{\hat{x}}$ to l1 and l2 are minimal. If the cost function is based on the sum of squared distances, the solution of the problem can be found as the midpoint of the common perpendicular to l1 and l2 and does not require additional iterations.
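A minimal sketch of this second step is given below (the parametrization of each ray by an origin point and a unit direction vector is our own convention):

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """3D point minimizing the sum of squared distances to two rays x = o_i + t_i * d_i.

    o1, o2: ray origins (3,); d1, d2: direction vectors (3,).
    Returns the midpoint of the common perpendicular to the two rays.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = o2 - o1
    d12 = d1 @ d2
    denom = 1.0 - d12 ** 2                      # zero only for parallel rays
    t1 = (b @ d1 - (b @ d2) * d12) / denom
    t2 = ((b @ d1) * d12 - b @ d2) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```

For the prism-based system, the ray origins and directions would come from tracing the chief rays for p1 and p2, as described in Section 5.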

To estimate the uncertainty of the measured 3D coordinates $\mathbf{\hat{x}}$, we use the unscented transformation method [17], which lets us compute the bias and treat the calculation of the 3D point coordinates as a "black box" [18]. In this case, the 2D coordinate vector ${\mathbf{p}_\textrm{u}} = {({{\mathbf{p}_1}^T,{\mathbf{p}_2}^T} )^T}$ (of dimension N = 4) and its covariance matrix $\textrm{Cov}[{{\mathbf{p}_\textrm{u}}} ]$ are the input data. The set of (2N + 1) vectors ${\mathbf{\tilde{p}}_{\textrm{u,}i}}$ and weight coefficients Wm,i and Wc,i are defined as

$$\begin{array}{l} {{\mathbf{\tilde{p}}}_{\textrm{u,}i}} = {\mathbf{p}_\textrm{u}},\;\;\;{W_{\textrm{m,}i}} = {\gamma / {({N + \gamma } ),\;\;\;}}{W_{\textrm{c,}i}} = {W_{\textrm{m,}i}} + 1 - {\alpha ^2} + \beta ,\;\;\;\;\;\;\;\;\;\textrm{if }i = 1;\\ {{\mathbf{\tilde{p}}}_{\textrm{u,}i}} = {\mathbf{p}_\textrm{u}} + \left( {\sqrt {({N + \gamma } )\textrm{Cov}[{{\mathbf{p}_\textrm{u}}} ]} } \right)_i^T,\;\;{W_{\textrm{m,}i}} = {W_{\textrm{c,}i}} = {1 / {({2N + 2\gamma } )}},\;\;\textrm{if 2 } \le i \le \;N + 1;\\ {{\mathbf{\tilde{p}}}_{\textrm{u,}i}} = {\mathbf{p}_\textrm{u}} - \left( {\sqrt {({N + \gamma } )\textrm{Cov}[{{\mathbf{p}_\textrm{u}}} ]} } \right)_i^T,\;\;{W_{\textrm{m,}i}} = {W_{\textrm{c,}i}} = {1 / {({2N + 2\gamma } )}},\;\;\textrm{if }N + 2\textrm{ } \le i \le \;2N + 1; \end{array}$$
where i = 1…(2N + 1); γ = α²(N + κ) − N; α, β and κ are parameters. The notation $\left( {\sqrt {\mathbf A} } \right)_i^T$ stands for the transposed i-th row of the upper triangular matrix U obtained as the result of the Cholesky decomposition A = UᵀU. Then the 3D point coordinates ${\mathbf{\tilde{x}}_i}$ are calculated for each vector ${\mathbf{\tilde{p}}_{\textrm{u,}i}}$. Finally, the mean value $\textrm{M}[{\mathbf{\hat{x}}} ]$ and the covariance matrix $\textrm{Cov}[{\mathbf{\hat{x}}} ]$ of the measured 3D coordinates $\mathbf{\hat{x}}$ can be estimated as
$$\textrm{M}[{\mathbf{\hat{x}}} ]= \sum\limits_{i = 1}^{2N + 1} {{W_{\textrm{m},i}}{{\tilde{{\mathbf x}}}_i}} ,\textrm{ }Cov [{\hat{{\mathbf x}}} ]= \sum\limits_{i = 1}^{2N + 1} {{W_{\textrm{c},i}}({{{\tilde{{\mathbf x}}}_i} - M [{\hat{{\mathbf x}}} ]} )} {({{{\tilde{{\mathbf x}}}_i} - M [{\hat{{\mathbf x}}} ]} )^T}.$$
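The following is a compact sketch of Eqs. (2)–(3) for an arbitrary mapping f from the 2D coordinate vector pu to a 3D point (for instance, ray tracing followed by the midpoint triangulation above); the default parameter values are only an example:

```python
import numpy as np

def unscented_transform(f, p_u, cov_pu, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate the mean p_u and covariance cov_pu (N x N) through f, Eqs. (2)-(3)."""
    N = len(p_u)
    gamma = alpha ** 2 * (N + kappa) - N
    U = np.linalg.cholesky((N + gamma) * cov_pu).T      # upper triangular, A = U^T U
    sigma_pts = [p_u] + [p_u + row for row in U] + [p_u - row for row in U]
    w_m = np.full(2 * N + 1, 1.0 / (2.0 * (N + gamma)))
    w_c = w_m.copy()
    w_m[0] = gamma / (N + gamma)
    w_c[0] = w_m[0] + 1.0 - alpha ** 2 + beta
    y = np.array([f(p) for p in sigma_pts])             # e.g. triangulated 3D points
    mean = w_m @ y
    diff = y - mean
    cov = (w_c[:, None] * diff).T @ diff
    return mean, cov
```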

Miniature prism-based stereoscopic systems are mainly used for measuring geometric parameters such as segment lengths or areas. Since the probability distributions of these parameters are not symmetrical, 95% intervals are a better criterion for describing their uncertainty than the standard deviation [18]. We can estimate the measurement uncertainty of a segment length as follows. Denote the measured 3D coordinates of the two segment endpoints as ${\mathbf{\hat{x}}_1}$ and ${\mathbf{\hat{x}}_2}$. Their mean values $\textrm{M}[{{{\mathbf{\hat{x}}}_1}} ]$, $\textrm{M}[{{{\mathbf{\hat{x}}}_2}} ]$ and covariance matrices $\textrm{Cov}[{{{\mathbf{\hat{x}}}_1}} ]$, $\textrm{Cov}[{{{\mathbf{\hat{x}}}_2}} ]$ are estimated using the technique described above. The measured length of the segment is $\hat{r} = |{\hat{{\mathbf d}}} |= |{{{\hat{{\mathbf x}}}_2} - {{\hat{{\mathbf x}}}_1}} |$. Hence, the mean value and the covariance matrix of the vector $\hat{{\mathbf d}}$ can be found as

$${\rm M} [{\hat{{\mathbf d}}} ]= {\rm M} [{{{\hat{{\mathbf x}}}_2}} ]- {\rm M} [{{{\hat{{\mathbf x}}}_1}} ],\textrm{ }{\rm{Cov}} [{\hat{{\mathbf d}}} ]= {\rm{Cov}} [{{{\hat{{\mathbf x}}}_1}} ]+ {\rm{Cov}} [{{{\hat{{\mathbf x}}}_2}} ]- 2{\rm{Cov}} [{{{\hat{{\mathbf x}}}_1},{{\hat{{\mathbf x}}}_2}} ].$$
Here, the cross-covariance matrix ${\rm{Cov}} [{{{\hat{{\mathbf x}}}_1},{{\hat{{\mathbf x}}}_2}} ]$ is a zero matrix because the 2D coordinates of the points on the image are calculated independently and the calibration uncertainty is not considered [18]. Then, we use the approximation by a non-central chi-squared distribution [19] to estimate the 95% interval for $\hat{r}$. First, the parameters s1 and s2 are calculated as
$${s_1} = {c_3}/c_2^{3/2},\textrm{ }{s_2} = {c_4}/c_2^2,\textrm{ }{c_m} = {\rm{tr}} ({{\rm{Cov}} {{[{\hat{{\mathbf d}}} ]}^m}} )+ m{\rm{M}} {[{\hat{{\mathbf d}}} ]^T}{\rm{Cov}} {[{\hat{{\mathbf d}}} ]^{m - 1}}\rm{M} [{\hat{{\mathbf d}}} ],$$
where m = 1…4. Next, the number of degrees of freedom f and the non-centrality parameter δ are calculated depending on the value of $\Delta s = s_1^2 - {s_2}$:
$$\begin{array}{l} f = {a^2} - 2\delta ,\textrm{ }\delta = {s_1}{a^3} - {a^2},\textrm{ }a = 1/({{s_1} - \Delta s} ),\textrm{ if }\Delta s > 0;\\ f = 1/s_1^2,\textrm{ }\delta = 0,\textrm{ if }\Delta \textrm{s} \le \textrm{0}\textrm{.} \end{array}$$
The value of $\hat{r}$ which will not be exceeded with probability α can be found as
$$\hat{r}(\alpha )= \sqrt {\frac{{\chi _{\alpha ,f}^2(\delta )- ({f + \delta } )}}{{\sqrt {2({f + 2\delta } )} }}\sqrt {2{c_2}} + {c_1}} ,$$
where $\chi _{\alpha ,f}^2(\delta )$ is the quantile of the non-central chi-squared distribution with f degrees of freedom and non-centrality parameter δ for probability α. The borders ${\hat{r}^ - }$ and ${\hat{r}^ + }$ of the 95% interval are estimated using Eq. (7) as ${\hat{r}^ - } = \hat{r}({0.025} )$ and ${\hat{r}^ + } = \hat{r}({0.975} )$.
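A sketch of Eqs. (4)–(7) for one segment is given below; it takes the means and covariances produced by the unscented transform, assumes a zero cross-covariance as discussed above, and uses scipy.stats.ncx2 for the non-central chi-squared quantile:

```python
import numpy as np
from scipy.stats import chi2, ncx2

def segment_length_interval(m1, cov1, m2, cov2, probs=(0.025, 0.975)):
    """Bounds of the 95% interval for the measured segment length, Eqs. (4)-(7)."""
    m_d = m2 - m1                                   # Eq. (4); cross-covariance set to zero
    cov_d = cov1 + cov2
    c1, c2, c3, c4 = [np.trace(np.linalg.matrix_power(cov_d, k))
                      + k * m_d @ np.linalg.matrix_power(cov_d, k - 1) @ m_d
                      for k in (1, 2, 3, 4)]        # c_m, Eq. (5)
    s1, s2 = c3 / c2 ** 1.5, c4 / c2 ** 2
    ds = s1 ** 2 - s2
    if ds > 0:                                      # Eq. (6)
        a = 1.0 / (s1 - ds)
        delta = s1 * a ** 3 - a ** 2
        f = a ** 2 - 2.0 * delta
    else:
        f, delta = 1.0 / s1 ** 2, 0.0
    def r_hat(alpha):                               # Eq. (7)
        q = ncx2.ppf(alpha, f, delta) if delta > 0 else chi2.ppf(alpha, f)
        return np.sqrt((q - (f + delta)) / np.sqrt(2.0 * (f + 2.0 * delta))
                       * np.sqrt(2.0 * c2) + c1)
    return tuple(r_hat(p) for p in probs)
```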

5. Estimation of measurement uncertainty using optical design software

The use of the prism leads to strong pupil aberrations. Thus, performing forward ray tracing through the optical system requires iterative calculations to find, for each object point, the chief ray which passes exactly through the center of the aperture stop (Fig. 2). To implement this in Zemax optical design software, the "ray aiming" feature should be switched on [6]. These iterative calculations are time-consuming and slow down the optical system optimization. We can avoid them at the initial design stages if we reverse the optical system. This suits the uncertainty estimation method described above well, because the method mainly requires ray tracing from specified image points into the object space. When we still need to trace a ray from a specified object point, we can set up the field points as "Real image height" for the reversed optical system in Zemax. Reversing the system also allows faster visualization of the whole working volume.

To estimate the 3D measurement uncertainty for evenly distributed points in the object space, it is necessary to carry out the procedures listed below for several distances z from the object plane to the first surface of the optical system. The distances z are incremented from zmin to zmax with a 1 mm step. Although we consider the reversed optical system in this section, we still refer to the object plane, the image plane and the first surface as presented in the conventional configuration shown in Fig. 2.

  • 1. Trace chief rays from the corner points in the image plane (points 3, 4 and 6 in Fig. 2) to the object plane to estimate the linear FOV in this plane.
  • 2. Define a grid of points with 3D coordinates x distributed in the object plane with a 1 mm step inside a region slightly larger than the estimated linear FOV. Trace a chief ray to each point x through the left and right parts of the prism (setting up the field points as "Real image height") and save the 2D coordinates of the points p1 and p2 in the image plane. Select the points x whose corresponding points p1 and p2 lie within the sensor area.
  • 3. For each selected point x and the corresponding vector ${{\mathbf p}_\textrm{u}} = {({{{\mathbf p}_1}^T,{{\mathbf p}_2}^T} )^T}$, define the set of vectors ${\mathbf{\tilde{p}}_{\textrm{u,}i}}$. Trace chief rays from the specified image points and find the mean values $M [{\hat{{\mathbf x}}} ]$ and the covariance matrices $\textrm{Cov}[{\mathbf{\hat{x}}} ]$ according to the method described above.
  • 4. Use the calculated mean values $\textrm{M}[{{{\mathbf{\hat{x}}}_1}} ]$, $\textrm{M}[{{{\mathbf{\hat{x}}}_2}} ]$ and covariance matrices $\textrm{Cov}[{{{\mathbf{\hat{x}}}_1}} ]$, $\textrm{Cov}[{{{\mathbf{\hat{x}}}_2}} ]$ for neighboring points of the grid to estimate the 95% intervals for the measured lengths of segments aligned with the Ox and Oy axes. Use the results of the calculation for the previous distance z to estimate the intervals for segments oriented along the Oz axis.
This algorithm has been implemented in Zemax optical design software using macros written in the Zemax Programming Language and external DLLs. In this research, we mainly used this program to visualize the distribution of the 3D measurement uncertainty across the working volume. We have also implemented similar macros which may be used as optimization operands, calculating $M [{\hat{{\mathbf x}}} ]$ and $\textrm{Cov}[{\mathbf{\hat{x}}} ]$ or the 95% intervals for segments for the input 3D coordinates x or 2D coordinates p1 and distance z.
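For step 3, the pieces sketched in Section 4 can be combined as shown below. Here trace_ray stands for whatever interface the optical design software exposes for tracing a chief ray from an image point into the object space (it is a placeholder, not a Zemax function), while triangulate_midpoint and unscented_transform are the sketches given earlier:

```python
import numpy as np

def estimate_point_uncertainty(trace_ray, p_u, sigma_p):
    """Mean and covariance of the triangulated 3D point for one grid point (step 3).

    trace_ray(p) must return (origin, direction) of the object-space chief ray
    traced from the 2D image point p in the reversed optical system.
    """
    def image_points_to_3d(pu):
        o1, d1 = trace_ray(pu[:2])          # ray for the point in the left half-image
        o2, d2 = trace_ray(pu[2:])          # ray for the point in the right half-image
        return triangulate_midpoint(o1, d1, o2, d2)

    cov_pu = sigma_p ** 2 * np.eye(4)       # Cov[p_u] = sigma_p^2 * Id_4x4
    return unscented_transform(image_points_to_3d, p_u, cov_pu)
```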

6. Numerical examples

To demonstrate the proposed technique, we analyzed three optical systems based on the layout shown in Fig. 2 with different angles θ: 26.6° (semi-divergent), 31° (divergent) and 35° (divergent, the same as in the developed prototype). The parameters of the sensor were the same as in the prototype. The material of the prism was BAK4 with refractive index nd = 1.568 and Abbe number Vd = 56.13. The radii and central thicknesses of the lenses as well as the air gaps were optimized in Zemax for each value of θ using the default merit function based on the RMS spot radius for the 6 image points (Fig. 2) and 3 wavelengths: 486.3 nm, 587.6 nm and 656.3 nm. The distance from the first surface to the object plane was equal to 15 mm. In this example, the diameter of the aperture stop was set equal to 0.8 mm. The values of ζ1 and ζ2 corresponding to the specified θ and the initial specification of the main lens were set as hard constraints during the optimization. These angles were calculated as the angles of the chief rays for points 1 and 3 in the image plane.

The spot diagrams for points 1 and 4 are shown in the upper row of Fig. 5. The chromatic aberrations of the prism cannot be compensated by the aberrations of the main lens at these points and, therefore, reach significant values. The Airy disk is shown in black. The estimation of 3D measurement uncertainty has been done for 587.6 nm. The results for the measured coordinates $\hat{{\mathbf x}}$ of 3D points are presented in the lower row of Fig. 5 as a grid of points in the xOz plane for $\textrm{Cov}[{{{\mathbf p}_\textrm{u}}} ]= \sigma _\textrm{p}^2{\mathbf I}{{\mathbf d}_{4 \times 4}}$, where Id4×4 is the 4×4 identity matrix and σp = 0.1 pixel. The position and orientation of the coordinate system (CS) are in agreement with Fig. 2. We should note that the calculations were performed for 3D points distributed over the whole working volume, but Fig. 5 shows only the points in the xOz plane. The ellipses representing the covariance matrix $\textrm{Cov} [{\hat{{\mathbf x}}} ]$ in the xOz plane are shown in black (magnified 4 times). The values σm (the square root of the maximal eigenvalue of $\textrm{Cov} [{\hat{{\mathbf x}}} ]$) are represented by color circles according to the color bar in the right part of the figure. These values are proportional to the size of the covariance ellipsoids along their major axes. For better visual comparison, Fig. 6 shows the values of σm calculated for x = 1 mm. The estimated uncertainties of the measured segment lengths aligned with the Ox and Oz axes are presented in Fig. 7 as the widths $({{{\hat{r}}^ + } - {{\hat{r}}^ - }} )$ of the 95% intervals. The nominal segment length is 1 mm.
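For reference, a minimal sketch of how σm and the xOz projection plotted in Fig. 5 follow from Cov[x̂] (the coordinate ordering (x, y, z) is assumed):

```python
import numpy as np

def sigma_m(cov_x):
    """Square root of the maximal eigenvalue of Cov[x_hat] (color scale in Figs. 5 and 6)."""
    return np.sqrt(np.linalg.eigvalsh(cov_x)[-1])   # eigvalsh returns eigenvalues in ascending order

def cov_xoz(cov_x):
    """2x2 covariance of (x, z), i.e. the xOz projection used for the ellipses in Fig. 5."""
    return cov_x[np.ix_([0, 2], [0, 2])]
```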

Fig. 5. The spot diagrams for points 1 and 4 in the image plane calculated for 486.3 nm (blue), 587.6 nm (green) and 656.3 nm (red) wavelengths (upper row) and the projections of covariance ellipsoids (magnified 4 times) for measured 3D point coordinates $\hat{{\mathbf x}}$ corresponding to 95% probability onto xOz plane (lower row). Airy disk is shown in black color. The centers of the circles correspond to nominal values of 3D coordinates x.

Fig. 6. The dependencies of square root σm of maximal eigenvalue of ${\rm{Cov}} [{\hat{{\mathbf x}}} ]$ on distance z for x = 1 mm and different angles θ of the prism.

Fig. 7. The widths $({{{\hat{r}}^ + } - {{\hat{r}}^ - }} )$ of 95% intervals of measured segment lengths $\hat{r}$ aligned along Ox (a) and Oz (b) axes. The centers of color circles correspond to the midpoints of segments (a) or the farthest points of segments (b).

The results presented in Fig. 5 indicate that increasing the angle θ from 26.6° to 31° allows expansion of the working volume and reduction of the measurement uncertainty, but leads to increased chromatic aberrations. However, a further increase of θ from 31° to 35° shrinks the common FOV at distances larger than 16 mm and only slightly reduces the measurement error, with a noticeable image quality degradation. Thus, we can find the optimal value of θ, beyond which a further increase becomes impractical.

The same values of angles ζ1 and ζ2 can be obtained with different combinations of n and θ, which correspond to different chromatic aberrations introduced by the prism. We analyzed 3 optical systems based on the layout shown in Fig. 2 with different materials of the prism: N-FK56, BAK4 and P-SF68. The systems were optimized as in the previous case. The values of ζ1 and ζ2 corresponding to θ = 29.4° for BAK4 were set as hard constraints. The spot diagrams and the estimated uncertainties of 3D point coordinates are shown in Fig. 8.

Fig. 8. The spot diagrams for points 1 and 4 in the image plane calculated for 486.3 nm (blue), 587.6 nm (green) and 656.3 nm (red) wavelengths (upper row) and the projections of covariance ellipsoids (magnified 4 times) for measured 3D point coordinates $\hat{{\mathbf x}}$ corresponding to 95% probability onto xOz plane (lower row). Airy disk is shown in black color. The centers of the circles correspond to nominal values of 3D coordinates x.

Using P-SF68, with its higher refractive index and small Abbe number, allows assigning a smaller θ and leads to smaller monochromatic aberrations and larger chromatic ones compared to BAK4. Using N-FK56 requires a larger angle θ and leads to a decrease of the chromatic aberrations. The estimated 3D measurement uncertainty for N-FK56 and BAK4 is approximately the same and is slightly smaller for P-SF68.

We have demonstrated that the proposed technique allows estimating the uncertainty of 3D point coordinates and segment lengths using the optical design software. According to our analysis, the common FOV of the prototype can be increased and the chromatic aberrations can be reduced without a significant loss in 3D measurement accuracy. Since the prism parameters are mainly determined by the requirements for the common FOV, we can propose the following design strategy. At the stage of initial geometric analysis, we set the values of ζ1 and ζ2 that provide the required FOV and choose the prism parameters and the FOV of the main lens to satisfy these requirements. Next, we estimate the 3D measurement uncertainty. If the results are appropriate, the optical system is optimized to reduce aberrations using the values of ζ1 and ζ2 as hard constraints. As we can see from our analysis, the 3D measurement uncertainty does not change significantly if the angles ζ1 and ζ2 are fixed. After the optimization, we check the 3D measurement accuracy and repeat the last two steps if needed. As a result, we can combine aberration analysis and measurement uncertainty analysis in optical design software. The same technique can potentially be used for optimization driven by the values of 3D measurement uncertainty, but for the analyzed optical system, setting the values of ζ1 and ζ2 as hard constraints is sufficient. Although the prism was symmetric about the optical axis in the calculations of this section, the proposed technique is applicable without this limitation.

7. Experiments

We have conducted experiments with the prototype of the prism-based stereoscopic system to estimate the 3D measurement uncertainties and compare them with the results of calculations using the optical design software. The material of the prism is BAK4 and the angle θ is 35°, so the prototype corresponds to the modelling example in the right column of Figs. 5 and 7. We used the ray tracing camera model and the calibration technique with a precise shift of a flat calibration target [16]. The calibration target with a chessboard pattern was produced by chrome etching on glass with an inaccuracy of about 1 µm. The chessboard square size was 1 mm. To capture images for calibration and measurements, we used a linear translation stage to provide a precise shift of the calibration target along the z axis. A diffusing glass was set as the background for the calibration target and illuminated by a white LED. Figure 9 shows our calibration setup and a few images of the calibration target.

Fig. 9. Calibration setup (left) and images of the calibration target (right).

The calibration sequence consisted of two positions with a relative shift of 10 mm. To acquire the test sequence, the calibration target was shifted over 10 mm with a 1 mm step. We captured 100 images at each position. All images were processed automatically to extract the coordinates of the chessboard corners using the sub-pixel corner finder algorithm based on the Harris corner detector [20], to match image points with points on the calibration target, and, therefore, to find corresponding chessboard corners in the left and right parts of the images. The 2D coordinates of the chessboard corners were averaged over the 100 images and used to find the camera model parameters [16]. These parameters were then used to calculate 3D coordinates for each chessboard corner in the images of the test sequence.

We calculated the mean values and covariance matrices of the vectors ${{\mathbf p}_\textrm{u}} = {({{{\mathbf p}_1}^T,{{\mathbf p}_2}^T} )^T}$ of 2D point coordinates for each chessboard corner visible in both image parts and for each position of the calibration target in the test sequence, using the series of 100 images. Then we estimated the value of σp for each chessboard corner as the RMS of the standard deviations of the coordinates u1, v1, u2 and v2. The results are shown in Fig. 10(a). The values of σp are approximately 2 times lower than the value used for the computer simulation. Next, we calculated the mean values and covariance matrices of the measured 3D point coordinates $\hat{{\mathbf x}}$ for each chessboard corner. The ellipses representing the covariance matrix $\textrm{Cov} [{\hat{{\mathbf x}}} ]$ in the xOz plane and the values σm of the square root of the maximal eigenvalue of ${\rm{Cov}} [{\hat{{\mathbf x}}} ]$ are presented in Fig. 10(b).
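A short sketch of this processing step for one chessboard corner (assuming its detected coordinates over the series are stacked into a 100 × 4 array of (u1, v1, u2, v2)):

```python
import numpy as np

def sigma_p_from_series(pu_series):
    """RMS of the standard deviations of u1, v1, u2, v2 over a series of images."""
    stds = pu_series.std(axis=0, ddof=1)      # std of each coordinate over the series
    return np.sqrt(np.mean(stds ** 2))

def mean_and_cov(pu_series):
    """Mean vector and covariance matrix of p_u used in the uncertainty estimation."""
    return pu_series.mean(axis=0), np.cov(pu_series, rowvar=False)
```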

Fig. 10. The estimated standard deviations σp of 2D point coordinates of chessboard corners on the images (a). The projections of covariance ellipsoids for measured 3D point coordinates $\hat{{\mathbf x}}$ corresponding to 95% probability onto xOz plane (black lines, magnified 4 times) and the values σm of square root of maximal eigenvalue of $Cov [{\hat{{\mathbf x}}} ]$ (color circles) (b). The widths $({{{\hat{r}}^ + } - {{\hat{r}}^ - }} )$ of 95% intervals of measured segment lengths $\hat{r}$ aligned with Ox (c) and Oz (d) axes. The centers of color circles correspond to the mean values of measured 3D coordinates $\hat{{\mathbf x}}$ of chessboard corners (a,b), the midpoints of segments (c) or the farthest points of segments (d).

We should note that the calculated 3D coordinates were originally obtained in the camera CS [16]. Its origin is purely virtual and is defined as a result of calibration. Thus, this CS is different from the CS used for the computer simulation, whose origin was set at the center of the aperture stop. To make the z coordinates in Fig. 10 comparable to the previous figures, we applied an approximate transformation based on the measured distance from the calibration target to the first surface of the optical system and the nominal distance from the first surface to the aperture stop.

The calibration target was oriented so that its grid lines appear approximately horizontal in the images. We used the distance between chessboard nodes (the nominal value is 1 mm) to measure the segment lengths aligned along the Ox and Oy axes. Points for measuring the segments along the Oz axis were taken from the previous image of the sequence, i.e. when the calibration target was shifted by 1 mm. The estimated uncertainties of the measured segment lengths aligned along the Ox and Oz axes are presented in Figs. 10(c) and (d) as the widths $({{{\hat{r}}^ + } - {{\hat{r}}^ - }} )$ of the 95% intervals. The experimentally measured segments are not exactly aligned with the Ox, Oy and Oz axes in any CS, but we assume this data is sufficient to analyze the measurement uncertainty for different segment orientations and to compare it with the computer simulation results.

We analyzed the 3D measurement precision and compared the results with the results obtained using the optical design software. We did not compare the measurement accuracy because it depends on the camera model, the calibration procedure [15,16], and other factors that have not been considered in our computer simulation. We should note that the FOV in Fig. 10 is narrower than the FOV in Figs. 5–8 because chessboard corners near the edges of the half-images were not detected.

8. Conclusion

We have addressed the development pipeline of prism-based stereoscopic imaging systems. We have demonstrated that it may be optimized if the optical design stage includes 3D measurement error estimation. For this purpose, a simplified implementation of such a joint design approach was developed using optical design software. We have presented a technique for analyzing the 3D distribution of the measurement errors, which allows calculating uncertainties for any 3D point within the working volume and may be integrated into this software for the optimization of the optical system performance. Thus, the 3D measurement error can now be embedded into the merit function for the imager optimization. We believe that the joint approach to the design of stereoscopic systems may enable a significant increase in measurement accuracy in comparison with the conventional two-stage technique, as well as a simplification of the optical system.

Funding

Ministry of Science and Higher Education of the Russian Federation (0069-2019-0010).

Disclosures

The authors declare no conflicts of interest.

References

1. J. N. Mait, G. W. Euliss, and R. A. Athale, “Computational imaging,” Adv. Opt. Photonics 10(2), 409–483 (2018). [CrossRef]  

2. D. G. Stork and M. D. Robinson, “Theoretical foundations for joint digital-optical analysis of electro-optical imaging systems,” Appl. Opt. 47(10), B64–B75 (2008). [CrossRef]  

3. D. G. Stork, “Toward a signal-processing foundation for computational sensing and imaging: Electro-optical basis and merit functions,” APSIPA Trans. on Signal and Inform. Process. 6, E8 (2017). [CrossRef]  

4. C. Zhou and S. K. Nayar, “Computational cameras: convergence of optics and processing,” IEEE Trans. on Image Process. 20(12), 3322–3340 (2011). [CrossRef]  

5. N. Pears, Y. Liu, and P. Bunting, 3D Imaging, Analysis and Applications (Springer-Verlag, 2012), Chap. 2.

6. H. Sun, Lens Design: A Practical Guide (CRC Press, 2016).

7. D. Scharstein, R. Szeliski, and R. Zabih, Middlebury Stereo Vision Page. URL: http://vision.middlebury.edu/stereo/ (online; accessed: 2020-06-08)

8. A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, The KITTI Vision Benchmark Suite: Stereo Evaluation 2015. URL: http://www.cvlibs.net/datasets/kitti/eval_scene_flow.php?benchmark=stereo (online; accessed: 2020-06-08)

9. L. Yu and B. Pan, “Structure parameter analysis and uncertainty evaluation for single-camera stereo-digital image correlation with a four-mirror adapter,” Appl. Opt. 55(25), 6936–6946 (2016). [CrossRef]  

10. K. B. Lim and Y. Xiao, “Virtual stereovision system: New understanding on single-lens stereovision using a biprism,” J. Electron. Imaging 14(4), 043020 (2005). [CrossRef]  

11. W. L. Kee, Y. Bai, and K. B. Lim, “Parameter error analysis of single-lens prism-based stereovision system,” J. Opt. Soc. Am. A 32(3), 367–373 (2015). [CrossRef]  

12. W. L. Kee, K. B. Lim, Z. L. Tun, and B. Yading, “New understanding on the effects of angle and position of biprism on single-lens biprism stereovision system,” J. Electron. Imaging 23(3), 033005 (2014). [CrossRef]  

13. X. Cui, Y. Zhao, K. Lim, and T. Wu, “Perspective projection model for prism-based stereovision,” Opt. Express 23(21), 27542–27557 (2015). [CrossRef]  

14. J. E. Farrell, P. B. Catrysse, and B. A. Wandell, “Digital camera simulation,” Appl. Opt. 51(4), A80–A90 (2012). [CrossRef]  

15. A. V. Gorevoy, A. S. Machikhin, V. I. Batshev, and V. Y. Kolyuchkin, “Optimization of stereoscopic imager performance by computer simulation of geometrical calibration using optical design software,” Opt. Express 27(13), 17819–17839 (2019). [CrossRef]  

16. A. V. Gorevoy, A. S. Machikhin, D. D. Khokhlov, and V. I. Batshev, “Modeling and optimization of a geometrical calibration procedure for stereoscopic video endoscopes,” J. Opt. Soc. Am. A 36(11), 1871–1882 (2019). [CrossRef]  

17. S. J. Julier, “The scaled unscented transformation,” Proc. 2002 Am. Control Conf. 6, 4555–4559 (2002). [CrossRef]  

18. A. V. Gorevoy, V. Y. Kolyuchkin, and A. S. Machikihin, “Estimation of the geometrical measurement error at the stage of stereoscopic system design,” Comput. Opt. 42(6), 985–997 (2018). [CrossRef]  

19. H. Liu, Y. Tang, and H. H. Zhang, “A new chi-square approximation to the distribution of non-negative definite quadratic forms in non-central normal variables,” Comput. Stat. Data Anal. 53(4), 853–856 (2009). [CrossRef]  

20. Y. Bok, H. Ha, and I. S. Kweon, “Automated checkerboard detection and indexing using circular boundaries,” Pattern Recognit. Lett. 71, 66–72 (2016). [CrossRef]  
