
Heterogeneous compound eye camera for dual-scale imaging in a large field of view

Open Access

Abstract

Multi-scale imaging with a large field of view is pivotal for fast motion detection and target identification. However, it is difficult for existing single-camera systems to achieve snapshot multi-scale imaging over a large field of view. To solve this problem, we propose a design method for a heterogeneous compound eye and fabricate a prototype heterogeneous compound eye camera (HeCECam). The prototype, which consists of a heterogeneous compound eye array, an optical relay system, and a CMOS sensor, is capable of dual-scale imaging over a large field of view (360°×141°). The heterogeneous compound eye array is composed of 31 wide-angle (WA) subeyes and 226 high-definition (HD) subeyes. An optical relay system is introduced to re-image the curved focal surface formed by the heterogeneous compound eye array onto a CMOS sensor, resulting in a heterogeneous compound eye image containing dual-scale subimages. To verify the imaging characteristics of the prototype, a series of experiments was conducted, including large-field-of-view imaging, imaging-performance tests, and real-world scene imaging. The results show that the prototype achieves dual-scale imaging over a large field of view with excellent imaging performance. These capabilities give the HeCECam great potential for UAV navigation, wide-area surveillance, and location tracking, and pave the way for the practical use of bio-inspired compound eye cameras.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

There is a trade-off between field of view and resolution, because an imaging system offering both a large field of view and high resolution is difficult to design. Many researchers are dedicated to finding an optimal solution for large-field-of-view, high-resolution imaging. The insect vision system, which features many excellent characteristics including a large field of view, low aberration, high sensitivity to moving objects, and infinite depth of field, provides a wealth of inspiration [1–7]. Many kinds of artificial compound eye systems have been proposed for different application needs, such as small size [3,6,8,9], 3D imaging [10–12], light field imaging [13,14], medical and biological imaging [15–17], 3D display [18], navigation [19], and large field of view imaging [6,8,20–23].

Unlike the human eye, the insect eye is a compound vision system that consists of many ommatidia arranged on a curved surface. This multi-aperture arrangement provides more scalability, allowing subeyes with different focal lengths to coexist in the vision system. In recent years, a number of multi-focal compound eye arrays have been proposed to solve problems that are difficult to overcome with single-aperture systems or uniform array systems. For example, Guo and his team fabricated a multi-focusing compound eye [24,25], a non-uniform micro-lens array on a negative meniscus substrate produced by the melting-photoresist method. In this multi-focusing compound eye, the focal length of the subeyes varies with the curvature of the substrate, which effectively reduces the defocus of the limbic ommatidia on a planar detector. Similarly, in 2020 Wu et al. fabricated a multi-focal compound eye by combining photolithography, hot embossing, soft photolithography, and gas-assisted deformation techniques [26]. In 2022, a dual-focal artificial compound eye was reported by Huang et al. to achieve dual-scale varifocal imaging in a large field of view [27]. Primary and secondary micro-lenses of two different sizes expanded the depth of field, facilitating continuous clear imaging over a wider range.

Although the multi-focal micro-lens arrays described above mitigate the mismatch between a curved array and a planar sensor or extend the depth of field, practical long-distance high-resolution imaging still requires systems with sufficiently large apertures. Single-scale multi-camera arrays or multi-aperture systems are capable of large-field-of-view, high-resolution imaging, but they are often bulky and data-intensive, making real-time detection and identification difficult. Because effective information is sparse and non-uniformly distributed, acquiring and processing all information indiscriminately wastes enormous computational resources [1,28–30].

Multi-scale imaging with non-uniform apertures is expected to overcome this problem. An imaging system consisting of a stationary camera and an active pan-tilt-zoom camera has been demonstrated to monitor an entire scene while obtaining high-resolution images of a region of interest [31]. The stationary camera with low resolution detects and locates moving objects over a wide angular range. The target position is then used to steer the pan-tilt camera, which captures the region of interest at higher resolution. Such active optics based on dual-scale imaging has great potential for surveillance applications. In 2021, Dai et al. proposed a modular hierarchical array camera, inspired by the human nervous system, with high flexibility, real-time performance, and robustness [32]. Through the cooperation of global and local cameras, active local high-resolution imaging of the region of interest is performed, significantly reducing the bandwidth and computational requirements for real-time gigapixel videography. Furthermore, the multi-resolution images captured by non-uniform aperture arrays are essential in applications such as stereo ranging and target tracking. In 2022, a miniaturized six-channel multi-resolution imaging system, which obtains six sub-images at different scales simultaneously on a single detector, was fabricated by Belay and his team. This system achieved a magnification ratio of up to 10 times between the six channels and estimated object distances from 1 m to 100 m [33]. In addition, as early as 2012, a three-channel multi-resolution imaging system had been proposed by Belay's research group [34,35], and a dual-scale refocusing imaging system using a tunable liquid lens was designed by Smeesters et al. in 2014 [36].

Therefore, multi-scale imaging with non-uniform aperture systems offers unique properties and important applications in many fields, but most such systems remain bulky [32,33]. We attempt to develop an integrated multi-scale imaging system with a large field of view based on the bio-inspired compound eye. Such a system offers the potential for active, selective local high-resolution imaging within a large field of view and greatly improves flexibility and real-time performance. In our previous work, a bionic spherical compound eye camera [23] was realized by introducing an optical relay system to solve the mismatch between the curved image and the planar sensor. In this work, we propose a design method for a heterogeneous compound eye array and fabricate a prototype HeCECam whose maximum radial diameter and axial length are 100 mm and 182 mm, respectively. The prototype, consisting of a heterogeneous compound eye array, an optical relay system, and a CMOS sensor, is capable of dual-scale imaging over a large field of view (360°×141°). The heterogeneous compound eye array is composed of 31 WA subeyes and 226 HD subeyes. The WA subeyes capture an ultra-wide field at low resolution, which is a great advantage for target detection and real-time localization. The HD subeyes allow the system to observe local details in any region of the field. An optical relay system is introduced to re-image the curved focal surface formed by the heterogeneous compound eye array onto a CMOS sensor, resulting in a heterogeneous compound eye image containing dual-scale subimages. This compact dual-scale imaging system with a large field of view has great potential for UAV navigation, wide-area surveillance, and location tracking, and paves the way for the practical use of bio-inspired compound eye cameras.

2. Methods and materials

2.1. Design method of heterogeneous compound eye array

The heterogeneous compound eye array is the core component of the HeCECam and is responsible for collecting dual-scale information about the scene. The distribution of subeyes with different parameters in the heterogeneous compound eye array directly affects the output compound eye image. To design an appropriate heterogeneous compound eye array, we require that the parameters and distribution of the subeyes satisfy the following four constraints: 1) the compound eye array should contain at least two types of subeyes with different parameters; 2) the total field of view of each type of subeye should cover the entire detection space; 3) the different types of subeyes should image onto the same focal surface when an optical relay system is adopted; and 4) each subeye in the array should work without occlusion. Based on these constraints, the main parameters are derived below for an array containing two types of subeyes. Figure 1 shows schematic diagrams of a general compound eye array and the heterogeneous compound eye array.

Fig. 1. Main parameters in general compound eye array (a) and heterogeneous compound eye array (b). (c) Schematic diagram of sectional structure of heterogeneous compound eye array.


Figure 1(a) shows the main parameters of a general compound eye array. The diameter of each subeye, the radius of the array, the field of view of each subeye, and the inter-ommatidia angle are represented by d, R, $\Delta \varphi$ and $\Delta \phi$, respectively. The fields of view of adjacent subeyes overlap when $\Delta \varphi$ is greater than $\Delta \phi$. $\alpha$ is the angular extent of the overlap region and D is the minimum working distance; a blind area appears when the working distance is less than D. The minimum working distance D can be calculated by:

$$D = \frac{{R\sin \left( {\frac{{\Delta \varphi }}{2}} \right) - \frac{d}{2}\cos \left( {\frac{{\Delta \varphi }}{2}} \right)}}{{\sin \left( {\frac{\alpha }{2}} \right)}} - R,$$
where $\alpha = \Delta \varphi - \Delta \phi$. Further, D is jointly determined by d, R, $\Delta \varphi$ and $\Delta \phi$, and
$$\frac{d}{R} < \Delta \phi < \Delta \varphi .$$
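
As a concrete illustration of Eqs. (1) and (2), the short Python sketch below (our illustration, not code from the paper) evaluates D for an example parameter set; the numbers used are the HD-subeye values adopted later in Section 2.2.

```python
# Sketch of Eq. (1): minimum working distance D of a compound eye array
# (our illustration; the example values are the HD-subeye parameters of Section 2.2).
import math

def min_working_distance(d, R, fov, inter):
    """Eq. (1); angles in radians. Eq. (2) requires d/R < inter < fov."""
    alpha = fov - inter                                  # angular overlap of adjacent subeyes
    num = R * math.sin(fov / 2) - (d / 2) * math.cos(fov / 2)
    return num / math.sin(alpha / 2) - R

# Example: d = 5 mm, R = 56 mm, 15 deg subeye field of view, 14 deg inter-ommatidia angle
D = min_working_distance(5.0, 56.0, math.radians(15), math.radians(14))
print(f"D = {D:.0f} mm")                                 # ~498 mm, close to the 500 mm D_admit used later
```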

Figure 1(b) shows the main parameters of the heterogeneous compound eye array. The heterogeneous compound eye array is composed of two types of subeyes, WA subeyes and HD subeyes. To make the imaging range of each type of subeye array cover the entire detection space evenly, the spatial distribution ratio of WA subeyes to HD subeyes is set to the reciprocal of the ratio of their field angles. Similar to the general compound eye array, the main parameters of the heterogeneous compound eye array are (${d_1}$, ${d_2}$, $\Delta {\varphi _1}$, $\Delta {\varphi _2}$, ${R_1}$, ${R_2}$, ${f_1}$, ${f_2}$), where subscripts 1 and 2 denote the WA subeyes and HD subeyes, respectively. The difference is that three inter-ommatidia angles ($\Delta {\phi _1}$, $\Delta \phi _2^\prime$, $\Delta {\phi _2}$) exist, since the two types of subeyes are not uniformly interleaved. $\Delta {\phi _1}$ and $\Delta {\phi _2}$ are the inter-ommatidia angles of the WA subeyes and HD subeyes, and $\Delta \phi _2^\prime$ is the angle between two HD subeyes separated by a WA subeye.

Figure 1(c) shows the sectional structure of the heterogeneous compound eye array. Because of this particular structure, a minimum value exists for $\Delta \phi _2^\prime$, denoted $\Delta \phi _{2\textrm{ limit }}^\prime$. It can be calculated by:

$$\Delta \phi _{2\textrm{ limit }}^\prime = \left[ {\frac{{{d_1}}}{{{R_1}}} + \frac{{{d_2}}}{{{R_2}}} + \frac{{2\Delta h \cdot \tan \left( {\frac{{\Delta {\varphi_1}}}{2}} \right)}}{{{R_2}}}} \right],$$
where $\Delta h = {R_2} - {R_1}$. Considering the constraint relationship between field of view and the inter-ommatidia angle, $\Delta \phi _2^\prime$ is shown as:
$$\Delta \phi _{2\textrm{ limit }}^\prime < \Delta \phi _2^\prime < \Delta {\varphi _2}.$$

Assuming that the minimum working distance of the camera is ${D_{admit}}$, the minimum working distances of the WA subeyes and HD subeyes are related by:

$${D_1} = D_2^\prime + \Delta h\textrm{ = }{D_{admit}} + \Delta h.$$

Based on the above formula, $\Delta {\phi _1}$ can be calculated by:

$$\sin \left( {\frac{{{\alpha_1}}}{2}} \right) = \frac{{\left[ {{R_1}\sin \left( {\frac{{\Delta {\varphi_1}}}{2}} \right) - \frac{{{d_1}}}{2}\cos \left( {\frac{{\Delta {\varphi_1}}}{2}} \right)} \right] \cdot \sin \left( {\frac{{\Delta {\varphi_2} - \Delta \phi_2^\prime }}{2}} \right)}}{{{R_2}\sin \left( {\frac{{\Delta {\varphi_2}}}{2}} \right) - \frac{{{d_2}}}{2}\cos \left( {\frac{{\Delta {\varphi_2}}}{2}} \right)}},$$
where ${\alpha _1} = \Delta {\varphi _1} - \Delta {\phi _1}$. N, the number of HD subeyes distributed between two adjacent WA subeyes, must be a positive integer and is bounded as follows:
$$\frac{{\Delta {\phi _1}}}{{\Delta \phi _2^\prime }} \le N \le 1 + \frac{{\Delta {\phi _1} - \Delta \phi _2^\prime }}{\gamma },$$
where $\gamma = {d_2}/{R_2}$. The specific value of $\Delta {\phi _2}$ can be obtained by:
$$\Delta {\phi _2} = \frac{{\Delta {\phi _1} - \Delta \phi _2^\prime }}{{N - 1}},N > 1.$$
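
To make the relations above easier to reuse, the following Python sketch collects Eqs. (3) and (6)–(8) as helper functions. This is our reading of the formulas rather than the authors' code; the function and variable names are ours, with angles in radians and lengths in millimetres.

```python
# Helper functions for the heterogeneous-array relations (our reading of Eqs. (3), (6)-(8)).
# Subscript 1 = WA subeyes, 2 = HD subeyes; angles in radians, lengths in mm.
import math

def dphi2p_limit(d1, R1, d2, R2, fov1):
    """Eq. (3): minimum angle between two HD subeyes separated by a WA subeye."""
    dh = R2 - R1
    return d1 / R1 + d2 / R2 + 2 * dh * math.tan(fov1 / 2) / R2

def wa_inter_ommatidia(d1, R1, fov1, d2, R2, fov2, dphi2p):
    """Eq. (6): returns dphi1 = fov1 - alpha1 for a chosen dphi2'."""
    num = (R1 * math.sin(fov1 / 2) - d1 / 2 * math.cos(fov1 / 2)) * math.sin((fov2 - dphi2p) / 2)
    den = R2 * math.sin(fov2 / 2) - d2 / 2 * math.cos(fov2 / 2)
    return fov1 - 2 * math.asin(num / den)

def n_bounds(dphi1, dphi2p, d2, R2):
    """Eq. (7): admissible range of N, the number of HD subeyes between adjacent WA subeyes."""
    gamma = d2 / R2
    return dphi1 / dphi2p, 1 + (dphi1 - dphi2p) / gamma

def hd_inter_ommatidia(dphi1, dphi2p, N):
    """Eq. (8): inter-ommatidia angle of the HD subeyes (valid for N > 1)."""
    return (dphi1 - dphi2p) / (N - 1)
```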

2.2. Optical design

The two types of subeyes in the heterogeneous compound eye array image onto the same focal surface, and an optical relay system re-images this curved image onto a planar CMOS sensor. Based on the design method above, the main parameters of the heterogeneous compound eye array were determined and an optical relay system was designed. The radii of the two subeye arrays are 44.5 mm (${R_1}$) and 56 mm (${R_2}$), respectively. The diameters of the WA subeyes and HD subeyes are 2.5 mm (${d_1}$) and 5 mm (${d_2}$). The fields of view of the WA and HD subeyes are 26° ($\Delta {\varphi _1}$) and 15° ($\Delta {\varphi _2}$), and their focal lengths are 3.77 mm (${f_1}$) and 15 mm (${f_2}$), respectively. The minimum working distance was set to 500 mm (${D_{admit}}$). The allowable range of the inter-ommatidia angle $\Delta \phi _2^\prime$ then follows from formula (4) together with the working-distance constraint:

$$13.767\mathrm{^\circ } < \Delta \phi _2^\prime < 14.004\mathrm{^\circ }\textrm{.}$$

By formula (6), $\Delta {\phi _1}$ is calculated to be about 24.3°. In our design, $\Delta \phi _2^\prime$ and $\Delta {\phi _1}$ were chosen as 14° and 21°, respectively. The number N of HD subeyes between two adjacent WA subeyes is then determined to be 2 by formula (7), and $\Delta {\phi _2}$ is calculated to be 7° by formula (8).
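
These design values can be reproduced with a short, self-contained numerical check (our sketch, not the authors' code). Note that the 14.004° upper bound in Eq. (9) is reproduced here by applying Eq. (1) to the HD subeyes with $D = {D_{admit}} = 500$ mm, which is our interpretation of how that bound arises.

```python
# Self-contained numerical check of the design values in this subsection (our sketch).
import math
sin, cos, tan, asin = math.sin, math.cos, math.tan, math.asin
rad, deg = math.radians, math.degrees

d1, d2, R1, R2 = 2.5, 5.0, 44.5, 56.0        # subeye diameters and array radii, mm
fov1, fov2 = rad(26), rad(15)                # WA and HD subeye fields of view
D_admit = 500.0                              # minimum working distance, mm

# Eq. (3): lower bound of dphi2'
lim = d1 / R1 + d2 / R2 + 2 * (R2 - R1) * tan(fov1 / 2) / R2
# Eq. (1) applied to the HD subeyes with D = D_admit: upper bound of dphi2' (our reading)
alpha2 = 2 * asin((R2 * sin(fov2 / 2) - d2 / 2 * cos(fov2 / 2)) / (D_admit + R2))
print(f"{deg(lim):.3f} deg < dphi2' < {deg(fov2 - alpha2):.3f} deg")   # 13.767 ... 14.004, cf. Eq. (9)

# Eq. (6) with the chosen dphi2' = 14 deg
dphi2p = rad(14)
a1 = 2 * asin((R1 * sin(fov1 / 2) - d1 / 2 * cos(fov1 / 2)) * sin((fov2 - dphi2p) / 2)
              / (R2 * sin(fov2 / 2) - d2 / 2 * cos(fov2 / 2)))
print(f"dphi1 from Eq. (6): {deg(fov1 - a1):.1f} deg")                 # ~24.2 deg (text: about 24.3 deg)

# Eqs. (7)-(8) with the chosen dphi1 = 21 deg
dphi1 = rad(21)
print(f"{dphi1 / dphi2p:.2f} <= N <= {1 + (dphi1 - dphi2p) / (d2 / R2):.2f}")   # 1.50 ... 2.37 -> N = 2
print(f"dphi2 = {deg((dphi1 - dphi2p) / (2 - 1)):.1f} deg")                     # 7.0 deg
```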

Figure 2 shows the ray path of the HeCECam, which consists of WA subeyes, HD subeyes, and an optical relay system. Ten channels are included in the simulated optical path, comprising 4 WA subeyes and 6 HD subeyes. Partial enlarged views of the WA subeyes and HD subeyes are shown in the dotted circles. The optical relay system re-images the curved surface formed by the heterogeneous compound eye array onto a planar CMOS sensor. This resolves the mismatch between the curved image and the planar sensor and is expected to further improve the imaging quality. The optical relay system comprises 11 lenses in 7 groups; its paraxial magnification, focal length, and F-number are -0.143, 5.3 mm, and 2.91, respectively. It is a telecentric system that relays the curved image located 30 mm in front of it onto a plane.

Fig. 2. Optical design of the HeCECam. The curved image captured by two types of subeyes was re-imaged on a CMOS sensor by an optical relay system.


Figure 3 presents the performance analysis of this optical design. From left to right, the three columns show the spot diagrams, modulation transfer function (MTF) curves, and distortion. The four rows from top to bottom give the results for the central WA subeye, the marginal WA subeye, the central HD subeye, and the marginal HD subeye, respectively. The root-mean-square (RMS) spot radii of the central and marginal WA subeyes are less than 1.661 µm and 3.244 µm, respectively. The HD subeyes perform better, with RMS radii not exceeding 1.609 µm (central) and 2.533 µm (marginal). The RMS radii of all subeyes are smaller than the pixel size (4.5 µm) of the sensor adopted in the prototype, indicating that aberrations are well corrected in this design. The MTF is also an essential indicator for evaluating the image quality of optical systems. The Nyquist cutoff frequency is constrained by the pixel size of the detector, as given by:

$${f_{Nyquist}} = \frac{{1000}}{{2\varepsilon }},$$
where $\varepsilon$ is the pixel size of the sensor in µm. Hence, the cutoff frequency of this system is calculated to be 111 lp/mm. The second column in Fig. 3 shows the MTF of the central and marginal WA/HD subeyes. The MTF value of the central WA subeye reaches 0.6 at the Nyquist cutoff frequency, and that of the marginal WA subeye is 0.3. Although the MTF values of the HD subeyes are slightly lower, all remain above 0.1 at the cutoff frequency. The third column in Fig. 3 shows the distortion plots of the WA and HD subeyes. The curves show that the distortion of the HD subeyes is kept within 8%, and that of the WA subeyes does not exceed 14%. In summary, the optical design of the HeCECam provides excellent image quality.
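
For completeness, Eq. (10) for the 4.5 µm pixels of the adopted sensor can be evaluated directly (a trivial sketch of the same arithmetic):

```python
# Eq. (10): Nyquist cutoff frequency set by the 4.5 um pixel pitch of the sensor.
pixel_size_um = 4.5
f_nyquist = 1000.0 / (2.0 * pixel_size_um)   # lp/mm
print(f"{f_nyquist:.0f} lp/mm")              # ~111 lp/mm
```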

Fig. 3. Three columns from left to right show the spot diagrams, MTF curves and distortions, respectively. Four rows from top to bottom give the results analysis for the central WA subeye, the marginal WA subeye, the central HD subeye, and the marginal HD subeye, respectively.


2.3. Fabrication of the HeCECam prototype

Based on the above design method and optical design, a prototype of the HeCECam, consisting of a heterogeneous compound eye array, an optical relay system, and a CMOS sensor, was fabricated. Figures 4(a)-4(c) show photographs of the fabricated heterogeneous compound eye array, the optical relay system, and the CMOS sensor. The heterogeneous compound eye array contains two types of subeyes: 31 WA subeyes and 226 HD subeyes, both made of K9 glass. They are arranged on a hemispherical resin shell with a radius of 56 mm, fabricated by 3D printing. The surface of the hemispherical shell is painted black to suppress stray light.

Fig. 4. The prototype of HeCECam consists of three parts: (a) a heterogeneous compound eye array, (b) an optical relay system and (c) a CMOS sensor. (d) Assembled prototype of the HeCECam. (e) Schematic diagram of the distribution of WA subeyes and HD subeyes in the heterogeneous compound eye array.


The optical relay system plays a role similar to that of the rhabdom in the insect compound eye, which transmits light to the photoreceptors; it is also made mainly of K9 glass. An ON Semiconductor Python NOIP1XX016KA sensor (frame rate: 20 fps, pixel size: 4.5 µm, pixel resolution: 4096 × 4096) was adopted as the photoreceptor. Figure 4(d) shows the assembled prototype of the HeCECam, whose maximum radial diameter and axial length are 100 mm and 182 mm, respectively. The prototype, which contains two types of subeyes and covers a field of view of $360^\circ{\times} 141^\circ$, is capable of simultaneous dual-scale imaging.

Figure 4(e) illustrates the distribution of WA subeyes and HD subeyes in the heterogeneous compound eye array. All subeyes are distributed in concentric circles on the hemispherical shell. The HD subeyes, whose field of view is small, are densely distributed, whereas the WA subeyes are distributed more sparsely, appearing only at two intervals. The field-of-view overlap of the subeyes at a distance of 600 mm in three typical regions is given in the insets, where the colors indicate the number of overlapping fields of view. Partial field-of-view overlap exists between adjacent WA subeyes and between adjacent HD subeyes, ensuring that there are no blind spots in the detection range. In particular, the several HD subeyes surrounding a WA subeye together cover the entire scene captured by that WA subeye at higher resolution.

3. Results and discussion

To verify the imaging characteristics of the HeCECam, a series of experiments was performed, including large-field-of-view imaging, imaging-performance verification, and real-world scene imaging. Figure 5(a) shows the experimental setup for large-field-of-view imaging with the prototype. Three toys were placed at different angular positions (0°, +70.5°, and -70.5°) as targets. Figure 5(b) shows a raw image of the three toys taken by the prototype; the toys at different positions are captured by different subeyes. The enlarged views on the left are partial high-resolution images of the toys acquired by HD subeyes located at 0° and ±70.5°, and the enlarged views on the right are wide-angle images of the toys captured by WA subeyes at the same positions. This demonstrates that the HeCECam has an overall field of view of up to 141°. To compare the wide-angle and high-resolution images directly, an enlarged view of the central area is provided: the wide-angle image captures the whole toy, whereas each high-resolution image covers only part of it. Subimages at the two different scales thus appear simultaneously in the heterogeneous compound eye image.

Fig. 5. (a) Experimental setup for verifying large field of view imaging of the prototype. (b) The raw image captured by the prototype and enlarged views of some sub-images. (c) Illustration of the experimental setup to verify the field of view of subeyes. (d) Experimental results of the detection range for WA subeyes and HD subeyes.


Figure 5(c) illustrates the experimental setup for measuring the field of view of a WA subeye and an HD subeye. A standard ruler is placed at a distance X in front of the prototype. The field of view ($\Delta \varphi$) of a single subeye can be calculated by $\Delta \varphi = 2\arctan ({L\textrm{/2}X} )$, where L is the length of the ruler observed by the subeye. The left image in Fig. 5(d) shows the result observed by the WA subeye with the ruler placed 13 cm in front of the prototype; the field of view of the WA subeye is calculated to be 26.0°, consistent with the design value. The right image in Fig. 5(d) shows the result observed by the HD subeye with the ruler placed at 17 cm; the field of view of the HD subeye is measured to be 15.1°, close to the theoretical value.
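
This angle-of-view calculation is easy to script; the Python sketch below (ours) evaluates $\Delta \varphi = 2\arctan(L/2X)$, where the ruler lengths L are illustrative values back-computed from the reported angles rather than measurements quoted in the paper.

```python
# Sketch of the field-of-view estimate: dphi = 2*arctan(L / (2*X)).
# The observed ruler lengths L below are illustrative (back-computed from the
# reported angles), since the paper quotes only X and the resulting angles.
import math

def fov_deg(L_mm, X_mm):
    """Field of view of one subeye from the ruler length L seen at distance X."""
    return math.degrees(2 * math.atan(L_mm / (2 * X_mm)))

print(f"WA subeye: {fov_deg(60.0, 130.0):.1f} deg")   # ~26.0 deg at X = 13 cm
print(f"HD subeye: {fov_deg(45.1, 170.0):.1f} deg")   # ~15.1 deg at X = 17 cm
```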

To test the imaging performance of the HeCECam, checkerboards, an ISO 12233 chart, and a point source were used to measure the contrast, resolution, and point spread function (PSF) of the prototype, respectively. The results are shown in Fig. 6. Figure 6(a) shows the experimental setup for the contrast test. Three checkerboards, each with a grid size of 10 mm × 10 mm, are placed at three angular positions (0° and ±70.5°) to test the contrast of different subeyes; they are distributed on a sphere of radius 20 cm. The resulting image captured by the HeCECam is shown in Fig. 6(b), and an enlarged view of the central area is shown in Fig. 6(c), which clearly shows the difference between the two subeye scales. Figure 6(d) gives subimages of WA subeyes and HD subeyes at different positions, with the corresponding intensity profiles along the dotted lines given in the block diagrams on the right. The Michelson contrast can be calculated from these profiles. The Michelson contrasts of the central subimages of the WA subeye and HD subeye are 0.725 and 0.9, respectively, and the contrasts of the marginal subimages are 0.6 and 0.8. The contrast of the marginal subimages is slightly lower than that of the central subimages because of the influence of the optical relay system.
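
Michelson contrast is defined as $C = ({I_{\max }} - {I_{\min }})/({I_{\max }} + {I_{\min }})$. The sketch below (ours) illustrates the calculation on a synthetic checkerboard-like profile, since the raw intensity data are not published.

```python
# Michelson contrast of an intensity profile: C = (Imax - Imin) / (Imax + Imin).
# The profile below is synthetic (a square-wave-like checkerboard trace), used only
# to illustrate the calculation; it is not measured data from the paper.
import numpy as np

def michelson_contrast(profile):
    i_max, i_min = float(np.max(profile)), float(np.min(profile))
    return (i_max - i_min) / (i_max + i_min)

profile = np.tile(np.r_[np.full(20, 230.0), np.full(20, 30.0)], 4)   # bright/dark squares
print(f"C = {michelson_contrast(profile):.3f}")                       # 0.769 for this synthetic trace
```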

Fig. 6. (a) Schematic illustration of the experimental setup for testing contrast. (b) The raw image of the checkerboards taken by the HeCECam. (c) The enlarged view of the central area marked by the blue circle. (d) The subimages of WA subeyes and HD subeyes at different positions, and the corresponding intensity profiles along the dotted lines. (e) Illustration of the experimental setup for testing the resolution of the subeyes. (f) The resolution test results of the central WA subeye (f1), the marginal WA subeye (f2), the central HD subeye (f3) and the marginal HD subeye (f4). (g) Schematic illustration of the experimental setup for testing PSF curves. (h) The PSF curves of WA subeyes and HD subeyes.


Figure 6(e) illustrates the experimental setup for testing the resolution of the HeCECam. An ISO 12233 chart is placed in front of the prototype, and the imaging results for the two types of subeyes at different positions are given in Fig. 6(f). The resolution values of the central and marginal WA subeyes are 6 lp/mm and 4 lp/mm, and those of the central and marginal HD subeyes are 10 lp/mm and 7 lp/mm, respectively. The HD and WA subimages occupy about 120 × 120 and 60 × 60 pixels, respectively. Combined with their fields of view, their angular resolutions are calculated to be 7.2′ and 25.8′ per pixel. To test the PSF of each subeye, a point source was used as the target, as shown in Fig. 6(g); the point source is formed by a laser beam passing through a 100 µm pinhole. The PSF curves were plotted from the point-source intensities recorded by all subeyes, as shown in Fig. 6(h). The PSF curves become sharper toward the center of the field, and the PSF curves of the HD subeyes are sharper than those of the WA subeyes, consistent with the theoretical optical design. The MTF curve of the HD subeye (Ch. 7°) was derived from the measured PSF, as shown in the right diagram of Fig. 6(h); it is lower than the design value because of machining and assembly errors.
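
The per-pixel angular resolution and the PSF-to-MTF step can be sketched as follows (our illustration; the Gaussian PSF is a stand-in for the measured profile, which is not published, and the MTF is taken as the normalized magnitude of the Fourier transform of the PSF).

```python
# Per-pixel angular resolution and a 1-D MTF estimate from a PSF profile (our sketch).
import numpy as np

# Angular resolution: nominal subeye field of view divided by the pixels it occupies.
for name, fov, n_px in [("HD", 15.0, 120), ("WA", 26.0, 60)]:
    print(f"{name}: {fov / n_px * 60:.1f} arcmin/pixel")   # 7.5' and 26.0' (text reports 7.2' and 25.8')

# MTF as the normalized magnitude of the Fourier transform of the PSF.
# The Gaussian below (sigma = 2 um, sampled at 0.5 um) is synthetic, not the paper's data.
dx = 0.0005                                     # sample spacing, mm
x = (np.arange(1024) - 512) * dx                # spatial axis, mm
psf = np.exp(-0.5 * (x / 0.002) ** 2)           # synthetic PSF, sigma = 0.002 mm
mtf = np.abs(np.fft.rfft(psf))
mtf /= mtf[0]                                   # normalize so that MTF(0) = 1
freqs = np.fft.rfftfreq(x.size, d=dx)           # spatial frequency, lp/mm
print(f"MTF at 111 lp/mm: {np.interp(111.0, freqs, mtf):.2f}")   # ~0.38 for this synthetic PSF
```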

To further validate the imaging performance of the prototype, two random-scene imaging experiments were performed. Figure 7(a) shows the raw heterogeneous compound eye image, captured by the HeCECam, of the laboratory scene shown in Fig. 7(d). Several subimages of WA subeyes and HD subeyes are given in Fig. 7(b). The subimages marked by green circles are WA subimages with a large field of view; all of them capture the carton marked by the red dotted lines, confirming that the fields of view of adjacent WA subeyes partially overlap. The subimages marked by blue circles are HD subimages with higher resolution. Figure 7(c) shows an enlarged view of the white dotted region in Fig. 7(a), which clearly reveals the dual-scale nature of the images captured by the HeCECam: large-field-of-view and high-resolution information are imaged simultaneously in a single heterogeneous compound eye image. The subimages in the raw heterogeneous compound eye image are inverted, making it difficult to perceive the overall outline of the scene; a clear outline can be seen in the corrected heterogeneous compound eye image given in Fig. 7(e). In addition, a random outdoor scene, shown in Fig. 7(g), was captured by the HeCECam. Figure 7(f) gives the corresponding raw heterogeneous compound eye image and partial enlarged views. The WA subimages in the green circles and the HD subimage in the blue circle show that the prototype also performs well outdoors. The corrected heterogeneous compound eye image, which shows a clear outline of the outdoor scene, is given in Fig. 7(h). These experimental results show that the prototype not only has good imaging performance but is also practical, paving the way for the application of bio-inspired compound eye cameras. The heterogeneous compound eye camera can thus be used for fast motion detection and precise identification over a large field of view.

Fig. 7. (a) Raw heterogeneous compound eye image of the laboratory scene captured by the HeCECam. (b) The WA subimages and HD subimages at different positions. (c) An enlarged view of the central region in white dashed lines. (d) The picture of laboratory scene captured by a cell phone camera. (e) The corrected heterogeneous compound eye image of this laboratory scene. (f) The raw heterogeneous compound eye image of a random outdoor scene captured by the HeCECam. (g) The picture of the random outdoor scene. (h) The corrected heterogeneous compound eye image of this random outdoor scene.


4. Conclusion

In this work, we propose a design method for a heterogeneous compound eye and fabricate a prototype HeCECam. The prototype, which achieves dual-scale imaging over a large field of view (360°×141°), consists of a heterogeneous compound eye array, an optical relay system, and a CMOS sensor. The heterogeneous compound eye array contains 31 WA subeyes and 226 HD subeyes. The optical relay system re-images the curved image formed by the heterogeneous compound eye array onto the planar CMOS sensor, yielding a heterogeneous compound eye image containing many dual-scale subimages. To verify the imaging characteristics of the HeCECam, a series of experiments was performed. The results show that the prototype has excellent imaging performance and is capable of dual-scale imaging. These characteristics give the HeCECam great potential for fast motion detection, precise identification, UAV navigation, and security surveillance. Our future work will focus on rapid detection, localization, and recognition based on the HeCECam, and on high-definition panoramic image restoration from the heterogeneous compound eye image. Moreover, the effective pixel ratio of the HeCECam is only 56% because the distribution of subeyes has not been fully optimized. Further improving the performance of the HeCECam is therefore an important research direction, for example by increasing the number of subeyes in the array to further enlarge the field of view, reducing the aberrations of the optical relay system, and optimizing the subeye distribution to raise the effective pixel ratio.

Funding

National Natural Science Foundation of China (62005277); CIOMP Aurora Talent Development Program (E2S011Y5B0); CIOMP-Fudan University Joint Fund (Y9R733N190).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. M. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486(7403), 386–389 (2012). [CrossRef]  

2. H. C. Ko, M. P. Stoykovich, J. Song, V. Malyarchuk, W. M. Choi, C.-J. Yu, J. B. Geddes Iii, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454(7205), 748–753 (2008). [CrossRef]  

3. G. J. Lee, C. Choi, D.-H. Kim, and Y. M. Song, “Bioinspired Artificial Eyes: Optic Components, Digital Cameras, and Visual Prostheses,” Adv. Funct. Mater. 28(24), 1705202 (2018). [CrossRef]  

4. J. Li, W. Wang, X. Mei, A. Pan, X. Sun, B. Liu, and J. Cui, “Artificial Compound Eyes Prepared by a Combination of Air-Assisted Deformation, Modified Laser Swelling, and Controlled Crystal Growth,” ACS Nano 13(1), 114–124 (2019). [CrossRef]  

5. M. Ma, H. Li, X. Gao, W. Si, H. Deng, J. Zhang, X. Zhong, and K. Wang, “Target orientation detection based on a neural network with a bionic bee-like compound eye,” Opt. Express 28(8), 10794–10805 (2020). [CrossRef]  

6. Y. M. Song, Y. Xie, V. Malyarchuk, J. Xiao, I. Jung, K.-J. Choi, Z. Liu, H. Park, C. Lu, R.-H. Kim, R. Li, K. B. Crozier, Y. Huang, and J. A. Rogers, “Digital cameras with designs inspired by the arthropod eye,” Nature 497(7447), 95–99 (2013). [CrossRef]  

7. Y. Zheng, L. Song, J. Huang, H. Zhang, and F. Fang, “Detection of the three-dimensional trajectory of an object based on a curved bionic compound eye,” Opt. Lett. 44(17), 4143–4146 (2019). [CrossRef]  

8. Z.-Y. Hu, Y.-L. Zhang, C. Pan, J.-Y. Dou, Z.-Z. Li, Z.-N. Tian, J.-W. Mao, Q.-D. Chen, and H.-B. Sun, “Miniature optoelectronic compound eye camera,” Nat. Commun. 13(1), 5634 (2022). [CrossRef]  

9. V. Iyer, A. Najafi, J. James, S. Fuller, and S. Gollakota, “Wireless steerable vision for live insects and insect-scale robots,” Sci. Robot. 5(44), eabb0839 (2020). [CrossRef]  

10. J. Li, S. Thiele, B. C. Quirk, R. W. Kirk, J. W. Verjans, E. Akers, C. A. Bursill, S. J. Nicholls, A. M. Herkommer, H. Giessen, and R. A. McLaughlin, “Ultrathin monolithic 3D printed optical coherence tomography endoscopy for preclinical and clinical use,” Light: Sci. Appl. 9, 124 (2020). [CrossRef]  

11. M. Ma, F. Guo, Z. Cao, and K. Wang, “Development of an artificial compound eye system for three-dimensional object detection,” Appl. Opt. 53(6), 1166–1172 (2014). [CrossRef]  

12. M. Ma, Y. Zhang, H. Deng, X. Gao, L. Gu, Q. Sun, Y. Su, and X. Zhong, “Super-resolution and super-robust single-pixel superposition compound eye,” Opt. Lasers Engineering 146, 106699 (2021). [CrossRef]  

13. R. J. Lin, V.-C. Su, S. Wang, M. K. Chen, T. L. Chung, Y. H. Chen, H. Y. Kuo, J.-W. Chen, J. Chen, Y.-T. Huang, J.-H. Wang, C. H. Chu, P. C. Wu, T. Li, Z. Wang, S. Zhu, and D. P. Tsai, “Achromatic metalens array for full-colour light-field imaging,” Nat. Nanotechnol. 14(3), 227–231 (2019). [CrossRef]  

14. Y. Luo, C. H. Chu, S. Vyas, H. Y. Kuo, Y. H. Chia, M. K. Chen, X. Shi, T. Tanaka, H. Misawa, Y.-Y. Huang, and D. P. Tsai, “Varifocal Metalens for Optical Sectioning Fluorescence Microscopy,” Nano Lett. 21(12), 5133–5142 (2021). [CrossRef]  

15. H. Pahlevaninezhad, M. Khorasaninejad, Y.-W. Huang, Z. Shi, L. P. Hariri, D. C. Adams, V. Ding, A. Zhu, C.-W. Qiu, F. Capasso, and M. J. Suter, “Nano-optic endoscope for high-resolution optical coherence tomography in vivo,” Nat. Photonics 12(9), 540–547 (2018). [CrossRef]  

16. T. Yang, Y.-H. Liu, Q. Mu, M. Zhu, D. Pu, L. Chen, and W. Huang, “Compact compound-eye imaging module based on the phase diffractive microlens array for biometric fingerprint capturing,” Opt. Express 27(5), 7513–7522 (2019). [CrossRef]  

17. K. Yanny, N. Antipa, W. Liberti, S. Dehaeck, K. Monakhova, F. L. Liu, K. Shen, R. Ng, and L. Waller, “Miniscope3D: optimized single-shot miniature 3D fluorescence microscopy,” Light: Sci. Appl. 9, 171 (2020). [CrossRef]  

18. Z.-F. Zhao, J. Liu, Z.-Q. Zhang, and L.-F. Xu, “Bionic-compound-eye structure for realizing a compact integral imaging 3D display in a cell phone with enhanced performance,” Opt. Lett. 45(6), 1491–1494 (2020). [CrossRef]  

19. D. Floreano, R. Pericet-Camara, S. Viollet, F. Ruffier, A. Brückner, R. Leitel, W. Buss, M. Menouni, F. Expert, R. Juston, M. K. Dobrzynski, G. L’Eplattenier, F. Recktenwald, H. A. Mallot, and N. Franceschini, “Miniature curved artificial compound eyes,” Proc. Natl. Acad. Sci. 110(23), 9267–9272 (2013). [CrossRef]  

20. H. Deng, X. Gao, M. Ma, Y. Li, H. Li, J. Zhang, and X. Zhong, “Catadioptric planar compound eye with large field of view,” Opt. Express 26(10), 12455–12468 (2018). [CrossRef]  

21. L. C. Kogos, Y. Li, J. Liu, Y. Li, L. Tian, and R. Paiella, “Plasmonic ommatidia for lensless compound-eye vision,” Nat. Commun. 11(1), 1637 (2020). [CrossRef]  

22. C. Shi, Y. Wang, C. Liu, T. Wang, H. Zhang, W. Liao, Z. Xu, and W. Yu, “SCECam: a spherical compound eye camera for fast location and recognition of objects at a large field of view,” Opt. Express 25(26), 32333–32345 (2017). [CrossRef]  

23. S. Zhang, Q. Wu, C. Liu, T. Wang, H. Zhang, J. Wang, Y. Ding, J. Chi, W. Xu, Y. Xiang, and C. Shi, “Bio-inspired spherical compound eye camera for simultaneous wide-band and large field of view imaging,” Opt. Express 30(12), 20952–20962 (2022). [CrossRef]  

24. J. Luo, Y. Guo, X. Wang, and F. Fan, “Design and fabrication of a multi-focusing artificial compound eyes with negative meniscus substrate,” J. Micromech. Microeng. 27(4), 045011 (2017). [CrossRef]  

25. X. Wang, Y. Guo, and J. Luo, “Implementation and image processing of a multi-focusing bionic compound eye,” International Conference on Optical Instruments and Technology 2017 (SPIE, 2018), Vol. 10620.

26. G. Lian, Y. Liu, K. Tao, H. Xing, R. Huang, M. Chi, W. Zhou, and Y. Wu, “Fabrication and Characterization of Curved Compound Eyes Based on Multifocal Microlenses,” Micromachines 11(9), 854 (2020). [CrossRef]  

27. J. Li, W. Wang, Z. Fu, R. Zhu, and Y. Huang, "Fabrication of Dual-Focus Artificial Compound Eye with Improved Imaging Based on Modified Microprinting and Air-Assisted Deformation," SSRN 4003241 (2022). [CrossRef]

28. J. Fan, J. Suo, J. Wu, H. Xie, Y. Shen, F. Chen, G. Wang, L. Cao, G. Jin, Q. He, T. Li, G. Luan, L. Kong, Z. Zheng, and Q. Dai, “Video-rate imaging of biological dynamics at centimetre scale and micrometre resolution,” Nat. Photonics 13(11), 809–816 (2019). [CrossRef]  

29. D. S. Kittle, D. L. Marks, H. S. Son, J. Kim, and D. J. Brady, “A testbed for wide-field, high-resolution, gigapixel-class cameras,” Rev. Sci. Instrum. 84(5), 053107 (2013). [CrossRef]  

30. B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, J. Targove, S. Wein, P. Brewer, D. Madden, and K. H. Shafique, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” SPIE Defense and Security Symposium (SPIE, 2008), Vol. 6981.

31. N. Bellotto, E. Sommerlade, B. Benfold, C. Bibby, I. Reid, D. Roth, C. Fernandez, L. V. Gool, and J. Gonzalez, “A distributed camera system for multi-resolution surveillance,” in 2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC), (2009), 1–8.

32. X. Yuan, M. Ji, J. Wu, D. J. Brady, Q. Dai, and L. Fang, “A modular hierarchical array camera,” Light: Sci. Appl. 10, 37 (2021). [CrossRef]  

33. G. Y. Belay, R. Bollhorst, M. Vervaeke, H. Thienpont, and J. Van Erps, "Design and demonstration of a six-channel multiresolution imaging system," Appl. Opt. 61(10), 2683–2689 (2022). [CrossRef]

34. G. Y. Belay, Y. Meuret, H. Ottevaere, P. Veelaert, and H. Thienpont, “Design of a multichannel, multiresolution smart imaging system,” Appl. Opt. 51(20), 4810–4817 (2012). [CrossRef]  

35. G. Y. Belay, H. Ottevaere, Y. Meuret, M. Vervaeke, J. V. Erps, and H. Thienpont, “Demonstration of a multichannel, multiresolution imaging system,” Appl. Opt. 52(24), 6081–6089 (2013). [CrossRef]  

36. L. Smeesters, G. Y. Belay, H. Ottevaere, Y. Meuret, M. Vervaeke, J. V. Erps, and H. Thienpont, “Two-channel multiresolution refocusing imaging system using a tunable liquid lens,” Appl. Opt. 53(18), 4002–4010 (2014). [CrossRef]  
