Optica Publishing Group

Compact self-interference incoherent digital holographic camera system with real-time operation

Open Access

Abstract

A video-recording-capable compact incoherent digital holographic camera system is proposed. The system consists of a linear polarizer, a convex lens, a geometric phase lens, and a polarized image sensor. A Fresnel hologram is recorded in real time by this simple configuration. The system parameters are analyzed and evaluated to record a higher-quality hologram in a compact form factor. Real-time holographic recording and its digitally reconstructed video playback are demonstrated with the proposed system.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

As sensors and communication equipment become smaller while computation speed increases, a variety of immersive content playback systems have been reported. Virtual and augmented reality display systems are already attracting attention from many fields, such as medicine and industry [1, 2]. Other next-generation display systems, such as auto-stereoscopic displays [3–7], aerial displays [8–10], and holographic displays [11–19], are being widely developed by many research groups, and some have achieved significant milestones with the advent of high-resolution display panels, advanced laser technology, and powerful graphics-processing-unit-based computation. Among the aforementioned realistic information playback methods, only the holographic video device is believed to deliver the full three-dimensional depth cues to the viewer's eye [20, 21]. Therefore, research and development of advanced holographic display technology is worth continuing.

Since a holographic display device mostly requires wavefront information as its input source data, the ideal counterpart is a holographic video camera that faithfully records the wave field of the scene. However, the holographic recording systems announced so far suffer from three disadvantages, simultaneously or one at a time. First, the system size is inevitably bulky because of a cumbersome interferometer structure or optoelectronic devices. Second, most systems use a laser to record interference patterns clearly, but this special light source is rarely available in daily life. Lastly, hologram video recording is hard to achieve because a time-division phase-shifting technique is performed to solve the bias and twin-image problem that occurs in an on-axis holographic recording system. These three disadvantages of existing holographic techniques greatly hinder applications outside the laboratory.

Recently, our team proposed a relatively simplified digital holographic camera operated entirely with the geometric phase effect under low-coherence lighting conditions [22, 23]. We tentatively call this system GPSIDH, for geometric phase self-interference incoherent digital holography. The system can be categorized as self-interference incoherent digital holography (SIDH) [24, 25] or Fresnel incoherent correlation holography (FINCH) [26–28], since the incoming wavefront is divided and modulated into two spherical wavefronts of different curvature. Notably, our system employs a thin and lightweight geometric phase (GP, also known as Pancharatnam–Berry phase) lens. A GP optical component is usually made from polymerized liquid crystal (LC), in which only the orientation state of the LC director induces the phase variation [29, 30]. Since this phase modulation is governed not by physical path length or optical retardation but only by the topological state of the polarization, the phase is called a geometric phase. When the orientation map of the LC directors corresponds to a quadratic phase profile, the component serves as a lens. The distinctive feature of this lens is that it acts as either a convex or a concave lens according to the handedness of the input circular polarization (CP) state, owing to the orthogonal ordinary and extraordinary axes of the LC. If linearly polarized light comes in, it can be considered a superposition of two orthogonal CP components: half converges and the other half diverges. In the GPSIDH system, this special property of the GP lens is utilized as a polarization-selective wavefront modulator, which opened a new possibility for a portable and simple digital holographic camera.
This system simultaneously solves two drawbacks of digital holographic recording systems: it works with low-coherence light and has a relatively small and simple structure. Still, the system is neither small enough to be implemented inside mobile devices nor able to operate in real time as a video camera, due to the time-division phase-shifting method.

Development of a compact incoherent video holographic camera is essential for practical usage in various fields. Some reports on incoherent holographic cameras, including our previous system [23], have made significant progress toward such practical applications. For single-exposure bias and twin-image elimination, an off-axis system configuration [31–33] or the parallel phase-shifting method has been applied in SIDH or FINCH systems [34, 35]. Both methods sacrifice spatial resolution instead of temporal resolution; however, the off-axis method is less desirable owing to the limited spatial bandwidth of the image sensor [36]. On the other hand, the parallel phase-shifting method, the details of which are explained in the following section, requires four adjacent pixels to record a single complex-valued datum, so the pixel counts in the horizontal and vertical directions are halved [37]. Even though the parallel phase-shifting method also sacrifices spatial resolution, it is a good candidate for simplifying the system structure, since it employs geometric phase-shifting, which is performed merely with a combination of waveplates and polarizers and their relative angles [38–40]. For the wavefront division, implementing a passive common-path wavefront modulator is effective for building a compact and simple holographic system [23, 28], instead of using a bulky conventional interferometric structure [24, 25, 41, 42] or a phase-only spatial light modulator [26, 34, 35]. The birefringent crystal lens and the GP lens serve similar roles as common-path wavefront modulators inside an incoherent holographic system. The FINCH system with a birefringent crystal lens has already proven its extreme capability as a holographic microscope [28].
However, it requires a variable waveplate or a combination of waveplates to obtain a phase-shifted digital hologram. On the other hand, since the GP lens is inherently designed as a half-waveplate, the geometric phase-shifting method based on rotating a linear polarizer can be applied directly, which brings great simplicity to the system design.

In this paper, we propose and analyze a compact holographic video camera developed from the previous version of GPSIDH [23]. The system is configured with only a band-pass filter (BPF), a convex lens, one polarizer, a GP lens, and a polarized image sensor. Wavefront division is achieved by the GP lens, and parallel phase-shifting is performed by the combination of the polarizer, GP lens, and polarized image sensor. Unlike the previous study, the image sensor is located right behind the GP lens, so the system path length is significantly reduced. We believe the aforementioned studies on incoherent holographic systems are, with simple modifications, strong candidates for compact holographic video cameras. Nevertheless, we consider the system proposed in this study to be more compact and simpler to operate than the other reported works above. In the following section, the working principle of the system is introduced, and the design rules for the compact system are analyzed. In the experimental section, recording results based on these design rules are presented, and real-time hologram acquisition is demonstrated with digitally reconstructed video. After the discussion, a summary of the paper follows.


Fig. 1 (a) Illustration of the GPSIDH system; (b) schematic diagram of the system with parameter and field labels. Red and blue dashed lines after the GP lens indicate the diverging and converging rays, respectively, each modulated by the negative and positive focal lengths of the GP lens. BPF, band-pass filter (used if necessary); $l_o$, objective lens with focal length $f_o$; $P_{1,2}$, first and last polarizers; $l_{gp}$, GP lens with focal lengths of $\pm f_{gp}$; $z_o$, object distance from $l_o$; $d$, $l_o$-to-$l_{gp}$ distance; and $z_h$, $l_{gp}$-to-sensor distance.


2. GPSIDH system structure and working principle

The basic structure of the compact GPSIDH system is shown in Fig. 1(a). Following the expressions for the FINCH system proposed by Rosen et al. and related works [26, 43], let us describe the proposed GPSIDH system. If an object point source is located at $z_o$ from the objective lens $l_o$, the field before the lens is described by a diverging spherical wave, $C_1(\bar{r}_o)Q[z_o^{-1}]L[\bar{r}_o/z_o]$. Here, $Q[z^{-1}]=\exp[j\pi z^{-1}\lambda^{-1}(x^2+y^2)]$ is the quadratic phase function, $L[\bar{r}]=\exp[j2\pi(r_x x+r_y y)/\lambda]$ is the linear phase function, $\lambda$ is the central wavelength of the input light source, $\bar{r}_o=(x_o,y_o)$ is the object point coordinate, and $C_1(\bar{r}_o)$ is the complex constant of each object point source. The diverging spherical wave from the object point source passes through the objective lens, whose transmission function is $Q[-f_o^{-1}]$, where $f_o$ is the focal length of the lens [44]. Hence, the complex amplitude of the field after the objective lens becomes $C_1(\bar{r}_o)Q[z_o^{-1}]L[\bar{r}_o/z_o]\times Q[-f_o^{-1}]$. The field then propagates over the distance $d$ to the GP lens and passes through it; the GP lens transmission function is expressed as $(Q[-f_{gp}^{-1}]e^{j\delta/2}+Q[f_{gp}^{-1}]e^{-j\delta/2})/2$. This transmission function has two terms: one for a lens with focal length $f_{gp}$ and a positive phase shift of $\delta/2$, and another for a lens with focal length $-f_{gp}$ and a phase shift of $-\delta/2$. Here, $\delta$ is the total phase-shifting value, discussed later in this section. After the GP lens, the field propagates over a distance $z_h$ to the image sensor. The intensity of the Fresnel hologram from the single object point at $(x_o,y_o,z_o)$ is then recorded as

$$I_h(x_h,y_h;x_o,y_o,z_o)=\left|C_2(\bar{r}_o)\,Q\!\left[\tfrac{1}{z_o}\right]L\!\left[\tfrac{\bar{r}_o}{z_o}\right]Q\!\left[-\tfrac{1}{f_o}\right]*Q\!\left[\tfrac{1}{d}\right]\times\left(Q\!\left[-\tfrac{1}{f_{gp}}\right]e^{j\delta/2}+Q\!\left[\tfrac{1}{f_{gp}}\right]e^{-j\delta/2}\right)*Q\!\left[\tfrac{1}{z_h}\right]\right|^2. \tag{1}$$

In Eq. (1), $C_2(\bar{r}_o)$ is a complex constant, and propagation is denoted as a linear convolution with the operator symbol $*$. As introduced in Eq. (9) of the work of Katz et al. [43], the spherical wave propagated over a distance $z_2$ from the original spherical wave $C(\bar{r}_o)Q[z_1^{-1}]L[\bar{A}]$ can be described as $C(\bar{r}_o)Q[(z_1+z_2)^{-1}]L[\bar{A}z_1/(z_1+z_2)]$. By following this rule, Eq. (1) is expressed as

$$I_h(x_h,y_h;x_o,y_o,z_o)=C_3+C_4(\bar{r}_o)\,Q\!\left[\tfrac{1}{z_{rec}}\right]L\!\left[\tfrac{M_p}{z_{rec}}\bar{r}_o\right]e^{i\delta}+C_4'(\bar{r}_o)\,Q\!\left[-\tfrac{1}{z_{rec}}\right]L\!\left[\tfrac{M_n}{z_{rec}}\bar{r}_o\right]e^{-i\delta}, \tag{2}$$
where $z_{rec}$ is the reconstruction distance, and $M_{p,n}$ are the magnification factors of the images formed by the two lenses with focal lengths $f_o$ and $\pm f_{gp}$, respectively. $C_3$, $C_4$, and $C_4'$ are constants. The parameters $z_{rec}$ and $M_{p,n}$ are calculated by using the ray-optical relations below:
$$\frac{1}{z_{rec}}=\frac{1}{z_h-z_n}-\frac{1}{z_h-z_p}, \tag{3}$$
$$M_{p,n}=\frac{z_i}{z_o}\cdot\frac{z_{p,n}}{d-z_i}. \tag{4}$$

Here, $z_i$ is the imaging distance deduced from the thin-lens relation with the object distance $z_o$ and the lens focal length $f_o$, that is, $z_o^{-1}+z_i^{-1}=f_o^{-1}$. $z_p$ and $z_n$ are the second imaging distances due to the positive and negative focal lengths of the GP lens, respectively. These imaging distances are also derived from the thin-lens relation: $(d-z_i)^{-1}+z_p^{-1}=f_{gp}^{-1}$ and $(d-z_i)^{-1}+z_n^{-1}=-f_{gp}^{-1}$. As a single object point source generates the Fresnel hologram $I_h(x_h,y_h;x_o,y_o,z_o)$ of Eq. (2), the entire hologram $H(x_h,y_h)$ generated from the group of object points with intensity distribution $I_o(x_o,y_o,z_o)$ is expressed as Eq. (5).

$$H(x_h,y_h)=\iiint I_o(x_o,y_o,z_o)\,I_h(x_h,y_h;x_o,y_o,z_o)\,dx_o\,dy_o\,dz_o. \tag{5}$$

According to Eq. (2), three terms remain in the equation: the bias, the complex hologram, and the twin-image components, respectively. To properly extract the complex hologram from the recorded intensity image, the phase-shifting method is utilized [45].
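The cascaded thin-lens relations behind Eqs. (3) and (4) can be evaluated numerically as a quick sanity check. The sketch below (Python) computes $z_{rec}$ and $M_{p,n}$; the parameter values in the usage comment are illustrative, not the paper's exact configuration.

```python
def gpsidh_params(z_o, f_o, d, f_gp, z_h):
    """Compute reconstruction distance and magnifications per Eqs. (3)-(4).

    All distances in mm. z_i, z_p, z_n follow the thin-lens relations
    1/z_o + 1/z_i = 1/f_o and 1/(d - z_i) + 1/z_{p,n} = +-1/f_gp.
    """
    z_i = 1.0 / (1.0 / f_o - 1.0 / z_o)           # first imaging distance
    z_p = 1.0 / (1.0 / f_gp - 1.0 / (d - z_i))    # image via +f_gp
    z_n = 1.0 / (-1.0 / f_gp - 1.0 / (d - z_i))   # image via -f_gp
    z_rec = 1.0 / (1.0 / (z_h - z_n) - 1.0 / (z_h - z_p))  # Eq. (3)
    M_p = (z_i / z_o) * (z_p / (d - z_i))         # Eq. (4), positive branch
    M_n = (z_i / z_o) * (z_n / (d - z_i))         # Eq. (4), negative branch
    return z_rec, M_p, M_n

# illustrative example: object at 200 mm, f_o = 100 mm, d = 18 mm,
# f_gp = 164 mm, z_h = 18 mm
z_rec, M_p, M_n = gpsidh_params(200.0, 100.0, 18.0, 164.0, 18.0)
```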


Fig. 2 Poincaré sphere representation of the proposed system. (a) The second polarizer is in the 45° state; (b) the second polarizer is rotated to the vertically aligned state. The geometric phase difference $\delta$ between (a) and (b) is 90°. The coordinate labels $S_{1,2,3}$ are the Stokes parameters.


In this system, there is no conventional phase-shifting device such as a piezo-adjuster or a variable retarder. In our previous research, we replaced such a phase-shifting device with a geometric phase shifter, which is the combination of a linear polarizer and a GP lens [23]. More specifically, the polarization state of the incoming wave inside the GPSIDH system is first defined at point A in Fig. 2; it is then separated onto the two poles (points R and L) of the Poincaré sphere by the GP lens, and the components are realigned by the second linear polarizer into the state of point B on the equator. The solid angle $\Omega_{ABC}$ of an arbitrary spherical triangle bounded by the points A, B, and C is given by Eq. (6), defined by the three unit vectors $\hat{a},\hat{b},\hat{c}$ from the sphere center O to the points A, B, and C, respectively [46].

$$\tan(\Omega_{ABC}/2)=\frac{\hat{a}\cdot(\hat{b}\times\hat{c})}{|\hat{a}||\hat{b}||\hat{c}|+(\hat{a}\cdot\hat{b})|\hat{c}|+(\hat{a}\cdot\hat{c})|\hat{b}|+(\hat{b}\cdot\hat{c})|\hat{a}|}. \tag{6}$$

The amount of geometric phase-shifting in the previous study and in the proposed method can be deduced from Eq. (6). The arbitrary spherical triangular path ABC is now specified by the paths ARB and ALB, as shown in Fig. 2. When point A represents the horizontal and B the 45° linearly polarized state, their unit vectors from the origin O of the sphere are $\hat{a}=[1,0,0]$ and $\hat{b}=[0,1,0]$, respectively. Since the right- and left-handed circular polarization states are the north and south poles of the Poincaré sphere, the unit vectors from the origin to the points R and L are $\hat{r}=[0,0,1]$ and $\hat{l}=[0,0,-1]$. By using Eq. (6), the solid angle $\Omega_{ARB}$ becomes $\pi/2$, and conversely, $\Omega_{ALB}$ is $-\pi/2$. Since the geometric phase gain is half of the solid angle, the spherical triangular paths ARB and ALB acquire geometric phase angles of $\pi/4$ and $-\pi/4$, denoted as $\delta/2$ and $-\delta/2$ in Eq. (1), respectively [38, 47].

Therefore, their relative phase difference becomes $\pi/2$. By changing the relative angle between the two polarizers, say from point B to B′ as in Figs. 2(a) to 2(b), the amount of geometric phase gain is controlled. The geometric phase gain obtained by changing the state between points A and B is also described by the Jones matrix method, with the same result, in Eqs. (1) and (2) of the previous work [23]. If the polarizer is rotated in four steps of 45° each, four phase-shifted images with 90° increments can be obtained. The four recorded images are then recombined to eliminate the bias and twin images.
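The solid-angle calculation of Eq. (6) is easy to verify numerically. The sketch below (Python/NumPy) implements the formula and checks that the two pole paths subtend solid angles of equal magnitude $\pi/2$ and opposite sign; note that the sign of each individual angle depends on the chosen traversal orientation, so only the magnitudes and the relative sign are asserted.

```python
import numpy as np

def solid_angle(a, b, c):
    """Signed solid angle of the spherical triangle with vertices a, b, c,
    per Eq. (6) (the Van Oosterom-Strackee formula)."""
    a, b, c = (np.asarray(v, dtype=float) for v in (a, b, c))
    num = np.dot(a, np.cross(b, c))
    den = (np.linalg.norm(a) * np.linalg.norm(b) * np.linalg.norm(c)
           + np.dot(a, b) * np.linalg.norm(c)
           + np.dot(a, c) * np.linalg.norm(b)
           + np.dot(b, c) * np.linalg.norm(a))
    return 2.0 * np.arctan2(num, den)

# Poincare-sphere points: A (horizontal), B (45 deg linear), R/L (circular poles)
A, B = [1, 0, 0], [0, 1, 0]
R, L = [0, 0, 1], [0, 0, -1]
omega_ARB = solid_angle(A, R, B)
omega_ALB = solid_angle(A, L, B)
# |omega| = pi/2 for each path, with opposite signs, so the geometric phases
# +-omega/2 of the two CP components differ by pi/2.
```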

However, even with this simple phase-shifting method, real-time acquisition of a clear hologram is hard to achieve. If the object moves faster than the time period of the entire phase-shifting process, the acquired and reconstructed hologram image is blurred, or nothing is observed. To solve this problem simply, the second polarizer and the general image sensor of the previous system are replaced with a polarized image sensor, in which a micro-polarizer is attached to every sensor pixel. By doing so, motionless single-shot phase-shifting digital holography can be achieved. This spatial-division phase-shifting method is referred to as parallel phase-shifting [37] and is widely applied in various digital holography systems. A very recent successful implementation of the parallel phase-shifting method in the FINCH system is the work of Tahara et al. [34].


Fig. 3 Illustration of the structure of the polarized image sensor and the parallel phase-shifting method.


The structure of the polarized image sensor is shown in Fig. 3. A micro-polarizer array is attached on the pixel array, arranged so that adjacent micro-polarizers are rotated at intervals of 45°. Therefore, the intensity values of the 0°, 45°, 90°, and 135° polarization components of the light incident on a 2 × 2 pixel group are recorded simultaneously. Once the raw image file is obtained from the polarized image sensor, the four phase-shifted images are extracted with a proper sampling method. The sampling period of the phase-shifted images is therefore twice the original pixel size, as illustrated in Fig. 3. The four phase-shifted images, each of which carries the hologram intensity following Eqs. (1) to (5), are recombined into the complex-valued hologram (CH) as [45]

$$CH[p,q]=(H_3[p,q]-H_1[p,q])-j\,(H_4[p,q]-H_2[p,q]). \tag{7}$$

In Eq. (7), $H_{1,2,3,4}$ correspond to the phase-shifted images with $\delta=0°, 90°, 180°, 270°$, where the relative rotation angles with respect to the first polarizer are 0°, 45°, 90°, and 135°, respectively. $p$ and $q$ are the pixel indices of the two-dimensional array data. The object field is then retrieved using the conventional Fresnel back-propagation method, that is, by correlating $CH[p,q]$ with the quadratic phase function $Q[z_{rec}^{-1};p,q]=\exp[\frac{j\pi}{\lambda z_{rec}}(p^2\Delta_x^2+q^2\Delta_y^2)]$ [44]. Here, $z_{rec}$ is the reconstruction distance, $\lambda$ is the central wavelength of light, and $\Delta_{x,y}$ are the width and height of the sampling grid, respectively. The distance $z_{rec}$ is calculated from Eq. (3).
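The parallel phase-shifting extraction amounts to sub-sampling the raw polarized frame on a 2 × 2 lattice and recombining per Eq. (7). A minimal sketch (Python/NumPy) follows; the assumed micro-polarizer layout [[0°, 45°], [90°, 135°]] is an illustration only, since the actual cell arrangement depends on the sensor model.

```python
import numpy as np

def extract_complex_hologram(raw):
    """Demosaic a polarized-sensor frame into four phase-shifted images
    and combine them per Eq. (7). Assumed 2x2 micro-polarizer unit cell:
    [[0, 45], [90, 135]] degrees (sensor-dependent assumption)."""
    H1 = raw[0::2, 0::2].astype(float)   # delta = 0
    H2 = raw[0::2, 1::2].astype(float)   # delta = 90
    H3 = raw[1::2, 0::2].astype(float)   # delta = 180
    H4 = raw[1::2, 1::2].astype(float)   # delta = 270
    return (H3 - H1) - 1j * (H4 - H2)    # Eq. (7)
```

Each extracted image has half the pixel count of the raw frame in both directions, which is the spatial-resolution cost of the method noted in the introduction.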

Analysis of the required GP lens focal length and GP lens-to-sensor distance for better hologram acquisition

Interference due to the superposition of the two wavefronts can occur only where the optical path length difference ($\Delta OPL$) at a given sensor location is shorter than the coherence length $c_l=\lambda^2/\Delta\lambda$, where $\lambda$ is the central wavelength and $\Delta\lambda$ is the spectral bandwidth of the light source [48]. With a short GP lens focal length, the curvature difference between the converging and diverging waves after the GP lens is too large, so only a small overlapping area fulfills the condition $\Delta OPL \le c_l$. For this reason, previous versions of the system used a relay lens to reduce the curvature difference of the two wavefronts. Otherwise, due to the small interfering area, the obtained hologram looks faint, and its reconstructed image becomes blurry and noisy. The overall hologram is obtained from the group of point sources of the object, each of which can be considered a $\delta$-function separately. Each $\delta$-function creates its own fringe pattern through the transfer function of the system, as expressed in Eq. (4). The fringe pattern has the shape of a Fresnel ring pattern, as illustrated in Fig. 4, and the radius of the pattern is denoted as $r_h$ herein.


Fig. 4 (a) Schematic optical diagram of the analyzed system; (b, c) conceptual illustration of the reconstructed image quality of the hologram related to $r_h$, where (b) has a large $r_h$ and (c) has a small $r_h$.


In this section, to set basic design rules for the compact GPSIDH system without the relay lens, we analyze $r_h$ of the recorded Fresnel hologram, which is closely related to the system parameters. In particular, $r_h$ determines the numerical aperture (NA) of the system input, as illustrated in Fig. 4 (blue solid line). Since an optical system with larger NA has improved imaging quality in terms of both lateral and axial resolution, it is desirable to keep the system NA high to obtain a high-quality hologram [44, 49–51].

As shown in Fig. 4(a), suppose that a $\delta$-function-like point light source on the optic axis at the front focal plane of lens $l_o$ illuminates the system with a sufficient diverging angle. The rays are collected by $l_o$; they then simultaneously converge and diverge after passing through the GP lens $l_{gp}$, which is located at a distance $d$ behind $l_o$. Let us call the converging and diverging rays after $l_{gp}$ the positive and negative rays, respectively, since they experience the positive and negative focal lengths of the GP lens. These rays then propagate over the distance $z_h$ to reach the sensor surface at a radius $r_s$. For a perfectly coherent light source, if $r_s$ is smaller than the radius of the marginal positive ray $r_p$, the interference fringe pattern would be recorded over the whole sensor area. However, due to the limited $c_l$ of the incoherent light source, the radius $r_h$ of the recorded fringe area is determined by the sensor position at which the maximum $\Delta OPL$ is bounded by $c_l$. For every sensor pixel $r_s[n]$, where $n$ is the pixel index, $\Delta OPL[n]$ is calculated using Eq. (8), from simple geometry between the positive ($OPL_{pos}[n]$) and negative ($OPL_{neg}[n]$) rays, drawn as blue and red solid lines in Fig. 4(a), respectively.

$$\Delta OPL[n]=\left|OPL_{pos}[n]-OPL_{neg}[n]\right|=\left|\left[\sqrt{f_o^2+\left(\frac{f_{gp}\,r_s[n]}{f_{gp}-z_h}\right)^2}+z_h\sqrt{1+\left(\frac{r_s[n]}{f_{gp}-z_h}\right)^2}\,\right]-\left[\sqrt{f_o^2+\left(\frac{f_{gp}\,r_s[n]}{f_{gp}+z_h}\right)^2}+z_h\sqrt{1+\left(\frac{r_s[n]}{f_{gp}+z_h}\right)^2}\,\right]\right|. \tag{8}$$

The sensor position $r_s[n]$ at which $\Delta OPL[n]<c_l$ ceases to hold defines the maximum fringe pattern radius $r_h$. This radius can be increased by proper selection of the system parameters $f_{gp}$ and $z_h$. For example, when $f_o=100$ mm, $d=18$ mm, and $z_h=20$ mm, the $\Delta OPL$ for every sensor pixel is presented in Fig. 5. The black dashed horizontal line in Fig. 5 indicates the $c_l$ boundary. The $c_l$ in Fig. 5 is derived from a light source with a central wavelength of 525 nm and a spectral bandwidth of 32 nm; this is the same light source used in the experimental results described in the following section. The colored arrows under the plot area indicate the bounded $r_h$ of the system for different $f_{gp}$.
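Eq. (8) lends itself to direct numerical evaluation of $r_h$. The sketch below (Python/NumPy, lengths in mm) finds the largest sensor radius satisfying $\Delta OPL < c_l$ and reproduces the qualitative trend of Fig. 5, namely that a shorter $f_{gp}$ shrinks the interference area; the coherence-length value follows the paper's stated 9.76 μm.

```python
import numpy as np

def fringe_radius(f_o, f_gp, z_h, c_l, pixel=3.45e-3, n_pix=1024):
    """Largest sensor radius r_h where the +/- ray path-length difference
    of Eq. (8) stays below the coherence length c_l. All lengths in mm."""
    r_s = np.arange(1, n_pix + 1) * pixel  # radial sample positions
    opl_pos = (np.sqrt(f_o**2 + (f_gp * r_s / (f_gp - z_h))**2)
               + z_h * np.sqrt(1.0 + (r_s / (f_gp - z_h))**2))
    opl_neg = (np.sqrt(f_o**2 + (f_gp * r_s / (f_gp + z_h))**2)
               + z_h * np.sqrt(1.0 + (r_s / (f_gp + z_h))**2))
    inside = r_s[np.abs(opl_pos - opl_neg) < c_l]
    return float(inside[-1]) if inside.size else 0.0

# paper-like parameters: f_o = 100 mm, z_h = 20 mm, c_l = 9.76 um
r_164 = fringe_radius(100.0, 164.0, 20.0, 9.76e-3)
r_50 = fringe_radius(100.0, 50.0, 20.0, 9.76e-3)
# the shorter f_gp yields a smaller fringe radius r_h
```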


Fig. 5 Plot of $\Delta OPL[n]$ versus $r_s[n]$, when $f_o=100$ mm, $d=18$ mm, $z_h=20$ mm, and $c_l=(525\ \text{nm})^2/32\ \text{nm}=9.76\ \mu$m. The legend denotes $f_{gp}$. The solid horizontal arrows indicate the $r_h$ defined by each $f_{gp}$.


The entrance pupil radius $r_o$ bounded by $r_h$ is calculated as

$$r_o=\frac{f_{gp}}{f_{gp}-z_h}\,r_h. \tag{9}$$

If the GP lens is large enough and the point object is located at the front focal point of $l_o$, then the NA of the system becomes

$$\mathrm{NA}=\frac{r_o}{f_o}=\frac{r_h}{f_o}\cdot\frac{f_{gp}}{f_{gp}-z_h}. \tag{10}$$

According to the above analysis, the relations of $r_h$ to $z_h$ and NA to $z_h$ are plotted in Fig. 6. To obtain as wide a Fresnel hologram as possible, it is desirable to select a longer GP lens focal length and to locate the sensor closer to the GP lens. However, there are several trade-offs with long $f_{gp}$ and $z_h$ that should be taken into account as well.


Fig. 6 (a) $r_h$-to-$z_h$ relation; (b) system NA-to-$z_h$ relation. The legend denotes $f_{gp}$.


The Fresnel hologram can be considered a digitized diffractive lens with its own radius and focal length. For the hologram, the radius is $r_h$, and the focal length is the reconstruction distance, derived in Eq. (3). The reconstruction-distance equation can be understood as a thin-lens equation in which the Fresnel hologram is a lens with focal length $z_{rec}$, object distance $f_{gp}-z_h$, and imaging distance $f_{gp}+z_h$, where the object and image points correspond to the virtual and real imaging points formed by the GP lens. Similar to the analysis of the NA of the hologram obtained in the FINCH system [49], the NA of the hologram obtained in the GPSIDH system can be represented as $r_h/z_{rec}$. For example, in Fig. 6(a), if $f_{gp}$ is increased from 100 mm to 500 mm, $r_h$ increases about twofold; however, $z_{rec}$ increases fivefold. Therefore, the NA of the hologram decreases as $f_{gp}$ becomes longer.

A decreased $z_h$ severely demagnifies the reconstructed image, since the magnification factor of the system decreases as $z_h/f_o$. With an increased $r_h$ but a short $z_{rec}$, the fine fringe rings are likely to generate an aliasing pattern. When the image sensor has square pixels of size $\Delta_x=\Delta_y=\Delta_s$, the sampling frequency $f_s$ is defined as $1/\Delta_s$. The maximum local fringe frequency of the recorded Fresnel hologram has to stay below half of $f_s$ to avoid the aliasing problem. This relation is described by Eq. (11) [36]:

$$f_s \ge \frac{2\,(r_h/2)}{\lambda z_{rec}}. \tag{11}$$

These trade-off relations should be considered carefully to build an optimal GPSIDH system.
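The Nyquist condition of Eq. (11) can be checked for candidate designs with a few lines of code. In this sketch (Python), the 6.9 μm complex-hologram sampling pitch from the experimental section is assumed, and the example $r_h$ and $z_{rec}$ values are illustrative only.

```python
# Nyquist check of Eq. (11): the hologram's finest fringe must be sampled
# by at least two pixels. Assumed values: 6.9 um sampling pitch (2x2
# polarized pixel cells) and a 525 nm central wavelength; lengths in mm.
wavelength = 525e-6          # mm
delta_s = 6.9e-3             # mm, complex-hologram sampling pitch
f_s = 1.0 / delta_s          # sampling frequency, cycles/mm

def alias_free(r_h, z_rec):
    """True if f_s >= 2 * (r_h / 2) / (lambda * z_rec), i.e. Eq. (11) holds."""
    f_max = (r_h / 2.0) / (wavelength * z_rec)   # max local fringe frequency
    return f_s >= 2.0 * f_max
```

A design with a large fringe radius but a very short reconstruction distance fails this check, which is the aliasing trade-off described above.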

3. Experimental results

In this section, to demonstrate the compactness and video recording capability of the proposed system, recording experiments and their reconstruction results are presented. First, to validate the system parameters for compactness, the GPSIDH system is evaluated with three different GP lens focal lengths. The first two GP lenses are off-the-shelf models (ImagineOptix, USA), with focal lengths of 50 and 100 mm at 550 nm and diameters of about 1 inch. The third is a custom-made model with $f_{gp}=164$ mm at 525 nm and a diameter of about 0.5 inch. The custom-made GP lens is fabricated using a template convex lens with a 200 mm focal length and a recording beam wavelength of 457 nm. Even though the GP lens functions as a lens over a broadband spectral range, its focal length depends on the wavelength of the input light through the relation $f_{gp}(\lambda)=f_t(\lambda_t/\lambda)$ [52]. Ideally, the fabricated GP lens should have a focal length of 174 mm for an input wavelength of $\lambda=525$ nm, as calculated from the template lens focal length $f_t=200$ mm and the writing beam wavelength $\lambda_t=457$ nm. The actual focal length was measured experimentally as 164 mm; it was determined by repeatedly measuring various object and imaging distances, using a resolution target back-illuminated by a 525 nm LED. The disagreement between the ideal and experimental focal lengths is possibly due to positioning issues of the optical components inside the fabrication interferometer, which is illustrated in Fig. 7.


Fig. 7 (a) Illustration of the custom-made GP lens profile recording system with component labels; (b) polarization optical microscope image of the fabricated GP lens sample. The axes labeled P and A indicate the polarizer and analyzer axes, respectively.


To fabricate the custom-made GP lens, methods similar to the work of Kim et al. [29] are utilized. To begin with, a glass substrate is cleaned, and then a photo-alignment layer of Brilliant Yellow (BY, Model No. B0783, Tokyo Chemical Industry Co., Ltd., Japan) is formed on the glass to provide alignment for the liquid crystalline polymer layer (RM257, Merck Co., Ltd., Germany). To implement the GP lens, an optical setup with $\lambda=457$ nm (Cobolt Twist, Cobolt Co., Ltd., Sweden) is developed, in which the two interfering reference and writing beams have counter-rotating CP states, i.e., right-handed and left-handed CP, as shown in Fig. 7(a). A plano-convex lens with a 200 mm focal length is utilized as a template lens inside the writing beam path to obtain the GP hologram interference pattern on the substrate. The writing and reference beams illuminate the BY-coated substrate for 5 min at an optical power of 50 mW/cm². After the photo-alignment process, a UV-curable liquid crystalline polymer, also referred to as a reactive mesogen (RM), is spin-coated at 3000 rpm for 30 s on the photo-aligned BY substrate to obtain the half-waveplate optical property. The coated RM layer is baked on a hot plate for 1 min at 60 °C to vaporize the residual solvent, then cured under UV illumination at an optical power of 10 mW/cm² for 1 min to form the polymer film. A polarization optical microscope (POM, Eclipse LV100POL, Nikon, Japan) image of the fabricated GP lens sample is presented in Fig. 7(b).

A photograph of the prototype system is presented in Fig. 8. Here, $l_o$ is a general convex lens with $f_o=100$ mm. The monochromatic polarized image sensor (PHX050S-PC, LUCID Vision Labs, Canada) has a pixel resolution of 2448 × 2048, and only the central 2048 × 2048 pixels are utilized for simplicity. Both the horizontal and vertical pixel sizes of the image sensor are 3.45 μm. To sample the complex hologram data, however, the sampling period should be regarded as twice the original pixel size, i.e., 6.9 μm, and the sampling number is 1024 × 1024. The bit depth of the sensor is set to 8 bits, which means that the obtained signal is sampled and stored with 256 levels. The sensor is connected to a personal computer through a GigE Vision interface. The frame rate of the sensor is 25 frames per second when the region of interest is set to 2048 × 2048 pixels. To hold all components as compactly as possible, 30 mm cage plates and SM1 lens tubes from Thorlabs, Inc. are utilized, as shown in Fig. 8. The marginal area of the GP lens is attached to the cage plate with adhesive tape. The C-mount barrel of the image sensor is held by a custom-made adapter ring to fit into the cage plate.


Fig. 8 Photograph of compact GPSIDH video camera prototype



Fig. 9 Reconstructed results from systems with $f_{gp}=50$ mm, 100 mm, and 164 mm from top to bottom rows, and $z_h=18$ mm, 23 mm, and 28 mm from left to right columns. The digits inside the figure indicate the PSNR values compared to the results of the system with $f_{gp}=164$ mm and the equivalent $z_h$.


Figures 9 and 10 show the recording results of a United States Air Force (USAF) 1951 negative target (R1DS1N, Thorlabs, USA) illuminated by an LED (LED525E, Thorlabs, USA), whose light is collected and converged by a 16 mm 1:1.4 lens. The target is located 100 mm in front of the entrance of the GPSIDH system. The central wavelength of the LED is 525 nm, and the spectral bandwidth is 32 nm. The optical power measured at the entrance of the system is about 18.5 μW, measured with an off-the-shelf photodiode power sensor (S120C, Thorlabs, USA). In Fig. 9, $f_{gp}$ values of 50 mm, 100 mm, and 164 mm are employed in the system from top to bottom rows, respectively, and $z_h$ is varied as 18 mm, 23 mm, and 28 mm from left to right columns. The digits in the first and second rows indicate the peak signal-to-noise ratio (PSNR) values, where the reference image is that of $f_{gp}=164$ mm with the equivalent $z_h$. The PSNR value represents the quality of an image compared to the reference image, and it decreases as the compared image is degraded [53]. The image sizes are almost the same when $z_h$ is equal, and as $z_h$ increases, the image becomes larger. As confirmed in Fig. 5, a shorter GP lens focal length $f_{gp}$ decreases the area of interference, whose radius is denoted as $r_h$; accordingly, the NA of the system decreases, as expected from Eqs. (9) and (10). The reconstruction results in Fig. 9 likewise degrade as $f_{gp}$ decreases at fixed $z_h$. When $z_h$ is increased at fixed $f_{gp}$, the reconstruction results also degrade, following Eqs. (9) and (10), though the image is magnified more. The degradation of the reconstructed image due to an inappropriate selection of $f_{gp}$ and $z_h$ is easily noticed from the decreasing PSNR values as well.
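The PSNR figure of merit used in Fig. 9 can be computed as below (Python/NumPy). This is the standard definition for 8-bit images, offered as a sketch rather than the authors' exact evaluation code.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized 8-bit
    images, as used to compare reconstructions against a reference."""
    ref = np.asarray(reference, dtype=float)
    tst = np.asarray(test, dtype=float)
    mse = np.mean((ref - tst) ** 2)          # mean squared error
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```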


Fig. 10 (a) Reconstruction results of the system with $f_{gp}=164$ mm when $z_h$ is changed from 18 mm to 63 mm; (b) chart of visibility values of group 2, element 2 of the USAF resolution target and the magnification factors for each sub-figure in (a).


The target recording results for varying $z_h$ are presented in Fig. 10(a) for $f_{gp}=164$ mm, and the visibility and magnification of each reconstructed image are plotted in Fig. 10(b). Here, the visibility is calculated as $(I_{max}-I_{min})/(I_{max}+I_{min})$, where $I_{max}$ is the average intensity of the target area and $I_{min}$ is that of the marginal area. The area of group 2, element 2 is used for the visibility calculation. As expected, the magnification of the image increases with increasing $z_h$, whereas the visibility decreases due to the decreased NA of the system.
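The visibility metric of Fig. 10(b) is straightforward to compute. A minimal sketch (Python/NumPy) is shown below; the boolean masks selecting the target and marginal regions are assumptions of this illustration, since the paper does not specify how the regions are segmented.

```python
import numpy as np

def visibility(image, target_mask, margin_mask):
    """Fringe visibility (I_max - I_min) / (I_max + I_min), where I_max is
    the mean intensity over the bright target area and I_min over the dark
    marginal area. The region masks are illustrative assumptions."""
    i_max = float(np.mean(image[target_mask]))
    i_min = float(np.mean(image[margin_mask]))
    return (i_max - i_min) / (i_max + i_min)
```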

Lastly, to verify the video recording capability of the proposed system, real-time holographic acquisition and digital reconstruction are performed, as shown in Figs. 11 and 12. In this GPSIDH system, a custom-made GP lens with fgp= 164 mm is utilized, and the GP lens to sensor distance zh is 18 mm. A polarizing beam splitter (CCM1-PBS251, Thorlabs, USA) is mounted between lo and lgp; it serves as a horizontal polarizer for the transmitted beam and provides a viewfinder via the reflected beam.
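
The polarized image sensor used here interleaves micro-polarizers in a 2 x 2 mosaic, so the four phase-shifted intensity images are obtained by sub-sampling a single raw frame. A minimal sketch; the mapping of pixel offsets to polarizer angles below is sensor-specific and assumed for illustration.

```python
import numpy as np

def split_polarization_mosaic(raw):
    """Split a raw frame from a 2x2 micro-polarizer mosaic sensor into
    four phase-shifted sub-images. The offset-to-angle mapping
    (0, 45, 90, 135 degrees) is an assumption; consult the sensor
    datasheet for the actual layout."""
    return {
        0:   raw[0::2, 0::2],
        45:  raw[0::2, 1::2],
        90:  raw[1::2, 1::2],
        135: raw[1::2, 0::2],
    }

raw = np.arange(16).reshape(4, 4)
subs = split_polarization_mosaic(raw)
print(subs[0])  # pixels at even rows and even columns
```

Each sub-image has half the resolution of the raw frame in each dimension, which is the trade-off that parallel phase-shifting makes for single-shot acquisition.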


Fig. 11 (a) schematic illustration of the target configuration, (b) phase-angle-only image of the 91st frame of the recorded hologram video, (c,d) numerical reconstruction results of the hologram, where the plane of best focus is changed between the NBS and USAF targets. The images in (b)-(d) are cropped to 500 x 500 pixels. Also note Visualization 1 and Visualization 2, which show the hologram of the two targets while the digits on the USAF target shift vertically under the author's manual control. In Visualization 1, the real-time captured hologram is represented as a phase-angle-only video hologram, where the intensity levels from 0 to 255 represent phase angles from 0 to 2π. In Visualization 2, the digitally reconstructed video hologram is presented. For the first five seconds, the reconstruction plane is on the USAF 1951 target; for the last five seconds, the plane is moved to focus on the NBS 1963A target. Visualization 2 is cropped to 500 x 500 pixels. BC: beam combiner.


Two negative targets are used as the recording objects: one is the USAF 1951 target, and the other is the National Bureau of Standards (NBS) 1963A resolution target (39-856, Edmund Optics, USA). The targets are arranged perpendicularly to one another and aligned in one direction by a beam combiner to represent an object having a volume of two layers, as shown in Fig. 11(a). Both targets are illuminated by the green LED modules used in the previous experiment (Fig. 9). The illuminated areas are the digits of group 0, elements 2 to 6 of the USAF 1951 target, and the 4.0 cycles/mm pattern of the NBS 1963A target. The NBS 1963A target is fixed at the front focal plane of lo, and the USAF 1951 target is located 12 mm behind the NBS 1963A target. The real-time hologram of Fig. 11 and the related movies, Visualization 1 and Visualization 2, are taken while the USAF 1951 target is moved vertically by hand. The raw video data, which contain four phase-shifted intensity profiles for each frame, are recorded and stored in memory. The frames are then digitally reconstructed sequentially and saved in a video file format. The digital reconstruction of each frame using a conventional Fresnel back-propagation method takes about 0.5 seconds in MATLAB without parallel computation (3.1 GHz Intel Core i5, 8 GB memory). For optical reconstruction, the processing of a raw image into an amplitude or phase-only hologram can be performed almost in real time with an optimized software design.
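
The per-frame processing chain, combining the four phase-shifted intensities into a complex hologram and Fresnel back-propagating it to the target plane, can be sketched as follows. This is not the authors' MATLAB code: the transfer-function form of the Fresnel propagator, the sign convention of the four-step combination, and all parameter values are assumptions for illustration.

```python
import numpy as np

def complex_hologram(h1, h2, h3, h4):
    # four-step phase-shifting combination; the sign convention is assumed
    return (h3 - h1) - 1j * (h4 - h2)

def fresnel_propagate(field, wavelength, z, pitch):
    """Fresnel back propagation via the transfer-function (two-FFT) method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# toy usage with assumed parameters (525 nm light, 3.45 um pixel pitch)
rng = np.random.default_rng(1)
frames = [rng.uniform(0, 1, (256, 256)) for _ in range(4)]
ch = complex_hologram(*frames)
rec = np.abs(fresnel_propagate(ch, 525e-9, 0.05, 3.45e-6))
print(rec.shape)
```

Each video frame is processed independently this way, so the per-frame cost is dominated by the two FFTs, which is consistent with the sub-second timing quoted above.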

The recording of a reflective object is also demonstrated in Fig. 12 and in Visualization 3 and Visualization 4. A white dice with 6 x 6 x 6 mm dimensions is placed on an optical post, and the post is rotated manually. The distance from the object center to lo is about 125 mm. A household LED spot light (JANSJÖ, Ikea, Sweden) is utilized to illuminate the dice. A band-pass filter (86-655, Edmund Optics, USA) is attached in front of the system to filter the 3000 K broadband light into a narrow band of green light with a central wavelength of 550 nm and a spectral bandwidth of 25 nm. In this case, the optical power measured at the entrance of the system after the band-pass filter is about 3.4 μW. The numerical reconstruction results in Fig. 12 and the related video Visualization 3 contain a higher noise level than the results from the transmissive objects in Fig. 11, since the system is not fully optimized for such a reflective lighting condition. If a wider selection of fgp and zh were possible, the obtained hologram would have higher fringe contrast, and its reconstruction would be clearer than the current results. We believe the recording and reconstruction results demonstrate the potential of this system as a hologram video camera under general illumination, and we expect that optical reconstruction of the real-time captured complex hologram is also possible, directly or with simple post-processing, using advanced holographic display technologies [12–19].


Fig. 12 The recording and reconstruction results of the dice object fixed on the putty-like adhesive. (a) reference view, (b) the phase-only hologram data, (c) numerical reconstruction results of (b). Figures (b) and (c) are cropped to 500 x 500 pixels. The phase-angle-only hologram and the digitally reconstructed videos of the rotating dice are presented in Visualization 3 and Visualization 4, respectively.


4. Summary and conclusion

In summary, the compact and video-recording-capable GPSIDH system is developed and analyzed in this study, based on our previous system. The system is configured with a band-pass filter, convex lens, polarizer, GP lens, and polarized image sensor. The GP lens serves as a common-path polarization-selective wavefront modulator. With the polarized image sensor working in concert with the polarizer and the GP lens, parallel phase-shifting is achieved, which realizes real-time hologram acquisition. The system performance is demonstrated in a compact form factor within 4 cm. Real-time hologram acquisition and its digitally reconstructed video are demonstrated at 25 frames per second under general incoherent light sources. Even with the compact optical path length of the system, a better hologram can be obtained through an optimized selection of the focal length of the GP lens and the distance between the GP lens and the sensor. We believe that the proposed system would be a great counterpart to the advanced holographic displays of the future.

Funding

National Research Foundation of Korea (NRF) 2018R1A2B6005260; Korea Creative Content Agency (KOCCA) R2017060005

References

1. B. Kress and T. Starner, “A review of head-mounted displays (HMD) technologies and applications for consumer electronics,” Proc. SPIE 8720 (2013).

2. C. Jang, C.-K. Lee, J. Jeong, G. Li, S. Lee, J. Yeom, K. Hong, and B. Lee, “Recent progress in see-through three-dimensional displays using holographic optical elements [invited],” Appl. Opt. 55, A71–A85 (2016). [CrossRef]   [PubMed]  

3. G. Wetzstein, D. Lanman, W. Heidrich, and R. Raskar, “Layered 3d: Tomographic image synthesis for attenuation-based light field and high dynamic range displays,” ACM Trans. Graph. 30, 95 (2011). [CrossRef]  

4. C. Jang, K. Bang, S. Moon, J. Kim, S. Lee, and B. Lee, “Retinal 3d: Augmented reality near-eye display via pupil-tracked light field projection on retina,” ACM Trans. Graph. 36, 190 (2017). [CrossRef]  

5. S. Lee, C. Jang, S. Moon, J. Cho, and B. Lee, “Additive light field displays: Realization of augmented reality with holographic optical elements,” ACM Trans. Graph. 35, 60 (2016). [CrossRef]  

6. J. Kim and M. S. Banks, “Resolution of temporal-multiplexing and spatial-multiplexing stereoscopic televisions,” Curr. Opt. Photon. 1, 34–44 (2017). [CrossRef]  

7. M. Park and H.-J. Choi, “A method to compensate a luminance distortion of a time-multiplexing spatially interlaced stereoscopic three-dimensional display,” Curr. Opt. Photon. 2, 436–442 (2018).

8. S. Yoon, H. Baek, S.-W. Min, S. gi Park, M.-K. Park, S.-H. Yoo, H.-R. Kim, and B. Lee, “Implementation of active-type lamina 3d display system,” Opt. Express 23, 15848–15856 (2015). [CrossRef]   [PubMed]  

9. Y. Ochiai, K. Kumagai, T. Hoshi, J. Rekimoto, S. Hasegawa, and Y. Hayasaki, “Fairy lights in femtoseconds: Aerial and volumetric graphics rendered by focused femtosecond laser combined with computational holographic fields,” in ACM SIGGRAPH 2015 Emerging Technologies, (ACM, New York, NY, USA, 2015), SIGGRAPH ’15, pp. 10:1.

10. K. Kumagai, S. Hasegawa, and Y. Hayasaki, “Volumetric bubble display,” Optica 4, 298–302 (2017). [CrossRef]  

11. K.-M. Jeong, H.-S. Kim, S.-I. Hong, S.-K. Lee, N.-Y. Jo, Y.-S. Kim, H.-G. Lim, and J.-H. Park, “Acceleration of integral imaging based incoherent fourier hologram capture using graphic processing unit,” Opt. Express 20, 23735–23743 (2012). [CrossRef]   [PubMed]  

12. P. St-Hilaire, S. A. Benton, M. E. Lucente, and P. M. Hubel, “Color images with the mit holographic video display,” Proc. SPIE 1667 (1992).

13. R. Häussler, A. Schwerdtner, and N. Leister, “Large holographic displays as an alternative to stereoscopic displays,” Proc. SPIE 6803 (2008).

14. M. Stanley, M. A. Smith, A. P. Smith, P. J. Watson, S. D. Coomber, C. D. Cameron, C. W. Slinger, and A. Wood, “3d electronic holography display system using a 100 mega-pixel spatial light modulator,” Proc. SPIE 5249 (2004).

15. F. Yaraş, H. Kang, and L. Onural, “Circular holographic video display system,” Opt. Express 19, 9147–9156 (2011). [CrossRef]  

16. T. Kozacki, “Holographic display with tilted spatial light modulator,” Appl. Opt. 50, 3579–3588 (2011). [CrossRef]   [PubMed]  

17. M. Kujawinska, T. Kozacki, C. Falldorf, T. Meeser, B. M. Hennelly, P. Garbat, W. Zaperty, M. Niemelä, G. Finke, M. Kowiel, and T. Naughton, “Multiwavefront digital holographic television,” Opt. Express 22, 2324–2336 (2014). [CrossRef]   [PubMed]  

18. Y. Lim, K. Hong, H. Kim, H.-E. Kim, E.-Y. Chang, S. Lee, T. Kim, J. Nam, H.-G. Choo, J. Kim, and J. Hahn, “360-degree tabletop electronic holographic display,” Opt. Express 24, 24999–25009 (2016). [CrossRef]   [PubMed]  

19. J. Cho, S. Kim, S. Park, B. Lee, and H. Kim, “Dc-free on-axis holographic display using a phase-only spatial light modulator,” Opt. Lett. 43, 3397–3400 (2018). [CrossRef]   [PubMed]  

20. T.-C. Poon, Digital Holography and Three-Dimensional Display: Principles and Applications (Springer US, 2006). [CrossRef]  

21. S. Benton and V. Bove, Holographic Imaging (Wiley, 2008). [CrossRef]  

22. K. Choi, J. Yim, S. Yoo, and S.-W. Min, “Self-interference digital holography with a geometric-phase hologram lens,” Opt. Lett. 42, 3940–3943 (2017). [CrossRef]   [PubMed]  

23. K. Choi, J. Yim, and S.-W. Min, “Achromatic phase shifting self-interference incoherent digital holography using linear polarizer and geometric phase lens,” Opt. Express 26, 16212–16225 (2018). [CrossRef]   [PubMed]  

24. M. K. Kim, “Incoherent digital holographic adaptive optics,” Appl. Opt. 52, A117–A130 (2013). [CrossRef]   [PubMed]  

25. M. K. Kim, “Full color natural light holographic camera,” Opt. Express 21, 9636–9642 (2013). [CrossRef]   [PubMed]  

26. J. Rosen and G. Brooker, “Digital spatially incoherent Fresnel holography,” Opt. Lett. 32, 912–914 (2007). [CrossRef]   [PubMed]  

27. G. Brooker, N. Siegel, J. Rosen, N. Hashimoto, M. Kurihara, and A. Tanabe, “In-line finch super resolution digital holographic fluorescence microscopy using a high efficiency transmission liquid crystal grin lens,” Opt. Lett. 38, 5264–5267 (2013). [CrossRef]   [PubMed]  

28. N. Siegel, V. Lupashin, B. Storrie, and G. Brooker, “High-magnification super-resolution FINCH microscopy using birefringent crystal lens interferometers,” Nat. Photonics 10, 802–808 (2016). [CrossRef]  

29. J. Kim, Y. Li, M. N. Miskiewicz, C. Oh, M. W. Kudenov, and M. J. Escuti, “Fabrication of ideal geometric-phase holograms with arbitrary wavefronts,” Optica 2, 958–964 (2015). [CrossRef]  

30. K. Gao, H.-H. Cheng, A. K. Bhowmik, and P. J. Bos, “Thin-film pancharatnam lens with low f-number and high quality,” Opt. Express 23, 26086–26094 (2015). [CrossRef]   [PubMed]  

31. J. Hong and M. K. Kim, “Single-shot self-interference incoherent digital holography using off-axis configuration,” Opt. Lett. 38, 5196–5199 (2013). [CrossRef]   [PubMed]  

32. X. Quan, O. Matoba, and Y. Awatsuji, “Single-shot incoherent digital holography using a dual-focusing lens with diffraction gratings,” Opt. Lett. 42, 383–386 (2017). [CrossRef]   [PubMed]  

33. C. M. Nguyen, D. Muhammad, and H.-S. Kwon, “Spatially incoherent common-path off-axis color digital holography,” Appl. Opt. 57, 1504–1509 (2018). [CrossRef]   [PubMed]  

34. T. Tahara, T. Kanno, Y. Arai, and T. Ozawa, “Single-shot phase-shifting incoherent digital holography,” J. Opt. 19, 065705 (2017). [CrossRef]  

35. T. Nobukawa, T. Muroi, Y. Katano, N. Kinoshita, and N. Ishii, “Single-shot phase-shifting incoherent digital holography with multiplexed checkerboard phase gratings,” Opt. Lett. 43, 1698–1701 (2018). [CrossRef]   [PubMed]  

36. T.-C. Poon and J.-P. Liu, Introduction to modern digital holography: with MATLAB (Cambridge University, 2014).

37. Y. Awatsuji, M. Sasada, and T. Kubota, “Parallel quasi-phase-shifting digital holography,” Appl. Phys. Lett. 85, 1069–1071 (2004). [CrossRef]  

38. S. Pancharatnam, “Generalized theory of interference, and its applications,” Proc. Indian Acad. Sci. - Sect. A 44, 247–262 (1956). [CrossRef]  

39. P. Hariharan and P. Ciddor, “An achromatic phase-shifter operating on the geometric phase,” Opt. Commun. 110, 13–17 (1994). [CrossRef]  

40. S. Helen, M. Kothiyal, and R. Sirohi, “Achromatic phase shifting by a rotating polarizer,” Opt. Commun. 154, 249–254 (1998). [CrossRef]  

41. G. Pedrini, H. Li, A. Faridian, and W. Osten, “Digital holography of self-luminous objects by using a mach–zehnder setup,” Opt. Lett. 37, 713–715 (2012). [CrossRef]   [PubMed]  

42. S. Mochida, M. Shinomura, T. Hirakawa, T. Fukuda, Y. Awatsuji, K. Nishio, and O. Matoba, “Single-shot incoherent digital holography using parallel phase-shifting radial shearing interferometry,” Proc. SPIE 10711 (2018).

43. B. Katz and J. Rosen, “Super-resolution in incoherent optical imaging using synthetic aperture with fresnel elements,” Opt. Express 18, 962–972 (2010). [CrossRef]   [PubMed]  

44. J. W. Goodman, Introduction to Fourier optics (Roberts and Company Publishers, 2005).

45. I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22, 1268–1270 (1997). [CrossRef]   [PubMed]  

46. A. van Oosterom and J. Strackee, “The solid angle of a plane triangle,” IEEE Trans. Biomed. Eng. BME-30, 125–126 (1983). [CrossRef]  

47. M. Berry, “Pancharatnam, virtuoso of the poincaré sphere: an appreciation,” Curr. Sci. 67, 220–223 (1994).

48. E. Wolf, Introduction to the Theory of Coherence and Polarization of Light (Cambridge University, 2007).

49. W. Yu-Hong, M. Tian-Long, C. Hao, J. Zhu-Qing, and W. Da-Yong, “Effect of wavefront properties on numerical aperture of fresnel hologram in incoherent holographic microscopy,” Chin. Phys. Lett. 31, 044203 (2014). [CrossRef]  

50. J. Rosen and R. Kelner, “Modified lagrange invariants and their role in determining transverse and axial imaging resolutions of self-interference incoherent holographic systems,” Opt. Express 22, 29048–29066 (2014). [CrossRef]   [PubMed]  

51. B. Katz, J. Rosen, R. Kelner, and G. Brooker, “Enhanced resolution and throughput of fresnel incoherent correlation holography (finch) using dual diffractive lenses on a spatial light modulator (slm),” Opt. Express 20, 9109–9121 (2012). [CrossRef]   [PubMed]  

52. C. Yousefzadeh, A. Jamali, C. McGinty, and P. J. Bos, “‘Achromatic limits’ of pancharatnam phase lenses,” Appl. Opt. 57, 1151–1158 (2018). [CrossRef]   [PubMed]  

53. A. Hore and D. Ziou, “Image quality metrics: Psnr vs. ssim,” in 2010 20th International Conference on Pattern Recognition, (2010), pp. 2366–2369.

Supplementary Material (4)

Name       Description
Visualization 1       Video of the recorded hologram in a phase-angle representation of two resolution targets aligned along the axial direction.
Visualization 2       Digitally reconstructed hologram video of the two resolution targets. First five seconds, the focusing plane is on the NBS target (digits with 4.0), and last five seconds, the USAF target (digits moving vertically) is on the focus.
Visualization 3       Video of the recorded hologram in a phase-angle representation of the rotating white dice.
Visualization 4       Digitally reconstructed hologram video of the rotating white dice.

Figures (12)

Fig. 1
Fig. 1 (a) the illustration of the GPSIDH system, (b) schematic diagram of the system with the parameter and field labels. Red and blue dashed lines after the GP lens indicate the diverging and converging rays, respectively, each of which is modulated by the negative and positive focal length of the GP lens. BPF, band-pass filter (used if necessary); lo, objective lens with focal length fo; P1,2, first and last polarizers; lgp, GP lens with focal length of ±fgp; zo, object distance from lo; d, lo to lgp distance; and zh, lgp to sensor distance.
Fig. 2
Fig. 2 Poincaré sphere representation of the proposed system. (a) the second polarizer is in the 45° state, (b) the second polarizer is rotated to the vertically aligned state. The geometric phase difference δ between (a) and (b) is 90°. The coordinate labels S1,2,3 are the Stokes parameters.
Fig. 3
Fig. 3 Illustration of the structure of polarized image sensor, and parallel phase-shifting method
Fig. 4
Fig. 4 (a) schematic optical diagram of the analyzed system, (b, c) conceptual illustration of the reconstructed image quality of hologram related to rh, where (b) has a large rh, and (c) has a small rh.
Fig. 5
Fig. 5 Plot of ΔOPL[n] versus rs[n], when fo = 100 mm, d = 18 mm, zh = 20 mm, and lc = (525 nm)²/32 nm = 9.76 μm. The legend denotes fgp. The solid horizontal arrows indicate the rh defined by each fgp.
Fig. 6
Fig. 6 (a) rh to zh relation, (b) system NA to zh relation. The legend denotes fgp.
Fig. 7
Fig. 7 (a) illustration of the custom-made GP lens profile recording system with component labels, (b) polarization optical microscopic image of the fabricated GP lens sample. The axis with label P and A indicate polarizer and analyzer axis, respectively.
Fig. 8
Fig. 8 Photograph of compact GPSIDH video camera prototype
Fig. 9
Fig. 9 Reconstructed results from the various systems with fgp = 50 mm, 100 mm, 164 mm from the top to bottom rows, and zh = 18 mm, 23 mm, 28 mm from the left to right columns. The digits inside the figure indicate the PSNR values compared to the system results with fgp = 164 mm and the equivalent zh.
Fig. 10
Fig. 10 (a) reconstruction results of the system with fgp = 164 mm when zh is changed from 18 mm to 63 mm, (b) chart of visibility values of group 2, element 2 of the USAF resolution target and the magnification factors of each sub-figure in (a).
Fig. 11
Fig. 11 (a) schematic illustration of the target configuration, (b) phase-angle-only image of the 91st frame of the recorded hologram video, (c,d) numerical reconstruction results of the hologram, where the plane of best focus is changed between the NBS and USAF targets. The images in (b)-(d) are cropped to 500 x 500 pixels. Also note Visualization 1 and Visualization 2, which show the hologram of the two targets while the digits on the USAF target shift vertically under the author's manual control. In Visualization 1, the real-time captured hologram is represented as a phase-angle-only video hologram, where the intensity levels from 0 to 255 represent phase angles from 0 to 2π. In Visualization 2, the digitally reconstructed video hologram is presented. For the first five seconds, the reconstruction plane is on the USAF 1951 target; for the last five seconds, the plane is moved to focus on the NBS 1963A target. Visualization 2 is cropped to 500 x 500 pixels. BC: beam combiner.
Fig. 12
Fig. 12 The recording and reconstruction results of the dice object fixed on the putty-like adhesive. (a) reference view, (b) the phase-only hologram data, (c) numerical reconstruction results of (b). Figures (b) and (c) are cropped to 500 x 500 pixels. The phase-angle-only hologram and the digitally reconstructed videos of the rotating dice are presented in Visualization 3 and Visualization 4, respectively.

Equations (11)


$$ I_h(x_h, y_h; x_o, y_o, z_o) = \left| C_2(\bar{r}_o)\, Q\left[\frac{1}{z_o}\right] L\left[\frac{\bar{r}_o}{z_o}\right] Q\left[-\frac{1}{f_o}\right] * Q\left[\frac{1}{d}\right] \left( Q\left[\frac{1}{f_{gp}}\right] e^{j\delta/2} + Q\left[-\frac{1}{f_{gp}}\right] e^{-j\delta/2} \right) * Q\left[\frac{1}{z_h}\right] \right|^2. $$

$$ I_h(x_h, y_h; x_o, y_o, z_o) = C_3 + C_4(\bar{r}_o)\, Q\left[z_{rec}^{-1}\right] L\left[\frac{M_p}{z_{rec}} \bar{r}_o\right] e^{i\delta} + C_4'(\bar{r}_o)\, Q\left[-z_{rec}^{-1}\right] L\left[\frac{M_n}{z_{rec}} \bar{r}_o\right] e^{-i\delta}, $$

$$ \frac{1}{z_{rec}} = \frac{1}{z_h - z_n} - \frac{1}{z_h - z_p}, $$

$$ M_{p,n} = \frac{z_i}{z_o}\, \frac{z_{p,n}}{d - z_i}, $$

$$ H(x_h, y_h) = \iiint I_o(x_o, y_o, z_o)\, I_h(x_h, y_h; x_o, y_o, z_o)\, \mathrm{d}x_o\, \mathrm{d}y_o\, \mathrm{d}z_o, $$

$$ \tan(\Omega_{ABC}/2) = \frac{\hat{a} \cdot (\hat{b} \times \hat{c})}{|\hat{a}||\hat{b}||\hat{c}| + (\hat{a} \cdot \hat{b})|\hat{c}| + (\hat{a} \cdot \hat{c})|\hat{b}| + (\hat{b} \cdot \hat{c})|\hat{a}|}, $$

$$ C_H[p, q] = (H_3[p, q] - H_1[p, q]) - j(H_4[p, q] - H_2[p, q]). $$

$$ \Delta OPL[n] = \left| OPL[n]_{pos} - OPL[n]_{neg} \right| = \left| \left[ \sqrt{f_o^2 + \left( \frac{f_{gp}\, r_s[n]}{f_{gp} - z_h} \right)^2} + z_h \sqrt{1 + \left( \frac{r_s[n]}{f_{gp} - z_h} \right)^2} \right] - \left[ \sqrt{f_o^2 + \left( \frac{f_{gp}\, r_s[n]}{f_{gp} + z_h} \right)^2} + z_h \sqrt{1 + \left( \frac{r_s[n]}{f_{gp} + z_h} \right)^2} \right] \right|, $$

$$ r_o = \frac{f_{gp}}{f_{gp} - z_h}\, r_h. $$

$$ NA = \frac{r_o}{f_o} = \frac{r_h}{f_o}\, \frac{f_{gp}}{f_{gp} - z_h}. $$

$$ f_s \geq \frac{2 \times (r_h/2)}{\lambda z_{rec}}. $$
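
As a numerical check of Eqs. (8)-(10), the sketch below scans the optical-path-length difference over the sensor radius to locate rh, following the construction of Fig. 5 (rh is taken as the largest radius where ΔOPL stays within the coherence length), and then evaluates the system NA. The parameter values (fo = 100 mm, fgp = 164 mm, zh = 18 mm, lc = 9.76 μm) follow Fig. 5 and the experiments above, while the search range and grid resolution are arbitrary assumptions.

```python
import numpy as np

def delta_opl(r_s, f_o, f_gp, z_h):
    """Optical-path-length difference of Eq. (8); units are mm."""
    def arm(denom):  # one arm, with denominator f_gp -/+ z_h
        return (np.sqrt(f_o**2 + (f_gp * r_s / denom)**2)
                + z_h * np.sqrt(1.0 + (r_s / denom)**2))
    return np.abs(arm(f_gp - z_h) - arm(f_gp + z_h))

def interference_radius_and_na(f_o, f_gp, z_h, lc):
    r = np.linspace(1e-6, 5.0, 20000)        # candidate sensor radii [mm]
    ok = delta_opl(r, f_o, f_gp, z_h) <= lc  # within the coherence length
    r_h = r[ok].max()
    na = (r_h / f_o) * f_gp / (f_gp - z_h)   # Eqs. (9)-(10)
    return r_h, na

r_h, na = interference_radius_and_na(100.0, 164.0, 18.0, 9.76e-3)
print(r_h, na)
```

Increasing zh or shortening fgp shrinks the radius returned by this search, which mirrors the degradation trend reported for Figs. 9 and 10.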