Polarization-resolved dual-view holographic system for 3D inspection of scattering particles


Abstract

A novel dual-view polarization-resolved pulsed holographic system for particle measurements is presented. Both the dual-view configuration and the polarization-resolved registration are well suited for particle holography. Dual-view registration improves the accuracy of the detected 3D positions and velocities, and polarization-resolved registration provides polarization information about individual particles. The necessary calibrations are presented, and aberrations are compensated for by mapping the positions in the two views to positions in a global coordinate system. The system is demonstrated on a sample consisting of 7 μm spherical polystyrene particles suspended in water in a cuvette. The system is tested with different polarizations of the illumination. It is found that the dual view significantly improves the accuracy in particle tracking. It is also found that, with polarization-resolved holograms, it is possible to separate naturally occurring sub-micrometer particles from the larger, 7 μm seeding particles.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Digital holography has emerged as an excellent tool for tracking small particles owing to its ability to encode 3D information into 2D measurements. Applications span various areas, e.g., tracking 3D trajectories and cell shapes [1–3], pipe flow dynamics [4,5], colloidal particles [6–10], and investigations of jets [11]. An inherent problem in the reconstruction of particle holograms is that the axial resolution is worse than the lateral one due to the limiting numerical aperture (NA). The lateral resolution is inversely proportional to NA, while the axial resolution is inversely proportional to NA². Various techniques and metrics have been developed to determine the true axial position. A straightforward method is to track the intensity along the centroid of a reconstructed particle and fit it to a Gaussian model [8]. The disadvantage of tracking intensity is that there is an offset between the estimated and true positions that needs to be compensated for [12–14]. Some of the most common metrics are compiled in a review on particle tracking by Memmolo et al. [1]. These methods include weighted spectral analysis, total variance, gradient, and Laplacian methods, which can estimate the true axial position in microscopic applications. Another approach to finding the true particle positions is tomographic registration and reconstruction. It has been demonstrated that the use of two perpendicular cameras and a joint reconstruction method can drastically improve the axial resolution to become equal to the lateral one [11,15]. Previous applications include studies of rotating red blood cells in a micro-fluid channel [16] and investigations of dense particle fields [17]. When using two different views, the complexity of the reconstruction increases, since mapping to a global coordinate system is needed. In general, there can be a rigid rotation and translation between the two views, but also laterally varying aberrations for each camera. In [17], the authors emphasize the need for proper calibrations to compensate for distortion and aberrations for the joint reconstruction to work, especially when measuring smaller particles and for systems with low magnification.

There are two main types of holographic configurations for particle detection: in-line, where the illumination beam also serves as the reference wave, and off-axis, where a separate beam with an off-axis tilt is used as the reference wave. By using a separate reference beam, it is possible to separate the interference content from the twin image and the directly transmitted content, which is not possible in the in-line configuration. However, this advantage comes at the cost of reduced bandwidth. A further advantage of the off-axis configuration is that it is possible to multiplex several reference waves in a single-shot recording by giving each wave a different tilt. One application is polarization-resolved holography, in which the amplitude and phase of each polarization component are registered in different lobes [18–20]. The two reference waves are linearly polarized and perpendicular to each other. By ensuring this, the interference between the object light and the reference waves captures the full polarization characteristics of the light. Applications where polarization-resolved holography has been used to determine the complete Jones matrix of samples have been reported [21–25]. These applications have primarily focused on capturing polarization-resolved holograms of surfaces. Particle holography should be well suited for polarization holography, since the scattered light is, in general, polarization dependent, and scattering theory is traditionally described in terms of polarization components with respect to the scattering plane. It is therefore quite straightforward to combine particle scattering theory with polarization-resolved holography. Particle morphology, such as shape and size, influences how the scattered light is polarized [26]. This feature has been used to distinguish between spherical and ellipsoidal particles using interferometric particle imaging, a non-holographic method that registers the different polarization components on separate detectors [27]. In off-axis holography, it is possible to acquire both components in a single shot by using multiplexed reference waves.

In this paper, we present a novel system for holographic particle measurements. It combines dual-view and polarization-resolved holography to record more information from the scattered light. Two identical polarization-resolved imaging systems are placed perpendicular to each other and image the same volume. The system then records, in total, four complex amplitude fields on each trigger, one for each polarization on each camera. With this approach, the axial resolution is improved to match the lateral one, and the full polarization characteristics of the scattered light are recorded. Scattering models for microparticles are, in general, polarization dependent. Recording both polarization components in a single shot therefore yields a better representation of the particle scattering. The paper is structured as follows: in Section 2, the theory for both polarization-resolved and dual-view holography is presented. In Section 3, the experimental setup is presented in more detail, and the calibrations for both position and polarization are described. Results from particle measurements are then presented in Section 4, and finally, some conclusions are drawn in Section 5.

2. POLARIZATION-RESOLVED DUAL-VIEW HOLOGRAPHY

The system demonstrated in this paper combines polarization-resolved and dual-view holography. The two techniques are described individually in the following subsections. Consider the geometry in Fig. 1. A particle is illuminated by the field $\bar{E}_i$, and the scattered field $\bar{E}_s=\bar{E}_\parallel+\bar{E}_\perp$ is detected by two perpendicular cameras. For each camera, the parallel and perpendicular notation is with respect to the global xy plane. The scattering plane, spanned by the illumination and observation directions, is also shown for the two cameras. These two planes are also perpendicular to each other, and the scattering problem is described with respect to the scattering plane. The illumination is assumed to be linearly polarized at an angle α with respect to the scattering plane of camera 1. Note that the illumination polarization will be different with respect to each scattering plane. For example, if α=0, the polarization is parallel to the scattering plane of camera 1, while it is perpendicular to the scattering plane of camera 2. These relative differences affect how the scattered polarization behaves. The scattered field is, in general, generated in all directions, and the two imaging systems therefore detect different parts of it.
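To make the geometry concrete, the following minimal Python sketch decomposes a unit-amplitude, linearly polarized illumination field into components parallel and perpendicular to each camera's scattering plane. The convention is an assumption chosen to match Fig. 1 (α measured from the scattering plane of camera 1, the two scattering planes mutually perpendicular); it is not taken from the authors' code.

```python
import numpy as np

def illumination_components(alpha_deg):
    """Decompose a unit-amplitude, linearly polarized illumination field into
    components parallel/perpendicular to each camera's scattering plane.

    Convention (assumed): alpha is measured from the scattering plane of
    camera 1, and the two scattering planes are mutually perpendicular, so
    the roles of the two components are swapped for camera 2.
    """
    a = np.deg2rad(alpha_deg)
    cam1 = {"parallel": np.cos(a), "perpendicular": np.sin(a)}
    cam2 = {"parallel": np.sin(a), "perpendicular": np.cos(a)}
    return cam1, cam2

# alpha = 0: fully parallel for camera 1, fully perpendicular for camera 2
print(illumination_components(0.0))
# alpha = 45: equal components on both cameras
print(illumination_components(45.0))
```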


Fig. 1. General scattering problem. A particle is illuminated from below by the field E¯i and generates a scattered field E¯s. E¯i is linearly polarized at an angle α. The scattering planes for each camera are also shown, orange for camera 1 and blue for camera 2.


A. Polarization-Resolved Digital Holography

Consider the scattered field

$$\bar{E}_s=\begin{pmatrix}|E_{s\parallel}|\exp(i\varphi_{s\parallel})\\ |E_{s\perp}|\exp(i\varphi_{s\perp})\end{pmatrix}\tag{1}$$
recorded on one of the cameras. In this equation, each component is expressed in complex form in terms of an amplitude |E| and a phase φ. The notation of parallel and perpendicular is with respect to the xy plane. Depending on the camera directions, these components relate differently to the scattering plane of each camera. To record a polarization-resolved single-shot hologram, two separate reference waves with orthogonal polarization are added. The reference waves have polarizations parallel and perpendicular to the scattering plane, respectively, and are arranged in an off-axis configuration. Figure 2(a) shows the aperture plane viewed from the detector, where the reference waves are linearly polarized point sources. In Jones notation, the two reference waves are expressed as
$$\bar{R}_\parallel=\begin{pmatrix}|R_\parallel|\exp(i\varphi_{R\parallel})\\ 0\end{pmatrix},\tag{2}$$
$$\bar{R}_\perp=\begin{pmatrix}0\\ |R_\perp|\exp(i\varphi_{R\perp})\end{pmatrix},\tag{3}$$
where the total reference wave is $\bar{R}=\bar{R}_\parallel+\bar{R}_\perp$. The intensity, I, on the detector is then the interference between the scattered light $\bar{E}_s$ and the reference wave $\bar{R}$, which can be expanded as follows:
$$I=|\bar{E}_s+\bar{R}|^2=|E_{s\parallel}|^2+|E_{s\perp}|^2+|R_\parallel|^2+|R_\perp|^2+E_{s\parallel}R_\parallel^*+E_{s\parallel}^*R_\parallel+E_{s\perp}R_\perp^*+E_{s\perp}^*R_\perp,\tag{4}$$
where a complex vector component is denoted $A_i=|A_i|\exp(i\varphi_i)$. The first four terms are the intensities of the object and reference waves, respectively, and the last four terms are interference terms between the object and reference waves. Note that components with different polarizations do not interfere. In off-axis holography, the reference wave has an off-axis tilt towards the detector, causing the interference terms and the intensity terms to be separated in the spatial frequency domain. The configuration of the reference waves is shown in Fig. 2(a), and the corresponding angular spectrum of a hologram, obtained by taking the Fourier transform, is shown in Fig. 2(b). With this configuration, the reference waves do not have a polarization component along the optical axis, even when transferred through a lens. This, in combination with the low NA of the system, means that the polarization can be expressed in a 2D notation. The terms that correspond to the scattered light modulated by the conjugates of the reference waves are extracted so that two complex amplitudes with orthogonal polarization are acquired. These are the lobes marked in Fig. 2(b). After transformation back to the spatial domain, the complex amplitudes in the two chosen lobes can be written as
$$U_\parallel=E_{s\parallel}R_\parallel^*=|E_{s\parallel}||R_\parallel|\exp[i(\varphi_{s\parallel}-\varphi_{R\parallel})],\tag{5}$$
$$U_\perp=E_{s\perp}R_\perp^*=|E_{s\perp}||R_\perp|\exp[i(\varphi_{s\perp}-\varphi_{R\perp})].\tag{6}$$
The amplitudes of the reference waves, $|R_\parallel|$ and $|R_\perp|$, may differ with respect to each other but also vary spatially over the detector. If assumed stationary, these variations can be measured by recording the reference waves individually. The phase terms $\varphi_{R\parallel}$ and $\varphi_{R\perp}$ depend on the absolute phase of the reference waves as well as on the off-axis tilt of the respective wave. The latter can be compensated for by centering the extracted lobes in the Fourier domain. These compensations combined yield the reconstructed complex amplitudes, which can be written as
$$\hat{E}_{s\parallel}=\frac{U_\parallel\hat{R}_\parallel}{|\hat{R}_\parallel|^2}=|\hat{E}_{s\parallel}|\exp[i(\varphi_{s\parallel}-\alpha_\parallel)],\tag{7}$$
$$\hat{E}_{s\perp}=\frac{U_\perp\hat{R}_\perp}{|\hat{R}_\perp|^2}=|\hat{E}_{s\perp}|\exp[i(\varphi_{s\perp}-\alpha_\perp)],\tag{8}$$
where $\hat{R}_\parallel$ and $\hat{R}_\perp$ are estimates of the true complex amplitudes of the reference waves, and $\alpha_\parallel$ and $\alpha_\perp$ are unknown phase constants due to the difference in optical path between the scattered light and the reference waves. This means that, apart from a phase constant, the amplitude and phase of the scattered light are fully reconstructed. With the complete description of the light, it is possible to refocus the image numerically. In this paper, the angular spectrum method is used, which is easily carried out in the Fourier domain [28]. A field recorded at the detector plane $z_D$ is numerically propagated a distance Δz along the optical axis as follows:
$$\hat{E}(x,y,z_D+\Delta z)=\mathcal{F}_{2D}^{-1}\{\mathcal{F}_{2D}\{\hat{E}(x,y,z_D)\}\exp[i2\pi f_z\Delta z]\},\tag{9}$$
where $f_z$ is the spatial frequency in the z direction, $\mathcal{F}_{2D}$ is the 2D Fourier transform, and $\hat{E}(x,y,z_D)$ is the complex amplitude of one of the polarization components. By propagating and storing images at a set of distances, a 3D volume of the imaged scene can be rendered around the plane $z_D$. The 3D reconstruction is performed for both polarization components to yield a polarization-resolved volume. The reconstruction assumes that only single-scattered light is recorded, which is a valid approximation when a sparse particle field is observed. To estimate the axial position, the intensity along the optical axis is extracted and fitted to a Gaussian model for each particle. The position of the maximum of the fit is taken as a first estimate of the particle's axial position. A set of particle positions (x, y, z) is then obtained for a single camera. Further, for each particle found, a polarization ratio angle can be determined, defined as
$$\beta=\arctan\!\left(\frac{|\hat{E}_\parallel(x,y,z)|}{|\hat{E}_\perp(x,y,z)|}\right),\tag{10}$$
where x, y, and z are the estimated positions of each particle. The idea of expressing the polarization by an angle is that it can more easily be related to the illumination polarization. The angle ranges between 0° and 90°, where 0° means linear polarization in the perpendicular direction and 90° means linear polarization in the parallel direction. The reason for using only the intensity of the two polarization components, and not also the phase, is that it can be hard to obtain phase stability for the reference waves. This instability means that $\alpha_\parallel$ and $\alpha_\perp$ vary with time, and it is hence not possible to estimate the full state of polarization.
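As an illustration of the processing chain described above, the following Python sketch extracts one off-axis lobe from a hologram, refocuses it with the angular spectrum method of Eq. (9), and evaluates the polarization ratio angle of Eq. (10). It is a minimal sketch under stated assumptions (the lobe centers, the crop size, and the effective pixel pitch in object space are treated as known calibration inputs), not the authors' implementation.

```python
import numpy as np

def extract_lobe(hologram, center, half_size):
    """Demodulate one off-axis interference lobe: crop it from the shifted
    spectrum, move it to the origin (removing the carrier tilt), and return
    the complex amplitude U = E_s R* of Eqs. (5)-(6), still scaled by the
    reference amplitude. `center` = (row, col) of the lobe in the spectrum."""
    H = np.fft.fftshift(np.fft.fft2(hologram))
    ny, nx = H.shape
    cy, cx = center
    lobe = np.zeros_like(H)
    lobe[ny // 2 - half_size:ny // 2 + half_size,
         nx // 2 - half_size:nx // 2 + half_size] = \
        H[cy - half_size:cy + half_size, cx - half_size:cx + half_size]
    return np.fft.ifft2(np.fft.ifftshift(lobe))

def angular_spectrum_propagate(E, dz, wavelength, pixel_pitch):
    """Refocus the complex field E by a distance dz (Eq. (9)).
    pixel_pitch is the effective pitch in object space (detector pitch
    divided by the magnification)."""
    ny, nx = E.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    fz2 = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    fz = np.sqrt(np.maximum(fz2, 0.0))
    kernel = np.exp(1j * 2 * np.pi * fz * dz)
    kernel[fz2 < 0] = 0.0          # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(E) * kernel)

def polarization_ratio_angle(E_par, E_perp):
    """Polarization ratio angle beta of Eq. (10), in degrees."""
    return np.degrees(np.arctan2(np.abs(E_par), np.abs(E_perp)))
```

In practice, each of the two lobes marked in Fig. 2(b) would be passed through extract_lobe and propagated to the same set of planes before β is evaluated at each estimated particle position.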


Fig. 2. Polarization-resolved reference waves. (a) Configuration of the reference waves in the aperture plane; the arrows indicate the polarization orientation of each reference wave. (b) Corresponding spatial frequency content of a polarization-resolved hologram. In this example, a square aperture is used, thus the square lobes in the spatial frequency domain.


B. Dual-View Holography

A concept sketch of the particle elongation in the reconstruction is shown in Fig. 3. Two holographic systems image the particles. The 3D reconstruction of the intensity is cigar-shaped, with the elongation along the optical axis. This elongation is due to the limitation the NA imposes on the system, and the uncertainty in the positioning along the optical axis is increased correspondingly. With two perpendicular cameras, the combined information does not suffer from this elongation and increased uncertainty, as illustrated in Fig. 3. The two views also make it possible to record the polarization ratio angle β from two different directions. From each camera, a set of particle positions is obtained. Consider the sets $(x_1^{(1)},y_1^{(1)},z_1^{(1)},\beta_1^{(1)})$ and $(x_2^{(2)},y_2^{(2)},z_2^{(2)},\beta_2^{(2)})$ generated from camera 1 and camera 2, respectively. The superscripts denote in which coordinate system the set is expressed. Hence, to start with, the coordinates are expressed in the local coordinate system of each camera. The coordinate system of camera 1 is chosen as the global coordinate system, and the coordinates detected by camera 2 need to be mapped to this coordinate system. This transformation is done by a mapping function g as follows:

$$g:(x_2^{(2)},y_2^{(2)},z_2^{(2)})\rightarrow(x_2^{(1)},y_2^{(1)},z_2^{(1)}),\tag{11}$$
where the coordinates from camera 2 are now expressed in the coordinate system of camera 1, chosen as the global coordinate system. Ideally, the mapping function g is only a rotation and a translation; however, in practice, there might also be aberrations present. Let $(\tilde{x}_1^{(1)},\tilde{y}_1^{(1)},\tilde{z}_1^{(1)})$ and $(\tilde{x}_2^{(2)},\tilde{y}_2^{(2)},\tilde{z}_2^{(2)})$ be the positions with the aberrations present. Two additional mapping functions are then needed to compensate for the aberrations as follows:
$$f_1:(\tilde{x}_1^{(1)},\tilde{y}_1^{(1)},\tilde{z}_1^{(1)})\rightarrow(x_1^{(1)},y_1^{(1)},z_1^{(1)}),\tag{12}$$
$$f_2:(\tilde{x}_2^{(2)},\tilde{y}_2^{(2)},\tilde{z}_2^{(2)})\rightarrow(x_2^{(2)},y_2^{(2)},z_2^{(2)}),\tag{13}$$
which map the coordinates to an aberration-free domain. The translation from the mapping g can also be incorporated into f1 and f2, reducing g to a pure rotation. For now, the mapping functions f1 and f2 are treated as general functions; in Section 3.B, they are determined for the experimental setup used. Given two sets of coordinates expressed in the same coordinate system, $(x_1^{(1)},y_1^{(1)},z_1^{(1)})$ and $(x_2^{(1)},y_2^{(1)},z_2^{(1)})$, a pairing is performed. Previously reported methods have used the reconstructed data to truncate the elongated particles [15], an approach that requires the reconstructed volumes from both cameras to be interpolated on a unified grid. We instead position the particles on each camera first and then unify the views. To find the same particle in both views, a particle correlation (PC) is performed. A PC score is computed between the two cameras as
$$\mathrm{PC}=\exp\!\left(-\left[\left(\frac{x_1^{(1)}-x_2^{(1)}}{\sigma_x}\right)^2+\left(\frac{y_1^{(1)}-y_2^{(1)}}{\sigma_y}\right)^2+\left(\frac{z_1^{(1)}-z_2^{(1)}}{\sigma_z}\right)^2\right]\right),\tag{14}$$
where $\sigma_x$, $\sigma_y$, and $\sigma_z$ are typical lengths in each direction, proportional to the uncertainty in that direction. The typical lengths are chosen such that $\sigma_y<\sigma_x=\sigma_z$. The reason for this choice is that the y direction is a lateral coordinate on both cameras, and hence the uncertainty in this direction should be small, whereas each of the other two directions is the depth direction of one of the cameras. Keeping $\sigma_x$ and $\sigma_z$ large allows the calibration to find particles that are far apart in the depth directions, which is useful in the calibration, since there might be a large misalignment. For each particle detected on camera 1, the PC score is computed for all particles detected by camera 2, and the particle with the highest score is chosen as the match. Pairings whose maximum PC score falls below a set threshold are discarded. When two particles have been paired, the lateral positions from each view are used to compute the 3D position. The global x position is hence set to the local x position from camera 1, the global z position is set to the mapped coordinate $z_2^{(1)}$, and the global y coordinate is taken as the mean of the two y positions.
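A minimal Python sketch of this pairing step is given below, assuming the camera-2 positions have already been mapped into camera 1's coordinate system. The σ values and the threshold shown are placeholders for illustration (the text only specifies σ_y < σ_x = σ_z and, during calibration, a threshold of 0.1), and the greedy one-to-one matching is a simplification of the procedure described above.

```python
import numpy as np

def pair_particles(p1, p2, sigma=(200.0, 20.0, 200.0), threshold=0.1):
    """Pair particles from camera 1 (p1) with particles from camera 2 that
    have already been mapped into camera 1's coordinates (p2), using the
    PC score of Eq. (14).

    p1, p2 : (N, 3) and (M, 3) arrays of (x, y, z) positions.
    sigma  : typical lengths (sigma_x, sigma_y, sigma_z); the values here
             are illustrative only, respecting sigma_y < sigma_x = sigma_z.
    Returns a list of index pairs (i, j), one per camera-1 particle whose
    best match exceeds the threshold.
    """
    sx, sy, sz = sigma
    pairs = []
    for i, (x1, y1, z1) in enumerate(p1):
        d = (((p2[:, 0] - x1) / sx) ** 2
             + ((p2[:, 1] - y1) / sy) ** 2
             + ((p2[:, 2] - z1) / sz) ** 2)
        pc = np.exp(-d)
        j = int(np.argmax(pc))
        if pc[j] >= threshold:
            pairs.append((i, j))
    return pairs

def merge_positions(p1, p2, pairs):
    """Global coordinates for each pair: x from camera 1, z from the mapped
    camera-2 coordinate, and y as the mean of the two views."""
    return np.array([[p1[i, 0],
                      0.5 * (p1[i, 1] + p2[j, 1]),
                      p2[j, 2]] for i, j in pairs])
```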


Fig. 3. Orientation of the local coordinate systems and a visualization of how the particles are elongated along the optical axis of each camera.


For the spatial calibration, the optimization process follows the flowchart shown in Fig. 4(a). From a set of frames, the first frame from each camera is processed, and particle positions are estimated. From the two local sets of positions, a global estimate is produced as previously described. For the calibration, $\sigma_y\ll\sigma_x=\sigma_z$ in Eq. (14), since there could be a large misalignment between the two cameras. From the estimated global positions, only particles detected in two consecutive frames are kept; a particle not detected in the previous frame is discarded. For each retained particle, the difference between the global estimate and the local positions is calculated. The process is repeated until the last frame is processed, and the differences are then fitted to a third-order polynomial model to find the correction functions for the two cameras, $E_1(x,y,z)$ and $E_2(x,y,z)$, respectively. The evaluation process is shown by the flowchart in Fig. 4(b). The positions are estimated on each camera in the same way as in the calibration. In the evaluation of particle positions from measurements, the correction functions are then subtracted from the particle positions before the global estimation to form new, aberration-free coordinates as follows:

$$z_1^{(1)}(x,y)=\tilde{z}_1^{(1)}-E_1(x,y,z),\tag{15}$$
$$z_2^{(2)}(x,y)=\tilde{z}_2^{(2)}-E_2(x,y,z).\tag{16}$$
Once these corrections are done, the mapping g is just a 90° rotation around the y axis. By correcting the positions, it is possible to decrease the typical lengths along the axial directions, z and x, in the global estimation. They are, however, still larger than in the y direction. From the global positions, individual particles are tracked.
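The sketch below illustrates how such a correction surface could be fitted, using an ordinary least-squares fit of a third-order polynomial in the lateral coordinates to the per-particle depth differences. It is an assumption-laden illustration: the correction functions above are written as E(x, y, z), so the published fit may include additional terms beyond the purely lateral ones used here.

```python
import numpy as np

def fit_correction_surface(xy, dz, order=3):
    """Fit per-particle depth differences dz to a polynomial surface in the
    lateral coordinates (x, y), returning a callable correction E(x, y)
    similar in spirit to the surfaces shown in Fig. 7.

    xy : (N, 2) array of lateral particle positions.
    dz : (N,) array of depth differences (local minus global estimate).
    """
    x, y = xy[:, 0], xy[:, 1]
    # all monomials x**i * y**j with i + j <= order
    cols = [x ** i * y ** j
            for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, dz, rcond=None)

    def E(xq, yq):
        cols_q = [np.asarray(xq, float) ** i * np.asarray(yq, float) ** j
                  for i in range(order + 1) for j in range(order + 1 - i)]
        return np.column_stack(cols_q) @ coeffs

    return E

# Applying the correction (Eqs. (15)-(16)): z_corrected = z_raw - E(x, y)
```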


Fig. 4. Flowcharts for (a) calibration process and (b) evaluation process. The results E1 and E2 are used in the evaluation of measurements as shown.


3. EXPERIMENTS

A. Experimental Setup

The experimental arrangement consists of the dual-view system shown in Fig. 5. A side-scattering configuration is used to make the two views as similar as possible. Linearly polarized laser pulses are emitted from a twin-cavity, injection-seeded Nd:YAG laser (Spectron SL804T). The light is split into an illumination part and a reference part by the 50/50 beam splitter BS1. A λ/2 plate controls the polarization of the illumination light so that the linear polarization is at the angle α to the scattering plane of camera 1. The light is then directed by mirror M1 and expanded five times by the telescope T before illuminating the sample from below via mirror M2, as depicted in inset (a) of Fig. 5. The reference part from beam splitter BS1 is first reduced in intensity by a beam splitter and then split into four beams using the 50/50 beam splitters BS2, BS3, and BS4. The four beams are launched into separate polarization-maintaining fibers (Thorlabs PM-S405-XP) through fiber couplers FC1–FC4 and are used as reference waves in the imaging systems. For the fibers to be polarization maintaining, the light needs to be launched along either the fast or the slow axis of the fiber. To ensure that this is the case, the polarization is adjusted by the λ/2 plates before the fiber couplers.


Fig. 5. Sketch of the experimental setup. Sample cuvette is imaged by the dual-view polarization-resolved holographic system. M, mirrors; L, lenses; T, 5× telescope lens; BS, beam splitters; FC, fiber couplers; A, apertures; and BD, beam dump. Inset (a) shows the side view of the sample cell volume and (b) shows the configuration of the fiber ends in the aperture plate.


The sample consists of a glass cuvette containing purified water seeded with 7 μm polystyrene spherical particles. The cuvette has the dimensions 26 × 26 × 70 mm. When the particles are illuminated, they generate a scattered field in all directions. The side-scattered light is imaged by two identical telecentric systems consisting of the front lenses L1 and L2, respectively, with focal length f1 = 60 mm, apertures A1 and A2 in the back focal plane of these lenses, and rear lenses L3 and L4 with focal length f2 = 75 mm. The front focal planes of lenses L3 and L4 are at the aperture plane and the back focal planes at the CCD detectors (Imperx Bobcat 2520M). The system has a magnification of M = f2/f1 ≈ 1.25 and a numerical aperture of NA = 0.023. The total field of view is 5.7 × 5.7 mm for each camera. The NA limits both the lateral and the axial resolution. For this system, the lateral resolution becomes λ/(2NA) ≈ 11.5 μm, and the axial resolution is 2λ/NA² ≈ 2000 μm. The tips of the polarization-maintaining fibers are connected to the aperture plates, as shown in inset (b). The light is linearly polarized, and the tips are rotated so that the polarization components are aligned vertically and horizontally, respectively, for each system. On the detector, the scattered light interferes with the reference waves to produce a hologram as described by Eq. (4). An effort is made to match the optical paths of the illumination and the reference waves to ensure good coherence.
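As a quick check of the quoted numbers, the following snippet reproduces the magnification and resolution estimates. The wavelength is an assumption: the text specifies an Nd:YAG laser but not the wavelength, and 532 nm (frequency doubled) is assumed here because it reproduces the quoted figures.

```python
# Back-of-the-envelope check of the optical parameters quoted above.
# Assumption: frequency-doubled Nd:YAG wavelength of 532 nm.
wavelength = 0.532e-6            # m
NA = 0.023
f1, f2 = 60e-3, 75e-3            # m

magnification = f2 / f1                       # ~1.25
lateral_res = wavelength / (2 * NA)           # ~11.6 um
axial_res = 2 * wavelength / NA ** 2          # ~2000 um

print(f"M = {magnification:.2f}")
print(f"lateral resolution ~ {lateral_res * 1e6:.1f} um")
print(f"axial resolution ~ {axial_res * 1e6:.0f} um")
```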

B. Calibrations

To get accurate measurements from the system, it must be ensured that a single particle can be found on both cameras and correctly mapped to a global coordinate system. It is also necessary to make sure that the estimated polarization corresponds to the actual polarization of the imaged light. To this end, two types of calibrations are performed. The first ensures that the reference waves are linearly polarized and correctly oriented so that the detection is made as described by Eqs. (2)–(4). For the reference waves to be linearly polarized at the end of the polarization-maintaining fibers, they must be launched along either the slow or the fast axis of the fiber. This is achieved by rotating the λ/2 plates before the fiber couplers until the output is linearly polarized. Thereafter, the fiber tips are rotated in the aperture planes until the reference waves on each camera are aligned horizontally and vertically, respectively (see inset in Fig. 5). To validate that the system is polarization resolved, the cuvette is replaced by a mirror and a λ/2 plate; the λ/2 plate is rotated in 5° increments to control the illumination polarization α. Five measurements are taken for each step, and the polarization ratio angle is calculated using the mean value in the image. The resulting mean polarization angle β is shown in Fig. 6 for both cavities of the laser, along with the theoretical values. The estimated standard deviation is 0.6°. The measured β follows the theoretical values well. At the maximum and minimum, the measured value deviates slightly because of the bias level in the readout.


Fig. 6. Polarization calibration. Resulting polarization angle β for the two cavities as a function of the illumination polarization α. (a), (b) Result for camera 1; (c), (d) result for camera 2. The standard deviation is 0.6°.


The second calibration consists of determining the mapping functions g, f1, and f2 in Eqs. (11)–(13) for the system. To do so, the cuvette is filled with purified water containing a very low concentration of seeding particles. A set of 200 holograms is captured by both cameras, and the particles are positioned on each camera separately. In total, about 1200 particle positions, distributed over the whole measurement volume, are obtained on each camera over the set of 200 recordings. This is followed by the optimization process described in Fig. 4(a). For this calibration, the PC threshold in the global estimation is set to 0.1. The g mapping is found to be a 90° rotation followed by a translation of [405, 127, 142] μm. This is the average translation over the whole imaged volume. After subtracting this mean translation from the data, the mappings f1 and f2 can be estimated. From the polynomial fit, it is found that there are mainly two types of terms. The first is a linear error in the depth direction due to a small mismatch of the magnification and the refractive index of the water in the cuvette. These errors relate the reconstructed distance to the distance in object space as described by Eq. (9). The second type of error is the curvature of field aberration of each camera. The resulting E1(x, y) and E2(x, y) are shown in Fig. 7. It is clear that curvature of field is present in the recordings and that the total depth difference over the detector, due to this aberration, is about 1500 μm on both cameras. For these calibrations, the illumination polarization is linear at 45°, and only the polarization component parallel to the scattering plane of each camera is used in the positioning. Then only the seeding particles are detected, and the dense sub-micrometer particles are ignored.


Fig. 7. Estimated field of curvature on camera 1 and camera 2, respectively, from the correction functions E1 and E2, respectively. The lateral field of view is 5.4 mm for each camera.


C. Measurements

Some simple measurements are made to demonstrate the performance of the system and how the particle tracking is affected by the illumination polarization. The cuvette is filled with a new batch of purified water seeded with 7 μm particles. The mix is stirred before a set of 200 holograms is captured by each camera. The sampling frequency is 5 Hz, making the total time 40 s for each measurement. In total, three sets of holograms with different polarization orientations of the illumination light are acquired, with illumination polarization directions α = 0°, α = 45°, and α = 90°. For each recording, the particle positions and polarization ratio angles β are evaluated. The positions are then linked into particle tracks.

4. RESULTS AND DISCUSSION

The holograms from the measurements are reconstructed as outlined in the previous sections. In Fig. 8, the results reconstructed from 200 frames are shown for detection with camera 1 only, camera 2 only, and for the combined global detection, for the three illumination polarizations α = 0°, α = 90°, and α = 45°. The particle positions are connected between frames using a simple nearest-neighbor tracker [29]. The reconstructed volume is limited by the size of the detectors, making the total volume 5.4 × 5.4 × 5.4 mm. It is seen that camera 2 detects more particles than camera 1 for α = 0°. For α = 90°, it is the other way around; more particles are detected on camera 1 than on camera 2. The reason for this is how the illumination polarization aligns with the scattering plane of each camera. When α = 0°, the polarization is parallel to the scattering plane of camera 1, while for camera 2, the polarization is perpendicular. In addition to the seeded microparticles in the sample, sub-micrometer particles naturally exist in the water. These particles scatter light polarized perpendicular to the scattering plane much more strongly and are, hence, detected by only one of the cameras. It is not possible to reconstruct the tracks well using only the coordinates from camera 2 for α = 0°, or from camera 1 for α = 90°, since the particle concentration is too dense when the sub-micrometer particles are included; the resulting tracks are therefore inaccurate. When combining the coordinates from the two views, only points that are present in both cameras are kept, so all the sub-micrometer particles are removed. Comparing the particle tracks from camera 1 and the global coordinates, it is clear that the same tracks appear in both plots. It is observed that when using the unified coordinates, the uncertainty along the optical axis is reduced, and the rippling tracks are replaced with smooth ones. To quantify the improvement, each track is filtered with a moving average filter of size 3 and then compared to the original track. It is found that the standard deviation is reduced, on average, from 160 μm for detection in a single view to 43 μm in the global coordinates. At first, the dual-view setup may seem redundant, since a single-view holographic system also provides 3D positions. However, the depth accuracy can still be poor. For systems with large NA and high magnification, this is less of a problem, since the depth accuracy improves with NA². However, for a system with low NA, as in this case, the accuracy in depth is considerably worse than in the lateral directions. This is typical for systems with a large field of view and low magnification.
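The standard-deviation comparison described above could be implemented as in the following sketch, which smooths each track with a three-point moving average and takes the standard deviation of the residuals as a measure of the positional scatter. This is an illustrative reading of the text, not the authors' evaluation code.

```python
import numpy as np

def track_scatter(track, window=3):
    """Estimate the positional scatter of one particle track by comparing
    it to a moving-average-filtered copy (window of 3, as in the text) and
    returning the standard deviation of the residuals.

    track : (N, 3) array of (x, y, z) positions along the track.
    """
    kernel = np.ones(window) / window
    smoothed = np.column_stack(
        [np.convolve(track[:, k], kernel, mode="same")
         for k in range(track.shape[1])]
    )
    # skip the filter edges, where the moving average is biased
    edge = window // 2
    residuals = (track - smoothed)[edge:len(track) - edge]
    return residuals.std()

# Averaging track_scatter over all single-view tracks and over all globally
# estimated tracks would give figures comparable to the 160 um and 43 um
# quoted above.
```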


Fig. 8. Particle tracking using both global and single camera coordinates for different illumination polarizations. Each row is for a certain α, while each column is for detection with either camera 1 or camera 2 or global coordinates. All tracks for each row are estimated from the same set of holograms. It is observed that far more particles are detected in (b) and (d) than in (a) and (e), respectively.


The polarization ratio angle β for the tracked particles is calculated and shown in Fig. 9. The figure is structured similarly to Fig. 8, where each row corresponds to a different illumination polarization. The first column shows β1 detected on camera 1, the second column β2 from camera 2, and the third and fourth columns show β1 and β2 for the global positions, respectively. As expected, when the illumination polarization is rotated 90°, the detected polarization changes from around zero to about 90° for camera 1 and from around 90° to almost zero for camera 2, as shown in the first two rows. The difference between the number of particles detected on the two cameras also becomes apparent. It is also clear that the particles preserve the polarization with respect to the scattering plane. This becomes even clearer when looking at the results for illumination angle α = 45°, as shown in the third row in Fig. 8. When both components are present in the illumination, the sub-micrometer particles scatter towards both cameras, and more particles are detected. These particles scatter much more strongly in one of the directions, and therefore the results on both cameras are around 80°. It is then obvious that the choice of illumination polarization greatly affects the results when the recording is polarization resolved. It is also possible to keep α = 45° and use only one of the polarization components in the reconstruction to ensure that sub-micrometer particles are removed. On the other hand, it would be possible to do flow measurements without seeding particles by utilizing the perpendicular component only; this is, however, outside the scope of this paper. It is clear that there is a relation between the detected β distribution and the size of the particles. The relationship between size and detected polarization depends on how light is scattered from an individual particle. Relating the detected polarization to size is the scope of future work, as is utilizing the benefit of having two cameras that detect β from two different directions. The idea behind this system is that it should be able to characterize general particles, not only spheres. For a general particle, the ability to record β from multiple directions could be related to, e.g., shape.


Fig. 9. Polarization ratio angle β for the particles in Fig. 8. The first column is all the particles recorded by camera 1, the second column is all particles recorded by camera 2, and the third and fourth columns are particles recorded by camera 1 and camera 2, respectively, that are successfully mapped to the global coordinate system. Each row corresponds to a specific illumination polarization; (a)–(d) α=0°, (e)–(h) α=90°, and (i)–(l) α=45°.


Depending on the application, different α may be desirable. If the goal is to measure only larger particles, like the seeding particles used in this paper, α should be set to 45°, and only the component parallel to the scattering plane of each camera should be used for the positioning. The smaller particles scatter much more weakly in this component and will be excluded in the reconstruction. This configuration is used for the calibrations in this paper. For spherical particles in general, α can be kept at 45° to have both polarization components present. In the reconstruction, different choices can then be made on which polarization component to use. Spherical particles do not generate cross-polarization in the scattered light [26], so having α = 45° and recording both components in a single shot is no different from making two recordings with α = 0° and α = 90°. Comparisons between the polarization components could then be made to select which particles to track. Smaller particles could, for instance, be identified as those detected in only one of the polarization components. Non-spherical particles do generate cross-polarization in the scattered light, and for detection of such particles, the illumination polarization could be set to either α = 0° or α = 90°. Detection of a cross-polarized component would then indicate a non-spherical particle [27].

It is clear from the experiments that the calibrations are of great importance for obtaining good results. In practice, it is virtually impossible to adjust the image planes to be perfectly aligned in a global coordinate system. There will also, in general, always be some kind of aberration present in an imaging system that needs to be compensated for. In the present system, curvature of field is the most significant aberration, up to 1500 μm over the field of view. The spatial calibration is performed with sparse particles, since this is a convenient way to generate good-quality side-scattered light in our setup. A calibration using a needle tip was also attempted, but the quality was insufficient. Holograms formed from particle scattering have intrinsic aberrations that depend on the scattering direction [30]. By performing the calibration using particles, these intrinsic aberrations are included in the calibration. The polarization calibration of the reference waves is clearly necessary; without it, it would not be possible to conclude what content is encoded in each lobe. From the result in Fig. 6, the standard deviation is below 1°, which indicates that the polarization ratio angle β is very stable and repeatable.

It is desirable that $\alpha_\parallel$ and $\alpha_\perp$ in Eqs. (7) and (8) are constants. In practice, this can be hard to achieve, especially when optical fibers are used for the reference waves, since tension and small movements may induce stress in the fibers and thereby slightly change their refractive index. The phase difference between the two polarization components is therefore inaccurate and varies with time, which means that the complete state of polarization cannot be estimated. This is, however, not a problem, since scattering theory for particles focuses mainly on the distribution of intensity between the two components. Improving the setup to obtain constant $\alpha_\parallel$ and $\alpha_\perp$ could be the scope of future work. One practical consideration when working with the system is that the emitted pulses from the Nd:YAG laser may vary in energy and coherence. The variation in energy does not affect the results, since the two polarization components change proportionally and the ratio between them is unchanged. However, the change in coherence between pulses due to poor seeding needs to be handled by ensuring that the optical path lengths of the two reference waves on each camera are matched. If this is not done correctly, a pulse with poor coherence length will interfere better in one of the lobes, which would appear as a false change in polarization of the scattered light on the detector. This was observed when testing the system. If the reference waves are adjusted correctly, the change in interference is proportional in each interference lobe. Additionally, the optical path length of the illumination light should ideally be matched to that of the reference waves. If perfectly matched, a change in coherence would not affect the interference on the detector; this is, however, not always possible in a practical setup, but since both reference waves are matched to each other, the ratio is unchanged.

5. CONCLUSION

In this paper, a novel system for particle tracking is presented. The system consists of two polarization-resolved digital holographic systems that image the same volume from two perpendicular directions. The system is calibrated to ensure correct polarization measurements and correct mapping of particle positions. The calibrations show that it is possible to handle misalignment in magnification as well as curvature of field aberrations and still be able to unify the two views. It is found that by using a dual-view configuration, it is possible to decrease the uncertainties along the optical axis, which is typical for low-NA systems. The system also makes it possible to record the polarization ratio angle β from two views. From these calibrations, it can be assured that the measured polarization is, in fact, correct and that the unified position is created from the same particle from both views. These recordings show that when the illumination polarization is aligned either parallel or perpendicular to the scattering plane, the polarization is preserved with respect to the scattering plane, as expected from Mie scattering theory. This system is well suited for imaging particles, since it records a better representation of the scattered light as compared to conventional digital holographic systems. It is shown that by utilizing the polarization, it is possible to distinguish between the seeding particles and sub-micrometer particles, well below the resolving power of the system. The results can further be combined with scattering theory to distinguish between different types of particles based on their polarization response and how they act under different flow conditions.

Funding

Vetenskapsrådet (VR) (621-2014-4906).

Acknowledgment

Portions of this work were presented at the OSA Digital Holography and 3-D imaging Topical meeting in 2019, Th2A.5, Polarization Resolved Dual-View Holographic System for Investigation of Microparticles.

REFERENCES

1. P. Memmolo, L. Miccio, M. Paturzo, G. D. Caprio, G. Coppola, P. A. Netti, and P. Ferraro, “Recent advances in holographic 3D particle tracking,” Adv. Opt. Photon. 7, 713–755 (2015). [CrossRef]  

2. J. Zakrisson, S. Schedin, and M. Andersson, “Cell shape identification using digital holographic microscopy,” Appl. Opt. 54, 7442–7448 (2015). [CrossRef]  

3. F. Merola, P. Memmolo, L. Miccio, R. Savoia, M. Mugnano, A. Fontana, G. D’Ippolito, A. Sardo, A. Iolascon, A. Gambale, and P. Ferraro, “Tomographic flow cytometry by digital holography,” Light Sci. Appl. 6, e16241 (2017). [CrossRef]  

4. K. T. Chan and Y. J. Li, “Pipe flow measurement by using a side-scattering holographic particle imaging technique,” Opt. Laser Technol. 30, 7–14 (1998). [CrossRef]  

5. H. J. Byeon, K. W. Seo, and S. J. Lee, “Precise measurement of three-dimensional positions of transparent ellipsoidal particles using digital holographic microscopy,” Appl. Opt. 54, 2106–2112 (2015). [CrossRef]  

6. S.-H. Lee, Y. Roichman, G.-R. Yi, S.-H. Kim, S.-M. Yang, A. van Blaaderen, P. van Oostrum, and D. G. Grier, “Characterizing and tracking single colloidal particles with video holographic microscopy,” Opt. Express 15, 18275–18282 (2007). [CrossRef]  

7. J. Fung, K. E. Martin, R. W. Perry, D. M. Kaz, R. McGorty, and V. N. Manoharan, “Measuring translational, rotational, and vibrational dynamics in colloids with digital holographic microscopy,” Opt. Express 19, 8051–8065 (2011). [CrossRef]  

8. Y. S. Bae, J. I. Song, and D. Y. Kim, “Volumetric reconstruction of Brownian motion of a micrometer-size bead in water,” Opt. Commun. 309, 291–297 (2013). [CrossRef]  

9. A. Wang, T. G. Dimiduk, J. Fung, S. Razavi, I. Kretzschmar, K. Chaudhary, and V. N. Manoharan, “Using the discrete dipole approximation and holographic microscopy to measure rotational dynamics of non-spherical colloidal particles,” J. Quant. Spectrosc. Radiat. Transfer 146, 499–509 (2014). [CrossRef]  

10. N. Verrier, C. Fournier, and T. Fournel, “3D tracking the Brownian motion of colloidal particles using digital holographic microscopy and joint reconstruction,” Appl. Opt. 54, 4996–5002 (2015). [CrossRef]  

11. N. A. Buchmann, C. Atkinson, and J. Soria, “Ultra-high-speed tomographic digital holographic velocimetry in supersonic particle-laden jet flows,” Meas. Sci. Technol. 24, 024005 (2013). [CrossRef]  

12. F. C. Cheong, B. J. Krishnatreya, and D. G. Grier, “Strategies for three-dimensional particle tracking with holographic video microscopy,” Opt. Express 18, 13563–13573 (2010). [CrossRef]  

13. J. Öhman and M. Sjödahl, “Off-axis digital holographic particle positioning based on polarization-sensitive wavefront curvature estimation,” Appl. Opt. 55, 7503–7510 (2016). [CrossRef]  

14. J. Öhman and M. Sjödahl, “Improved particle position accuracy from off-axis holograms using a Chebyshev model,” Appl. Opt. 57, A157–A163 (2018). [CrossRef]  

15. J. Soria and C. Atkinson, “Towards 3C-3D digital holographic fluid velocity vector field measurement—tomographic digital holographic PIV (Tomo-HPIV),” Meas. Sci. Technol. 19, 074002 (2008). [CrossRef]  

16. H. Byeon, T. Go, and S. J. Lee, “Digital stereo-holographic microscopy for studying three-dimensional particle dynamics,” Opt. Lasers Eng. 105, 6–13 (2018). [CrossRef]  

17. J. Gao and J. Katz, “Self-calibrated microscopic dual-view tomographic holography for 3D flow measurements,” Opt. Express 26, 16708–16725 (2018). [CrossRef]  

18. T. Colomb, P. Dahlgren, D. Beghuin, E. Cuche, P. Marquet, and C. Depeursinge, “Polarization imaging by use of digital holography,” Appl. Opt. 41, 27–37 (2002). [CrossRef]  

19. T. Colomb, E. Cuche, F. Montfort, P. Marquet, and C. Depeursinge, “Jones vector imaging by use of digital holography: simulation and experimentation,” Opt. Commun. 231, 137–147 (2004). [CrossRef]  

20. D. Khodadad, E. Amer, P. Gren, E. Melander, E. Hällstig, and M. Sjödahl, “Single-shot dual-polarization holography: measurement of the polarization state of a magnetic sample,” Proc. SPIE 9660, 96601E (2015). [CrossRef]  

21. Y. Kim, J. Jeong, M. W. Kim, and Y. Park, “Polarization holographic microscopy for extracting spatio-temporally resolved Jones matrix,” Opt. Express 20, 9948–9955 (2012). [CrossRef]  

22. W. Yang, A. B. Kostinski, and R. A. Shaw, “Phase signature for particle detection with digital in-line holography,” Opt. Lett. 31, 1399–1401 (2006). [CrossRef]  

23. X. Liu, Y. Yang, L. Han, and C.-S. Guo, “Fiber-based lensless polarization holography for measuring Jones matrix parameters of polarization-sensitive materials,” Opt. Express 25, 7288–7299 (2017). [CrossRef]  

24. L. Han, Z.-J. Cheng, Y. Yang, B.-Y. Wang, Q.-Y. Yue, and C.-S. Guo, “Double-channel angular-multiplexing polarization holography with common-path and off-axis configuration,” Opt. Express 25, 21877–21886 (2017). [CrossRef]  

25. Q.-Y. Yue, Z.-J. Cheng, L. Han, Y. Yang, and C.-S. Guo, “One-shot time-resolved holographic polarization microscopy for imaging laser-induced ultrafast phenomena,” Opt. Express 25, 14182–14191 (2017). [CrossRef]  

26. C. F. Bohren and D. R. Huffman, Absorption and Scattering of Light by Small Particles (Wiley, 2008).

27. H. Zhang, M. Zhai, J. Sun, Y. Zhou, D. Jia, T. Liu, and Y. Zhang, “Discrimination between spheres and spheroids in a detection system for single particles based on polarization characteristics,” J. Quant. Spectrosc. Radiat. Transfer 187, 62–75 (2017). [CrossRef]  

28. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996).

29. J.-Y. Tinevez, “Simple tracker,” https://se.mathworks.com/matlabcentral/fileexchange/34040-simple-tracker.

30. Y. Pu and H. Meng, “Intrinsic aberrations due to Mie scattering in particle holography,” J. Opt. Soc. Am. A 20, 1920–1932 (2003). [CrossRef]  



