Optica Publishing Group

Centroid-position-based autofocusing technique for Raman spectroscopy

Open Access

Abstract

In Raman spectroscopy, it is crucial to focus the laser on the sample in order to guarantee the intensity and repeatability of the characteristic peaks, a task known as autofocusing. In this paper, we propose a novel low-cost scheme based on the careful placement of the laser source and the image sensor. We confirm in theory the feasibility of monitoring the focus status through the centroid position of the laser spot’s image (CPSI). Both the simulation and experimental results illustrate that the distance-ordinate function is similar in shape to a logarithmic curve, which not only helps to shorten the autofocus time but also achieves a sub-decimeter measuring range and micrometer resolution near the focal point. Meanwhile, we discuss in detail how to obtain the desired performance by adjusting the extrinsic camera parameters and how to overcome the disturbances of noise, ambient light and non-normal incidence. A handheld Raman spectrograph without built-in autofocus uses this method to autofocus on alcohol in a centrifuge tube successfully, and the spectral reproducibility is improved. Our results may pave the way to a novel autofocus approach for in vivo Raman mapping.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Raman spectroscopy has been widely used in numerous scientific studies and commercial applications ranging from the characterization of materials to the identification of minerals [1–3]. In the last twenty years, the methodology of Raman spectroscopy has evolved from verifying material fingerprints to approaches combined with mathematical statistics [4, 5]. This innovation has opened a new field in the early diagnosis of colorectal cancer, gastric cancer and liver cancer based on the spectra of nucleic acids, proteins and lipids of tissue [6–10]. Recently, there has been a very promising trend to visualize Raman spectra as multi-dimensional maps which offer critical information on the surface profile and the large-area distribution of matter [11–13]. In clinical diagnosis, the morphology of the gastric mucosa observed by endoscopy and the distribution of radioactive elements detected by emission computed tomography are used for disease screening [14, 15].

A constant scanning distance equal to the focal length is vital to Raman mapping, since it only makes sense to compare the amplitudes of the characteristic peaks in different scanned regions if the focus status does not influence the intensity. For the weak Raman signal, it is favorable to collect the peak value at the focal point [16]. Beyond that, the constant distance simplifies the 3D reconstruction of the sample surface, because the profile of the scanned area can be obtained by adding to the trajectory of the laser source a vector whose norm equals the scanning distance [17]. Finally, the small size of the focused spot improves the spatial resolution of Raman mapping.

While several means have been put forward to address the autofocus problem, to the best of our knowledge, most reported approaches are based on the microscope, and little is known about how to deal with more troublesome situations [18–21]. The astigmatic method and the centroid method use a sensor to capture the image of the spot, whose shape changes with the focus status. Besides the beam splitter, these two methods require an astigmatic lens and a knife edge, respectively, which reduces the total transmittance and increases the structural complexity [22, 23]. While they achieve good results when the incident light is normal to the area of interest, neither method addresses the non-normal case. The sharpness-based method and the intensity-based method spend considerable time optimizing the step length to search for the signal peak and are unsuitable for autofocus in vivo [24–26]: the former because the solid-color background of human tissue would degrade its performance, the latter because not all areas carry the Raman signal of a tumor marker. Special attention should be paid to alignment error when using the astigmatic method, which needs the focal point of the laser source to lie in the camera’s focal plane, and the sharpness-based method, which requires the focused spot to appear at the center of the quadrant photodiode [27, 28].

Here, we present a novel CPSI method to achieve autofocus for the handheld Raman spectrograph, derived from two simplified mathematical models: the pinhole camera model and the Gaussian beam model [29, 30]. We restrain the interference factors (noise, non-normal incidence and ambient light) to obtain decent reproducibility and local linearity in the experiment. Due to the logarithm-shaped relation between the sampling distance and the centroid’s ordinate, the working range is at least 8000 times (4 cm) as large as the resolution (5 μm) near the focal point. The experimental results also indicate that the effect of non-normal incidence on our method is inevitable but minimal when the spot is in focus. The working range and the tolerance to non-normal incidence are two important indicators for in vivo Raman mapping: considering the volume and shape of a normal adult’s stomach, the laser source needs to move several centimeters to focus, and it is hard to keep the incident light normal to the surface. Finally, the experiment of autofocusing on alcohol in a centrifuge tube further demonstrates that our system can autofocus on curved objects and that the repeatability of the characteristic peaks is improved after autofocus.

2. Principle

All the coordinate systems in this paper are right-handed. Let (x,y,z) depict the position of a target in the world coordinate system whose origin is the laser source, as shown in Fig. 1. The direction where the beam propagates forward is defined as the x-axis positive direction. Its z-axis positive direction keeps consistency with the direction of the vector normal to the lower surface of the laser source.


Fig. 1 Front and left view of the proposed prototype. The arrows have specified the positive direction of yaw, pitch, and roll angles. v denotes the direction of the spot’s movement.


Let (x*,y*,z*) depict the spatial distribution of the target in the camera’s field of view (FOV). We specify the camera’s optical axis as the positive x*-axis. The z*-axis and the y*-axis are parallel to the width and the length of the image sensor, respectively. For the sake of conciseness, the angle between the positive x*-axis and the positive x-axis is taken to be acute. The relationship between (x,y,z) and (x*,y*,z*) is given by [31]

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix}=R(\delta,\alpha,\gamma)\begin{pmatrix} x^{*} \\ y^{*} \\ z^{*} \end{pmatrix}+T(\sigma_x,\sigma_y,\sigma_z), \tag{1}$$
where $R(\delta,\alpha,\gamma)$ represents the camera’s orientation with roll, pitch and yaw angles $\delta$, $\alpha$ and $\gamma$, and $T(\sigma_x,\sigma_y,\sigma_z)$, describing the camera’s translation, is composed of $\sigma_x$, $\sigma_y$ and $\sigma_z$.

Let (i,j) depict the position of a pixel in the picture coordinate system whose origin is the pixel in the first column of the first row. The positive i-axis and the positive z*-axis should be in the same direction, as should the positive j-axis and the positive y*-axis. The relationship between (i,j) and (x*,y*,z*) is given by

$$\begin{pmatrix} i \\ j \end{pmatrix}=\begin{pmatrix}\dfrac{a\left(z^{*}+x^{*}\tan\theta_i\right)}{2x^{*}\tan\theta_i}\\[2mm]\dfrac{b\left(y^{*}+x^{*}\tan\theta_j\right)}{2x^{*}\tan\theta_j}\end{pmatrix},\qquad |z^{*}|\le x^{*}\tan\theta_i,\;\; |y^{*}|\le x^{*}\tan\theta_j, \tag{2}$$
where a is the number of rows and b is the number of columns, and θi and θj are half of the camera’s angle of view measured vertically and horizontally, respectively.
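As a quick check of Eq. (2), the projection can be sketched in a few lines (a hypothetical helper in Python rather than the paper’s MATLAB; the default parameter values follow the simulation settings of Section 3):

```python
import math

def project_to_pixel(x_s, y_s, z_s, a=480, b=640,
                     theta_i=math.radians(50), theta_j=math.radians(50)):
    """Map a point (x*, y*, z*) in the camera frame to a pixel (i, j) via
    Eq. (2); valid only while |z*| <= x* tan(theta_i) and |y*| <= x* tan(theta_j).
    a, b are image rows/columns; theta_i, theta_j are the half angles of view."""
    ti, tj = math.tan(theta_i), math.tan(theta_j)
    assert abs(z_s) <= x_s * ti and abs(y_s) <= x_s * tj, "outside the FOV"
    i = a * (z_s + x_s * ti) / (2 * x_s * ti)
    j = b * (y_s + x_s * tj) / (2 * x_s * tj)
    return i, j

# a point on the optical axis lands at the image centre
i, j = project_to_pixel(100.0, 0.0, 0.0)
print(round(i), round(j))   # 240 320
```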

It is still impossible to determine where the target is from the picture alone. While the transformation from the world coordinate system to the picture coordinate system is clear, there are infinite solutions for (x*,y*,z*) given a group of (i,j) in Eq. (2). Thus, the complete three-dimensional spatial information about the target cannot be extracted from a two-dimensional picture unless the target conforms to a certain spatial constraint. For devices concerned with the focus status, the spot itself must lie on the path of the incident light. Owing to the sub-millimeter waist radius, the irradiated surface can be treated as a plane. The boundary of the spot in the world coordinate system is determined by the path and the plane. Thus, we have

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix}=\begin{pmatrix} k_2 & k_1 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}\begin{pmatrix} r(x)\sin\xi \\ r(x)\cos\xi \\ x_0 \end{pmatrix},\qquad \xi\in(0,2\pi), \tag{3}$$
where ξ is the parameter describing the boundary, (1,k1,k2) is the normal vector of the surface, x0 is the distance from the origin to the point on the surface where the electric field magnitude is largest, and r(x) is the beam radius given by
$$r(x)=\omega_0\sqrt{1+\frac{(x-f)^{2}}{z_0^{2}}}, \tag{4}$$
where ω0 is the waist radius, f is the distance from the center of the beam’s waist to the origin, and z0 is a constant that depends on ω0 and the excitation wavelength.
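The Gaussian-beam radius of Eq. (4) is straightforward to evaluate; the sketch below uses ω0 = 0.1 mm and z0 = 50 mm from the simulation parameters of Section 3, while the waist position f = 35 mm is an illustrative assumption of ours:

```python
import math

def beam_radius(x, w0=0.1, f=35.0, z0=50.0):
    """Gaussian beam radius r(x) of Eq. (4). w0 and z0 (in mm) follow the
    simulation parameters; the waist position f is an illustrative value."""
    return w0 * math.sqrt(1.0 + (x - f) ** 2 / z0 ** 2)

print(beam_radius(35.0))            # 0.1 -- minimum radius at the waist
print(round(beam_radius(85.0), 4))  # 0.1414 -- sqrt(2)*w0 one z0 past the waist
```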

The solution for (x,y,z) can be obtained by giving a group of (i,j,ξ) based on Eqs. (1)–(4) when the model ignores the effects of δ, γ and σy.

$$\begin{pmatrix} x_0+k_1y+k_2z \\ y \\ z \end{pmatrix}=\begin{pmatrix}\cos\alpha & 0 & -\sin\alpha \\ 0 & 1 & 0 \\ \sin\alpha & 0 & \cos\alpha\end{pmatrix}\begin{pmatrix} x^{*} \\ x^{*}\tan\theta_j\left(\frac{2j}{b}-1\right) \\ x^{*}\tan\theta_i\left(\frac{2i}{a}-1\right)\end{pmatrix}+\begin{pmatrix}\sigma_x \\ 0 \\ \sigma_z\end{pmatrix}, \tag{5}$$
where $x^{*}=\sigma_z\left[\tan\xi\tan\theta_j\left(\frac{2j}{b}-1\right)-\tan\theta_i\cos\alpha\left(\frac{2i}{a}-1\right)-\sin\alpha\right]^{-1}$ for $\xi\neq\pm0.5\pi$, and $x^{*}=\left[r(x)\sin\xi-\sigma_z\right]\left[\tan\theta_i\cos\alpha\left(\frac{2i}{a}-1\right)+\sin\alpha\right]^{-1}$ for $\xi=\pm0.5\pi$.

Here, we prove the feasibility of monitoring the focus status through the centroid’s ordinate. Suppose the laser beam is normal to the irradiated surface and forms a spot whose boundary is parameterized by ξ. Noting that if $f(\xi)=\tan\xi\tan\theta_j\left(\frac{2j_\xi}{b}-1\right)$ with $\xi\in[0,0.5\pi)\cup(0.5\pi,\pi]$, then $f(\xi)=f(\xi+\pi)$, we can rewrite Eq. (5) as

$$\frac{\sigma_x-x_0}{\sigma_z}\sum_{\substack{k=1\\k\neq N,3N}}^{4N}\frac{1}{\tan\theta_i\cos\alpha\left(\frac{2i_\xi}{a}-1\right)+\sin\alpha}=\sum_{\substack{k=1\\k\neq N,3N}}^{4N}\frac{\cos\alpha-\sin\alpha\tan\theta_i\left(\frac{2i_\xi}{a}-1\right)}{\tan\theta_i\cos\alpha\left(\frac{2i_\xi}{a}-1\right)+\sin\alpha}, \tag{6}$$
where $\xi=\frac{k\pi}{2N}$, $N\in\mathbb{N}^{*}$.

The relation between the sampling distance x0 and the mean of iξ can be obtained by averaging Eq. (6).

$$x_0=\sigma_z\,\frac{\tan\alpha\tan\theta_i\left(2\bar{i}-a\right)-a}{a\tan\alpha+\tan\theta_i\left(2\bar{i}-a\right)}+\sigma_x, \tag{7}$$
where $\bar{i}$ is the mean of $i_\xi$ excluding the cases $k=N$ and $k=3N$; it approaches the value of the centroid’s ordinate.
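Equation (7) is what the controller ultimately evaluates: given a centroid ordinate, it returns the sampling distance. A minimal sketch (the function name is ours; angles in radians, extrinsic parameters as obtained from calibration):

```python
import math

def distance_from_ordinate(i_bar, alpha, sigma_x, sigma_z, theta_i, a=480):
    """Evaluate Eq. (7): sampling distance x0 from the centroid ordinate i_bar."""
    t = math.tan(theta_i) * (2 * i_bar - a)
    return sigma_z * (math.tan(alpha) * t - a) / (a * math.tan(alpha) + t) + sigma_x

# with the simulation defaults, a centroid at mid-frame (i_bar = a/2) gives
# x0 = sigma_x - sigma_z/tan(alpha), since the tan(theta_i) term vanishes
x0 = distance_from_ordinate(240, math.radians(70), 35.0, 25.0, math.radians(50))
print(round(x0, 2))   # 25.9
```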

3. Simulation

We use MATLAB (version R2018a, academic use) to simulate the mapping from the laser spot in the world coordinate system to its image based on Eq. (5). The image of the circular spot observed by the tilted camera is an ellipse whose semi-minor axis lies along the i-axis, as shown in Fig. 2(a). This deformation leads to a sub-pixel difference Δi between the centroid of the spot’s image and the image of the spot’s center. Nevertheless, the simulation results indicate that the difference Δi over the interval [xmin, xmax] is negligible, as shown in Fig. 2(b). Hence, we only discuss the relation between the distance and the CPSI. Ideally, xmin and xmax are given by

$$x_{\min}=\begin{cases}\dfrac{\sigma_z-r}{\tan(\alpha+\theta_i)}+\sigma_x, & \alpha<\pi-\theta_i,\\[2mm] -\infty, & \alpha\ge\pi-\theta_i,\end{cases}\qquad
x_{\max}=\begin{cases}\dfrac{\sigma_z-r}{\tan(\alpha-\theta_i)}+\sigma_x, & \alpha>\theta_i,\\[2mm] +\infty, & \alpha\le\theta_i.\end{cases}$$
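These limits are easy to evaluate under the default simulation parameters; a sketch (the piecewise expressions above, with angles in radians; the function name is ours):

```python
import math

def working_limits(alpha, theta_i, sigma_x, sigma_z, r):
    """x_min and x_max of the distance range over which the spot stays
    inside the FOV, per the piecewise expressions above."""
    x_min = ((sigma_z - r) / math.tan(alpha + theta_i) + sigma_x
             if alpha < math.pi - theta_i else -math.inf)
    x_max = ((sigma_z - r) / math.tan(alpha - theta_i) + sigma_x
             if alpha > theta_i else math.inf)
    return x_min, x_max

# default simulation parameters of Fig. 2: alpha=70 deg, theta_i=50 deg,
# sigma_x=35 mm, sigma_z=25 mm, r=0.1 mm
lo, hi = working_limits(math.radians(70), math.radians(50), 35.0, 25.0, 0.1)
print(round(lo, 2), round(hi, 2))   # 20.62 103.41
```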


Fig. 2 (a) Schematic of the mapping from the laser spot to its image. (b) Δi vs. distance. The simulation parameters are α = 70°, θi = 50 °, θj = 50 °, σx = 35 mm, σz =25 mm, k1 = 0, k2 = 0, ω0 = 0.1 mm, z0 = 50 mm, a = 480, b = 640.


In practice, it is hard to achieve a situation where the incident light is perfectly normal to the irradiated surface, especially outside the laboratory. For this reason, we would like to figure out the effect of non-normal incidence on the CPSI. When k1 = 0 (k2 = 0), arctan k2 (arctan k1) describes the angle between the irradiated surface and the XY-plane (YZ-plane). Figures 3(a) and 3(b) show that the sign of k1 does not affect the centroid’s ordinate but does impact the abscissa. Generally, larger absolute values of k1 and k2 mean a larger rotation angle and deformation, as shown in Figs. 3(a)–3(c). Based on the simulation results, regardless of how k2 varies, the centroid’s abscissa is always 320. Because of the longitudinal compression, the difference Δi caused by positive k2 is greater than that caused by its negative counterpart, as shown in Fig. 3(c). Note that although both Δi and Δj are proportional to the beam radius, their values are still less than one pixel.

The performance of the proposed method depends on the extrinsic parameters δ, α, γ, σx, σy and σz. We select only α, σx and σz to build our autofocus model because the effects of the other parameters can be estimated from the simplified form. If the roll angle is nonzero, the trajectory of the laser spot in the picture rotates by −δ relative to the original. The role of γ (σy) in the centroid’s abscissa and ordinate is equivalent to the role of α (σz) in the centroid’s ordinate and abscissa. [xmin, σx] ([σx, xmax]) is defined as the non-working range (working range), because the camera cannot observe the back of the irradiated surface. The linear range extends from σx to the point where the slope falls below 80% of the slope at σx. Since $y=r\cos\xi=x^{*}\tan\theta_j\left(\frac{2j}{b}-1\right)$, $0\le\xi\le2\pi$, as shown in Eq. (5), j is symmetric about 0.5b and its mean equals 320; that is, the centroid of the spot’s image moves along the center axis of the picture. Figures 3(d)–3(f) clearly show that the ordinate-distance curves are similar in shape to a logarithmic curve. Table 1 provides a reference for tuning the extrinsic parameters to the desired working distance and resolution.


Fig. 3 Default simulation parameters are α = 70°, θi = 50°, θj = 50°, σx = 35 mm, σz = 25 mm, k1 = 0, k2 = 0, ω0 = 0.1 mm, z0 = 50 mm, a = 480, b = 640. Δ denotes the difference from the group with the default simulation parameters. (a) Effect of k1 on the centroid’s ordinate. (b) Effect of k1 on the centroid’s abscissa. (c) Effect of k2 on the centroid’s ordinate. Effect of (d) α, (e) σx, and (f) σz on the centroid’s ordinate.



Table 1. Impact of Extrinsic Parameters on Performance

In addition to the good linearity, the largest sensitivity shown in Figs. 3(d)–3(f) and the smaller deviations shown in Figs. 3(a) and 3(c) within the linear range indicate that the optimal linear range should cover the focal point. Therefore, the image of the focused laser spot should lie on one side of the j = 0.5b axis rather than in the middle.

4. Experiments and results

4.1. Setup

Our experimental setup contains five parts: a handheld Raman spectrograph (785 nm, Raman Plus, BW TEK Inc.), two microcontrollers (Raspberry Pi 3 Model B Rev 1.2), a camera module (Raspberry Pi Camera Module v2 without the IR short-pass filter), a magnetic grid ruler (5 μm resolution, MSR502, SIKO Inc.) and a linear actuator, as shown in Fig. 4. The linear actuator includes a variable reluctance stepper (SST59D2206, SHINANO Inc.), a motor driver (1600 PPR, TD556S, Leadshine Technology Co.) and a standard ball screw with 4 mm lead.


Fig. 4 (a) Schematic and (b) optical photo of the CPSI autofocus prototype. LF is a premium longpass filter (FELH0800, THORLABS).


Before data acquisition, we perform camera calibration, if necessary, to eliminate the radial and tangential distortion neglected by the pinhole camera model [32]. By manually adjusting the extrinsic parameters, the trajectory of the spot’s image should overlap the j = 0.5b axis and the image of the focused spot should lie in the top area of the picture. The optimal angle between the camera and the LF depends on the sharpness of the spot’s image.

We adopt the magnetic grid ruler to measure the position of the worktable driven by the ball screw and set a suitable threshold to binarize the grayscale image. Edge detection is used to find the region of interest before identifying which pixels belong to the spot, based on the criteria that the spot’s image is adjacent to the j = 0.5b axis and that its size lies between preset maximum and minimum bounds. The CPSI is calculated as the first moment of i and j. Its decimal places have a noticeable impact on the sensitivity and repeatability of the system; in the linear range, the value of the centroid is accurate to two decimal places.
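The first-moment centroid computation described above amounts to a few NumPy operations; a sketch on a synthetic spot (the function name and threshold value are illustrative):

```python
import numpy as np

def spot_centroid(gray, threshold=128):
    """Binarize a grayscale frame and return the spot centroid (i, j) as the
    first moment of the above-threshold pixel coordinates, rounded to the
    two decimal places used in the linear range."""
    rows, cols = np.nonzero(gray > threshold)
    return round(float(rows.mean()), 2), round(float(cols.mean()), 2)

frame = np.zeros((480, 640), dtype=np.uint8)
frame[100:104, 318:323] = 200          # synthetic 4 x 5 pixel spot
print(spot_centroid(frame))            # (101.5, 320.0)
```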

4.2. Denoising

From Eq. (7) and Fig. 2(b), it is seen that if we obtain σx, σz and α beforehand by fitting the ordinate-distance curve, the centroid’s ordinate serves as a measurement of the focus status. In the initial trial, we collect 20 sets of data with the ball screw moving forward and backward, respectively. While the experimental curve reflects that the centroid’s ordinate is tightly related to the sampling distance and is well fitted by Eq. (7), the data show a fluctuating centroid value, as shown in Fig. 5(a). Figure 5(b) shows that even when the spot is stationary, the centroid of its image varies slightly over time. This phenomenon is neither an accident nor a result of mechanical deformation; it is caused by exposure time, geometrical fluctuation and temperature [33, 34]. Figure 5(c) shows that the centroid’s abscissa is insensitive to the deformation, yet its value appears to follow the same normal distribution as the ordinate, which is sensitive to the change in displacement. The pixels closer to the centroid vary less than the pixels near the boundary, as shown in Fig. 5(d); thus we propose a denoising scheme that increases the binarization threshold and uses the LF and slit to limit the incident light. In this case, the confidence interval (CI) calculated from the Student’s t-distribution is a reasonable indicator for estimating the mean of a normally distributed population. To match the length of the confidence interval with the centroid’s ordinate, we recommend the significance level α = 0.05 and the number of samples N = 100, as shown in Fig. 5(c).
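A sketch of the recommended CI computation (NumPy; the critical value t = 1.984 for α = 0.05 and 99 degrees of freedom is hardcoded here to avoid a SciPy dependency):

```python
import math
import numpy as np

def centroid_ci(samples, t_crit=1.984):
    """95% confidence interval for the mean centroid ordinate. t_crit is the
    two-sided Student-t critical value for alpha = 0.05 and N = 100 samples
    (99 degrees of freedom), hardcoded for this recommended setting."""
    x = np.asarray(samples, dtype=float)
    half = t_crit * x.std(ddof=1) / math.sqrt(len(x))
    return x.mean() - half, x.mean() + half

# 100 simulated centroid readings fluctuating around 240 pixels
rng = np.random.default_rng(0)
lo, hi = centroid_ci(rng.normal(240.0, 0.3, size=100))
print(round(float(hi - lo), 3))   # interval width of a small fraction of a pixel
```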


Fig. 5 (a) Initial data and their fitted curve. Hysteresis in the magnetic grid ruler makes the true backward data about 100 μm larger than displayed; consequently, the actual standard deviation is less than the value in the figure. (b) Centroid’s ordinate vs. ordinal. (c) Scatter diagram and histogram of the centroid at different distances. (d) Quarter of the spot’s grayscale image after reversing. The image is captured at a fixed position at different moments. (e) Centroid’s ordinate after denoising.


4.3. Compensating

Since this autofocus method is in essence based on the detection of light, we have to take the effect of ambient light into consideration. If possible, the camera should be isolated in an enclosed space where only the light passing through the LF and slit can reach the sensor. We acquire four sets of data under various lighting conditions, as shown in Figs. 6(a)–6(d). Whether the environment is dark or bright, around 95% of the samples are within two standard deviations of the mean, a typical feature of the normal distribution. The results show that ambient light indeed influences the centroid’s ordinate, but the effect is not simply proportional to the illumination. When the surroundings are completely dark, the centroid’s ordinate tends to be greater than under illumination. The reason is that the superposition of the laser beam and ambient light makes the beam radius (dz,top and dz,bott) larger, while the tilted observation causes di,top < di,bott, as shown in Fig. 6(e). Note that the beam radius here refers to the part of the irradiated surface that maps into pixels whose gray values are above the threshold.

We attempted to use CNNs to assess the intensity of ambient light, but not all results were positive. However, AlexNet-style CNNs recognize the focus status successfully under different lighting conditions [35], owing to the preference of CNNs for textures and shapes over colors. We utilize a 25 × 25 mask to extract the designated area from the raw RGB pictures captured at different positions under different illumination and label them as focus, under-focus and over-focus. The 1800 data of the training group are shuffled and fed to our CNNs, as shown in Fig. 6(f). After 300 iterations with a 0.001 learning rate, the mean loss of the training group is reduced to 2.2e-3 in less than 3 minutes, as shown in Fig. 6(g). The trained CNNs are applied to the testing group containing 5400 data to estimate their categories, as shown in Fig. 6(h). When the focused spot is merely 50 μm away from the defocused spot, the predicted accuracy reaches 100%. As the separation distance decreases, the accuracy drops: about 98% at a 30 μm interval and no less than 95% at 10 μm.
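The 25 × 25 mask step that feeds the CNN can be sketched as a simple crop around the spot (the helper name and class encoding are ours; the network itself is the standard AlexNet architecture and is omitted here):

```python
import numpy as np

FOCUS, UNDER_FOCUS, OVER_FOCUS = 0, 1, 2   # class labels (our encoding)

def extract_patch(rgb, center, size=25):
    """Crop the size x size region around the spot from a raw RGB frame,
    mirroring the mask step that feeds the CNN (hypothetical helper)."""
    half = size // 2
    r, c = center
    return rgb[r - half:r + half + 1, c - half:c + half + 1, :]

frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(extract_patch(frame, (150, 320)).shape)   # (25, 25, 3)
```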


Fig. 6 Centroid of the focused spot collected (a) in the dark or illuminated by (b) one LED, (c) two LEDs and (d) three LEDs. (g) Mean of the loss vs. time. (h) Classification of the focus, over-focus and under-focus.


4.4. Calibration

The laser spot at any location in the camera’s FOV corresponds to a unique area in the picture. Therefore, to precisely detect the real-time distance from the laser source to the irradiated surface, the ordinate-distance curve must be known definitely through calibration. Figures 7(a)–7(c) show that the influence of the extrinsic parameters on the experimental ordinate-distance curves largely matches the simulated results. The abscissas of these curves begin at the position where the spot appears at the top of the FOV and end 25 mm from the start. Both the simulation and the experiment confirm that the sensitivity within the linear range can be improved by increasing |α| or decreasing |σz|, as shown in Figs. 7(a) and 7(c); however, the increment from a 2 mm change in σz outweighs that from a 10° change in α. A larger pitch angle provides a more linear ordinate-distance curve. σx translates the ordinate-distance curve, as shown in Fig. 7(b), which indicates that if the measuring range and the linear range already satisfy our requirements, we can improve the sensitivity within the linear range by increasing σx.
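Because Eq. (7) is linear in σz and σx once α is fixed, the calibration fit can be sketched as a grid search over α combined with linear least squares (our own approach; the paper does not specify its fitting routine). The check below regenerates synthetic data from known parameter values close to those of Fig. 7(f) and recovers them:

```python
import math
import numpy as np

def fit_calibration(i_bar, x0, theta_i=math.radians(30), a=480):
    """Fit Eq. (7) to measured (centroid ordinate, distance) pairs. With
    alpha fixed, Eq. (7) is linear in sigma_z and sigma_x, so we grid-search
    alpha (0.1 deg step) and solve the remaining linear least squares."""
    i_bar = np.asarray(i_bar, dtype=float)
    x0 = np.asarray(x0, dtype=float)
    t = math.tan(theta_i) * (2 * i_bar - a)
    best = None
    for alpha in np.radians(np.linspace(5, 85, 801)):
        g = (math.tan(alpha) * t - a) / (a * math.tan(alpha) + t)
        if not np.all(np.isfinite(g)):
            continue
        A = np.column_stack([g, np.ones_like(g)])
        coef, *_ = np.linalg.lstsq(A, x0, rcond=None)
        err = float(np.sum((A @ coef - x0) ** 2))
        if best is None or err < best[0]:
            best = (err, math.degrees(alpha), coef[0], coef[1])
    return best[1], best[2], best[3]   # alpha (deg), sigma_z, sigma_x

# synthetic check: generate data from known parameters and recover them
true_alpha, true_sz, true_sx = math.radians(40), -6.97, -23.1
ib = np.linspace(130, 300, 40)
tt = math.tan(math.radians(30)) * (2 * ib - 480)
xx = true_sz * (math.tan(true_alpha) * tt - 480) / (480 * math.tan(true_alpha) + tt) + true_sx
alpha_deg, sz, sx = fit_calibration(ib, xx)
print(round(alpha_deg, 1), round(float(sz), 2), round(float(sx), 2))   # 40.0 -6.97 -23.1
```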


Fig. 7 Effect of (a) α, (b) σx, (c) σz, (d) k1 and (e) k2. (f) Experimental results of the ordinate-distance curve after calibration. Fitted extrinsic parameters: α = 39.56°, σx = -23.106 mm, σz = -6.97 mm, θi = 30°. Theoretical extrinsic parameters: α = 40°, σx = -25 mm, σz = -11 mm, θi = 30°. (g) Diagram of the centroid compensation at the focal point.


The deviation caused by k1 and k2 is relatively low at the focal point, as shown in Figs. 7(d) and 7(e), due to the smallest beam radius and the centroid compensation shown in Fig. 7(g): the top deformation ΔZtop and the bottom deformation ΔZbot caused by non-normal incidence at the focal point are very small and opposite in sign, so their effects on the centroid’s ordinate cancel each other. The deviation to which k2 gives rise is greater than that of k1, and its value relates to the sign of arctan k2. Figure 7(f) shows that the ordinate-distance curve obtained in the experiment can be well fitted by Eq. (7), with some differences from the theoretical curve due to the effects of δ, γ and σy.

The working distance of the presented autofocus method is about 4 cm, and the detection threshold in the linear range ([3.46 mm, 14.46 mm]) is 1 μm. Because the beam begins to diverge severely in the far field behind the focus, the pixels whose ordinates are less than 116 are not used. This logarithm-shaped curve brings a significant advantage: it compresses the number of pixels that store information about the non-linear part of the working distance and preserves more pixels to subdivide the section encompassing the focal point. Moreover, once we complete the calibration of the extrinsic parameters, the monotonic ordinate-distance function removes alignment error and remains stable, which means our microcontroller need not waste time searching for an extremum.

4.5. Demonstration

The available CPSI autofocus system consists of two stages: coarse adjustment and precise adjustment, as shown in Fig. 8(a). In the coarse adjustment, the centroid’s ordinate is calculated from the binary image and compared with the calibration curve after denoising. If the position of the spot is out of focus, the number of pulses driving the motor can be calculated directly from the known distance to the focal point. Otherwise, the trained CNNs and the linear-range curve work together to implement the precise adjustment. The 25 × 25 image fed to the CNNs is derived from the original image cut by the mask. The output of the CNNs determines whether compensation for ambient light is needed, and the modified linear-range curve gives the real-time position signal. Once both of them agree that the sampling distance is acceptable, the spectrograph starts collecting. Our home-made software controlling the above steps displays the centroid’s ordinate in real time, as shown in Fig. 8(b).
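In the coarse stage, converting a distance error into motor pulses follows directly from the 4 mm screw lead and the 1600 PPR driver setting of Section 4.1 (the helper name is ours):

```python
def pulses_to_focus(distance_err_mm, lead_mm=4.0, ppr=1600):
    """Pulses needed to traverse distance_err_mm, given the 4 mm lead ball
    screw and the 1600 PPR driver setting from Section 4.1."""
    return round(distance_err_mm / lead_mm * ppr)

print(pulses_to_focus(2.5))   # 1000 pulses for a 2.5 mm correction
```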

To prove the adaptability of our method to non-planar surfaces, the testing material is 95% ethyl alcohol in a cylindrical centrifuge tube. Figure 8(c) helps us to distinguish the spectrum of the alcohol from the spectrum of the tube. The beam cannot form a clear spot on the surface of the colorless, translucent tube, but the part covered with an opaque coating can. Therefore, we adjust the position of the spot on the opaque surface and then rotate the tube to allow the laser to pass through it. An interesting phenomenon is that the Raman signal peak of the alcohol does not appear at the previously calibrated focal point but behind it, as shown in Fig. 8(d), because of the thickness of the tube. When the sampling distance approaches 12.71 mm, the characteristic peaks of the tube increase gradually because the laser spot forms on the surface of the tube. The defocus status impacts the intensities of the characteristic peaks significantly, whether over-focused or under-focused. Our method only needs to revise the default position of the focused spot to autofocus on the alcohol in the cylindrical centrifuge tube.


Fig. 8 (a) Flow chart of the centroid-position-based method. (b) Software interface. (c) Spectral difference between alcohol and alcohol + centrifuge tube. (d) Average spectra of the ethyl alcohol and tube in different focus statuses. This experiment is repeated 10 times to collect data.


The cosine similarity and the standard deviation are used to describe the similarity of the spectral shape and intensity, as shown in Eqs. (8) and (9). Table 2 shows that different focus statuses do not change the spectral shape but affect the intensity of the characteristic peaks dramatically.

$$SMS=\frac{1}{10}\sum_{n=1}^{10}\frac{F_n\cdot\mu_m}{\|F_n\|\,\|\mu_m\|}, \tag{8}$$
$$SMI=\sqrt{\frac{1}{N-1}\sum_{n=1}^{10}\left(F_n-\mu_m\right)^{T}\left(F_n-\mu_m\right)}. \tag{9}$$

Here, Fn and μm are one-dimensional column vectors composed of the spectral intensity and the mean intensity of focus status m, respectively. The subscript n is the order of the experiment, the subscript m indicates the focus status, the superscript T denotes the transpose, and N is the total number of data.
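Both similarity measures reduce to a few lines of NumPy; a sketch under the stated definitions (scalar form of Eq. (9); the function name is ours):

```python
import numpy as np

def sms_smi(F, mu):
    """Shape similarity (mean cosine similarity, Eq. (8)) and intensity
    spread (sample standard deviation, Eq. (9)) of spectra F (rows) around
    the mean spectrum mu of one focus status."""
    F = np.asarray(F, dtype=float)
    mu = np.asarray(mu, dtype=float)
    cos = (F @ mu) / (np.linalg.norm(F, axis=1) * np.linalg.norm(mu))
    smi = np.sqrt(np.sum((F - mu) ** 2) / (F.shape[0] - 1))
    return cos.mean(), smi

# two spectra with identical shape but scaled intensity
spectra = np.array([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]])
sms, smi = sms_smi(spectra, spectra.mean(axis=0))
print(round(float(sms), 6))   # 1.0 -- identical shapes give perfect cosine similarity
```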


Table 2. SMS and SMI with the various focus statuses

5. Conclusion

In this paper, we formulate the relation between the focus status and the CPSI based on the pinhole camera model and the Gaussian beam model. The feasibility and reliability of this method have been demonstrated through theoretical simulation and experimental tests. In the experiments, we overcome the fluctuation in the centroid’s ordinate to greatly improve the sensitivity. The step-like response of our method to ambient light simplifies the compensation process. The calibration eliminates the negative influence of alignment error and provides the real ordinate-distance curve. The self-consistent results of the simulation curves and the calibration curves verify the validity of our model. Lastly, our home-made system implements autofocus on the alcohol in the cylindrical centrifuge tube successfully and improves the intensity and repeatability of the signal.

Although the intention of this study is to deploy the autofocusing technique in more complicated situations such as the handheld spectrograph and the endoscope, it is a universal solution for equipment sensitive to the focus status. The presented method costs little yet achieves a micrometer-order detection threshold. It is worth noting that the working range of this scheme is at least 8000 times as large as the resolution in the linear range, and the accuracy can tolerate the impact of non-normal incidence. The known ordinate-distance curve shortens the autofocus time and relaxes the requirement for precise installation. This method is suitable for retrofitting existing instruments and maintains their original transmittance.

Funding

National Basic Research Program of China (973 Program) (2017YFA0205304, 2015CB931802); Shanghai Engineering Research Center for Intelligent Diagnosis and Treatment Instrument (15DZ2252000).

References

1. A. Culka and J. Jehlička, “A database of Raman spectra of precious gemstones and minerals used as cut gems obtained using portable sequentially shifted excitation Raman spectrometer,” J. Raman Spectrosc. 50, 262–280 (2019). [CrossRef]

2. Y. Jin, A. P. Kotula, C. R. Snyder, A. R. Hight Walker, K. B. Migler, and Y. J. Lee, “Raman identification of multiple melting peaks of polyethylene,” Macromolecules 50, 6174–6183 (2017). [CrossRef]  

3. Z. Huang, A. Zhang, and D. Cui, “Nanomaterial-based SERS sensing technology for biomedical application,” J. Mater. Chem. B 7, 3755–3774 (2019). [CrossRef]

4. N. Stone, C. Kendall, N. Shepherd, P. Crow, and H. Barr, “Near-infrared Raman spectroscopy for the classification of epithelial pre-cancers and cancers,” J. Raman Spectrosc. 33, 564–573 (2002). [CrossRef]

5. H. J. Butler, L. Ashton, B. Bird, G. Cinque, K. Curtis, J. Dorney, K. Esmonde-White, N. J. Fullwood, B. Gardner, P. L. Martin-Hirsch, M. J. Walsh, M. R. McAinsh, N. Stone, and F. L. Martin, “Using Raman spectroscopy to characterize biological materials,” Nat. Protoc. 11, 664–687 (2016). [CrossRef]   [PubMed]

6. S. Li, G. Chen, Y. Zhang, Z. Guo, Z. Liu, J. Xu, X. Li, and L. Lin, “Identification and characterization of colorectal cancer using Raman spectroscopy and feature selection techniques,” Opt. Express 22, 25895–25908 (2014). [CrossRef]   [PubMed]

7. S. Feng, R. Chen, J. Lin, J. Pan, Y. Wu, Y. Li, J. Chen, and H. Zeng, “Gastric cancer detection based on blood plasma surface-enhanced Raman spectroscopy excited by polarized laser light,” Biosens. Bioelectron. 26, 3167–3174 (2011). [CrossRef]   [PubMed]

8. S. Yan, S. Cui, K. Ke, B. Zhao, X. Liu, S. Yue, and P. Wang, “Hyperspectral stimulated Raman scattering microscopy unravels aberrant accumulation of saturated fat in human liver cancer,” Anal. Chem. 90, 6362–6366 (2018). [CrossRef]   [PubMed]

9. Y. Chen, S. Cheng, A. Zhang, J. Song, J. Chang, K. Wang, G. Gao, Y. Zhang, S. Li, H. Liu, G. Alfranca, M. A. Aslam, B. Chu, C. Wang, F. Pan, L. Ma, J. M. de la Fuente, J. Ni, and D. Cui, “Salivary analysis based on surface enhanced Raman scattering sensors distinguishes early and advanced gastric cancer patients from healthy persons,” J. Biomed. Nanotechnol. 14, 1773–1784 (2018). [CrossRef]   [PubMed]

10. Y. Chen, Y. Zhang, F. Pan, J. Liu, K. Wang, C. Zhang, S. Cheng, L. Lu, W. Zhang, Z. Zhang, X. Zhi, Q. Zhang, G. Alfranca, J. M. de la Fuente, D. Chen, and D. Cui, “Breath analysis based on surface-enhanced Raman scattering sensors distinguishes early and advanced gastric cancer patients from healthy persons,” ACS Nano 10, 8169–8179 (2016). [CrossRef]   [PubMed]

11. C. J. Corden, D. W. Shipp, P. Matousek, and I. Notingher, “Fast Raman spectral mapping of highly fluorescing samples by time-gated spectral multiplexed detection,” Opt. Lett. 43, 5733–5736 (2018). [CrossRef]   [PubMed]

12. S. Polisetti, N. F. Baig, N. Morales-Soto, J. D. Shrout, and P. W. Bohn, “Spatial mapping of pyocyanin in Pseudomonas aeruginosa bacterial communities using surface enhanced Raman scattering,” Appl. Spectrosc. 71, 215–223 (2017). [CrossRef]

13. P. Samyn, D. Vandamme, P. Adriaensens, and R. Carleer, “Surface chemistry of oil-filled organic nanoparticle coated papers analyzed using micro-raman mapping,” Appl. Spectrosc. 73, 67–77 (2019).

14. T. Nakayoshi, H. Tajiri, K. Matsuda, M. Kaise, M. Ikegami, and H. Sasaki, “Magnifying endoscopy combined with narrow band imaging system for early gastric cancer: Correlation of vascular pattern with histopathology (including video),” Endoscopy 36, 1080–1084 (2004). [CrossRef]   [PubMed]  

15. H. Yoon and D. H. Lee, “New approaches to gastric cancer staging: Beyond endoscopic ultrasound, computed tomography and positron emission tomography,” World J. Gastroenterol. 20, 13783–13790 (2014). [CrossRef]   [PubMed]  

16. Z. Zhou, C. Li, T. He, C. Lan, P. Sun, Y. Zheng, Y. Yin, and Y. Liu, “Facile large-area autofocusing raman mapping system for 2d material characterization,” Opt. Express 26, 9071–9080 (2018). [CrossRef]   [PubMed]  

17. G. Maculotti, X. Feng, R. Su, M. Galetto, and R. Leach, “Residual flatness and scale calibration for a point autofocus surface topography measuring instrument,” Meas. Sci. Technol. 30, 075005 (2019). [CrossRef]  

18. K. H. Ong, J. C. H. Phang, and J. T. L. Thong, “A robust focusing and astigmatism correction method for the scanning electron microscope,” Scanning 19, 553–563 (1997). [CrossRef]  

19. C. Liu, Y. Lin, and P. Hu, “Design and characterization of precise laser-based autofocusing microscope with reduced geometrical fluctuations,” Microsyst. Technol. 19, 1717–1724 (2013). [CrossRef]  

20. X. Zhang, Z. Liu, M. Jiang, and M. Chang, “Fast and accurate auto-focusing algorithm based on the combination of depth from focus and improved depth from defocus,” Opt. Express 22, 31237–31247 (2014). [CrossRef]  

21. X. Xu, Y. Wang, X. Zhang, S. Li, X. Liu, and J. Tang, “A comparison of contrast measurements in passive autofocus systems for low contrast images,” Multimed. Tools Appl. 69, 139–156 (2014). [CrossRef]  

22. H. G. Rhee, D. I. Kim, and Y. W. Lee, “Realization and performance evaluation of high speed autofocusing for direct laser lithography,” Rev. Sci. Instruments 80, 073103 (2009). [CrossRef]  

23. C. Liu, P. Hu, and Y. Lin, “Design and experimental validation of novel optics-based autofocusing microscope,” Appl. Phys. B 109, 259–268 (2012). [CrossRef]  

24. J. He, R. Zhou, and Z. Hong, “Modified fast climbing search auto-focus algorithm with adaptive step size searching technique for digital camera,” IEEE Trans. Consumer Electron. 49, 257–262 (2003). [CrossRef]  

25. Z. Sun, B. Song, X. Li, Y. Zou, Y. Wang, Z. Yu, and M. Huang, “A smart optical fiber probe for raman spectrometry and its application,” J. Opt. 46, 62–67 (2017). [CrossRef]  

26. G. Saerens, L. Lang, C. Renaut, F. Timpu, V. Vogler-Neuling, C. Durand, M. Tchernycheva, I. Shtrom, A. Bouravleuv, R. Grange, and M. Timofeeva, “Image-based autofocusing system for nonlinear optical microscopy with broad spectral tuning,” Opt. Express 27, 19915–19930 (2019). [CrossRef]  

27. X. Tang, P. L’Hostis, and Y. Xiao, “An auto-focusing method in a microscopic testbed for optical discs,” J. Res. Natl. Inst. Standards Technol. 105, 565–569 (2000). [CrossRef]  

28. W. Hsu, C. Lee, P. Chen, N. Chen, F. Chen, Z. Yu, C. Kuo, and C. Hwang, “Development of the fast astigmatic auto-focus microscope system,” Meas. Sci. Technol. 20, 045902 (2009). [CrossRef]  

29. J. Kannala and S. S. Brandt, “A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses,” IEEE Trans. Pattern Anal. Mach. Intell. 28, 1335–1340 (2006). [CrossRef]  

30. A. Naqwi and F. Durst, “Focusing of diode laser beams: a simple mathematical model,” Appl. Opt. 29, 1780–1785 (1990). [CrossRef]   [PubMed]  

31. J. Weng, P. Cohen, and M. Herniou, “Camera calibration with distortion models and accuracy evaluation,” IEEE Transactions on Pattern Analysis Mach. Intell. 14, 965–980 (1992). [CrossRef]  

32. Y. Wang, Y. Wang, L. Liu, and X. Chen, “Defocused camera calibration with a conventional periodic target based on fourier transform,” Opt. Lett. 44, 3254–3257 (2019). [CrossRef]   [PubMed]  

33. C. Liu and K. Lin, “Numerical and experimental characterization of reducing geometrical fluctuations of laser beam based on rotating optical diffuser,” Opt. Eng. 53, 122408 (2014). [CrossRef]  

34. K. Suzuki and S. Kubota, “Understanding the exposure-time effect on speckle contrast measurement for laser projection with rotating diffuser,” Opt. Rev. 26, 145–151 (2019). [CrossRef]  

35. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” Commun. ACM 60, 84–90 (2017). [CrossRef]  



Figures (8)

Fig. 1 Front and left views of the proposed prototype. The arrows specify the positive directions of the yaw, pitch, and roll angles; v denotes the direction of the spot’s movement.

Fig. 2 (a) Schematic of the mapping from the laser spot to its image. (b) Δi vs. distance. Simulation parameters: α = 70°, θi = 50°, θj = 50°, σx = 35 mm, σz = 25 mm, k1 = 0, k2 = 0, ω0 = 0.1 mm, z0 = 50 mm, a = 480, b = 640.

Fig. 3 Default simulation parameters: α = 70°, θi = 50°, θj = 50°, σx = 35 mm, σz = 25 mm, k1 = 0, k2 = 0, ω0 = 0.1 mm, z0 = 50 mm, a = 480, b = 640. Δ denotes the difference from the group with the default simulation parameters. (a) Effect of k1 on the centroid’s ordinate. (b) Effect of k1 on the centroid’s abscissa. (c) Effect of k2 on the centroid’s ordinate. Effect of (d) α, (e) σx, and (f) σz on the centroid’s ordinate.

Fig. 4 (a) Schematic and (b) optical photograph of the CPSI autofocus prototype. LF is a premium longpass filter (FELH0800, Thorlabs).

Fig. 5 (a) Initial data and their fitted curve. Hysteresis in the magnetic grid ruler makes the actual backward-scan data about 100 μm larger than displayed; consequently, the actual standard deviation is smaller than the value in the figure. (b) Centroid’s ordinate vs. ordinal. (c) Scatter diagram and histogram of the centroid at different distances. (d) Quarter of the spot’s grayscale image after inversion, captured at a fixed position at different moments. (e) Centroid’s ordinate after denoising.

Fig. 6 Centroid of the focused spot collected (a) in the dark or illuminated by (b) one LED, (c) two LEDs, and (d) three LEDs. (g) Mean of the loss vs. time. (h) Classification of the focus, over-focus, and under-focus statuses.

Fig. 7 Effect of (a) α, (b) σx, (c) σz, (d) k1, and (e) k2. (f) Experimental ordinate-distance curve after calibration. Fitted extrinsic parameters: α = 39.56°, σx = −23.106 mm, σz = −6.97 mm, θi = 30°. Theoretical extrinsic parameters: α = 40°, σx = −25 mm, σz = −11 mm, θi = 30°. (g) Diagram of the centroid compensation at the focal point.

Fig. 8 (a) Flow chart of the centroid-position-based method. (b) Software interface. (c) Spectral difference between alcohol and alcohol + centrifuge tube. (d) Average spectra of the ethyl alcohol and the tube in different focus statuses. The experiment is repeated 10 times to collect data.

Tables (2)

Table 1 Impact of Extrinsic Parameters on Performance

Table 2 SM_S and SM_I with the various focus statuses

Equations (11)


$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = R(\delta,\alpha,\gamma)\begin{pmatrix} x^* \\ y^* \\ z^* \end{pmatrix} + T(\sigma_x,\sigma_y,\sigma_z),$$
$$\begin{pmatrix} i \\ j \end{pmatrix} = \begin{pmatrix} \dfrac{a\,(z^* + x^*\tan\theta_i)}{2x^*\tan\theta_i} \\[2mm] \dfrac{b\,(y^* + x^*\tan\theta_j)}{2x^*\tan\theta_j} \end{pmatrix}, \qquad z^* \le |x^*\tan\theta_i|,\; y^* \le |x^*\tan\theta_j|,$$
$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} k_2 & k_1 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}\begin{pmatrix} r(x)\sin\xi \\ r(x)\cos\xi \\ x_0 \end{pmatrix}, \qquad \xi \in (0, 2\pi),$$
$$r(x) = \frac{\omega_0}{2}\sqrt{1 + \frac{(x-f)^2}{z_0^2}},$$
$$\begin{pmatrix} x_0 + k_1 y + k_2 z \\ y \\ z \end{pmatrix} = \begin{pmatrix} \cos\alpha & 0 & \sin\alpha \\ 0 & 1 & 0 \\ -\sin\alpha & 0 & \cos\alpha \end{pmatrix}\begin{pmatrix} x^* \\ x^*\tan\theta_j\left(\dfrac{2j}{b}-1\right) \\ x^*\tan\theta_i\left(\dfrac{2i}{a}-1\right) \end{pmatrix} + \begin{pmatrix} \sigma_x \\ 0 \\ \sigma_z \end{pmatrix},$$
$$\frac{\sigma_x - x_0}{\sigma_z}\sum_{k=\frac{3N}{4}}^{N-1}\left[\tan\theta_i\cos\alpha\left(\frac{2 i_k}{a}-1\right)+\sin\alpha\right] = \sum_{k=\frac{3N}{4}}^{N-1}\left[\cos\alpha - \sin\alpha\tan\theta_i\left(\frac{2 i_k}{a}-1\right)\right],$$
$$x_0 = \frac{\sigma_z\left[\tan\alpha\tan\theta_i\,(2\bar{i}-a) - a\right]}{a\tan\alpha + \tan\theta_i\,(2\bar{i}-a)} + \sigma_x,$$
$$x_{\min} = \begin{cases} \dfrac{\sigma_z}{\tan(\alpha + \theta_i)} + \sigma_x, & \alpha < \pi - \theta_i, \\ -\infty, & \alpha \ge \pi - \theta_i, \end{cases}$$
$$x_{\max} = \begin{cases} \dfrac{\sigma_z}{\tan(\alpha - \theta_i)} + \sigma_x, & \alpha > \theta_i, \\ +\infty, & \alpha \le \theta_i. \end{cases}$$
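As a concrete illustration (not the authors' code), the distance-recovery formula of Eq. (7) and the measuring-range limits of Eqs. (8)–(9) can be evaluated directly. The sketch below reimplements these reconstructed equations in plain Python with the default simulation parameters from Fig. 2 (α = 70°, θi = 50°, σx = 35 mm, σz = 25 mm, a = 480); the function names are invented for this example:

```python
import math

def distance_from_centroid(i_bar, a=480, alpha_deg=70.0, theta_i_deg=50.0,
                           sigma_x=35.0, sigma_z=25.0):
    """Recover the distance x0 (mm) from the mean centroid ordinate
    i_bar (pixels), following the reconstructed Eq. (7)."""
    alpha = math.radians(alpha_deg)
    theta_i = math.radians(theta_i_deg)
    u = math.tan(theta_i) * (2 * i_bar - a)  # tan(theta_i) * (2*i_bar - a)
    return sigma_z * (math.tan(alpha) * u - a) / (a * math.tan(alpha) + u) + sigma_x

def measuring_range(alpha_deg=70.0, theta_i_deg=50.0, sigma_x=35.0, sigma_z=25.0):
    """Measuring range [x_min, x_max] from the reconstructed Eqs. (8)-(9):
    the range is bounded only where the spot's ray still crosses the FOV edge."""
    alpha = math.radians(alpha_deg)
    theta_i = math.radians(theta_i_deg)
    x_min = (sigma_z / math.tan(alpha + theta_i) + sigma_x
             if alpha < math.pi - theta_i else -math.inf)
    x_max = (sigma_z / math.tan(alpha - theta_i) + sigma_x
             if alpha > theta_i else math.inf)
    return x_min, x_max

print(measuring_range())                 # roughly (20.6, 103.7) mm with the defaults
print(distance_from_centroid(240))       # distance when the centroid sits at mid-frame
```

With these default parameters the computed range spans roughly 21–104 mm, which is consistent with the sub-decimeter measuring range stated in the abstract.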
$$\mathrm{SM}_S = \frac{1}{10}\sum_{n=1}^{10} \frac{F_n \cdot \mu_m}{\|F_n\|\,\|\mu_m\|},$$
$$\mathrm{SM}_I = \frac{1}{N-1}\sum_{n=1}^{10} (F_n - \mu_m)(F_n - \mu_m)^T,$$
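The two spectral-reproducibility metrics above, the mean cosine similarity SM_S and the deviation covariance SM_I, can be sketched in a few lines of NumPy. This is a minimal reimplementation from the formulas, not the authors' software; the toy spectra and function names are invented for illustration:

```python
import numpy as np

def spectral_match_s(F, mu):
    """SM_S: mean cosine similarity between each measured spectrum F_n
    and the reference mean spectrum mu (Eq. 10)."""
    F = np.asarray(F, dtype=float)    # shape (n_repeats, n_wavenumbers)
    mu = np.asarray(mu, dtype=float)  # shape (n_wavenumbers,)
    cos = F @ mu / (np.linalg.norm(F, axis=1) * np.linalg.norm(mu))
    return cos.mean()

def spectral_match_i(F, mu):
    """SM_I: sample covariance of the spectra about mu (Eq. 11)."""
    F = np.asarray(F, dtype=float)
    d = F - mu                        # deviations from the reference spectrum
    return d.T @ d / (len(F) - 1)     # (n_wavenumbers, n_wavenumbers) matrix

# Toy data: 10 noisy repeats of a two-peak "spectrum" over 200 channels.
rng = np.random.default_rng(0)
x = np.arange(200)
base = (np.exp(-0.5 * ((x - 60) / 5) ** 2)
        + np.exp(-0.5 * ((x - 140) / 8) ** 2))
spectra = base + 0.01 * rng.standard_normal((10, 200))
mu = spectra.mean(axis=0)

print(spectral_match_s(spectra, mu))        # close to 1 for repeatable spectra
print(spectral_match_i(spectra, mu).shape)  # covariance over wavenumber channels
```

A well-focused, repeatable measurement drives SM_S toward 1 and shrinks the diagonal of SM_I, which is how the paper compares focus statuses in Table 2.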