Optica Publishing Group

Infrared hyperspectral upconversion imaging using spatial object translation

Open Access

Abstract

In this paper, hyperspectral imaging in the mid-infrared wavelength region is realized using nonlinear frequency upconversion. The infrared light is converted to the near-infrared region for detection with a Si-based CCD camera. The object is translated over a predefined grid by motorized actuators, and an image is recorded at each position. The sequence of images is post-processed into a series of monochromatic images spanning a wavelength range defined by the phasematch condition and numerical aperture of the upconversion system. A standard USAF resolution target and a polystyrene film are used to impart spatial and spectral information onto the source.

© 2015 Optical Society of America

1. Introduction

Mid-infrared (mid-IR) spectroscopy has been a very active field of research for decades, because many gases and organic compounds have strong absorption lines in this region of the electromagnetic spectrum, allowing for simple identification. Mid-IR detection usually relies on low-bandgap semiconductors (e.g. InSb or HgCdTe) or microbolometers [1]. However, these detectors suffer from large dark noise at ambient temperature and thus require cooling, e.g. with liquid nitrogen. Hyperspectral imaging is a method which integrates spectroscopic and imaging techniques in order to provide both spectral and spatial information from an object. It is widely used in e.g. food analysis and quality control [2,3] and for medical applications [4,5]. In the mid-IR, hyperspectral imaging is traditionally realized by Fourier transform infrared spectroscopy (FTIR) [6].

The detection system presented here is based on frequency upconversion of images [7–13]. The technique relies on sum frequency generation in an intracavity laser configuration for enhanced conversion efficiency. The incoming mid-IR light is converted to the near-IR (NIR) region for simple low-noise detection using a Si-based CCD. Hyperspectral imaging has previously been demonstrated using upconversion [10], where the spectral coverage was obtained by ramping the temperature of the nonlinear crystal.

In this paper we present an alternative method for producing hyperspectral images based on upconversion, where the spectral information in each point of the object is obtained by spatial translation of the object within the field of view (FoV) while the phasematch condition of the upconversion detector is held constant [11]. As the upconversion occurs in Fourier space, spatial positions are translated into angles, with each angle phasematching a specific wavelength depending on the angle between the mid-IR signal and the mixing laser field. The upconverted signal is then converted back into a spatial position in the image plane by a second Fourier-transforming lens. Thus, in the object plane the phasematched wavelength depends on the position relative to the upconverter FoV. Figure 1 shows a schematic illustration of this principle.

Fig. 1 Schematic showing the basis of this method of creating monochromatic images. The top row shows three different positions of the FoV relative to the object. The bottom row shows the three corresponding images, where the circular areas represent the FoV of the upconverter and the color gradient the wavelength distribution within it. Note that it is the object that is moved between the captured frames; in the experiments this is done with motorized actuators. Monochromatic images can be created by combining parts with the desired wavelength from different captured frames.

Comparing the spatial scanning approach to a system based on temperature scanning [10], several important advantages are found; the main trade-off is increased complexity in the post-processing. In [10] the measurement speed was limited by the temperature ramping: the total measurement time was on the order of one hour due to constraints on the ramp speed and the need for the temperature to equilibrate before each image was recorded. Using spatial translation of the object, the measurement time depends mainly on the scan rate of the motorized actuators used to translate the object in the FoV of the upconversion unit. For the images presented in the next sections, each frame requires an average of six seconds to record, while the actual image acquisition time was 50 ms (limited by the brightness of the object); thus a measurement based on 256 images takes on the order of 25 minutes. The acquisition time could be reduced substantially by using faster actuators, while the limit of the temperature scanning method is more fundamental in nature, since it is set by the thermal properties of the nonlinear material.

In contrast to the temperature scanning method [10], scanning the object position does not alter the upconversion unit during image acquisition, allowing for greater stability of the mixing laser even without active power stabilization. In [10] it was shown how the spectral resolution for a given wavelength depends on the object position within the FoV of the upconverter. However, when the image is translated without altering the phasematch condition, a given wavelength will be upconverted with a constant spectral resolution. It should be noted that longer wavelengths will be upconverted with broader spectral acceptance and hence poorer spectral resolution. From Fig. 7 of [10] it is evident that the spectral resolution of the reconstructed images is in the range 9-30 nm.

Finally, by using object translation, the size of the objects that can be imaged is no longer limited by the FoV of the upconversion unit, as it was in the case of temperature scanning; however, this comes at the expense of increased acquisition time, since more images are required.

2. Setup and post-processing

Figure 2 shows a schematic of the experimental setup used for hyperspectral image acquisition. The mid-IR light is generated by a hot filament source, collected by a lens and transmitted through a polystyrene film (Nicolet 38.1 µm polystyrene film) to give it a spectral signature. The polystyrene film is followed by a USAF resolution target to impose spatial features, then a Ge window to keep out ambient light. The mid-IR signal is then focused into the upconversion module by a CaF2 lens of focal length 40 mm, where it is mixed with the 40 W intracavity field of a 1064 nm Nd:YVO4 laser in a 20 mm long 5% MgO-doped periodically poled LiNbO3 (PP:LN) crystal. Periodic poling allows phasematching anywhere in the transparency range of the crystal. The PP:LN crystal contains five channels of different poling pitch and is mounted on a translation stage for easy channel selection. For the images shown in this paper a poling pitch of 22 µm was used. The upconversion module is described in greater detail in [7]. The upconverted NIR light passes through a 750 nm longpass filter, 850 nm and 1000 nm shortpass filters, and a 4f lens configuration, consisting of two lenses of focal length 75 mm, before being imaged onto the camera by an objective lens of focal length 12 mm. The resolution target is mounted on a translation stage moved in the xy-plane by two Thorlabs Z612(B) motorized actuators.
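As a sanity check on the detection band, the upconverted wavelength follows from energy conservation in sum frequency generation: 1/λup = 1/λIR + 1/λpump. A minimal sketch (the 3.3 µm signal wavelength is illustrative, taken from the spectral range reported below):

```python
def upconverted_wavelength(lam_ir_m, lam_pump_m=1064e-9):
    """Sum-frequency output wavelength from energy conservation."""
    return 1.0 / (1.0 / lam_ir_m + 1.0 / lam_pump_m)

lam_up = upconverted_wavelength(3.3e-6)  # mid-IR signal near 3.3 um
print(f"{lam_up * 1e9:.1f} nm")  # ~805 nm, inside the 750-850 nm filter window
```

This confirms that a ~3.3 µm signal mixed with the 1064 nm laser lands between the longpass and shortpass filter edges quoted above.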

Fig. 2 Schematic of the experimental setup used for this paper. The IR light is provided by a hot filament; it then passes through the polystyrene film and the resolution target before entering the upconversion module. The resulting NIR light is captured by a CCD camera.

The spectral coverage is determined by the phasematch condition within the FoV of the upconversion unit. To obtain the full spectral information in all object points, this method relies on scanning the object through the FoV. Hence, the resolution target is mounted on a motorized stage capable of translation in the xy-plane. The measurement is computer controlled such that images are acquired at selected motor positions. The single-image acquisition time is 50 ms; however, the translation between positions takes on the order of 6 seconds. This relatively long translation time is caused by setting a low acceleration on the translation stages for optimal accuracy. In this case, a total of 256 images were recorded while the motors were scanned over a 1 cm2 area, resulting in a total measurement time of 25 minutes.

The post-processing of the measurement into a hyperspectral image can be described as stitching relevant parts of each image into a single frame at a certain wavelength. First, the upconverted wavelength at each pixel position on the camera is calculated. Because the upconversion process depends on the signal angle relative to the mixing laser, the wavelength at a certain pixel depends on its distance from the image center point as well as on the azimuthal angle. The pixel position can be translated into an output angle at the crystal facet and thus into an internal angle inside the crystal. Via the phasematch condition of the PP:LN crystal, this angle corresponds to a certain mid-IR wavelength, which is then associated with the camera pixel in question. The system does not rely on spectral calibration in order to accurately determine the wavelength at a given pixel.
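The pixel-to-angle step can be sketched as follows. This is only an illustration, not the authors' code: the small-angle geometry is a simplification, and the `angle_to_wavelength` calibration is a hypothetical stand-in, since the actual PP:LN phasematch calculation via the crystal dispersion is not reproduced here. `pixel_pitch`, `f_lens` and `n_crystal` are assumed parameters.

```python
import numpy as np

def pixel_to_angle(ix, iy, cx, cy, pixel_pitch, f_lens, n_crystal):
    """Map a camera pixel to the approximate internal angle in the crystal.

    (cx, cy) is the image center, whose accurate determination the text
    notes is critical. Small-angle refraction divides the external angle
    by the crystal's refractive index.
    """
    r = np.hypot(ix - cx, iy - cy) * pixel_pitch  # radial distance on the camera, m
    theta_ext = np.arctan(r / f_lens)             # external angle at the crystal facet
    return theta_ext / n_crystal                  # approximate internal angle

def angle_to_wavelength(theta_int, lam0=3.24e-6, slope=2e-3):
    """Hypothetical monotone calibration standing in for the phasematch condition."""
    return lam0 + slope * theta_int**2  # illustrative only, not the real dispersion
```

The essential point the sketch captures is that the wavelength assigned to a pixel is a function of radial distance from the image center alone, which is why center determination dominates the error budget discussed below.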

The recorded images are scaled by the nonlinear upconversion process. However, the magnification is not uniform, since it depends on the ratio of the upconverted wavelength to the incident wavelength [8]. Thus, points farther from the center are magnified less; this must be corrected before the images can be overlaid, as the image of the object would otherwise not have the same size in each frame. The recorded motor positions are then used to translate the images and corresponding wavelength arrays such that the object pattern in each image is overlaid. In this way two 3D arrays are created: one contains the recorded images, translated such that the object occupies the same position in the hyperspectral image plane, and the other contains the corresponding upconverted wavelengths. These two arrays are then interpolated such that a spectrum can be created for each pixel position. These spectra are then sampled in order to create the monochromatic images at desired wavelengths within the measured range. A schematic illustrating this process is shown in Fig. 3.
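The stitching step can be sketched as follows, under simplifying assumptions: integer-pixel motor offsets and wavelength binning stand in for the interpolation described above, and magnification correction is assumed already applied. All names are illustrative, not the authors' implementation.

```python
import numpy as np

def build_cube(frames, offsets, wavelength_map, wl_bins):
    """Stitch translated frames into a (y, x, wavelength-bin) data cube.

    frames: list of 2D camera images; offsets: per-frame (dy, dx) pixel
    shifts derived from the motor positions; wavelength_map: 2D array of
    the upconverted wavelength at each camera pixel; wl_bins: bin edges.
    """
    h, w = frames[0].shape
    n_bins = len(wl_bins) - 1
    cube = np.zeros((h, w, n_bins))
    counts = np.zeros((h, w, n_bins))
    ys, xs = np.indices((h, w)).reshape(2, -1)
    b = np.digitize(wavelength_map, wl_bins).ravel() - 1
    ok = (b >= 0) & (b < n_bins)
    for frame, (dy, dx) in zip(frames, offsets):
        # Object-plane coordinates of each camera pixel for this motor position
        oy, ox = ys + dy, xs + dx
        keep = ok & (oy >= 0) & (oy < h) & (ox >= 0) & (ox < w)
        np.add.at(cube, (oy[keep], ox[keep], b[keep]), frame.ravel()[keep])
        np.add.at(counts, (oy[keep], ox[keep], b[keep]), 1)
    with np.errstate(invalid="ignore", divide="ignore"):
        return cube / counts  # NaN where a wavelength bin was never sampled
```

Slicing the returned cube along the last axis yields the monochromatic images; the NaN entries directly expose the undersampled regions discussed in connection with the scan grid.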

Fig. 3 Schematic showing the post-processing turning the recorded images (top) into monochromatic images (bottom right). Only three images are shown here, while an actual measurement in our case consists of 256 frames. Using the motor positions at acquisition time, the recorded images are translated such that the spatial features are overlaid (bottom left). The calculated wavelength distribution is then used to generate a spectrum at each pixel position. These spectra are then sampled to create the monochromatic images.

Several factors are of importance for the image processing. Accurately determining the center of the images is vital, since the calculated wavelengths depend on the pixel position relative to the center; an incorrectly determined center results in significant errors in the obtained spectra and thus in the hyperspectral images. The spectral and spatial uniformity of the stationary light source is also critical, since different parts of an image are sampled as the same wavelength. This effect is reduced by ratioing the recorded images with a reference image of the bare light source, see Fig. 4. Removing the spectral content and irregularities of the source eliminates significant errors in the final reconstructed images.

Fig. 4 Examples of measured images without the resolution target. Note the characteristic ring pattern associated with the spectrum of polystyrene. The irregularities and spectral information of the source are largely removed in the ratioed image.

Due to the non-collinear interaction and acceptance parameters of the nonlinear frequency conversion process, the spectral resolution degrades with radial position in an image. This is reflected in the spectral resolution changing between monochrome frames while being constant within a single frame. This is in contrast to [10], where the spectral resolution varied radially within a monochrome image, but in the same way in each one.

The point spread function (PSF) of the imaging system can be calculated from the results in [8], in which the PSF is given approximately as exp[-2(rπw/(λ1f))^2], where r is the radial distance to the beam axis, w is the mixing laser spot size, λ1 is the mid-IR wavelength and f is the focal length of the lens focusing the light into the nonlinear crystal. With the optics used in the present experiments, the PSF is a Gaussian with a 1/e^2 width of 231 µm.
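The quoted width follows directly from the exponential: it falls to 1/e^2 at r = λ1f/(πw), so the full width is 2λ1f/(πw). The mixing-beam spot size w is not quoted above, so the value used in this sketch (~0.36 mm) is an assumption chosen to be consistent with the stated 231 µm width at λ1 ≈ 3.3 µm and f = 40 mm:

```python
import math

def psf_1e2_width(lam_ir, f, w):
    """1/e^2 full width of the Gaussian PSF exp[-2*(r*pi*w/(lam_ir*f))**2].

    The exponent reaches -2 at r = lam_ir*f/(pi*w); the full width is twice that.
    """
    return 2 * lam_ir * f / (math.pi * w)

# w = 0.364 mm is an assumed spot size, not a value given in the text above.
print(psf_1e2_width(3.3e-6, 40e-3, 0.364e-3) * 1e6)  # ~231 um
```

Note the inverse dependence on w: a tighter mixing beam gives a wider PSF, since the mixing field acts as a soft aperture in the Fourier plane.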

Since the recorded images are essentially made up of rings of different wavelengths, the coverage of these rings determines the accuracy with which the monochromatic images can be reconstructed. If the resolution target is translated in a regular square grid, there will be certain wavelengths where large periodic areas of undersampling are present [11]. Therefore, the measurements are realized with a quasi-random grid based on the Halton sequence with two and three as the base prime numbers [14]. This ensures that the entire field of view is covered while also limiting areas of undersampling.
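The Halton grid is simple to generate: each coordinate is the radical inverse of the sample index in a prime base (2 and 3 here). A minimal sketch, with the 1 cm scan range and 256 points matching the numbers above:

```python
def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in the given base."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

# 2D quasi-random scan positions over a 1 cm x 1 cm area, bases 2 and 3
positions_mm = [(10 * halton(i, 2), 10 * halton(i, 3)) for i in range(1, 257)]
```

Because consecutive Halton points fill the unit square with low discrepancy, any prefix of the scan already covers the field of view roughly uniformly, unlike a row-by-row raster.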

3. Experimental results

The motorized actuators are controlled by the same LabVIEW program used for recording the images and controlling the nonlinear crystal temperature. A scanning range is defined as well as the number of images to be recorded, after which the measurement is automated. An example of a recorded image is shown in Fig. 5. The ring pattern resulting from the absorption spectrum of polystyrene can be seen superimposed on the spatial features of the resolution target. Each frame is captured using an integration time of 50 ms with the lowest possible gain setting on the camera in order to avoid unnecessary noise. The temperature of the nonlinear crystal during these measurements was 90 °C; it can be tuned to select the desired wavelength range.

Fig. 5 Example of a recorded image of USAF resolution target group 0, in which the largest lines are 500 µm. A measurement in our case consists of 256 such images. Note that the pattern of the resolution target is superimposed on the concentric circles of the polystyrene spectrum. Movies showing a section of the measurements can be seen in Visualization 1 and Visualization 2 for the quasi-random and regular grid, respectively. This image has slightly enhanced contrast to be more easily viewable.

Examples of reconstructed monochromatic images are shown in Fig. 6. As mentioned above, the post-processing creates a defined number of images at chosen wavelengths within the detected range. However, in general, the quality of the images decreases as the wavelength increases, since the detected wavelength depends on the radial position. During the post-processing the images are translated according to the motor position data in order to overlay the spatial features. A small discrepancy in this translation becomes more severe at longer wavelengths, since the rings become thinner with increasing wavelength.

Fig. 6 Examples of the monochromatic images obtained. (Left) An image created from frames acquired using a quasi-random grid based on the Halton sequence. (Right) An image based on a measurement using a regular square grid. Movies of the series can be found in Visualization 3 and Visualization 4 for the quasi-random and regular grid, respectively. The bandwidth of these monochromatic images is on the order of 10 nm at the beginning of the series. The spectral range covered by the reconstructed images is 3.24-3.41 µm. To properly observe the differences between the sampling strategies, it is suggested to view the accompanying visualizations.

4. Conclusion

In this work, a novel approach to hyperspectral imaging in the mid-IR spectral region has been demonstrated. The images are created from a series of frames captured using nonlinear frequency upconversion and detected with a Si-based CCD. The object is translated through the FoV of the upconversion camera by two motorized actuators during the measurement in order to attain data at many different wavelengths for each object position. In the present work, the area covered by the translation is 1 cm2; however, this is limited only by the motorized actuators and the measurement time, and larger images could be acquired by scanning a larger area.

This technique is particularly well suited for use in microscopes, since built-in xy translation stages are common. In this case, a pre-existing camera could be substituted by the upconversion-based detection system in order to provide mid-IR hyperspectral imaging capabilities, given an appropriate illumination source. Measurement time in the current system is on the order of 25 min. This is largely limited by the speed at which the motorized actuators can translate the object, since the camera integration time is only 50 ms/frame, corresponding to 12.8 seconds for 256 images. A high-speed microscope xy-stage might decrease the time between frames by a factor of 100 or more, which would make the entire measurement take on the order of half a minute. Thus, great improvements in measurement time are possible with faster motors and an accurate synchronization scheme, or with other means of acquiring images from different object positions, such as a MEMS-based scanning mirror.

Acknowledgments

This work was funded by Innovation Fund Denmark, grant number 4135-00031B.

References and links

1. A. Rogalski, “History of infrared detectors,” Opto-Electron. Rev. 20(3), 279–308 (2012).

2. A. A. Gowen, C. P. O’Donnell, P. J. Cullen, G. Downey, and J. M. Frias, “Hyperspectral imaging – an emerging process analytical tool for food quality and safety control,” Trends Food Sci. Technol. 18(12), 590–598 (2007).

3. D. Lorente, N. Aleixos, J. Gómez-Sanchis, S. Cubero, O. L. García-Navarrete, and J. Blasco, “Recent advances and applications of hyperspectral imaging for fruit and vegetable quality assessment,” Food Bioprocess Technol. 5(4), 1121–1142 (2012).

4. R. K. Reddy, M. J. Walsh, M. V. Schulmerich, P. S. Carney, and R. Bhargava, “High-definition infrared spectroscopic imaging,” Appl. Spectrosc. 67(1), 93–105 (2013).

5. B. S. Sorg, B. J. Moeller, O. Donovan, Y. Cao, and M. W. Dewhirst, “Hyperspectral imaging of hemoglobin saturation in tumor microvasculature and tumor hypoxia development,” J. Biomed. Opt. 10(4), 044004 (2005).

6. P. R. Griffiths and J. A. de Haseth, Fourier Transform Infrared Spectrometry, 2nd ed. (Wiley, 2007).

7. J. S. Dam, P. Tidemand-Lichtenberg, and C. Pedersen, “Room-temperature mid-infrared single-photon spectral imaging,” Nat. Photonics 6(11), 788–793 (2012).

8. J. S. Dam, C. Pedersen, and P. Tidemand-Lichtenberg, “Theory for upconversion of incoherent images,” Opt. Express 20(2), 1475–1482 (2012).

9. J. S. Dam, C. Pedersen, and P. Tidemand-Lichtenberg, “High-resolution two-dimensional image upconversion of incoherent light,” Opt. Lett. 35(22), 3796–3798 (2010).

10. L. M. Kehlet, P. Tidemand-Lichtenberg, J. S. Dam, and C. Pedersen, “Infrared upconversion hyperspectral imaging,” Opt. Lett. 40(6), 938–941 (2015).

11. N. Sanders, J. S. Dam, P. Tidemand-Lichtenberg, and C. Pedersen, “Near diffraction limited mid-IR spectromicroscopy, using frequency upconversion,” Proc. SPIE 8964, 89641L (2014).

12. A. J. Torregrosa, H. Maestre, and J. Capmany, “Intra-cavity upconversion to 631 nm of images illuminated by an eye-safe ASE source at 1550 nm,” Opt. Lett. 40(22), 5315–5318 (2015).

13. Q. Zhou, K. Huang, H. Pan, E. Wu, and H. Zeng, “Ultrasensitive mid-infrared up-conversion imaging at few-photon level,” Appl. Phys. Lett. 102(24), 241110 (2013).

14. J. H. Halton, “On the efficiency of certain quasi-random sequences of points in evaluating multi-dimensional integrals,” Numer. Math. 2(1), 84–90 (1960).

Supplementary Material (4)

Visualization 1: MP4 (185 KB) - Video showing part of a measurement series using a quasi-random Halton grid.
Visualization 2: MP4 (146 KB) - Video showing part of a measurement series using a regular square grid.
Visualization 3: MP4 (205 KB) - Monochromatic images reconstructed from a measurement using a quasi-random Halton grid.
Visualization 4: MP4 (144 KB) - Monochromatic images reconstructed from a measurement using a regular square grid.

