Optica Publishing Group

Along-track scanning using a liquid crystal compressive hyperspectral imager

Open Access

Abstract

In various applications, such as remote sensing and quality inspection, hyperspectral (HS) imaging is performed by spatially scanning an object. In this work, we present a new compressive hyperspectral imaging method that performs along-track scanning. The method relies on the compressive sensing miniature ultra-spectral imaging (CS-MUSI) system, which uses a single liquid crystal (LC) cell for spectral encoding and provides a more efficient way of acquiring HS data than classical spatial-scanning-based systems. The experimental results show that a compression ratio of about 1:10 can be reached. Owing to the inherent compression, the captured data is ready for efficient storage and transmission.

© 2016 Optical Society of America

1. Introduction

Hyperspectral (HS) images may provide important information in the field of remote sensing [1–3], with applications in agriculture [4], surveillance [5] and mineralogy [6], amongst others. Hyperspectral images are usually arranged in a three-dimensional (3D) array (cube) [Fig. 1(a)]. Two dimensions (2D) represent the spatial domain (x-y axes) and the additional dimension represents the spectral domain (λ axis). Typically, the spectral domain contains a few hundred bands, where each band is assigned to a different wavelength. For example, the airborne visible/infrared imaging spectrometer (AVIRIS) system [7] has 224 adjacent spectral bands.

Fig. 1 (a) Hyperspectral cube data, (b) captured with two exposures of spatial scanning (push broom) and (c) with two exposures using CS-MUSI spectral scanning process. The figures show heuristically the parts of the datacube sampled at two different exposures. The spectrally encoded data is integrated pixelwise.


Hyperspectral datacubes typically contain from $10^8$ to $10^{10}$ entries. This imposes challenges on the scanning, storage and digital processing of the data. However, various studies [8,9] have shown that typical HS cubes are highly redundant and, therefore, are very compressible. This has motivated the application of Compressive Sensing (CS) for the task of HS image acquisition.

Compressive sensing [10,11] is a relatively new framework for the efficient sensing and reconstruction of sparse or compressible signals. Owing to the high compressibility of the HS cube, the application of CS methods for HS imaging is potentially highly beneficial. Recently, a number of different CS HS imaging systems have been presented [8,12–18].

Let us denote the HS 3D cube by $F$, the measured 3D cube by $G$ and the recovered HS 3D cube by $\tilde{F}$. We assume the cubes' spatial size is $N_x \times N_y$ pixels and there are $N_\lambda$ spectral bands, so that $F,\tilde{F}\in\mathbb{R}^{N_x\times N_y\times N_\lambda}$. In this work, we use a CS HS method that compresses the spectral domain, so that $G\in\mathbb{R}^{N_x\times N_y\times M_\lambda}$, where $M_\lambda<N_\lambda$. For linear systems, the operator that maps $F\to G$ can be represented by the sensing matrix $\Phi$ so that:

$$g=\Phi f, \tag{1}$$
where $f\in\mathbb{R}^{N_x N_y N_\lambda}$ and $g\in\mathbb{R}^{N_x N_y M_\lambda}$ are vectors obtained by lexicographic reordering of $F$ and $G$. Conventional sensing systems are designed so that, ideally, $\Phi$ is the identity matrix and $N_\lambda=M_\lambda$. Following CS theory, the number of measurements can be reduced, i.e., $M_\lambda<N_\lambda$, if the signal is sparse and a proper sensing mechanism is used. Obviously, the CS sensing matrix can then no longer be diagonal; it has many off-diagonal elements, meaning that each measurement is the result of multiplexing multiple input elements.
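To make the spectral multiplexing of Eq. (1) concrete, the following Python sketch applies a dense sensing matrix to a sparse spectrum; all sizes and values are hypothetical toy choices, not the system's actual dimensions:

```python
import numpy as np

# Toy illustration (hypothetical sizes) of the sensing model g = Phi * f:
# each of the M_lambda compressed measurements multiplexes all N_lambda
# spectral bands of a pixel, so Phi is dense rather than diagonal.
rng = np.random.default_rng(0)
N_lambda, M_lambda = 64, 16                 # assumed toy band/measurement counts
Phi = rng.uniform(0.0, 1.0, size=(M_lambda, N_lambda))   # dense sensing matrix

f = np.zeros(N_lambda)                      # sparse spectrum: 3 active bands
f[[5, 20, 41]] = [1.0, 0.5, 0.8]

g = Phi @ f                                 # compressed measurement vector
print(g.shape)                              # (16,)
```

Each entry of $g$ mixes all 64 bands, which is what allows $M_\lambda<N_\lambda$ measurements to suffice for a sparse $f$.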

In order to reconstruct the signal vector $f$ from the measurement vector $g$, various CS algorithms have been developed [19–26]. Most of these algorithms solve the $\ell_2$-$\ell_1$ minimization problem:

$$\min_{\tilde{f}}\left\{\left\|g-\Phi\tilde{f}\right\|_2^2+\tau\left\|\tilde{f}\right\|_1\right\}, \tag{2}$$
where $\tilde{f}\in\mathbb{R}^{N_x N_y N_\lambda}$ is the lexicographic reordering of $\tilde{F}$, $\tau$ is a regularization parameter and $\|\cdot\|_p$ is the $\ell_p$ norm. In a more general case, in which $f$ has a compressed form in some mathematical domain other than the native representation domain, Eq. (2) can be slightly modified accordingly [24].
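As an illustration of how Eq. (2) is solved in practice, the sketch below implements plain iterative shrinkage-thresholding (ISTA), a didactic stand-in for the cited solvers (TwIST, SpaRSA, NESTA); the problem sizes and signal values are hypothetical:

```python
import numpy as np

def ista(Phi, g, tau=0.01, n_iter=2000):
    """Iterative shrinkage-thresholding for min ||g - Phi f||_2^2 + tau*||f||_1.
    A didactic stand-in for the solvers cited in the text, not the paper's code."""
    L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant of the gradient
    f = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        z = f - Phi.T @ (Phi @ f - g) / L        # gradient step on the fidelity term
        f = np.sign(z) * np.maximum(np.abs(z) - tau / L, 0.0)   # soft thresholding
    return f

# Toy problem (hypothetical sizes): recover a 3-sparse spectrum from 32
# multiplexed measurements of a 64-band signal.
rng = np.random.default_rng(1)
Phi = rng.normal(size=(32, 64))
f_true = np.zeros(64)
f_true[[5, 20, 41]] = [1.0, 0.5, 0.8]
f_rec = ista(Phi, Phi @ f_true)
print(sorted(np.argsort(np.abs(f_rec))[-3:].tolist()))   # support of the recovery
```

With a sufficiently incoherent $\Phi$ and a sparse signal, the soft-thresholding iterations identify the correct spectral support.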

In this work, we present an along-track scanning implementation for a Compressive Sensing Miniature Ultra-Spectral Imaging (CS-MUSI) system [27]. In Section 2, we describe the system design of CS-MUSI. In Section 3, we describe the model of our along-track scanning technique. In Section 4, we describe the along-track scanning experiment and present experimental results. Finally, in Section 5, we present our conclusion.

2. CS-MUSI system design

The innovative CS-MUSI system was first presented in [27] and was developed to work as a staring system that acquires a spectral cube by taking multiple spectrally encoded shots. The CS-MUSI system uses a single LC phase retarder in order to encode the spectral domain [28]. In each shot, the spectral transmission is similar for all the pixels in the image, and the voltage applied on the LC controls the spectral modulation. The voltage $V$ applied on the LC controls the LC birefringence $\Delta n(V)$, which defines the spectral transmission response [28]:

$$T(\lambda)\approx\frac{1}{2}-\frac{1}{2}\cos\left(\frac{\pi d\,\Delta n(V)}{\lambda}\right), \tag{3}$$
where d is the cell gap and λ is the wavelength. For example, in Fig. 1(c) we can see the parts of the datacube modulated at two different exposures. Different spectral multiplexing is obtained by applying appropriate voltages on the LC. The spectrally multiplexed data is ultimately integrated by a 2D array detector.
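A short numerical sketch of Eq. (3) follows; the two birefringence values standing in for two LC voltages are hypothetical illustrations, while the cell gap and spectral range are taken from Sections 2 and 4:

```python
import numpy as np

# LC spectral transmission T(lambda) = 1/2 - (1/2)cos(pi*d*dn/lambda) for two
# assumed (hypothetical) birefringence values dn(V), standing in for two
# different voltages applied to the cell.
d = 50e-6                                        # cell gap, ~50 um (Section 2)
wavelengths = np.linspace(410e-9, 800e-9, 1000)  # spectral range used in Section 4

def transmission(dn):
    """Transmission response for a given effective birefringence dn."""
    return 0.5 - 0.5 * np.cos(np.pi * d * dn / wavelengths)

T_low, T_high = transmission(0.10), transmission(0.25)   # assumed dn values
print(float(T_low.min()) >= 0.0 and float(T_low.max()) <= 1.0)   # True
```

Changing $\Delta n(V)$ changes the oscillation rate of $T(\lambda)$, which is what provides a different spectral multiplexing per applied voltage.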

The CS-MUSI system uses an in-house manufactured LC cell with a clear aperture of about 1 cm². A schematic design of the CS-MUSI system is shown in Fig. 2. The optical design contains two achromatic doublet lenses. The first lens is used as an objective lens and the second lens as a relay lens. The purpose of the second lens is to relay the first intermediate image (LC plane) to the second image plane with ~1:1 magnification. The second image plane is the sensor (CCD/CMOS) plane. Two crossed polarizers were placed on either side of the LC cell [28].

Fig. 2 CS-MUSI camera optical design with the LC located at the field stop.


We used an in-house manufactured LC cell with a cell gap of approximately 50 µm. The LC cell was fabricated from two flat glass plates coated with indium tin oxide (ITO) and a polymer alignment layer, together with two linear polarizers. The cavity is filled with the LC material E44 (Merck).

Compared to conventional spectral-scanning imaging, such as a Fourier transform spectroscopy system, the CS-MUSI system requires about an order of magnitude fewer measurements and uses a much simpler, more robust and smaller setup. Compared to CS single-shot spectral imaging systems (e.g., [12]), CS-MUSI offers significantly better spectral fidelity and volume reduction.

3. HS imaging with scanning CS-MUSI

As mentioned in Section 2, the CS-MUSI system is a staring system, so that each spectrally encoded shot covers the same field-of-view (FOV). In this paper, we present a technique for scanning along-track using the CS-MUSI system. The scanning geometry is shown in Fig. 3. "Push broom"-like spatial scanning is employed [Fig. 1(b)] [3], which makes the CS-MUSI applicable for remote sensing missions. The 3D cube is captured by an along-track scanning process. In the spectral scanning process, at each exposure the 2D detector captures a combination of different wavelengths [Fig. 1(c)], as with the staring CS-MUSI system [27]. The spatial scanning process is similar to the push broom scanning technique, but, instead of collecting a single strip from the scene (together with all its spectral components) at each exposure, $t$, the spectrally multiplexed reflected light for an area $f^{(t)}$ of size $L_x\times L_y$ is captured (see Fig. 3). The 2D array detector moves along the motion direction, and the distance the system travels determines the swath width in the along-track direction. In each exposure, the FOV of the system is slightly translated in the moving direction. The data is captured by simultaneous spatial (due to the platform motion) and spectral (due to the LC voltage change) scanning; therefore, in every measurement we sense a different set of multiplexed wavelengths from a shifted region. Even though we add a spatial scanning process, the data encoding remains only in the spectral domain and there is no spatial multiplexing.

Fig. 3 Along-track scanning with the CS-MUSI system. At every exposure the CS-MUSI camera captures a scene, $f^{(t)}$, with a different spectral multiplexing depending on the voltage applied on the LC, $v_t$.


Let us denote the spectral distribution of the entire scene that the CS-MUSI system captures by $f(x,y,\lambda)$ (Fig. 3). The spectral transmission function of the LC [29] for a given voltage $v_t$ is denoted by $\phi_{v_t}(\lambda)$; it is the same for all light passing through the LC cell and, therefore, for all pixels in a given exposure. With the along-track scanning CS-MUSI system, the 2D detector array measures a scene shifted by $d_x$ in the along-track direction in each exposure. Therefore, each exposure covers a new strip of size $d_x\times L_y$ (shaded area in Fig. 3). The detector measures the intensity of light that is spectrally modulated. The image at the $t$-th exposure is given by:

$$g_t(x,y)=\int \phi_{v_t}(\lambda)\,\mathrm{rect}\left(\frac{x}{L_x},\frac{y}{L_y}\right) f\big(x+(t-1)d_x,\,y,\,\lambda\big)\,d\lambda, \tag{4}$$
where $d_x$ is the scene shift in the along-track direction between two consecutive exposures $g^{(t)}$ and $g^{(t+1)}$ (Fig. 3). In Eq. (4), we assume that the scene intensity remains approximately the same while the viewing angle changes due to the along-track motion. This is a reasonable assumption if the scene reflectance is Lambertian or the variation of the viewing angle within the FOV is sufficiently small. Since the sensing process is discrete, it is convenient to represent the model in a discrete form. The discrete form of the detector's $t$-th measurement of the pixel $(n,m)$ becomes
$$g_{n,m}^{(t)}=\iiint \phi_{v_t}(\lambda)\,\mathrm{rect}\left(\frac{x}{L_x},\frac{y}{L_y}\right) f\big(x+(t-1)d_x,\,y,\,\lambda\big)\,\mathrm{rect}\left(\frac{x}{\Delta}-n,\frac{y}{\Delta}-m\right)dx\,dy\,d\lambda, \tag{5}$$
where $\Delta$ is the pixel instantaneous FOV. If we denote by $\phi_{v_t}\in\mathbb{R}^{N_\lambda}$ the discretized spectral transmission function $\phi_{v_t}(\lambda)$ of the LC with the applied voltage $v_t$, Eq. (5) can be rewritten as
$$g_{n,m}^{(t)}=\sum_{k}\phi_{v_t,k}\,f_{n+l,m,k}, \tag{6}$$
where $f_{n,m,k}=f(\Delta n,\Delta m,\Delta_\lambda k)$, $\Delta_\lambda$ is the width of the spectral band, $l=(t-1)N_s$ is the origin in the horizontal direction of $f^{(t)}$, and $N_s=\lceil d_x/\Delta\rceil$ is the horizontal pixel shift (in the object plane) between two consecutive exposures, where $\lceil x\rceil$ denotes the smallest integer not less than $x$. At each given location $t=1,2,\dots$ along the scanning track, the CS-MUSI system captures a 2D measurement, $g^{(t)}$, of the respective area $f^{(t)}$:
$$f^{(t)}=\begin{pmatrix} f_{l,0} & f_{l+1,0} & \cdots & f_{l+L_x/\Delta,0}\\ f_{l,1} & f_{l+1,1} & \cdots & f_{l+L_x/\Delta,1}\\ \vdots & \vdots & \ddots & \vdots\\ f_{l,L_y/\Delta} & f_{l+1,L_y/\Delta} & \cdots & f_{l+L_x/\Delta,L_y/\Delta}\end{pmatrix}, \tag{7}$$
where $f_{n,m}$ is the spectral vector of the point $(n,m)$ of the scene.
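The discrete forward model of Eq. (6) can be sketched numerically as follows; all array sizes and values are hypothetical toy choices, not the experimental parameters:

```python
import numpy as np

# Toy along-track forward model: at exposure t, detector pixel (n, m) integrates
# the spectrum of scene column n + l, l = (t - 1) * Ns, weighted by the LC
# transmission vector phi_{v_t} (all sizes/values are hypothetical).
rng = np.random.default_rng(2)
Nx, Ny, N_lambda = 12, 4, 8          # toy scene size and number of bands
Ns = 2                               # horizontal pixel shift between exposures
f = rng.uniform(size=(Nx, Ny, N_lambda))     # scene datacube f_{n,m,k}
phi = rng.uniform(size=(5, N_lambda))        # one LC spectrum per exposure t

def measure(t, n, m):
    """g_{n,m}^{(t)} = sum_k phi_{v_t,k} * f_{n+l,m,k}, with l = (t-1)*Ns."""
    l = (t - 1) * Ns
    return float(phi[t - 1] @ f[n + l, m])

g_val = measure(t=2, n=1, m=0)       # pixel (1, 0) at the second exposure
```

Note how the same detector pixel samples a different scene column at each exposure, which is the source of the per-pixel sensing matrices derived next.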

Let us denote by $\{p_i\}_{i=1}^{M}$ the set of exposure indices, $t$, corresponding to the exposures that capture the pixel at some given location $(n,m)$ (e.g., the black point in Fig. 3):

$$f_{n,m}\;\leftrightarrow\;\left\{g_{n_1,m}^{(p_1)},\,g_{n_2,m}^{(p_2)},\,\dots,\,g_{n_M,m}^{(p_M)}\right\}, \tag{8}$$
where $M=\lceil L_x/d_x\rceil$ is the number of times the $(n,m)$ point is measured, and $\{p_i\}_{i=1}^{M}$ are the indices of the exposures $i=1,\dots,M$ that capture the point $f_{n,m}$, given by the nearest integer of $(n-L_x)/d_x+i$ for $(n-L_x)/d_x+i>0$, and $p_i=1$ otherwise. $n_i$ represents the pixel position on the detector in the $p_i$-th exposure, given by $n_i=n-d_x(p_i-1)$ if $n_i\le L_x/\Delta$, and $n_i=L_x/\Delta$ otherwise. The exposure index $p_i$ is associated with the respective voltage $v_{p_i}$ applied on the LC, which determines the spectral encoding. By registering the voltages, we can express the system transmission for the $(n,m)$ point of the scene in a matrix form, $\Phi_{n,m}$:
$$\Phi_{n,m}=\left[\phi_{v_{p_1}}\;\phi_{v_{p_2}}\;\cdots\;\phi_{v_{p_M}}\right]^{T}, \tag{9}$$
where $\phi_{v_{p_i}}$ is a vector that represents the spectral transmission function of the LC for a given voltage $v_{p_i}$. A representative sensing matrix for a single point is illustrated in the left image of Fig. 4. The figure presents the spectral transmission for different voltages applied on the LC.

Fig. 4 Left: Image of the sensing matrix $\Phi_{n,m}$ showing the spectral transmission as a function of voltage/exposure. Each row in the matrix represents the spectral transmission for voltage $v_p$. Center and Right: Rearranged matrices for two different pixels in two different columns of the scene (vertical strips in the object grid).


Unlike the staring CS-MUSI system, where the sensing matrix is the same for all points in the imaged scene, the along-track scanning CS-MUSI system has different sensing matrices, which depend on the pixel sensed and on the appropriate set of exposures. From Eqs. (4)-(8), we can see that the exposure index, $t$, does not affect the off-track direction. Therefore, all pixels in a vertical strip of size $d_x\times L_y$ share the same sensing model. For example, the gray vertical strip in Fig. 3 belongs to the imaged patches $f^{(1)},f^{(2)},\dots,f^{(D)}$, and will appear in exposures $1,\dots,D$. For these exposures, the LC operated under voltages $v_1,v_2,\dots,v_D$, yielding an appropriate set of spectral encodings $\Phi_{n_1^D,m}$ for any point $n_1^D$ in the $D$-th vertical strip, where $n_i^s=(s-1)d_x+i-1$ denotes the $i$-th column ($i=1,2,\dots,M_s$) at strip $s$.

Since each vertical strip has its own sensing matrix, it is convenient to perform the reconstruction strip-wise, rather than pixel-wise. The sensing matrix for the $s$-th vertical strip can be written as a block matrix built from the appropriate matrices $\Phi_{n_1^s,m}\in\mathbb{R}^{M_\lambda\times N_\lambda}$, Eq. (9), of each point in the strip:

$$\Phi_s=I_{N_y N_s\times N_y N_s}\otimes\Phi_{n_1^s,m}=\begin{bmatrix}\Phi_{n_1^s,m} & 0_{M_\lambda\times N_\lambda} & \cdots & 0_{M_\lambda\times N_\lambda}\\ 0_{M_\lambda\times N_\lambda} & \Phi_{n_1^s,m} & \cdots & 0_{M_\lambda\times N_\lambda}\\ \vdots & \vdots & \ddots & \vdots\\ 0_{M_\lambda\times N_\lambda} & 0_{M_\lambda\times N_\lambda} & \cdots & \Phi_{n_1^s,m}\end{bmatrix}, \tag{10}$$
where $\otimes$ denotes the Kronecker product, $I$ is the identity matrix, $s$ is the vertical strip number and $N_y$ is the total number of pixels in one column.
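The block-diagonal structure of Eq. (10) maps directly onto a Kronecker product in code; the sketch below uses hypothetical toy dimensions:

```python
import numpy as np

# Strip sensing matrix Phi_s = I (kron) Phi_pixel, as in Eq. (10): the per-pixel
# matrix repeats along the block diagonal, one copy per pixel of the vertical
# strip (toy sizes, hypothetical values).
rng = np.random.default_rng(3)
M_lambda, N_lambda = 3, 6            # toy measurement/band counts per pixel
Ny, Ns = 2, 2                        # pixels per column, columns per strip
Phi_pixel = rng.uniform(size=(M_lambda, N_lambda))

Phi_s = np.kron(np.eye(Ny * Ns), Phi_pixel)   # block-diagonal strip matrix
print(Phi_s.shape)                            # (12, 24)
```

Because the matrix is block-diagonal, each pixel's spectrum can in principle be recovered independently, which is why the strip-wise reconstruction decomposes cleanly.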

Following the CS approach (Section 1), in order to be able to reconstruct a vertical strip, the strip needs to be captured with $M=M_\lambda$ different spectrally encoded measurements; that is, the strip has to appear in $M_\lambda$ different measurements $g^{(t)}$. For example, for the gray vertical strip from Fig. 3, $D=M_\lambda$.

With the staring CS-MUSI system [27], since the FOV of the camera is the same for all the measurements, it is sufficient to drive the LC with a set that contains $M_\lambda$ different voltages, for example, the first 100 voltages on the left side of Fig. 5. However, with the along-track scanning setup (Fig. 3), the set of LC voltages needs to be applied repetitively (Fig. 5), $v_t=v_{t+M_\lambda}$, in order to ensure that all the vertical strips are captured with a full set of the same $M_\lambda$ modulations.

Fig. 5 Set of voltages applied to the LC as a function of exposure number, $t$, applied in a periodic way (left), and set of voltages as a function of exposure number, $t$, and the strip exposure number [$i$ from Eq. (8)] for a specific vertical strip (right). In this example, there are $M_\lambda=100$ different voltages.


As a result of the cyclic process, the voltage applied to the LC at each exposure can be expressed using the modulo operator, $v_t=v_{\mathrm{mod}(t,\,M_\lambda)}$. Owing to the periodicity of the applied voltage, $v_t=v_{t+M_\lambda}$, we get $\Phi_{n_1^s,m}=\Phi_{n_1^{s+M_\lambda},m}$ and, therefore, $\Phi_s=\Phi_{s+M_\lambda}$, where $t$ is the exposure number and $s$ is the vertical strip number. In practice, this yields a circulant rotation of the sensing matrix, as illustrated in Fig. 4, showing three different sensing matrices for three different points in three different vertical strips.
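The cyclic voltage schedule can be sketched in a few lines; the voltage table below reuses the 0-10 V range from Section 4, while the linear spacing is a hypothetical choice:

```python
import numpy as np

# Cyclic LC voltage schedule: exposure t (1-based) reuses voltage index
# mod(t - 1, M_lambda), so v_t = v_{t + M_lambda} and every vertical strip is
# captured with the full set of M_lambda encodings (toy sketch).
M_lambda = 100                                  # distinct voltages, as in Fig. 5
voltages = np.linspace(0.0, 10.0, M_lambda)     # 0-10 V set; spacing is assumed
 
def voltage_for_exposure(t):
    """Voltage applied to the LC at exposure t = 1, 2, ..."""
    return voltages[(t - 1) % M_lambda]

print(voltage_for_exposure(1) == voltage_for_exposure(101))   # True
```

The periodicity guarantees that any window of $M_\lambda$ consecutive exposures contains every encoding exactly once, regardless of where a strip enters the FOV.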

The above analysis assumed uniform horizontal velocity and no vertical motion. Unfortunately, due to possible mechanical vibrations, the successive shots may not be aligned along a common spatial grid. Therefore, a digital registration process needs to be applied to the captured data before solving Eq. (2). By estimating the amount of motion of each exposure, we can reorganize the 3D cube on a common spatial grid that enables recovering the 3D data column by column. The motion estimation is performed using a block-based motion estimation algorithm [30]. In this method, each image is divided into blocks, each of which is then compared with a block from the previous image (the reference image). We performed the block matching with a Full Search (FS) on a pre-selected area of interpolated images [30]. The cost function used is the correlation between two blocks:

$$\mathrm{Corr}\left(g_{\text{block-int}}^{(i)},g_{\text{block-int}}^{(j)}\right)=\frac{\mathrm{cov}\left(g_{\text{block-int}}^{(i)},g_{\text{block-int}}^{(j)}\right)}{\sqrt{\mathrm{cov}\left(g_{\text{block-int}}^{(i)}\right)\mathrm{cov}\left(g_{\text{block-int}}^{(j)}\right)}}, \tag{11}$$
where $g_{\text{block-int}}^{(i)}$ and $g_{\text{block-int}}^{(j)}$ are bicubically interpolated [31] block images of the measurements $g^{(i)}$ and $g^{(j)}$, and cov denotes the covariance. After identifying all the sub-pixel shifts, the HS CS images are organized in one 3D cube and presented in a matrix form, $G$, where each layer in the matrix represents the same scene, as illustrated in Fig. 6.
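A full-search block-matching step with a correlation cost, in the spirit of Eq. (11), can be sketched as follows; this is a didactic integer-shift version (the paper works on interpolated images for sub-pixel accuracy), and all helper names and sizes are hypothetical:

```python
import numpy as np

def block_corr(a, b):
    """Normalized correlation between two image blocks (the matching cost)."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def estimate_shift(ref, img, max_shift=3):
    """Full-search estimate of the (dx, dy) shift maximizing the correlation
    over a central block; a hypothetical sketch, not the paper's code."""
    h, w = ref.shape
    c = ref[max_shift:h - max_shift, max_shift:w - max_shift]  # reference block
    best, best_corr = (0, 0), -2.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cand = img[max_shift + dy:h - max_shift + dy,
                       max_shift + dx:w - max_shift + dx]
            r = block_corr(c, cand)
            if r > best_corr:
                best_corr, best = r, (dx, dy)
    return best

# Synthetic check: shift a random image by (dx, dy) = (2, 1) and recover it.
rng = np.random.default_rng(4)
ref = rng.uniform(size=(32, 32))
img = np.roll(np.roll(ref, 1, axis=0), 2, axis=1)
print(estimate_shift(ref, img))   # (2, 1)
```

Sub-pixel accuracy, as used in the experiment, follows from running the same search on bicubically upsampled blocks.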

Fig. 6 Captured 2D images, $g^{(t)}$, before and after the registration process.


After obtaining the registered 3D cube, $G$ (Fig. 6), the 3D HS cube $F$ can be reconstructed strip by strip from $G_s$, using Eq. (2) with the appropriate $\Phi_s$. Reconstruction strategies other than the strip-wise one used here can be applied as well [32].

4. Experiment

We carried out a lab experiment that mimics HS imaging with a push broom-like scanning CS-MUSI. Instead of imaging with a moving CS-MUSI system (Fig. 3), we kept the imaging platform static and moved the object laterally with respect to the imager. The object was placed on a moving stage 129.5 cm from the optical system. The cross-swath width was 7.5 cm. The imager used a CMOS UI-3240CP-C-HQ detector of 1280 × 1024 pixels and had a FOV of 3.3° and an instantaneous FOV of 0.005°.

The acquisition was done by spectrally imaging three arrays of LED light sources (Thorlabs LIU001, LIU002, and LIU003). The size of the object was 7.5 cm × 7.5 cm. Figure 8(a) shows the LED arrays as imaged with an RGB color camera. A signal generator produced a set of 100 voltages between 0 V and 10 V, repeated three times (Fig. 5). The reconstruction was performed in Matlab on a computer with an Intel(R) i7-4710 2.5 GHz processor and 16 GB of memory.

As explained in Section 3, before solving Eq. (2) it is necessary to register the vertical shifts along a common grid. This was done by a sub-pixel motion detection algorithm based on block matching that detects shifts as small as one tenth of a pixel. In Fig. 7, we display the motion estimation results. Between shots in the scanning direction (x axis in Fig. 7), an average movement, $N_s$, of 2.5 pixels was found. The small deviations are due to imperfect synchronization between the systems and small timing errors in the operation of each system. In the cross-track direction (y axis in Fig. 7) there was an unintentional movement of less than a pixel.

Fig. 7 Displacement between consecutive shots as a function of exposure number; the upper graph is in the scanning direction (horizontal translation) and the lower graph is in the direction perpendicular to the scanning (vertical translation).


In order to solve Eq. (2), we used the SpaRSA solver [21,33], which is designed for separable sensing operators. The sparsifying operator was the orthogonal wavelet filter Daubechies-4 (db4). In Fig. 8(b), we show a reconstruction of 640 × 700 pixels from the center of the scene. In [27], we have shown that in the spectral range of 410 nm to 800 nm it is possible to reconstruct around 1000 bands with a compression ratio of ~8-12 to 1. In this example, we reconstructed 1171 spectral bands in the range 410 nm to 800 nm. The image reconstruction was performed column by column, each time with the appropriate sensing matrix, as explained in Section 3. The image was reconstructed from 100 samples for each vertical strip, which corresponds to a sampling ratio of about 8.5% (100/1171).

Fig. 8 (a) Original image (b) RGB representation of the HS cube in the scanning CS-MUSI experiment.


Figure 9 shows five reconstructed images at five different wavelengths captured with a CS-MUSI camera in the along-track scanning mode.

Fig. 9 Reconstruction results at five different wavelengths: 460, 488, 520, 590 and 630 nm.


5. Conclusion

In conclusion, we presented a compressive ultra-spectral imaging acquisition technique appropriate for systems mounted on moving platforms. The technique uses the CS-MUSI system [27] with a push broom-like scanning process. The along-track scanning process enables capturing compressed spectral images. Appropriate post-acquisition digital processing is used to account for distortion induced by the scanning process. In a laboratory experiment mimicking along-track scanning imaging conditions, we demonstrated acquisition of a 640 × 700 × 1171 3D spectral cube from only 640 × 700 × 100 measurements, thus achieving optical compression by a factor of almost 12. This compression enables efficient storage and exploitation of transmission bandwidth.

Acknowledgments

This work was supported by the Ministry of Science and Technology (grant No. MOST 3-11483). We wish to thank Prof. Ibrahim Abdulhalim's research group (Department of Electro-Optical Engineering and The Ilse Katz Institute for Nanoscale Science and Technology, Ben-Gurion University) for providing the liquid crystal cell.

References and links

1. M. T. Eismann, Hyperspectral Remote Sensing (SPIE, 2012).

2. T. A. Warner, G. M. Foody, and M. D. Nellis, The SAGE Handbook of Remote Sensing (Sage, 2009).

3. J. R. Schott, Remote Sensing (Oxford University, 2007).

4. H. Cetin, J. Pafford, and T. Mueller, “Precision agriculture using hyperspectral remote sensing and GIS,” in Proceedings of 2nd International Conference on Recent Advances in Space Technologies, RAST 2005 (IEEE, 2005), pp. 70–77. [CrossRef]  

5. P. W. Yuen and M. Richardson, “An introduction to hyperspectral imaging and its application for security, surveillance and target acquisition,” Imaging Sci. J. 58(5), 241–253 (2010). [CrossRef]  

6. B. Xiaojia, M. Fang, W. Bin, L. Jiaguang, and W. Dong, “Hyperion hyperspectral remote sensing application in altered mineral mapping in East Kunlun of the Qinghai-Tibet Plateau,” in 2010 International Conference on Challenges in Environmental Science and Computer Engineering (CESCE) (IEEE, 2010), pp. 519–523. [CrossRef]  

7. G. Vane, R. O. Green, T. G. Chrien, H. T. Enmark, E. G. Hansen, and W. M. Porter, “The airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 44(2-3), 127–143 (1993). [CrossRef]  

8. C. Li, T. Sun, K. F. Kelly, and Y. Zhang, “A compressive sensing and unmixing scheme for hyperspectral data processing,” IEEE Trans. Image Process. 21(3), 1200–1210 (2012). [CrossRef]   [PubMed]  

9. Y. August, C. Vachman, and A. Stern, “Spatial versus spectral compression ratio in compressive sensing of hyperspectral imaging,” in SPIE Defense, Security, and Sensing (ISOP, 2013), pp. 87170G.

10. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). [CrossRef]  

11. E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory 52(2), 489–509 (2006). [CrossRef]  

12. M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Opt. Express 15(21), 14013–14027 (2007). [CrossRef]   [PubMed]  

13. T. Sun and K. Kelly, “Compressive sensing hyperspectral imager,” in Computational Optical Sensing and Imaging (Optical Society of America, 2009), paper CTuA5.

14. A. A. Wagadarikar, N. P. Pitsianis, X. Sun, and D. J. Brady, “Video rate spectral imaging using a coded aperture snapshot spectral imager,” Opt. Express 17(8), 6368–6388 (2009). [CrossRef]   [PubMed]  

15. Y. August, C. Vachman, Y. Rivenson, and A. Stern, “Compressive hyperspectral imaging by random separable projections in both the spatial and the spectral domains,” Appl. Opt. 52(10), D46–D54 (2013). [CrossRef]   [PubMed]  

16. G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: An introduction,” IEEE Signal Process. Mag. 31(1), 105–115 (2014). [CrossRef]  

17. X. Lin, Y. Liu, J. Wu, and Q. Dai, “Spatial-spectral encoded compressive hyperspectral imaging,” ACM Trans. Graph. 33(6), 233 (2014). [CrossRef]  

18. R. M. Willett, M. F. Duarte, M. Davenport, and R. G. Baraniuk, “Sparsity and structure in hyperspectral imaging: Sensing, reconstruction, and target detection,” IEEE Signal Process. Mag. 31(1), 116–126 (2014). [CrossRef]  

19. J. M. Bioucas-Dias and M. A. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process. 16(12), 2992–3004 (2007). [CrossRef]   [PubMed]  

20. M. A. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems,” IEEE J. Sel. Top. Signal Process. 1(4), 586–597 (2007). [CrossRef]  

21. S. J. Wright, R. D. Nowak, and M. A. Figueiredo, “Sparse reconstruction by separable approximation,” IEEE Trans. Signal Process. 57(7), 2479–2493 (2009). [CrossRef]  

22. M. Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing (Springer, 2010).

23. S. Becker, J. Bobin, and E. J. Candès, “NESTA: a fast and accurate first-order method for sparse recovery,” SIAM J. Imaging Sci. 4(1), 1–39 (2011). [CrossRef]  

24. Y. C. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications (Cambridge University, 2012).

25. S. Qaisar, R. M. Bilal, W. Iqbal, M. Naureen, and S. Lee, “Compressive sensing: From theory to applications, a survey,” J. Commun. Netw. (Seoul) 15(5), 443–456 (2013). [CrossRef]  

26. T. Goldstein, C. Studer, and R. Baraniuk, “A field guide to forward-backward splitting with a FASTA implementation,” arXiv preprint arXiv:1411.3406 (2014).

27. I. August, Y. Oiknine, M. AbuLeil, I. Abdulhalim, and A. Stern, “Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder,” Sci. Rep. 6, 23524 (2016).

28. Y. August and A. Stern, “Compressive sensing spectrometry based on liquid crystal devices,” Opt. Lett. 38(23), 4996–4999 (2013). [CrossRef]   [PubMed]  

29. A. Yariv and P. Yeh, Optical Waves in Crystals (Wiley, 1984).

30. S. Usama, M. Montaser, and O. Ahmed, “A complexity and quality evaluation of block based motion estimation algorithms,” Acta Polytech. 45, 1-13 (2005).

31. R. Carlson and F. Fritsch, “Monotone piecewise bicubic interpolation,” SIAM J. Numer. Anal. 22(2), 386–400 (1985). [CrossRef]  

32. Y. Oiknine, I. August, L. Revah, and A. Stern, “Comparison between various patch wise strategies for reconstruction of ultra-spectral cubes captured with a compressive sensing system,” Proc. SPIE 9857, 9857 (2016).

33. Y. Oiknine, Y. August, and A. Stern, “Reconstruction algorithms for compressive hyperspectral imaging systems with separable spatial and spectral operators,” in SPIE Optical Engineering Applications (ISOP, 2014), paper 921703.



