
Depth-independent internal fingerprint based on optical coherence tomography

Open Access

Abstract

Optical coherence tomography (OCT) has been used to image three-dimensional fingerprints and thereby overcome the effects of varying skin states and fake fingerprints. However, OCT-based fingerprint features depend on the depth within the fingertip skin, which remains a challenge for biometric recognition and encryption. In this work, we present a new approach that uses the maximum intensity projection (MIP) image of the dermal-epidermal junction (DEJ) to extract an internal fingerprint that is independent of the depth of the fingertip skin. First, the surface and the DEJ were segmented with a deep learning algorithm. Then the internal fingerprint was extracted from the MIP image of the DEJ, which shows a higher structural similarity in quantitative analysis. The experimental results show that the internal fingerprint acquired by MIP of the DEJ can be applied to a scar-simulated fingertip and to encryption, since it is insensitive to the state of the surface skin and independent of depth.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

It is well known that the human fingerprint can be used for biometric recognition [1,2] and encryption [3-5], since fingerprints are located on the surface of the fingertip, are unique to each person, and are the oldest biometric sign of identity [6]. Characteristic surface fingerprint features are generally categorized into three levels. Level 1 features are the location and arrangement of the fingerprint ridges. Level 2 features are minutiae such as ridge bifurcations and endings. Level 3 features are all dimensional attributes of the fingerprint ridges, including ridge path deviation, width, shape, pores, edge contour, incipient ridges, breaks, creases, scars, and other permanent details [1].

Surface fingerprints are captured with fingerprint scanners based on a variety of physical mechanisms, including optical, pressure, and acoustic methods [7-10], and the image quality of the fingerprint is determined by the skin state (dry/wet), damage, and distortion [11]. Some work used polarization-resolved in-finger scattered light to obtain a moisture-insensitive fingerprint [12], but this technique does not solve the problems caused by damaged fingertip skin such as scars. Thus, a robust and damage-resistant means of fingerprint acquisition is needed, and the subsurface fingerprint is an attractive solution. Fortunately, the papillary junction is located at the dermal-epidermal junction (DEJ) and has the same topography as the surface. The internal fingerprint was proposed to distinguish it from the surface fingerprint and is found in the range of 220-550 $\mu m$ below the surface [13].

Optical coherence tomography (OCT) is a noninvasive imaging technique for highly scattering biological tissues that can image up to 1 mm beneath the skin surface. Thus, there has been great interest in using OCT for internal fingerprint recognition and for imaging structures below the finger surface [13-24], as a damage-resistant means of acquiring the internal fingerprint. There are mainly two methods to reconstruct en face fingerprint images from OCT. One uses a single en face 2-D image at a specific depth [18]. The other uses an en face image of the finger averaged [20,21,25] or summed [22] over several successive depths. However, the depth of the DEJ is not at a fixed position for all people, and the features of the internal fingerprint change with depth, since that depth varies from person to person and from finger to finger. Thus, extracting a reliable and robust internal fingerprint remains a challenge.

To address this issue, we propose maximum intensity projection (MIP) images of the surface and DEJ regions to extract the surface and internal fingerprints, respectively; these images are constructed by projecting the en face OCT images of the surface and DEJ regions along the z-axis. Fingerprint extraction from the MIP image is therefore independent of depth. First, the surface and the DEJ were segmented by a convolutional neural network (CNN), and the ridge and valley envelopes were constructed by interpolation. Then MIP images of the surface and DEJ regions were constructed to extract the external and internal fingerprints. Finally, the MIP-based internal fingerprint was applied to fingerprint extraction from a scarred fingertip and to encryption.

2. Methods

2.1 OCT system and sample

In this work, the home-built spectral domain OCT system is similar to that of our previous work [26]. The main differences are that nonpolarized light is used in the sample and reference arms and that only one reference arm is used, as shown in Fig. 1(a). The longitudinal and lateral resolutions in air are 8.9 $\mu m$ and 18.2 $\mu m$, respectively. At the tip of the probe, a 3 mm thick glass window W2 is placed in contact with the finger, as in fingerprint scanning with conventional scanners, and the glass is tilted at a small angle ($<5^{\circ}$) to reduce noise. Another identical cover glass W1 is placed in front of the focusing lens to compensate for the dispersion mismatch. For the construction of a 3D image, 400 B-mode OCT images are acquired, with the light beam translated in 25 $\mu m$ increments.

Fig. 1. (a) OCT system for fingerprint imaging, (b) typical three-dimensional OCT image of fingertip skin, (c) cross-sectional OCT image of fingertip skin, and (d)-(g) en face OCT images at the different depths labeled I to IV in (c), demonstrating how the fingerprint changes with depth.

Human fingertip skin was chosen for the study. Fig. 1(b) shows a typical 3D image of fingertip skin. The surface and internal fingerprints are located in the regions labeled in Fig. 1(c), and Figs. 1(d)-(g) demonstrate that the fingerprint changes with depth.

2.2 Surface and internal fingerprint extraction

Figure 2 shows the flowchart for extracting the surface and internal fingerprints. As shown in Fig. 2(a), the ridge portion comprises the structure from the tops of the papillae ridges to the bottoms of the papillae valleys. Fig. 2(b) shows that the boundaries of the surface and the DEJ were determined by a convolutional neural network (CNN). Both boundaries were segmented with U-Net, whose settings can be found in our previous work [27]. The ridge tops and papillae valleys were located by searching for local maxima and minima, as shown in Fig. 2(c). Then the envelope curves of the local maxima and minima of the ridge boundary were determined by interpolation over the ridge portion of the DEJ, as shown in Fig. 2(d). Finally, the region between the envelope curves of the local maxima and minima was chosen as the region of interest (ROI) for the internal fingerprint, as shown in Fig. 2(e). A minimal sketch of this envelope construction is given below.
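As a rough illustration of the steps in Fig. 2(c)-(e), the following Python sketch assumes the DEJ boundary has already been segmented (e.g., by the U-Net) and is available as a one-dimensional depth profile per B-scan; the function and parameter names are illustrative assumptions, not taken from our implementation.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import interp1d

def dej_envelopes(dej_depth, order=5):
    """Upper and lower envelope curves of one B-scan's DEJ boundary.

    dej_depth: 1-D array of boundary depths (pixels) along the lateral axis.
    Returns envelopes interpolated through the ridge tops (local minima in
    depth) and the valley bottoms (local maxima in depth)."""
    x = np.arange(dej_depth.size)
    tops = argrelextrema(dej_depth, np.less_equal, order=order)[0]
    bottoms = argrelextrema(dej_depth, np.greater_equal, order=order)[0]
    top_env = interp1d(x[tops], dej_depth[tops], kind='linear',
                       fill_value='extrapolate')(x)
    bottom_env = interp1d(x[bottoms], dej_depth[bottoms], kind='linear',
                          fill_value='extrapolate')(x)
    # The region between top_env and bottom_env is the ROI of Fig. 2(e).
    return top_env, bottom_env
```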

Fig. 2. Flowchart of surface and internal fingerprint extraction.

A maximum intensity projection (MIP) image of the DEJ ridge portion in the 3D OCT image of the fingertip skin was used to extract the internal fingerprint, as shown in Fig. 3(d). The projection yields an internal fingerprint that is independent of depth. To quantitatively compare the internal fingerprint with the depth-dependent fingerprints, we used the MIP of the surface boundary of the fingertip skin to construct the surface fingerprint as a reference, as shown in Fig. 3(c). A sketch of this projection step is given below.
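A minimal sketch of the projection step, assuming the OCT volume is stored as volume[z, x, y] and the two envelope curves have been assembled into per-pixel depth maps (the variable names and array layout are assumptions for illustration only):

```python
import numpy as np

def mip_between_surfaces(volume, top_env, bottom_env):
    """Maximum intensity projection along z between two depth maps.

    volume: 3-D OCT intensity array indexed as [z, x, y].
    top_env, bottom_env: 2-D depth maps (pixels) bounding the DEJ ridge portion.
    Returns an en face image that is independent of the absolute DEJ depth."""
    z = np.arange(volume.shape[0])[:, None, None]
    inside = (z >= top_env[None, ...]) & (z <= bottom_env[None, ...])
    return np.where(inside, volume, -np.inf).max(axis=0)
```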

Fig. 3. (a) Fingerprint acquired by conventional optical scanner, (b) magnification of ROI in (a), (c) projection of surface boundary to acquire surface fingerprint, and (d) projection of DEJ boundary to acquire internal fingerprint.

2.3 Evaluation of image quality and robustness for depth-dependent fingerprint

Similarity indices including the peak signal-to-noise ratio (PSNR) [28], normalized cross-correlation (NCC) [29], and structural similarity index metric (SSIM) [30] are used to evaluate image quality and robustness in image coding, denoising, and segmentation. These similarity indices account for both intensity variations and geometric distortions. Thus, these three parameters are applied to investigate the image quality and robustness of the depth-dependent fingerprints.

PSNR is defined by [28]:

$$PSNR=10\log_{10}\!\left[\frac{I_{max}^{2}}{MSE}\right]$$
$$MSE = \frac{1}{N} \sum_{n=1}^{N}[I(n)-I'(n)]^{2}$$
where $I_{max}$ is the maximum intensity value, $N$ is the number of pixels in an image, and $MSE$ is the mean squared error between the two images. $I(n)$ and $I'(n)$ represent the $n^{th}$ pixel of the reference fingerprint image and of the depth-dependent fingerprint image, respectively.

NCC is given by [29]:

$$NCC = \frac{\sum_{n=1}^{N} I(n)\, I'(n)}{\sqrt{\sum_{n=1}^{N} I(n)^{2}}\,\sqrt{\sum_{n=1}^{N} I'(n)^{2}}}$$

And SSIM is defined as:

$$SSIM(I,I')=\frac{(2\mu_{I}\mu_{I'} + C_{1})(2\sigma_{II'} + C_{2})}{(\mu_{I}^{2}+\mu_{I'}^{2}+C_{1})(\sigma_{I}^{2}+\sigma_{I'}^{2}+C_{2})}$$
where $C_1$ and $C_2$ are two small positive constants, and the other variables are as follows.
$$\mu_{I}= \frac{1}{N}\sum_{n=1}^{N}I(n)$$
$$\sigma_{I}^{2} =\frac{1}{N}\sum_{n=1}^{N}[I(n)-\mu_{I}]^{2}$$
$$\mu_{I'}= \frac{1}{N}\sum_{n=1}^{N}I'(n)$$
$$\sigma_{I'}^{2} =\frac{1}{N}\sum_{n=1}^{N}[I'(n)-\mu_{I'}]^{2}$$
$$\sigma_{II'} =\frac{1}{N}\sum_{n=1}^{N}[I(n)-\mu_{I}][I'(n)-\mu_{I'}]$$
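For reference, Eqs. (1)-(9) can be implemented directly; the snippet below is a minimal sketch for two equal-sized grayscale images, assuming a single global window for SSIM rather than a sliding window and arbitrary small values for $C_1$ and $C_2$.

```python
import numpy as np

def psnr(ref, img):
    """Eqs. (1)-(2): peak signal-to-noise ratio, with ref as the reference image."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

def ncc(ref, img):
    """Eq. (3): normalized cross-correlation."""
    a, b = ref.astype(float).ravel(), img.astype(float).ravel()
    return np.sum(a * b) / (np.sqrt(np.sum(a ** 2)) * np.sqrt(np.sum(b ** 2)))

def ssim(ref, img, C1=1e-4, C2=9e-4):
    """Eqs. (4)-(9): global structural similarity index."""
    a, b = ref.astype(float), img.astype(float)
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()          # sigma^2 terms of Eqs. (6) and (8)
    cov = np.mean((a - mu_a) * (b - mu_b))   # sigma_II' of Eq. (9)
    return ((2 * mu_a * mu_b + C1) * (2 * cov + C2)) / \
           ((mu_a ** 2 + mu_b ** 2 + C1) * (var_a + var_b + C2))
```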

3. Results

3.1 Similarity analysis of MIP-based fingerprint

Figure 3(a) shows a fingerprint scanned by a conventional optical fingerprint scanner (ZK-6D20SK2/PLUS). In a conventional fingerprint, a ridge is defined as a single curved segment and a valley is the region between two adjacent ridges. In general, black lines represent ridges and white lines represent valleys. Fig. 3(b) demonstrates that a fingerprint image consists of ridges and valleys, with the pores aligned along the ridges. In the OCT surface fingerprint, as shown in Fig. 3(c), white lines represent ridges and black lines represent valleys, which is the opposite of the conventional fingerprint. Compared with the surface fingerprint, Fig. 3(d) indicates that the contrast between ridges and valleys decreases and the pores along the ridges are hard to see. Thus, the surface fingerprint obtained by projection of the surface boundary has features at all three levels and is chosen as the reference image to analyze the robustness of the depth-dependent fingerprints.

Three parameters, SSIM, PSNR, and NCC, were employed to quantitatively measure the similarity of the depth-dependent fingerprints, with the projection of the surface boundary of the fingertip skin taken as the reference; the obtained scores are presented in Fig. 4(a). The values of PSNR range from 3.5 to 7, and the values of NCC range from 0.75 to 0.95. The peaks of the PSNR and NCC curves are located at the DEJ boundary, and the minima are at a subsurface depth of $\sim 300\ \mu m$, where the image has rich sweat pore information but no obvious texture. The values of SSIM change only slightly with increasing depth. Fig. 4(b) demonstrates that the projection of the DEJ has a larger similarity than the fingerprint at any single depth, which means the MIP image of the internal fingerprint preserves more accurate structural information for further analysis.

Fig. 4. Three parameters of similarity indices for (a) depth-dependent fingerprints and (b) internal fingerprint.

Furthermore, Fig. 5 shows the change in anatomy and the image degradation with increasing depth. Fig. 5(d) also demonstrates that the internal fingerprint is uneven, unlike the surface fingerprint, which can be flattened by the glass window. The internal fingerprint based on the MIP of the DEJ boundary avoids these problems.

Fig. 5. Depth-dependent fingerprints at depths of (a) 30 $\mu m$, (b) 150 $\mu m$, (c) 270 $\mu m$, (d) 390 $\mu m$, (e) 510 $\mu m$, (f) 630 $\mu m$, (g) 750 $\mu m$, and (h) 870 $\mu m$.

3.2 MIP-based fingerprint for scarred fingertip

The surface fingerprint is usually affected by scars. A transparent sticker glue was placed on the volunteer's thumb surface to mimic a scar. Fig. 6(a) shows the fingerprint obtained by the conventional fingerprint scanner for the scarred fingertip and demonstrates that the scar changes the furrow pattern. Fig. 6(b)-(d) show the three-dimensional, surface, and internal fingerprints based on OCT. The surface fingerprint acquired by OCT is also easily affected by the scar. However, the internal fingerprint based on MIP removes the effect of the scar and keeps the same texture. Thus, the MIP-based internal fingerprint can effectively overcome the various states of the surface skin.

Fig. 6. Fingerprints of (a) scarred fingertip skin using conventional fingerprint scanner, (b) three-dimensional fingerprint of scarred fingertip skin, (c) MIP-based surface fingerprint, and (d) MIP-based internal fingerprint.

3.3 Image encryption with MIP-based internal fingerprint

An image encryption scheme can be realized by using fingerprint information in the intensity domain [5] or the spectral domain [3]. To demonstrate the advantage of the MIP-based internal fingerprint for image encryption, an optical encryption scheme based on image fusion in the spectral domain is used. The basic principle relies on the discrete cosine transform (DCT) of a plain image of a cat, as shown in Fig. 7, and the DCT information can be compressed with a low-pass filter. The encrypted image is formed by multiplying the DCT of the cat image with the key image of the MIP-based internal fingerprint. Decryption is the reverse of encryption; a simple sketch of the scheme is given below.
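The following sketch illustrates the DCT-fusion idea under simple assumptions (SciPy's 2-D DCT, a key image of the same size as the plain image, and a crude corner low-pass filter); the function names and the filter size are illustrative, not the exact implementation used here.

```python
import numpy as np
from scipy.fft import dctn, idctn

def encrypt(plain, key, keep=64):
    """Low-pass filter the plain image's DCT and fuse it with the key image."""
    coeffs = dctn(plain.astype(float), norm='ortho')
    lowpass = np.zeros_like(coeffs)
    lowpass[:keep, :keep] = coeffs[:keep, :keep]   # keep only low-frequency DCT terms
    return lowpass * key.astype(float)             # element-wise fusion with the key

def decrypt(cipher, key, eps=1e-12):
    """Reverse of encryption: divide by the same key and apply the inverse DCT."""
    coeffs = cipher / (key.astype(float) + eps)
    return idctn(coeffs, norm='ortho')
```

If a slightly different key image is used for decryption (e.g., a fingerprint from a neighboring depth), the division no longer cancels the fusion and the recovered image degrades, which is the effect shown in Fig. 8.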

Fig. 7. (a) Plain image of a cat, (b) DCT of the plain image, (c) encryption key image of the MIP-based internal fingerprint, (d) encrypted image of the cat obtained by fusion, (e) encrypted image of the cat, (f) decryption key image of the internal fingerprint, identical to the encryption key in (c), (g) recovered DCT spectrum, and (h) decrypted image of the cat by inverse DCT.

Figure 8 shows the effect of choosing two fingerprints that are in close axial proximity as the encryption key image and the decryption key image, respectively. Although the two key images are similar, the encrypted image cannot be reconstructed. Thus, it is necessary to find a method to extract a depth-independent fingerprint. In this study, the MIP-based internal fingerprint meets this requirement: it is depth-independent, reliable, and robust.

Fig. 8. The effect of different key images for encryption and decryption: (a) encryption key image of the OCT fingerprint at a depth of 30 $\mu m$, (b) the corresponding encrypted image, (c) decryption key image of the OCT fingerprint at a depth of 60 $\mu m$, and (d) the corresponding decrypted image.

4. Discussion

Gangjun Liu and Zhongping Chen [31] proposed that an MIP image could be used to extract a surface fingerprint from the epidermal layers of an OCT image. However, the surface fingerprint is not robust against spoof attacks, and the surface skin is prone to distortion. In contrast, the MIP-based internal fingerprint in our work overcomes these problems. The innovation is that the ridge portion of the DEJ is extracted by combining the CNN with interpolation, and the MIP of the ridge portion is used to construct the depth-independent internal fingerprint. The advantage of the MIP-based internal fingerprint is that it is insensitive to the state of the surface skin.

Compared with the surface fingerprint, the MIP-based internal fingerprint shown in Fig. 3(d) indicates that the pores along the ridges are hard to see. The reason is that OCT cannot resolve the sweat glands at the DEJ, and the sweat ducts in the cross-sectional OCT image are located above the ridge portion. Although the MIP-based internal fingerprint is missing the sweat pores, it still has the three basic ridge patterns (arches, loops, and whorls) that are used for traditional fingerprint authentication. Besides, it is insensitive to the state of the surface skin and highly similar to the surface fingerprint, as demonstrated in Section 3.1, so it is more robust than traditional fingerprint authentication. It can be applied to a scarred fingertip for identity authentication, where it outperforms traditional fingerprint authentication. The MIP-based internal fingerprint could also be considered for other applications, such as the image encryption presented above. To complement the missing sweat pore information, a hybrid fingerprint can also be obtained. First, the sweat ducts in the cross-sectional OCT images are segmented by the CNN, as shown in Fig. 9(b). Second, the projection of the sweat ducts in 3D space is converted into an en face image to obtain the image of the sweat pores, as shown in Fig. 9(c). Finally, the hybrid fingerprint is constructed by fusing the MIP-based internal fingerprint with the en face image of the sweat pores, as shown in Fig. 9(d); a simple fusion sketch is given below. This hybrid fingerprint could also be used in more advanced applications.
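As a simple illustration, the hybrid fusion of Fig. 9(d) can be sketched as a normalized weighted sum of the MIP-based internal fingerprint and the en face pore image; the weighting, function, and variable names below are illustrative assumptions, and any pixel-level fusion rule could be substituted.

```python
import numpy as np

def fuse_hybrid(internal_fp, pore_map, alpha=0.7):
    """Hypothetical pixel-level fusion for a hybrid fingerprint."""
    def normalize(img):
        img = img.astype(float)
        return (img - img.min()) / (img.max() - img.min() + 1e-12)
    # Weighted sum of the normalized internal fingerprint and pore image.
    return alpha * normalize(internal_fp) + (1 - alpha) * normalize(pore_map)
```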

Fig. 9. (a) Cross-sectional OCT image of fingertip skin, (b) sweat ducts segmented in the OCT image by CNN, (c) en face pore image, and (d) fusion of the MIP-based internal fingerprint with the en face pore image.

Another extracted fingerprint, in Ref. [25], is a hybrid that blends the surface and internal fingerprints; it is neither a surface fingerprint nor an internal fingerprint. There, the surface and internal fingerprints were determined by a combination of Sobel edge detection and interpolation. However, Sobel edge detection depends on the resolution scale, and manually set thresholds are needed to determine the fingerprint zones in their algorithm. Furthermore, the fingerprint extraction region is shown in Fig. 6 of Ref. [25]: the dashed red line is the internal fingerprint zone, the solid green line is the start of the region, and the solid blue line is the end of the region. The region between the start and the end cannot cover the whole internal fingerprint zone. Our algorithm overcomes this problem: the ROI of the DEJ is segmented by combining the CNN and interpolation, which contains the complete internal fingerprint zone, as shown in Fig. 2(c).

5. Conclusion

We have presented a new approach that uses the MIP image of the DEJ to extract the internal fingerprint. First, the surface and the dermal-epidermal junction were segmented with a deep learning algorithm. Then the internal fingerprint extracted from the MIP image of the DEJ was shown by quantitative analysis to have higher structural similarity and more robust operation, since it is independent of depth and insensitive to the state of the surface skin. Finally, the MIP-based internal fingerprint was used for image encryption to further demonstrate its advantage. Thus, the internal fingerprint acquired by MIP of the DEJ boundary is a potential robust template for verification or identification of individuals and can combine identity authentication with image decryption.

Funding

National Natural Science Foundation of China (61875038, 81901787); Natural Science Foundation of Fujian Province (2020I0013); The Special Funds of the Central Government Guiding Local Science and Technology Development (2020L3008).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. A. Jain, Y. Chen, and M. Demirkus, “Pores and ridges: Fingerprint matching using level 3 features,” Proc. - Int. Conf. on Pattern Recognit. 4, 477–480 (2006). [CrossRef]  

2. K. P. K. P. S. Aithal, “Literature Review on Fingerprint Level 1 and Level 2 Features Enhancement to Improve Quality of Image,” Int. J. Manag. Technol. Soc. Sci. 2, 8–19 (2017). [CrossRef]  

3. A. Alfalou, C. Brosseau, N. Abdallah, and M. Jridi, “Simultaneous fusion, compression, and encryption of multiple images,” Opt. Express 19(24), 24023 (2011). [CrossRef]  

4. T. Zhao, Q. Ran, L. Yuan, Y. Chi, and J. Ma, “Image encryption using fingerprint as key based on phase retrieval algorithm and public key cryptography,” Opt. Lasers Eng. 72, 12–17 (2015). [CrossRef]  

5. F. G. Hashad, O. Zahran, E. S. M. El-Rabaie, I. F. Elashry, and F. E. Abd El-Samie, “Fusion-based encryption scheme for cancelable fingerprint recognition,” Multimed. Tools Appl. 78(19), 27351–27381 (2019). [CrossRef]  

6. A. K. Jain, A. Ross, and S. Prabhakar, “An Introduction to Biometric Recognition,” IEEE Transactions on Circuits Syst. for Video Technol. 14(1), 4–20 (2004). [CrossRef]  

7. X. Xia and L. O’Gorman, “Innovations in fingerprint capture devices,” Pattern Recognit. 36(2), 361–369 (2003). [CrossRef]  

8. F. Roli, “Performance of fingerprint quality measures depending on sensor technology,” J. Electron. Imaging 17(1), 011008 (2008). [CrossRef]  

9. S. Memon, M. Sepasian, and W. Balachandran, “Review of finger print sensing technologies,” IEEE INMIC 2008: 12th IEEE International Multitopic Conference - Conference Proceedings, pp. 226–231 (2008).

10. X. Jiang, H. Y. Tang, Y. Lu, E. J. Ng, J. M. Tsai, B. E. Boser, and D. A. Horsley, “Ultrasonic Fingerprint Sensor with Transmit Beamforming Based on a PMUT Array Bonded to CMOS Circuitry,” IEEE Transactions on Ultrason. Ferroelectr. Freq. Control. 64(9), 1401–1408 (2017). [CrossRef]  

11. E. K. Yun and S. B. Cho, “Adaptive fingerprint image enhancement with fingerprint image quality analysis,” Image Vis. Comput. 24(1), 101–110 (2006). [CrossRef]  

12. S.-W. Back, Y.-G. Lee, S.-S. Lee, and G.-S. Son, “Moisture-insensitive optical fingerprint scanner based on polarization resolved in-finger scattered light,” Opt. Express 24(17), 19195 (2016). [CrossRef]  

13. K. A. Croussore and R. Splinter, “Optical coherence tomography,” Handb. Phys. Medicine Biol. 22, 31-1–31-9 (2010). [CrossRef]  

14. Y. Cheng and K. V. Larin, “In Vivo two- and three-dimensional imaging of artificial and real fingerprints with optical coherence tomography,” IEEE Photonics Technol. Lett. 19(20), 1634–1636 (2007). [CrossRef]  

15. M. Liu and T. Buma, “Biometric mapping of fingertip eccrine glands with optical coherence tomography,” IEEE Photonics Technol. Lett. 22, 1677–1679 (2010). [CrossRef]  

16. M.-R. Nasiri-Avanaki, A. Meadway, A. Bradu, R. M. Khoshki, A. Hojjatoleslami, and A. G. Podoleanu, “Anti-Spoof Reliable Biometry of Fingerprints Using En-Face Optical Coherence Tomography,” Opt. Photonics J. 01(03), 91–96 (2011). [CrossRef]  

17. A. Zam, R. Dsouza, H. M. Subhash, M. L. O’Connell, J. Enfield, K. Larin, and M. J. Leahy, “Feasibility of correlation mapping optical coherence tomography (cmOCT) for anti-spoof sub-surface fingerprinting,”.

18. E. Auksorius and A. C. Boccara, “Fingerprint imaging from the inside of a finger with full-field optical coherence tomography,” Biomed. Opt. Express 6(11), 4465 (2015). [CrossRef]  

19. J. Aum, J. H. Kim, and J. Jeong, “Live acquisition of internal fingerprint with automated detection of subsurface layers using OCT,” IEEE Photonics Technol. Lett. 28(2), 163–166 (2016). [CrossRef]  

20. X. Yu, Q. Xiong, Y. Luo, N. Wang, L. Wang, H. L. Tey, and L. Liu, “Contrast enhanced subsurface fingerprint detection using high-speed optical coherence tomography,” IEEE Photonics Technol. Lett. 29(1), 70–73 (2017). [CrossRef]  

21. K. B. Raja, E. Auksorius, R. Raghavendra, A. C. Boccara, and C. Busch, “Robust Verification with Subsurface Fingerprint Recognition Using Full Field Optical Coherence Tomography,” IEEE Comput. Soc. Conf. on Comput. Vis. Pattern Recognit. Work. 2017-July, 646–654 (2017). [CrossRef]  

22. F. Liu, G. Liu, Q. Zhao, and L. Shen, “Robust and high-security fingerprint recognition system using optical coherence tomography,” Neurocomputing 402, 14–28 (2020). [CrossRef]  

23. B. Ding, H. Wang, P. Chen, Y. Zhang, Z. Guo, J. Feng, and R. Liang, “Surface and Internal Fingerprint Reconstruction From Optical Coherence Tomography Through Convolutional Neural Network,” IEEE Transactions on Inf. Forensics Secur. 16, 685–700 (2021). [CrossRef]  

24. F. Liu, C. Shen, H. Liu, G. Liu, Y. Liu, Z. Guo, and L. Wang, “A Flexible Touch-Based Fingerprint Acquisition Device and a Benchmark Database Using Optical Coherence Tomography,” IEEE Trans. Instrum. Meas. 69(9), 6518–6529 (2020). [CrossRef]  

25. L. N. Darlow and J. Connan, “Efficient internal and surface fingerprint extraction and blending using optical coherence tomography,” Appl. Opt. 54(31), 9258 (2015). [CrossRef]  

26. Y. He, Z. Li, Y. Zhang, and H. Li, “Single camera spectral domain polarization-sensitive optical coherence tomography based on orthogonal channels by time divided detection,” Opt. Commun. 403, 162–165 (2017). [CrossRef]  

27. Y. Lin, D. Li, W. Liu, Z. Zhong, Z. Li, Y. He, and S. Wu, “A measurement of epidermal thickness of fingertip skin from OCT images using convolutional neural network,” J. Innovative Opt. Health Sci. S1793545821400058 (2020).

28. J.-W. Ryu, “No-reference peak signal to noise ratio estimation based on generalized Gaussian modeling of transform coefficient distributions,” Opt. Eng. 51(2), 027401 (2012). [CrossRef]  

29. Y. S. Heo, K. M. Lee, and S. U. Lee, “Robust Stereo matching using adaptive normalized cross-correlation,” IEEE Transactions on Pattern Analysis Mach. Intell. 33(4), 807–822 (2011). [CrossRef]  

30. M. P. Sampat, Z. Wang, S. Gupta, A. C. Bovik, and M. K. Markey, “Complex wavelet structural similarity: A new image similarity index,” IEEE Transactions on Image Process. 18(11), 2385–2401 (2009). [CrossRef]  

31. G. Liu and Z. Chen, “Capturing the vital vascular fingerprint with optical coherence tomography,” Appl. Opt. 52(22), 5473 (2013). [CrossRef]  
