Remote laser-speckle sensing of heart sounds for health assessment and biometric identification

Open Access

Abstract

Assessment of heart sounds is the cornerstone of cardiac examination, but it requires a stethoscope, skill and experience, and direct contact with the patient. We developed a contactless, machine-learning-assisted method for heart-sound identification and quantification based on the remote measurement of the laser speckle reflected from the neck skin surface of healthy individuals. We compare the performance of this method to standard digital stethoscope recordings on an example task of heart-beat sound biometric identification. We show that our method outperforms the stethoscope, even when identification is performed on test data taken on different days. This method might allow the development of devices for remote monitoring of cardiovascular health in a variety of settings.

Published by Optica Publishing Group under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. Introduction

Cardiovascular diseases (CVDs) are the leading cause of disability and death worldwide, taking an estimated 17.9 million lives each year [1,2]. Early identification of pathological cardiac conditions might improve well-being and prevent premature deaths. The evaluation of heart sounds is the cornerstone of cardiac examination. Normal heart sounds are low-frequency transient mechanical vibrations generated by the closure of heart valves, and should be distinguished from heart murmurs, typically higher-frequency, noise-like sounds caused by turbulent blood flow [3,4]. The auscultation of heart sounds with a stethoscope is considered a clinical ‘art’, with considerable training required to distinguish normal from pathological heart sounds and murmurs [5]. Digital stethoscopes and phonocardiographs have also been developed to provide reliable graphical representations of heart sounds and heart-sound diagnostics [6–10]. In more detail, the first heart sound (S1, with a tone that can range between 10 and 140 Hz) originates from the closure of the mitral and tricuspid valves and is separated by the systolic pause from the second heart sound (S2, with a tone that can range between 10 and 400 Hz), caused by the closure of the aortic and pulmonary valves [12]. Some people can have a third heart sound (S3), which can be either normal or a sign of disease, while additional sounds (i.e., S4) and high-frequency murmurs (which can range between 20 and 1000 Hz [13]), if identified, often indicate cardiac pathology [14].

A heart sound is best appreciated with a stethoscope close to its point of origin; however, vibrations produced inside the heart, particularly those of low frequency, can still propagate peripherally through the arteries, where they can be measured. Most of the existing methods for remote detection of those sounds, based on radar [15] or visible light [16–19], acquire only the lowest-frequency vibrations, thus providing information only about the heart rate [15–19]. On the other hand, Zalevsky et al. and Bianchi et al. have proposed two different methods for acquiring sound from the mechanical movement of objects with visible light, capable of capturing sounds in the 0–1.2 kHz [20] and 0–5 kHz [21,22] frequency ranges. Both of these approaches rely on the detection of the random speckle pattern that is reflected back to the observer’s camera and that is generated by random multipath interference from the object (e.g., scattering from the skin). The two approaches differ in how they track the changes in the speckle pattern due to subtle mechanical vibrations of the skin, e.g., by using cross-correlation between different images or by tracking the centre of mass of a single large speckle. Zalevsky et al. have also reported on the use of laser speckle reflected from the wrist to measure the heart rate [19,20].

In this work we assess the feasibility of using these remote detection methods to monitor heart sounds for diagnostic purposes. As the heart-sound signal is quite complex, it is hard to quantify the recording quality using conventional criteria such as the signal-to-noise ratio or the mean square error. We therefore compare the ability of a machine learning algorithm to perform biometric identification [23–26] using our laser-speckle detection method and a digital stethoscope recording dataset (HSCT-11) [27].

Here we report on heart-sound information that is acquired contactlessly, from a distance of around 1 m, by shining a weak laser beam at the frontal region of the subject’s neck and recording the back-reflected speckle pattern with a high frame-rate CMOS camera. A gradient-based technique is applied to track the ‘flow’ of the speckle pattern over time [28], which carries the heart-sound information. For the biometric identification algorithm we use wavelet scattering transform feature extraction, paired with a support vector machine (SVM) classifier. The comparison of this algorithm applied to our data and to the standard stethoscope dataset shows that our laser-speckle detection method outperforms the latter.

2. Experimental setup

The experimental setup is shown in Fig. 1(a), and Fig. 1(b) shows a photograph of the actual laser/camera system. A laser diode (DJ532-40, Thorlabs) is directed at the neck of the participant, creating an illumination spot of $\sim 5$ mm diameter. A camera (Basler acA640-750um, Germany) collects the resulting dynamic speckle pattern at a frame rate of $f_{\text{samp}} = 1.5$ kHz with 200 $\times$ 208 pixel resolution. The acquisition frame rate is chosen to be as high as possible whilst still delivering images with a good signal-to-noise ratio from the low-power illumination laser (limited to 4 mW). A standard objective with focal length $f = 25$ mm and f-stop 0.95 allows detection of the reflected speckle field from 10 cm up to 5 m away from the subject (larger distances were not tested, but it has been shown by Bianchi et al. and Zalevsky et al. that the reflected speckles can be detected at distances up to 300 m [20–22]). In the current experiment the camera was located at around 1 m distance from the test subject and collected the light at around 20 cm defocus distance from the illumination spot. The test subject is seated in a chair in a natural pose while the device records the time dynamics of the speckle pattern from their skin. The resulting speckle recordings are then post-processed to retrieve the heart sounds. Two example successive frames as captured on the camera are shown in Fig. 1(c): the blue arrow indicates an example feature in the speckle pattern that highlights the shift from one frame to the next.
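To get a feel for the expected signal, the tilting form of the speckle memory effect [11,29] predicts that a surface tilt of $\alpha$ deflects the reflected field by roughly $2\alpha$, so the speckle pattern at an observation plane a distance $z$ away translates by roughly $2\alpha z$. The following is a minimal order-of-magnitude sketch only: the tilt amplitude is purely illustrative, the pixel pitch is the sensor datasheet value, and imaging magnification is deliberately ignored.

```python
# Order-of-magnitude estimate of the speckle shift caused by a skin tilt.
# Assumptions: tilting speckle memory effect (shift ~ 2*alpha*z), unity
# magnification, and an illustrative tilt amplitude.
z = 0.20          # defocus distance between illumination spot and imaged plane [m]
alpha = 1e-5      # illustrative skin tilt amplitude [rad]
pixel = 4.8e-6    # pixel pitch of the Basler acA640-750um sensor [m]

shift = 2 * alpha * z              # expected speckle translation [m]
print(f"shift = {shift*1e6:.1f} um = {shift/pixel:.2f} pixels")
```

A sub-pixel shift of this order is readily resolvable by the gradient-based optical-flow tracking described in Section 3.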


Fig. 1. (a) and (b) show the experimental setup: a CMOS camera records the speckle pattern reflected from a person’s neck and created by a 4 mW continuous-wave 532 nm laser mounted next to the camera (b). The displacement of the skin surface due to the heart sound causes proportional displacements of the speckle pattern in the far field due to the speckle memory effect [11]. The speckle displacement can thus be tracked to recover the heart sound. (c) shows two consecutive frames of the raw-data speckle image recording and (d) shows a map of the local displacement calculated using the Farnebäck algorithm. The red arrow shows the average displacement.


3. Data processing

3.1 Retrieval of the raw sound signal from the speckle frame sequence

The first step of the data processing in our method is the retrieval of the raw sound signal from the recorded speckle frame sequence. Since the scattering surface (the subject’s neck) undergoes small deformations due to heart-sound pressure waves propagating in the blood vessels, we can rely on the so-called speckle memory effect [11]: the speckle pattern does not change shape upon tilting or vibration of the skin surface but translates proportionally to the tilting angle [29]. We employed a speckle tracking algorithm based on the optical flow [28], implemented in the MATLAB (2018b) Computer Vision Toolbox, to retrieve the local displacement map at each pixel of the image, and then averaged these vectors to calculate the overall displacement amplitude and direction between consecutive frames, see Fig. 1(d). An example of the speckle displacement retrieved from the optical flow is shown in Fig. 2(a), and in Fig. 2(c) we show its scalogram. In Fig. 2(b) we show for comparison a typical heart sound acquired with a digital stethoscope positioned on the chest [30]. Comparison of Fig. 2(b) with Fig. 2(c) indicates that our method gives a comparable signal-to-noise ratio across the main 20–700 Hz region. The optical flow also picked up macroscopic movements of the test subjects, giving a large contribution in the 0–20 Hz range, which we eliminated by passing the signals through a 20–700 Hz band-pass filter (Butterworth, order 10). The resulting heart-beat signal is shown in Fig. 2(e), and for comparison a stethoscope signal is shown in Fig. 2(d). Out of our pool of 10 subjects, 1 subject, whose heart-sound recording we show in Fig. 2(c) and Fig. 2(e), presented not only S1 and S2, but also S3 and S4 sounds. The additional S3 and S4 signals are very clear, but only visible after the filtering process described above. As a result of this incidental finding the test subject was referred for further clinical investigation; this underlines the potential of this method for identifying pathological heart sounds, murmurs and rhythm abnormalities in patients with a variety of cardiac pathologies.
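The sketch below reproduces this processing step in Python, with OpenCV’s Farnebäck optical flow and SciPy’s Butterworth filter standing in for the MATLAB toolboxes used in the paper; the flow parameters and the projection of the mean displacement onto a single axis are our assumptions, not the paper’s exact implementation.

```python
import cv2
import numpy as np
from scipy.signal import butter, sosfiltfilt

def speckle_to_sound(frames, fs=1500.0):
    """Turn a stack of speckle frames (N, H, W), uint8, into a 1D sound trace."""
    mean_disp = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        # Dense Farneback optical flow between consecutive frames
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Average the per-pixel displacement vectors (red arrow in Fig. 1(d))
        mean_disp.append(flow.reshape(-1, 2).mean(axis=0))
    disp = np.asarray(mean_disp)                      # shape (N-1, 2)

    # Reduce the 2D displacement to one channel by projecting onto the
    # dominant vibration axis (our choice; the paper keeps amplitude/direction)
    centred = disp - disp.mean(axis=0)
    axis = np.linalg.svd(centred, full_matrices=False)[2][0]
    signal = centred @ axis

    # 20-700 Hz band-pass (Butterworth, order 10) rejects body motion (<20 Hz)
    sos = butter(10, [20.0, 700.0], btype='bandpass', fs=fs, output='sos')
    return sosfiltfilt(sos, signal)
```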


Fig. 2. Heart-sound recordings obtained with a stethoscope (left) and laser (right). (a) shows the raw-data speckle displacement over time, which therefore contains contributions from both the heart-beat sound and macroscopic body movements. (b) and (c) show, respectively, the scalograms in log scale of the sound acquired with a stethoscope from the subject’s chest and with our device from the subject’s neck. (d) and (e) show cropped time traces corresponding to the signals in (b) and (c) respectively.


3.2 Database acquisition for biometric identification

We acquired heart sounds from 10 subjects with the experimental setup shown in Fig. 1. For each of the subjects we recorded 4.5 minutes of cardiac activity on one day and 30 s on a following day (1–2 days later). In both of these sessions the laser was pointed towards the base of the subject’s neck, without attempting to reproduce the precise location of the laser spot between sessions. Each of the recordings was band-pass filtered as described above, rescaled to unit amplitude and cut into 2.5 s segments, thus providing a dataset of 108 recordings per person taken on one day and 12 recordings taken on another day.
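A minimal sketch of this preparation step, assuming the filtered 1D signal from Section 3.1; the segment length and unit-amplitude rescaling follow the text, everything else is generic Python:

```python
import numpy as np

def make_segments(signal, fs=1500, seg_seconds=2.5):
    """Cut a filtered recording into fixed-length, unit-amplitude segments."""
    n = int(seg_seconds * fs)
    chunks = [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]
    # Rescale every segment to unit amplitude, as described in the text
    return [c / np.max(np.abs(c)) for c in chunks]

# 4.5 min at 1.5 kHz yields 108 segments of 2.5 s; 30 s yields 12 segments
assert len(make_segments(np.random.randn(int(4.5 * 60 * 1500)))) == 108
```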

In order to compare the quality of the data taken with our method to commonly used stethoscope recordings, we used the open HSCT-11 dataset [27], containing digital stethoscope (ThinkLabs Rhythm Digital Electronic Stethoscope) recordings taken from 206 people. We arbitrarily selected data from 10 subjects for whom there was at least 2.5 min of cardiac activity recorded in this dataset. We segmented these recordings into 3 s pieces, thus obtaining a dataset of 45 recordings per person. These recordings, being in WAVE format, were already scaled between −1 and 1 and, as the digital stethoscope has its own filtering algorithm, we did not apply any additional filtering to this data.
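For this dataset only segmentation is needed; a brief sketch using the soundfile package as a stand-in reader (the file name is hypothetical):

```python
import soundfile as sf  # reads WAVE data as floats in [-1, 1]

x, fs_wav = sf.read('hsct11_subject_01.wav')   # hypothetical file name
n = 3 * fs_wav                                 # 3 s segments, no extra filtering
segments = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
```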

4. Biometric identification algorithm

The algorithm we used to identify people contains two major steps: feature extraction using the wavelet scattering transform, implemented in the MATLAB Wavelet Toolbox, and classification using a support vector machine (SVM), implemented in the MATLAB Statistics and Machine Learning Toolbox.

4.1 Feature extraction

Feature extraction identifies stable features and disregards signal deformations due to, for example, additive noise, translations and dilations. We used a wavelet scattering network to extract features of our phonocardiograms (PCGs) [31]. The architecture of our wavelet scattering transform, which resembles the physiological processing performed by the cochlea, uses three layers of wavelet filter banks (Gabor mother wavelet) with M=56, N=30 and P=9 filters/node in the respective layers. The extracted coefficients allow signals belonging to the same class to be grouped closer together, as can be visualised by means of a dimensionality reduction technique. This concept is illustrated with an example in Fig. 3. In Fig. 3(b) we show an example of the raw data represented after dimensionality reduction using t-SNE (MATLAB Statistics and Machine Learning Toolbox) [32]. When compared to the same visualisation after passing through the wavelet scattering network, shown in Fig. 3(c), we see a clear change from a disorganised to a strongly organised grouping of the data.
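A sketch of this step in Python, using the Kymatio package’s Scattering1D as a stand-in for MATLAB’s wavelet scattering network (the invariance scale J and wavelets-per-octave Q below are our assumptions and do not reproduce the paper’s exact 56/30/9 filter counts), followed by a t-SNE embedding analogous to Fig. 3:

```python
import numpy as np
from kymatio.numpy import Scattering1D     # stand-in for MATLAB waveletScattering
from sklearn.manifold import TSNE

fs, seg_seconds = 1500, 2.5
T = int(fs * seg_seconds)                  # samples per 2.5 s segment

# Assumed hyper-parameters: J sets the largest scale (2**J samples),
# Q the number of wavelets per octave in the first filter bank.
scattering = Scattering1D(J=8, shape=T, Q=8)

def features(segments):
    """Time-average the scattering coefficients of each segment into a vector."""
    return np.stack([scattering(s.astype(np.float64)).mean(axis=-1)
                     for s in segments])

# Toy usage with placeholder segments (real ones come from Section 3)
rng = np.random.default_rng(0)
X = features(rng.standard_normal((30, T)))
embedding = TSNE(n_components=2, init='pca', perplexity=5).fit_transform(X)
```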


Fig. 3. (a) shows the architecture of the scattering transform. The signal is convolved with M=56 filters in the first layer, N=30 in the second and P=9 in the third. The 2D embedding of the raw heart-sound data from multiple individuals obtained using the t-SNE algorithm is shown in (b), where each colour corresponds to a different person. No clear clustering is observed in that case. However, after going through the scattering transform, as demonstrated in (c), the data within the same class are clustered together and the classification problem is simplified.


4.2 Classification algorithm

Once features are extracted with the wavelet scattering transform, we use an SVM to fit the regions corresponding to different classes within the feature space. We used a third-degree polynomial-kernel SVM with hinge loss, which was found to give optimal results.
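A minimal sketch of this stage using scikit-learn’s SVC as a stand-in (the degree-3 polynomial kernel matches the text; the feature standardisation, default regularisation, and the placeholder data are our assumptions):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder features/labels; in practice these come from the scattering step
X = np.random.default_rng(1).standard_normal((120, 200))
y = np.repeat(np.arange(10), 12)          # 10 subjects, 12 segments each
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Third-degree polynomial-kernel SVM; SVC optimises the hinge loss internally
clf = make_pipeline(StandardScaler(), SVC(kernel='poly', degree=3))
clf.fit(X_train, y_train)
print('accuracy:', clf.score(X_test, y_test))
```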

5. Results

In Fig. 4 we show the confusion matrices (CMs) of the testing data passed through our biometric identification algorithm. We show in (a) the CM of the stethoscope recording dataset, in (b) the CM of our remote detection method tested on same-day recordings, and in (c) the CM of the remote detection method tested on different-day recordings. For both methods we trained the algorithm on 30 s of heart-beat recordings (taken on the first day for the laser-speckle method). The classification accuracy for the remote detection data was 99.1% (tested on 86 2.5 s recordings) and 91.7% (tested on 12 2.5 s recordings) for the same-day and different-day test sets respectively, and 90.6% (tested on 35 3 s recordings) for the stethoscope data. As can be seen from this figure, our method outperforms the digital stethoscope in this task even when the heart sounds are taken on a different day. This indicates the ability of this method to capture fine features of heart sounds and its potential as a tool for monitoring cardiac health.
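The per-class breakdown shown in Fig. 4 can be reproduced from any fitted classifier with a standard confusion-matrix call; a brief sketch, assuming the clf, X_test and y_test names from the previous section:

```python
from sklearn.metrics import accuracy_score, confusion_matrix

y_pred = clf.predict(X_test)
cm = confusion_matrix(y_test, y_pred)   # rows: true subject, cols: predicted
print(cm)
print('accuracy:', accuracy_score(y_test, y_pred))
```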


Fig. 4. Confusion matrices of the biometric identification algorithm for the three testing sets. The algorithm was trained on 30 s of heart-beat sounds per person for both the remote and stethoscope methods. (a) shows the CM for 10 arbitrary subjects taken from the HSCT-11 open heart-beat sound dataset, tested on 35 3 s recordings (90.6% accuracy). (b) shows the CM for the remote detection method with the test data (86 2.5 s recordings) taken on the same date as the training (99.1% accuracy). (c) same as (b) but for next-day testing data (12 2.5 s recordings, 91.7% accuracy).


6. Conclusions

Heart sounds are a remarkably complex signature of cardiac health and, when captured in full detail, can provide access to a range of diagnostic opportunities, including heart-health monitoring and even biometric identification. We have developed a contactless optoelectronic sensing approach with a data processing pipeline that allows high-quality heart-sound signals to be extracted remotely from the neck area, bypassing the need for precordial, contact-based auscultation. We compared the data obtained with our method to standard stethoscope recordings on the biometric identification task, showing that we can achieve better accuracy even when the testing data are taken on a different day and for shorter periods of time than the training data. Future work will look into further exploiting the full potential of these optoelectronic approaches, including the identification of pathological heart sounds, murmurs and abnormal heart rhythms. The hope is that in the near future related technologies could become part of the lived environment, with a route towards continuous health assessment for precision medicine.

Funding

Royal Academy of Engineering; Medical Research Council (2285827); UK MOD University Defence Research Collaboration (UDRC) in Signal Processing; Engineering and Physical Sciences Research Council (EP/S026444/1, EP/T00097X/1, EP/T021020/1, P/S026444/1).

Acknowledgments

This research was approved by the University of Glasgow ethics approval committee, ethical approval application no. 300200122.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are available in Ref. [33].

References

1. N. J. Pagidipati and T. A. Gaziano, “Estimating deaths from cardiovascular disease: a review of global methodologies of mortality measurement,” Circulation 127(6), 749–756 (2013).

2. S. C. Smith, Jr., A. Collins, R. Ferrari, D. R. Holmes, Jr., S. Logstrup, D. Vaca McGhie, J. Ralston, R. L. Sacco, H. Stam, K. Taubert, D. A. Wood, and W. A. Zoghbi, “Our time: a call to save preventable death from cardiovascular disease (heart disease and stroke),” Eur. Heart J. 33(23), 2910–2916 (2012).

3. A. Subasi, “Biomedical signals,” Chapter 2 in Practical Guide for Biomedical Signals Analysis Using Machine Learning Techniques, A. Subasi, ed. (Academic Press, 2019), pp. 27–87.

4. B. Ergen, Y. Tatar, and H. Gulcur, “Time-frequency analysis of phonocardiogram signals using wavelet transform: a comparative study,” Comput. Methods Biomech. Biomed. Eng. 15(4), 371–381 (2012).

5. “Auscultation of the heart: general principles,” Chapter 35 in Evidence-Based Physical Diagnosis, 2nd ed., S. McGee, ed. (W.B. Saunders, 2007), pp. 411–416.

6. E. Delgado-Trejos, A. Quiceno-Manrique, J. Godino-Llorente, M. Blanco-Velasco, and G. Castellanos-Dominguez, “Digital auscultation analysis for heart murmur detection,” Ann. Biomed. Eng. 37(2), 337–353 (2009).

7. M. Brusco and H. Nazeran, “Digital phonocardiography: a PDA-based approach,” in The 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 1 (IEEE, 2004), pp. 2299–2302.

8. T. R. Reed, N. E. Reed, and P. Fritzson, “Heart sound analysis for symptom detection and computer-aided diagnosis,” Simul. Model. Pract. Theory 12(2), 129–146 (2004).

9. C. Potes, S. Parvaneh, A. Rahman, and B. Conroy, “Ensemble of feature-based and deep learning-based classifiers for detection of abnormal heart sounds,” in 2016 Computing in Cardiology Conference (CinC) (IEEE, 2016), pp. 621–624.

10. Yaseen, G.-Y. Son, and S. Kwon, “Classification of heart sound signal using multiple features,” Appl. Sci. 8(12), 2344 (2018).

11. I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61(20), 2328–2331 (1988).

12. M. Waqar, S. Inam, M. A. ur Rehman, M. Ishaq, M. Afzal, N. Tariq, F. Amin, and Qurat-ul-Ain, “Arduino based cost-effective design and development of a digital stethoscope,” in 2019 15th International Conference on Emerging Technologies (ICET) (2019), pp. 1–6.

13. M. H. Crawford, Approach to Cardiac Disease Diagnosis (McGraw-Hill Education, 2017).

14. R. D. Conn and J. H. O’Keefe, “Cardiac physical diagnosis in the digital age: an important but increasingly neglected skill (from stethoscopes to microchips),” Am. J. Cardiol. 104(4), 590–595 (2009).

15. C. Will, K. Shi, S. Schellenberger, T. Steigleder, F. Michler, J. Fuchs, R. Weigel, C. Ostgathe, and A. Koelpin, “Radar-based heart sound detection,” Sci. Rep. 8(1), 11551 (2018).

16. J. J. Struijk, K. Munck, B. D. Hansen, N. Jacobsen, L. P. Pilgaard, K. Soerensen, and S. E. Schmidt, “Heart-valve sounds obtained with a laser Doppler vibrometer,” in 2016 Computing in Cardiology Conference (CinC) (2016), pp. 197–199.

17. U. Morbiducci, L. Scalise, M. De Melis, and M. Grigioni, “Optical vibrocardiography: a novel tool for the optical monitoring of cardiac activity,” Ann. Biomed. Eng. 35(1), 45–58 (2007).

18. J. Bai, G. Sileshi, G. Nordehn, S. Burns, and L. Wittmers, “Development of laser-based heart sound detection system,” J. Biomed. Sci. Eng. 5(1), 34–37 (2012).

19. M. Golberg, S. Polani, N. Ozana, Y. Beiderman, J. Garcia, J. R.-R. Onses, M. S. Sabater, M. Shatsky, and Z. Zalevsky, “Remote optical stethoscope and optomyography sensing device,” in Nanoscale Imaging, Sensing, and Actuation for Biomedical Applications XIV, vol. 10077, A. N. Cartwright, D. V. Nicolau, and D. Fixler, eds., International Society for Optics and Photonics (SPIE, 2017), pp. 181–188.

20. Z. Zalevsky, Y. Beiderman, I. Margalit, S. Gingold, M. Teicher, V. Mico, and J. Garcia, “Simultaneous remote extraction of multiple speech sources and heart beats from secondary speckles pattern,” Opt. Express 17(24), 21566–21580 (2009).

21. S. Bianchi and E. Giacomozzi, “Long-range detection of acoustic vibrations by speckle tracking,” Appl. Opt. 58(28), 7805–7809 (2019).

22. S. Bianchi, “Vibration detection by observation of speckle patterns,” Appl. Opt. 53(5), 931–936 (2014).

23. M. Abo-Zahhad, S. M. Ahmed, and S. N. Abbas, “Biometric authentication based on PCG and ECG signals: present status and future directions,” Sig. Image Video Process. 8(4), 739–751 (2014).

24. K. Phua, J. Chen, T. H. Dat, and L. Shue, “Heart sound as a biometric,” Pattern Recognit. 41(3), 906–919 (2008).

25. N. El-Bendary, H. Al-Qaheri, H. M. Zawbaa, M. Hamed, A. E. Hassanien, Q. Zhao, and A. Abraham, “HSAS: heart sound authentication system,” in 2010 Second World Congress on Nature and Biologically Inspired Computing (NaBIC) (IEEE, 2010), pp. 351–356.

26. G. Gautam and D. Kumar, “Biometric system from heart sound using wavelet based feature set,” in 2013 International Conference on Communication and Signal Processing (IEEE, 2013), pp. 551–555.

27. A. Spadaccini and F. Beritelli, “Performance evaluation of heart sounds biometric systems on an open dataset,” in 2013 18th International Conference on Digital Signal Processing (DSP) (IEEE, 2013), pp. 1–5.

28. N. Wu and S. Haruyama, “Real-time audio detection and regeneration of moving sound source based on optical flow algorithm of laser speckle images,” Opt. Express 28(4), 4475–4488 (2020).

29. A. C. Prunty and R. K. Snieder, “Demystifying the memory effect: a geometrical approach to understanding speckle correlations,” Eur. Phys. J. Spec. Top. 226(7), 1445–1455 (2017).

30. A. Goldberger, L. Amaral, L. Glass, J. Hausdorff, P. C. Ivanov, R. Mark, and H. E. Stanley, “PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals,” Circulation 101(23), e215–e220 (2000).

31. J. Bruna and S. Mallat, “Invariant scattering convolution networks,” IEEE Trans. Pattern Anal. Mach. Intell. 35(8), 1872–1886 (2013).

32. L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” J. Mach. Learn. Res. 9, 2579–2605 (2008).

33. L. Cester, I. Starshynov, Y. Jones, P. Pellicori, J. G. F. Cleland, and D. Faccio, “Remote laser-speckle sensing of heart sounds for health assessment and biometric identification: data,” University of Glasgow (2022), http://dx.doi.org/10.5525/gla.researchdata.1238.

