Classifying Heart Sounds Using Images of MFCC and Temporal Features

dc.contributor.author Diogo Marcelo Nogueira en
dc.contributor.author Carlos Ferreira en
dc.contributor.author Alípio Jorge en
dc.date.accessioned 2017-12-19T18:33:54Z
dc.date.available 2017-12-19T18:33:54Z
dc.date.issued 2017 en
dc.description.abstract Phonocardiogram signals contain very useful information about the condition of the heart. The phonocardiogram is a recording of heart sounds that can be visually represented on a chart. By analyzing these signals, early detection and diagnosis of heart diseases can be performed. Intelligent and automated analysis of the phonocardiogram is therefore very important to determine whether the patient’s heart works properly or whether the patient should be referred to an expert for further evaluation. In this work, we use electrocardiograms and phonocardiograms collected simultaneously, from the PhysioNet challenge database, and we aim to determine whether a phonocardiogram corresponds to a “normal” or “abnormal” physiological state. The main idea is to translate a 1D phonocardiogram signal into a 2D image that represents temporal and Mel-frequency cepstral coefficient features. To do that, we develop a novel approach that uses both kinds of features. First, we segment the phonocardiogram signals with an algorithm based on a logistic regression hidden semi-Markov model, which uses the electrocardiogram signals as a reference. After that, we extract a group of features from the time and frequency domains (Mel-frequency cepstral coefficients) of the phonocardiogram. Then, we combine these features into a two-dimensional time-frequency heat map representation. Lastly, we run a binary classifier to learn a model that discriminates between normal and abnormal phonocardiogram signals. In the experiments, we study the contribution of temporal and Mel-frequency cepstral coefficient features and evaluate three classification algorithms: Support Vector Machines, Convolutional Neural Networks, and Random Forests. The best results are achieved when we map both temporal and Mel-frequency cepstral coefficient features into a 2D image and use a Support Vector Machine with a radial basis function kernel.
Indeed, by including both temporal and Mel-frequency cepstral coefficient features, we obtain slightly better results than those reported by the challenge participants, whose approaches use large amounts of data and high computational power. © Springer International Publishing AG 2017. en
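The abstract describes mapping MFCC and temporal features of a segmented phonocardiogram into a 2D time-frequency heat map. The sketch below illustrates that general idea from first principles using only NumPy and SciPy: it frames a 1D signal, computes MFCCs via a mel filterbank and DCT, and stacks per-frame temporal features (here, RMS energy and zero-crossing rate) as extra rows of the 2D map. All frame lengths, filter counts, and the specific temporal features are illustrative assumptions, not the authors' exact pipeline, which additionally relies on ECG-guided hidden semi-Markov model segmentation.

```python
import numpy as np
from scipy.fftpack import dct


def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)


def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)


def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, c, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, c):
            fb[i - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):
            fb[i - 1, k] = (hi - k) / max(hi - c, 1)
    return fb


def frame_signal(x, frame_len, hop):
    # Overlapping Hamming-windowed frames: shape (n_frames, frame_len).
    n = 1 + max(0, (len(x) - frame_len) // hop)
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n)[:, None]
    return x[idx] * np.hamming(frame_len)


def mfcc_heatmap(x, sr, frame_len=400, hop=200, n_filters=26, n_coeffs=13):
    frames = frame_signal(x, frame_len, hop)
    power = np.abs(np.fft.rfft(frames, n=frame_len)) ** 2      # power spectrum
    fb = mel_filterbank(n_filters, frame_len, sr)
    log_mel = np.log(power @ fb.T + 1e-10)                     # log mel energies
    mfcc = dct(log_mel, type=2, axis=1, norm='ortho')[:, :n_coeffs]
    # Illustrative temporal features appended as extra heat-map rows.
    rms = np.sqrt(np.mean(frames ** 2, axis=1))                # frame energy
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    # Result: 2D image with shape (n_coeffs + 2, n_frames).
    return np.vstack([mfcc.T, rms[None, :], zcr[None, :]])
```

The resulting array can be rendered as an image or fed directly to a classifier such as an RBF-kernel SVM (after flattening) or a CNN, as evaluated in the paper.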
dc.identifier.uri http://repositorio.inesctec.pt/handle/123456789/4266
dc.identifier.uri http://dx.doi.org/10.1007/978-3-319-65340-2_16 en
dc.language eng en
dc.relation 5829 en
dc.relation 4981 en
dc.relation 5340 en
dc.rights info:eu-repo/semantics/openAccess en
dc.title Classifying Heart Sounds Using Images of MFCC and Temporal Features en
dc.type conferenceObject en
dc.type Publication en
Files
Original bundle
Name: P-00M-YEV.pdf
Size: 544.18 KB
Format: Adobe Portable Document Format