This paper presents a robust deep learning framework for auscultation analysis, aiming to classify anomalies in respiratory cycles and detect diseases from respiratory sound recordings. The framework begins with front-end feature extraction that transforms the input sound into a spectrogram representation. A back-end deep learning network then classifies the spectrogram features into categories of respiratory anomaly cycles or diseases. Experiments conducted on the ICBHI benchmark dataset of respiratory sounds confirm three main contributions to respiratory-sound analysis. First, we carry out an extensive exploration of the effects of spectrogram type, spectral-time resolution, overlapping versus non-overlapping windows, and data augmentation on final prediction accuracy. Second, building on this exploration, we propose a novel deep learning system based on the framework that outperforms current state-of-the-art methods. Finally, we apply a Teacher-Student scheme to achieve a trade-off between model performance and model complexity, which holds promise for building real-time applications.
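The abstract's front end (audio to spectrogram) can be illustrated with a minimal NumPy sketch. This is not the paper's exact pipeline; the window size, hop length, mel-band count, and sample rate below are illustrative assumptions, and the paper additionally explores several spectrogram types and resolutions.

```python
# Hedged sketch of a log-mel spectrogram front end (assumed parameters,
# not the paper's exact configuration).
import numpy as np

def log_mel_spectrogram(signal, sr=4000, n_fft=256, hop=128, n_mels=32):
    """Return a (n_mels, n_frames) log-mel spectrogram of a 1-D signal."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    # Frame the signal and apply the analysis window.
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    # Power spectrum of each frame: shape (n_frames, n_fft//2 + 1).
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2

    # Triangular mel filterbank.
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2),
                                    n_mels + 2))
    bins = np.floor((n_fft + 1) * mel_pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            if center > left:
                fb[m - 1, k] = (k - left) / (center - left)
        for k in range(center, right):
            if right > center:
                fb[m - 1, k] = (right - k) / (right - center)

    mel = fb @ power.T          # (n_mels, n_frames)
    return np.log(mel + 1e-10)  # log compression for dynamic range

# Usage on a synthetic 1-second tone standing in for a respiratory cycle.
sr = 4000
t = np.arange(sr) / sr
spec = log_mel_spectrogram(np.sin(2 * np.pi * 200 * t), sr=sr)
print(spec.shape)  # (32, 30)
```

The resulting two-dimensional time-frequency image is what the back-end network consumes; in practice a library routine such as `librosa.feature.melspectrogram` would replace this hand-rolled version.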
Authors: Lam Pham, Huy Phan, Ramaswamy Palaniappan
Venue: IEEE Journal of Biomedical and Health Informatics
Affiliations: Queen Mary University of London; University of Lübeck; University of Kent
www.synapsesocial.com/papers/6a08ebf71b91a3b1ea5b72e9 — DOI: https://doi.org/10.1109/jbhi.2021.3064237