Research and Publications

The accuracy of BresoDx® against the polysomnography (PSG) standard has been tested and validated in several clinical trials. Below are links to the research studies and publications.

Comparison of in-laboratory and home diagnosis of sleep apnea using a cordless portable acoustic device

Alshaer H, Fernie GR, Tseng WH, Bradley TD. Sleep Medicine. 2015, 15, 2041-9.

135 subjects underwent full overnight PSG and simultaneous recording of breath sounds by BresoDx® in the sleep laboratory. Acoustic data extracted from BresoDx were analyzed using validated proprietary computer algorithms.
RESULTS and CONCLUSION: BresoDx® is a reliable device for diagnosing sleep apnea that subjects can use unattended in their own homes.


Validation of an automated algorithm for detecting apneas and hypopneas by acoustic analysis of breath sounds

Alshaer H, Fernie GR, Maki E, Bradley TD. Sleep Medicine. 2013, 14, 562-571.

We tested the validity of a single-channel monitoring system that captures and analyzes breath sounds (BSs) to detect sleep apnea. BSs were recorded from 50 patients undergoing simultaneous polysomnography (PSG). Using novel software, BSs were analyzed to identify apneas and hypopneas, from which the apnea-hypopnea index (AHI) was calculated. Apneas and hypopneas from PSG were scored blindly by three technicians according to AASM and tidal volume-based criteria that do not involve arousals and oxygen desaturation.
RESULTS: The BresoDx®-derived AHI (AHI-a) was strongly correlated with the AHI from PSG (AHI-p) according to both AASM (R = 93%) and tidal volume criteria (R = 94%). Based on a cutoff of AHI-p ≥ 10, the overall accuracy of AHI-a reached 90% and the negative predictive value reached 100%.
CONCLUSION: Acoustic analysis of BSs is a reliable method for quantifying the AHI and diagnosing sleep apnea compared to simultaneous PSG. Oxygen desaturation and arousals had a minor effect on the overall correlation (1%).
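The AHI referenced above is simply the number of apneas and hypopneas per hour of sleep. A minimal sketch of that calculation (the event counts and sleep time below are hypothetical, not from the study):

```python
def apnea_hypopnea_index(num_apneas, num_hypopneas, sleep_hours):
    """Apnea-hypopnea index: respiratory events per hour of sleep."""
    if sleep_hours <= 0:
        raise ValueError("sleep time must be positive")
    return (num_apneas + num_hypopneas) / sleep_hours

# Example: 28 apneas and 14 hypopneas over 6 hours of sleep
ahi = apnea_hypopnea_index(28, 14, 6.0)
print(ahi)  # 7.0
```

An AHI of 7.0 would fall below the ≥ 10 cutoff used in the study.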

Read More

Estimation of sleep status in sleep apnea patients using a novel head actigraphy technology

Hummel R, Bradley TD, Fernie GR, Chang SJ, Alshaer H. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2015, 5416-9.

The aim of this study was to evaluate the ability of head actigraphy to detect sleep/wake status. We obtained full overnight 3-axis head accelerometry data from 75 sleep apnea patient recordings. These were split into training and validation groups (2:1). Data were preprocessed and 5 features were extracted. Different feature combinations were fed into 3 classifiers, namely support vector machine, logistic regression, and random forests; each was trained on the training group and validated on the validation group.
RESULTS: The random forest algorithm yielded the highest performance, with an area under the receiver operating characteristic (ROC) curve of 0.81 for detection of sleep status.
CONCLUSION: Head actigraphy detects sleep status in sleep apnea (SA) patients with very good performance, equivalent to that of widely used wrist actigraphy, despite the particular characteristics of this population.

Read More

Subject independent identification of breath sounds components using multiple classifiers

Alshaer H, Pandya A, Bradley TD, Rudzicz F. IEEE ICASSP. 2014, 3577-3581.

An experienced operator manually labelled segments of breath sounds from 11 sleeping subjects as: inspiration, expiration, inspiratory snoring, expiratory snoring, wheezing, other noise, and non-audible. Ten features were extracted and fed into 3 classifiers: Naïve Bayes, support vector machine, and random forest. A leave-one-out method was used, in which data from each subject, in turn, were evaluated using models trained on all other subjects.
RESULTS: Mean accuracy for concurrent classification of all 7 classes reached 85.4%. Mean accuracy for separating data into 2 classes, snoring and non-snoring, reached 97.8%.
CONCLUSION: To our knowledge, these are the highest accuracies achieved in concurrent automatic classification of all breath sound components, and for snoring, in a subject-independent model.
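The leave-one-out scheme described above holds out one subject at a time and trains on the rest, which keeps evaluation subject-independent. A minimal sketch of the splitting logic (the subject IDs are hypothetical; model training is omitted):

```python
def leave_one_subject_out(subjects):
    """Yield (train, held_out) splits: each subject, in turn, is held out
    for evaluation and the remaining subjects form the training set."""
    for i, held_out in enumerate(subjects):
        train = subjects[:i] + subjects[i + 1:]
        yield train, held_out

# Hypothetical subject IDs; in each iteration a model would be trained
# on `train` and evaluated on `held_out`.
for train, held_out in leave_one_subject_out(["S01", "S02", "S03"]):
    print(held_out, train)
```

Because no segments from the held-out subject ever appear in training, the reported accuracies reflect generalization to unseen subjects.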

Read More

A system for portable sleep apnea diagnosis using an embedded data capturing module

Alshaer H, Levchenko A, Bradley TD, Pong S, Tseng WH, Fernie GR. J. Clin. Monit. Comput. 2013, 27, 303-311.

The purpose of this work was to introduce a newly developed portable system for the diagnosis of sleep apnea at home that is both reliable and easy to use. We recruited 49 subjects who used the device independently at home. A subset of 11 subjects used the device on 2 different nights, and their results were compared to examine diagnostic reproducibility. Independently of those, the system's performance was evaluated against the PSG results of 32 subjects.
RESULTS: The overall success rate of applying the device in unattended settings was 94%. Nine of the 11 subjects (82%) had equivalent results on both nights. The system showed 96% correlation with simultaneously performed in-lab PSG.
CONCLUSION: Our results suggest excellent usability and performance of this system.

Read More

Phase tracking of the breathing cycle in sleeping subjects by frequency analysis of acoustic data

Alshaer H, Fernie GR, Bradley TD. Int. J. Healthcare Technology and Management. 2010, 11, 163-175.

We tested the hypothesis that the inspiratory and expiratory phases of breathing could be identified from breath sound recordings. Breath sounds and their frequency spectra were digitally recorded from 10 subjects during sleep. The ratio of spectral magnitude in the 400-1000 Hz band to that in the 10-400 Hz band was calculated for inspiration (Ri) and expiration (Re) for each breath.
RESULTS: The Ri/Re ratio was significantly greater than thresholds of both 1.5 (p < 0.001) and 2.0 (p < 0.001). Breathing phases were correctly identified in 90% and 73% of cases using the 1.5 and 2.0 thresholds, respectively.
CONCLUSION: To our knowledge, this is the first study to classify breathing phases in sleeping subjects using a microphone distal to the upper airway. This suggests the potential of bioacoustic analysis to assist in respiratory monitoring.
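The band ratio described above compares spectral magnitude in the 400-1000 Hz band to that in the 10-400 Hz band. A minimal sketch of such a computation using an FFT (this is an illustration with synthetic tones, not the study's implementation; the sampling rate and signals are assumptions):

```python
import numpy as np

def band_ratio(signal, fs, low=(10, 400), high=(400, 1000)):
    """Ratio of summed spectral magnitude in the high band (400-1000 Hz)
    to that in the low band (10-400 Hz)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    high_energy = spectrum[(freqs >= high[0]) & (freqs < high[1])].sum()
    low_energy = spectrum[(freqs >= low[0]) & (freqs < low[1])].sum()
    return high_energy / low_energy

# Synthetic check at a 4 kHz sampling rate: a 600 Hz tone concentrates
# energy in the high band; a 100 Hz tone in the low band.
fs = 4000
t = np.arange(fs) / fs
print(band_ratio(np.sin(2 * np.pi * 600 * t), fs) > 1.5)  # True
print(band_ratio(np.sin(2 * np.pi * 100 * t), fs) < 1.0)  # True
```

In the study, a breath whose Ri/Re exceeded the chosen threshold (1.5 or 2.0) would have its phases classified accordingly.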

Read More

Monitoring of breathing phases using a bioacoustics method in healthy awake subjects

Alshaer H, Fernie GR, Bradley TD. J. Clin. Monit. Comput. 2011, 25, 285-294.

Breath sounds were recorded from 15 subjects using a microphone fixed in a face frame, while simultaneously recording respiratory movements by respiratory inductance plethysmography (RIP). Subjects were instructed to breathe normally for 2 min. Inspiratory and expiratory segments of breath sounds were determined and extracted from the acoustic data by comparing it to the RIP trace.
RESULTS: The frequency band ratio (BR) was chosen as the feature to distinguish between breathing phases. The route of breathing did not affect the BR within the same phase. When the BR was applied to 436 breathing phases in the validation group, 424 (97%) were correctly identified (Kappa = 0.96, p < 0.001), indicating strong agreement between the acoustic method and RIP.
CONCLUSION: The frequency spectra of breath sounds recorded from a face frame reliably identified the inspiratory and expiratory phases of breathing.

Read More

Adaptive segmentation and normalization of breathing acoustic data of subjects with OSA

Alshaer H, Fernie GR, Sejdic E, Bradley TD. Proc. IEEE TIC-STH. 2009, 279-784.

Breath sounds in patients with obstructive sleep apnea are highly dynamic and variable signals. In this paper, we present an adaptive segmentation algorithm for these sounds. The algorithm divides the breath sounds into segments with similar amplitude levels. As the first step, the algorithm creates an envelope of the signal characterizing its long-term amplitude variations. Then, K-means clustering is applied iteratively to detect borders between different segments in the envelope.
RESULTS: The algorithm is successful in creating a uniform baseline in subjects with and without sleep apnea without obliterating troughs representing apneas.
CONCLUSION: We have shown that the proposed algorithm is capable of segmenting breath sound waveforms with an unknown number of segments of varying duration.
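The two steps described above, an amplitude envelope followed by K-means clustering of envelope levels, can be sketched in miniature. This is a simplified illustration (a moving-RMS envelope and a basic 1-D K-means on synthetic data), not the authors' algorithm; the window size, cluster count, and signal are assumptions:

```python
import numpy as np

def amplitude_envelope(x, win=256):
    """Long-term amplitude envelope: moving RMS of the signal."""
    power = np.convolve(x ** 2, np.ones(win) / win, mode="same")
    return np.sqrt(power)

def kmeans_1d(values, k=2, iters=20):
    """Minimal 1-D K-means on envelope samples; returns cluster labels."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels

# Synthetic signal: a quiet trough (apnea-like) between two loud segments.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1.0, 2000),
                    rng.normal(0, 0.05, 2000),
                    rng.normal(0, 1.0, 2000)])
labels = kmeans_1d(amplitude_envelope(x))
# Segment borders fall where the cluster label changes.
borders = np.flatnonzero(np.diff(labels))
```

In this toy example the quiet middle segment is assigned a different cluster than the loud segments on either side, so the label changes mark the segment borders.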

Read More