Domain-aware Intermediate Pretraining for Dementia Detection with Limited Data

Youxiang Zhu, Xiaohui Liang, John A. Batsis, and Robert M. Roth
Detecting dementia using human speech is promising but faces a limited-data challenge. While recent research has shown that general pretrained models (e.g., BERT) can be applied to improve dementia detection, the pretrained model can hardly be fine-tuned with the available small dementia dataset as …
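For readers curious about the general recipe, the sketch below illustrates intermediate (domain-adaptive) pretraining in its simplest form, assuming the Hugging Face transformers and datasets libraries: continue masked-language-model pretraining of BERT on unlabeled domain text, then fine-tune the adapted encoder on the small labeled dementia set. The checkpoint names, file paths, and hyperparameters are illustrative placeholders, not the configuration reported in the paper.

```python
# Sketch: intermediate MLM pretraining on domain text, then fine-tuning.
# All names below (files, checkpoints, hyperparameters) are placeholders.
from datasets import load_dataset
from transformers import (BertForMaskedLM, BertForSequenceClassification,
                          BertTokenizerFast, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# Step 1: continue pretraining BERT with masked language modeling on
# unlabeled domain transcripts (one transcript per line).
domain = load_dataset("text", data_files={"train": "domain_transcripts.txt"})
domain = domain.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

mlm_model = BertForMaskedLM.from_pretrained("bert-base-uncased")
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="mlm_out", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=domain["train"],
    data_collator=collator,
).train()
mlm_model.save_pretrained("bert-domain-adapted")
tokenizer.save_pretrained("bert-domain-adapted")

# Step 2: fine-tune the domain-adapted encoder on the small labeled dataset
# (a fresh classification head is attached on top of the adapted weights).
clf = BertForSequenceClassification.from_pretrained("bert-domain-adapted",
                                                    num_labels=2)
```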

Two papers accepted by “IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)” 2022

Our two papers have been accepted by the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2022: “Towards Interpretability of Speech Pause in Dementia Detection using Adversarial Learning,” by Youxiang Zhu, Bang Tran, Xiaohui Liang, John A. Batsis (University of North Carolina), and Robert M. Roth (Dartmouth); and “Speech Tasks Relevant to Sleepiness Determined with Deep Transfer Learning,” by Bang …

Speech Tasks Relevant to Sleepiness Determined with Deep Transfer Learning

Bang Tran, Youxiang Zhu, Xiaohui Liang, James W. Schwoebel, Lindsay A. Warrenburg
Excessive sleepiness in attention-critical contexts can lead to adverse events, such as car crashes. Detecting and monitoring sleepiness can help prevent these adverse events from happening. In this paper, we use the Voiceome dataset to extract speech from 1,828 participants to develop a …
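As a rough illustration of the deep transfer learning idea (not the paper’s exact pipeline), the sketch below uses a pretrained Wav2vec 2.0 encoder as a fixed feature extractor and fits a simple classifier on the mean-pooled embeddings; the file paths and label variables are hypothetical placeholders.

```python
# Sketch: pretrained Wav2vec 2.0 as a fixed feature extractor, plus a simple
# classifier on mean-pooled embeddings. Paths and labels are placeholders.
import soundfile as sf
import torch
from sklearn.linear_model import LogisticRegression
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base").eval()

def embed(wav_path):
    """Mean-pool the frame-level Wav2vec 2.0 features of one 16 kHz recording."""
    speech, sr = sf.read(wav_path)
    inputs = extractor(speech, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, frames, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()

# wav_paths / sleepy_labels stand in for the recordings and binary labels:
# X = [embed(p) for p in wav_paths]
# clf = LogisticRegression(max_iter=1000).fit(X, sleepy_labels)
```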

Towards Interpretability of Speech Pause in Dementia Detection using Adversarial Learning

Youxiang Zhu, Bang Tran, Xiaohui Liang, John A. Batsis, Robert M. Roth
Speech pause is an effective biomarker in dementia detection. Recent deep learning models have exploited speech pauses to achieve highly accurate dementia detection, but have not exploited the interpretability of speech pauses, i.e., what and how positions and lengths of speech pauses affect …
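The paper’s contribution is an adversarial-learning approach; as a much simpler stand-in, the sketch below probes pause sensitivity directly by inserting a pause marker of varying length at different positions in a transcript and observing how a transcript classifier’s score changes. The classifier checkpoint, the “.” pause encoding, and the example sentence are hypothetical placeholders.

```python
# Sketch of a pause-sensitivity probe (NOT the adversarial method in the
# paper): insert a pause marker of varying length at each position and watch
# the classifier's score. In practice the classifier would be fine-tuned on
# pause-annotated transcripts; here an untrained head is used as a stand-in.
import torch
from transformers import BertForSequenceClassification, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
clf = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                    num_labels=2).eval()

def score(text):
    """Return the (placeholder) probability assigned to the positive class."""
    enc = tokenizer(text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return torch.softmax(clf(**enc).logits, dim=-1)[0, 1].item()

words = "the boy is reaching for the cookie jar".split()
for position in range(len(words) + 1):
    for pause_len in (1, 2, 3):  # pause length measured in "." tokens
        perturbed = words[:position] + ["."] * pause_len + words[position:]
        print(position, pause_len, round(score(" ".join(perturbed)), 3))
```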

“Evaluating Voice-Assistant Commands for Dementia Detection” has been accepted by Computer Speech & Language

Xiaohui Liang, John A. Batsis, Youxiang Zhu, Tiffany M. Driesse, Robert M. Roth, David Kotz, and Brian MacWhinney
Early detection of cognitive decline involved in Alzheimer’s Disease and Related Dementias (ADRD) in older adults living alone is essential for developing, planning, and initiating interventions and support systems to improve users’ everyday function and quality of …

Dr. Liang is organizing the Symposium on e-Health at the IEEE International Conference on Communications (ICC) 2022

https://icc2022.ieee-icc.org/
The e-Health track provides an opportunity to bring together healthcare professionals, researchers, scientists, engineers, academics, and students from around the world to share their experience and the latest advances in new technologies and systems development for different healthcare and medicine applications. In particular, the e-Health track of the SAC symposium will focus on the …

“WavBERT: Exploiting Semantic and Non-semantic Speech using Wav2vec and BERT for Dementia Detection” has been accepted by INTERSPEECH 2021

Youxiang Zhu, Abdelrahman Obyat, Xiaohui Liang, John A. Batsis, and Robert M. Roth
In this paper, we exploit semantic and non-semantic information from patients’ speech data using Wav2vec and Bidirectional Encoder Representations from Transformers (BERT) for dementia detection. We first propose a basic WavBERT model by extracting semantic information from speech data using Wav2vec, and …
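A minimal sketch of the basic idea, assuming Hugging Face transformers and the soundfile package: transcribe a recording with a Wav2vec 2.0 CTC model, then score the transcript with a BERT sequence classifier. The checkpoints and audio file below are public placeholders; in practice the BERT head would be fine-tuned on labeled transcripts, and the full WavBERT model goes beyond this transcribe-then-classify pipeline.

```python
# Sketch: transcribe with Wav2vec 2.0 (CTC), then classify the transcript
# with BERT. Checkpoints and the audio file are placeholders; the BERT head
# shown here is untrained and would be fine-tuned in practice.
import soundfile as sf
import torch
from transformers import (BertForSequenceClassification, BertTokenizerFast,
                          Wav2Vec2ForCTC, Wav2Vec2Processor)

asr_processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
asr_model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").eval()

speech, sr = sf.read("participant.wav")  # expects 16 kHz mono audio
inputs = asr_processor(speech, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    logits = asr_model(**inputs).logits
transcript = asr_processor.batch_decode(torch.argmax(logits, dim=-1))[0]

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
clf = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                    num_labels=2).eval()
enc = tokenizer(transcript, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(clf(**enc).logits, dim=-1)
print("P(dementia) =", round(probs[0, 1].item(), 3))
```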

Collaborative Research with SondeHealth

Xiaohui’s group and SondeHealth will collaborate on research related to vocal biomarkers and mental health disorders. Thanks to Jim Schwoebel, Vice President of Data and Research at SondeHealth, for this collaboration opportunity and for access to the voice dataset collected from thousands of users at Sonde Health. Learn more about SondeHealth at https://www.sondehealth.com/. Learn more about Surveylex …

Privacy Concerns Among Older Adults Using Voice Assistant Systems

Hillary Spangler, Tiffany Driesse, Robert Roth, Xiaohui Liang, John Batsis, David Kotz
Voice Assistant Systems (VAS) are software platforms that complete various tasks using voice commands (e.g., Amazon Alexa), with increasing usage by older adults. It is unknown whether older adults have significant privacy concerns with VAS. 55 participants were evaluated from ambulatory practice sites …