
AI-assistive Speech Chain

In recent years, advanced artificial intelligence (AI) technology has achieved successful results in information communication, biomedicine, finance, education, and other fields. To develop AI-based systems and products that meet the requirements of industry and clinical practice, the NSTC encourages collaboration among academia, industry, and the medical community. With the support of the NSTC, Dr. Yu Tsao, Deputy Director of the Research Center for Information Technology Innovation, Academia Sinica, has developed several oral communication assistance technologies based on cutting-edge AI algorithms to help people with oral communication disabilities improve their quality of life and learning ability.


The increasing proportion of the geriatric population and the inappropriate use of portable audio devices have led to a rapid increase in hearing loss. As reported in a recent article published by the World Health Organization (WHO), nearly 2.5 billion people worldwide (one in four) will be living with some degree of hearing loss by 2050. Untreated hearing loss can lead to loneliness and isolation in older adults and can cause learning difficulties in students. Over the past few years, the BioASP Lab, led by Dr. Yu Tsao, has researched the application of machine learning and signal processing algorithms to various types of assistive listening devices, including personal sound amplification products (PSAPs), hearing aids (HAs), and cochlear implants (CIs), to improve speech communication for hearing-impaired patients and, in turn, their quality of life. For this research direction, we have received the 2018-2020 National Innovation Awards and the 2022 FutureTech Award.
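A core building block of such assistive listening devices is a noise-reduction front end. The sketch below shows one classic signal processing baseline, spectral subtraction, in Python with NumPy. It is purely illustrative: the function and parameter names are assumptions, and this is a textbook baseline rather than the BioASP Lab's actual algorithm.

```python
import numpy as np

def spectral_subtraction(noisy, noise_estimate, frame_len=256, floor=0.01):
    """Toy single-channel noise reduction via spectral subtraction.

    `noisy` is the noisy waveform; `noise_estimate` is a segment assumed
    to contain noise only. Illustrative sketch, not a production method.
    """
    # Average the noise magnitude spectrum over the noise-only segment.
    n_frames = len(noise_estimate) // frame_len
    noise_mag = np.mean(
        [np.abs(np.fft.rfft(noise_estimate[i * frame_len:(i + 1) * frame_len]))
         for i in range(n_frames)], axis=0)

    enhanced = np.zeros_like(noisy)
    for start in range(0, len(noisy) - frame_len + 1, frame_len):
        frame = noisy[start:start + frame_len]
        spec = np.fft.rfft(frame)
        mag = np.abs(spec)
        # Subtract the noise estimate, keeping a small spectral floor so
        # the magnitude never goes negative.
        clean_mag = np.maximum(mag - noise_mag, floor * mag)
        # Resynthesize with the noisy phase (standard in this baseline).
        enhanced[start:start + frame_len] = np.fft.irfft(
            clean_mag * np.exp(1j * np.angle(spec)), n=frame_len)
    return enhanced
```

Modern deep-learning enhancers replace the hand-designed subtraction rule with a learned mapping from noisy to clean spectra, but the analysis/modification/resynthesis structure above remains the same.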


In addition to assistive listening devices, the BioASP Lab is researching the development of machine learning-based assistive speaking devices to improve the speech intelligibility of patients with speech and language impairments. The human speech production system generally consists of four parts: (1) an air generator, (2) a vibrating apparatus, (3) a resonance modulator, and (4) an articulating tract. Speech impairments, which may result from brain injury or damage to the nerves and muscles around the mouth, can affect any of these parts, resulting in poor speech intelligibility. Common causes of speech impairments include larynx removal, oral surgery, hearing loss, and dysarthria. Individuals with speech impairments have limited oral communication abilities, which severely degrade their quality of life. In the past few years, we have conducted research on speech disorder diagnosis and disordered speech enhancement with remarkable results. In addition to publishing more than 15 papers on the subject, we organized the Pathological Voice Detection Challenge at IEEE Big Data 2018, in which 109 teams from 27 countries participated. For this research direction, we have also received the 2019 National Innovation Award.
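Automated voice disorder screening of the kind posed in such challenges often starts from classic acoustic cues; one of the best known is jitter, the cycle-to-cycle variation of the pitch period, which tends to be elevated when the vibrating apparatus (the vocal folds) is impaired. The sketch below computes a toy relative-jitter feature using frame-wise autocorrelation. The function name, frame length, and lag range are illustrative assumptions, not the published challenge pipeline.

```python
import numpy as np

def period_jitter(signal, frame_len=400, min_lag=20, max_lag=200):
    """Toy relative-jitter estimate from a voiced waveform.

    Estimates one pitch period per frame via the autocorrelation peak,
    then measures how much the period varies from frame to frame.
    Illustrative sketch only; real systems use far more robust pitch
    trackers and additional features (shimmer, HNR, MFCCs, ...).
    """
    periods = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        frame = frame - frame.mean()
        ac = np.correlate(frame, frame, mode='full')[frame_len - 1:]
        # Pick the autocorrelation peak inside the plausible pitch range.
        lag = min_lag + int(np.argmax(ac[min_lag:max_lag]))
        periods.append(lag)
    periods = np.asarray(periods, dtype=float)
    if len(periods) < 2:
        return 0.0
    # Relative jitter: mean absolute period change over the mean period.
    return float(np.mean(np.abs(np.diff(periods))) / np.mean(periods))
```

A steady sustained vowel yields a jitter near zero, while an unstable, wobbling pitch yields a clearly larger value, which is why such measures serve as simple screening features before heavier machine learning models are applied.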


Dr. Yu Tsao has published more than 90 peer-reviewed journal papers and 180 conference papers. He won the Best Student Paper Award at ISCSLP 2018, the Best Poster Presentation Award at IEEE MIT URTC 2017, and a Poster Presentation Award at APSIPA 2017. Dr. Yu Tsao is an Associate Editor of IEEE/ACM Transactions on Audio, Speech, and Language Processing and IEEE Signal Processing Letters. He is a frequent lecturer at leading conferences in the signal processing field. He is a recipient of the 2017 Academia Sinica Career Development Award, the Gold Award of the 5th World Invention and Innovation Competition, the 2018 APSIPA Distinguished Lecture Award, the 2018-2021 National Innovation Awards, the 2019-2020 Rotary Education Foundation Outstanding Elite Award (one per year), the 2021 IEEE Signal Processing Society (SPS) Young Author Best Paper Award, and the 2022 Future Technology Award.

Last Modified: 2022/11/23