Uncovering Communication Potential in Minimally-Verbal ASD (Phase II)

Over the past few decades, dramatic gains in speech, language, auditory, and brain science, as well as in digital signal processing, artificial intelligence, and human/brain-computer interfaces, have produced breakthroughs in modern communication technology. Yet a huge gap remains in applying these technologies to help individuals who are minimally verbal with autism spectrum disorder (mv-ASD) discover their “own voice” and learn to speak or to communicate through assistive devices. Some individuals with mv-ASD appear to have a rich world of “inner speech,” including both content and emotion, yet little is understood about why this population does not speak, or about how to uncover their inner emotional and conceptual world to give them a “voice.” Inner speech has long been hypothesized to be an essential precursor to verbal and other forms of expression, dating back to the seminal work of Lev Vygotsky (Thought and Language, 1934); it is thought to set the stage for later verbal expression of content and meaning.

In this effort, the researchers first seek to use advanced signal sensing and analysis technologies to characterize both vocal and behavioral means of expression and uncover the nature and locus of the communication deficits in individuals with mv-ASD. They will then use their advanced signal enhancement, human/brain-to-machine, and auditory and other sensory feedback technologies to stimulate the generation of speech.

A goal of this project is to bridge the communication technology gap by bringing together leading experts in the field: (1) the MGH Lurie Center for Autism/MGH Martinos Center for Brain Imaging with clinical experience in recruiting and characterizing a large ASD population, and in ASD speech protocol design and brain imaging, (2) the MIT Media Lab with its innovative real-time on-body sensing and interpreting of human neurophysiology, and (3) the MIT Lincoln Laboratory with its advanced speech and neuro-computational modeling and analysis methods and mobile off-body multi-modal platforms.

The researchers’ purpose in this project is to: (1) conduct a rich characterization of the communication efforts of individuals with mv-ASD, (2) assess the effects of various types of feedback (auditory, bone conduction, and haptic) on speech and language in mv-ASD, and (3) apply the findings toward the development of individually tailored speech processing algorithms for use in personalized devices that promote verbal communication. The specific aims focus on objective characterization using multi-modal data, collected with both on-body and off-body technology, with and without feedback to promote verbal and non-verbal communication. The larger vision of this four-year effort is to develop an empirical model of speech communication deficits in mv-ASD that may be used in the development of personal devices for translating communicative intent into meaningful speech output.