Developing Ultrapraat: Software to See How We Speak

Diana Archangeli, Professor of Linguistics

[Image: a face with sound waves coming from the mouth]

dba@email.arizona.edu

 

Diana Archangeli (Linguistics) partnered with Ian Fasel (Computer Science) and Jeff Berry, a linguistics graduate student, to extract data from videos of the lips and tongue in order to develop a software module for gesture-peak identification from ultrasound data, using modern pattern recognition techniques based on machine learning. Working in consultation with other ultrasound labs, the team also identified additional types of desirable ultrasound analysis software.
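As a rough illustration of what gesture-peak identification involves, the sketch below picks out local maxima in a one-dimensional articulator trace (for example, tongue height estimated frame-by-frame from ultrasound video). The function name, threshold parameter, and toy data are hypothetical; the actual Ultrapraat module relies on machine-learning-based pattern recognition rather than this simple rule.

```python
# Hypothetical sketch only: the real system uses machine-learning-based
# pattern recognition, not a hand-written peak picker.

def find_gesture_peaks(signal, threshold=0.0):
    """Return indices of local maxima in `signal` that exceed `threshold`."""
    peaks = []
    for i in range(1, len(signal) - 1):
        # A frame counts as a peak if it rises from the previous frame,
        # does not rise into the next one, and clears the threshold.
        if signal[i] > threshold and signal[i - 1] < signal[i] >= signal[i + 1]:
            peaks.append(i)
    return peaks

# Toy trace: two articulatory gestures (rises and falls in tongue height).
trace = [0.0, 0.2, 0.6, 0.9, 0.5, 0.1, 0.3, 0.7, 0.4, 0.0]
print(find_gesture_peaks(trace, threshold=0.5))  # → [3, 7]
```

In practice the peaks would be timestamped against the audio recording so that articulatory gestures can be aligned with the speech signal.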

The Confluencenter seed grant enabled the project to continue, leading to two National Science Foundation grants: $100,000 in 2011 and $255,272 in 2012.

Media coverage:

06.11.10: "A Test in Producing a Visual Capture of Speech," UA News