Recent progress in speech production imaging, articulatory control modeling, and tongue biomechanics modeling has changed the way articulatory synthesis is performed. Examples include the Haskins CASY model (Configurable Articulatory Synthesizer), designed by
Philip Rubin, Mark Tiede [http://www.haskins.yale.edu/staff/tiede.html], and Louis Goldstein, which matches midsagittal vocal tract outlines to actual
magnetic resonance imaging (MRI) data and uses that data to construct a 3D model of the vocal tract. A full 3D articulatory synthesis model has been described by Olov Engwall. A geometrically based 3D articulatory speech synthesizer, VocalTractLab, has been developed by Peter Birkholz. The
Directions Into Velocities of Articulators (DIVA) model, a feedforward control approach which takes the neural computations underlying speech production into consideration, was developed by
Frank H. Guenther at
Boston University. The ArtiSynth project, headed by Sidney Fels [http://www.ece.ubc.ca/~ssfels/] at the
University of British Columbia, is a 3D biomechanical modeling toolkit for the human vocal tract and upper airway. Biomechanical modeling of articulators such as the
tongue has been pioneered by a number of scientists, including Reiner Wilhelms-Tricarico, Yohan Payan [https://web.archive.org/web/20081006160025/http://www-timc.imag.fr/Yohan.Payan/] and Jean-Michel Gerard, and Jianwu Dang and Kiyoshi Honda [http://iipl.jaist.ac.jp/dang-lab/en/].

== Commercial models ==