Abstract: In this paper, a novel Multi-stream Multi-states Asynchronous Dynamic Bayesian Network based context-dependent TRIphone (MM-ADBN-TRI) model is proposed for audio-visual speech recognition and phone segmentation. The model loosens the asynchrony between the audio and visual streams to the word level. In both the audio stream and the visual stream, a word-triphone-state topology is used. Essentially, the MM-ADBN-TRI model is a triphone model whose basic recognition units are triphones, which capture the variations in real continuous speech spectra more accurately. Recognition and segmentation experiments are conducted on a continuous-digit audio-visual speech database, and the results show that the MM-ADBN-TRI model achieves the best overall performance in word accuracy and in phone segmentation with time boundaries, as well as a more reasonable asynchrony between the audio and visual speech.