Research on Automatic Short Answer Scoring in Spoken English Tests Based on Multiple Features
Li Yan-ling①② Yan Yong-hong①
①(Key Laboratory of Speech Acoustics and Content Understanding, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China)
②(College of Computer and Information Engineering, Inner Mongolia Normal University, Hohhot 010022, China)
Abstract This paper focuses on the automatic scoring of the ask-and-answer item in large-scale spoken English tests. Three kinds of features are extracted from the text output of Automatic Speech Recognition (ASR): similarity features, parser features, and speech-related features. Together, these nine features characterize the agreement with human raters from different aspects. Among the similarity measures, the Manhattan distance is converted into a similarity to improve scoring performance. Furthermore, a keyword coverage rate based on edit distance is proposed to accommodate word variations, so that students receive more objective scores. All features are fed into a multiple linear regression model for scoring. Experimental results show that the speaker-based automatic scoring system achieves 98.4% of human rater performance.
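Two of the features mentioned above can be sketched briefly. The following Python sketch illustrates (i) converting a Manhattan (L1) distance between word-count vectors into a similarity score, and (ii) an edit-distance-based keyword coverage rate that still credits spelling or ASR variants of a keyword. The function names, the 1/(1+d) conversion, and the edit-distance threshold are illustrative assumptions, not the paper's exact formulation.

```python
def manhattan_similarity(v1, v2):
    """Convert the Manhattan (L1) distance between two equal-length
    term-frequency vectors into a similarity in (0, 1]; identical
    vectors yield 1.0. (The 1/(1+d) mapping is an assumption.)"""
    d = sum(abs(a - b) for a, b in zip(v1, v2))
    return 1.0 / (1.0 + d)

def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming,
    keeping only one row of the DP table in memory."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))          # distances from "" to prefixes of b
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i       # prev holds dp[i-1][j-1]
        for j in range(1, n + 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,                       # deletion
                dp[j - 1] + 1,                   # insertion
                prev + (a[i - 1] != b[j - 1]))   # substitution/match
    return dp[n]

def keyword_coverage(answer_words, keywords, max_dist=1):
    """Fraction of reference keywords matched by some answer word
    within an edit-distance threshold, so that minor word variations
    (e.g. 'colour' vs 'color') still count as hits."""
    hits = sum(1 for k in keywords
               if any(edit_distance(w, k) <= max_dist for w in answer_words))
    return hits / len(keywords)
```

For example, `keyword_coverage(["color", "answer"], ["colour", "answer"])` returns 1.0, since "color" lies within one edit of the keyword "colour".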
Received: 24 February 2012
Corresponding Authors:
Li Yan-ling
E-mail: liyanling@hccl.ioa.ac.cn