Abstract: To meet the demands that real-world applications place on large-scale neural networks, this paper proposes an inverse approach to constructing objective functions, which recasts the construction of an objective function as the design of its error signal. Following this approach, a set of objective functions is given as examples that eliminate the false saturation of Mean Squared Error (MSE) and the overspecialization of Cross Entropy (CE). The approach is validated by comparison with MSE and CE on the task of estimating scaled likelihoods for Hidden Markov Model states in hybrid HMM/ANN models, where it shows advantages consistent with the theoretical expectations.
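
As a minimal sketch of the error-signal view (the sigmoid-output setting and all notation here are illustrative assumptions, not details drawn from this abstract): for a sigmoid output $y=\sigma(z)$ with target $t$, the error signal is $\delta = \partial E/\partial z$. MSE gives $\delta = (y-t)\,y(1-y)$, which vanishes as $y \to 0$ or $y \to 1$ even when $y \neq t$ (false saturation), while CE gives $\delta = y-t$, whose nonvanishing gradient keeps driving already-correct outputs toward the extremes (one source of overspecialization). The inverse direction designs $\delta(y,t)$ first and recovers the objective by integration:
$$
E(y,t) \;=\; \int \frac{\delta(y,t)}{y\,(1-y)}\,dy,
\qquad\text{e.g. } \delta = y-t \;\Rightarrow\; E = -t\ln y - (1-t)\ln(1-y),
$$
which recovers CE as a special case and leaves room to shape $\delta$ so that both failure modes are avoided.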