Speech signal processing using neural networks: mapping the phonology of the Sanskrit language
This paper introduces and motivates the use of artificial neural networks (ANNs) for speaker-independent phoneme recognition in voice signals. It shows how a neural network's parallelism and self-learning capability can be exploited for phoneme recognition using the Kohonen learning rule, and demonstrates the utility of machine learning algorithms in signal processing by emulating biological neuron arrangements. Accordingly, different types of neural networks are used at each stage of the overall process.
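As a rough illustration of the Kohonen learning rule referred to above, a single self-organizing-map update step can be sketched as follows. This is a minimal, generic sketch, not the paper's implementation: the map size, feature dimensionality, learning rate, and neighbourhood width below are all illustrative assumptions.

```python
import numpy as np

def kohonen_update(weights, x, lr=0.1, sigma=1.0):
    """One step of the Kohonen learning rule on a 1-D map.

    weights : (n_units, n_features) array of unit weight vectors
    x       : (n_features,) input feature vector (e.g. one speech frame)
    lr, sigma : learning rate and neighbourhood width (illustrative values)
    """
    # 1. Find the best-matching unit (BMU): the unit closest to x.
    dists = np.linalg.norm(weights - x, axis=1)
    bmu = int(np.argmin(dists))
    # 2. Pull the BMU and its neighbours toward x, weighted by a
    #    Gaussian neighbourhood function centred on the BMU.
    idx = np.arange(weights.shape[0])
    h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))
    weights += lr * h[:, None] * (x - weights)
    return bmu

# Toy usage: cluster random 13-dimensional "feature frames"
# (13 chosen only as a typical cepstral-feature size).
rng = np.random.default_rng(0)
W = rng.random((10, 13))          # 10 map units
for frame in rng.random((100, 13)):
    kohonen_update(W, frame)
```

Repeated over many frames, units on the map specialize to recurring feature patterns, which is the unsupervised clustering behaviour the phoneme-grouping stage relies on.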
The implementation of artificial neural networks has improved the performance of the feature extraction and matching stages of phoneme recognition. This solution, based on self-organizing clustering of speech features along the time axis to form phonemes, together with unsupervised learning of these clusters, attains an accuracy of 97.77% given 3 seconds of clean speech input and 98.88% given 15 seconds of clean speech input. Speech samples were taken from 9 speakers.