EFFECTS OF INPUT DIMENSIONALITY REDUCTION ON THE PERFORMANCE OF EPILEPSY DIAGNOSIS BASED ON NEURAL NETWORK

KHARAT P.A.1*, DUDUL S.V.2
1Department of Information Technology, Anuradha Engineering College, Chikhli, Maharashtra India.
2Department of Applied Electronics, Sant Gadge Baba Amravati University, Amravati, Maharashtra, India.
* Corresponding Author : pravinakharat82@gmail.com

Received : 21-12-2011     Accepted : 31-12-2011     Published : 31-12-2011
Volume : 3     Issue : 5       Pages : 396 - 402
Int J Mach Intell 3.5 (2011):396-402
DOI : http://dx.doi.org/10.9735/0975-2927.3.5.396-402

Conflict of Interest : None declared

Cite - MLA : KHARAT P.A. and DUDUL S.V. "EFFECTS OF INPUT DIMENSIONALITY REDUCTION ON THE PERFORMANCE OF EPILEPSY DIAGNOSIS BASED ON NEURAL NETWORK." International Journal of Machine Intelligence 3.5 (2011):396-402. http://dx.doi.org/10.9735/0975-2927.3.5.396-402

Cite - APA : KHARAT P.A., DUDUL S.V. (2011). EFFECTS OF INPUT DIMENSIONALITY REDUCTION ON THE PERFORMANCE OF EPILEPSY DIAGNOSIS BASED ON NEURAL NETWORK. International Journal of Machine Intelligence, 3 (5), 396-402. http://dx.doi.org/10.9735/0975-2927.3.5.396-402

Cite - Chicago : KHARAT P.A. and DUDUL S.V. "EFFECTS OF INPUT DIMENSIONALITY REDUCTION ON THE PERFORMANCE OF EPILEPSY DIAGNOSIS BASED ON NEURAL NETWORK." International Journal of Machine Intelligence 3, no. 5 (2011):396-402. http://dx.doi.org/10.9735/0975-2927.3.5.396-402

Copyright : © 2011, KHARAT P.A. and DUDUL S.V., Published by Bioinfo Publications. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited.

Abstract

Epilepsy is a common neurological disorder characterized by recurrent unprovoked seizures. About 40 to 50 million people worldwide are reported to have epilepsy. In this paper the authors present a clinical decision support system (DSS) for the diagnosis of epilepsy. The DSS is developed using a Multilayer Perceptron (MLP), a Generalized Feed Forward Neural Network (GFF-NN) and an Elman Neural Network (E-NN). The validity of these neural networks for diagnosing epilepsy is checked, and the most suitable network is recommended for the diagnosis of epilepsy. In addition, feature enhancement techniques such as principal component analysis (PCA), the fast Fourier transform (FFT) and statistical parameters are used for input dimensionality reduction. Epilepsy diagnosis is modeled as the classification of normal EEG, interictal EEG and ictal EEG. The performance parameters of the MLP, GFF-NN and E-NN are measured and compared for the different input dimensionality reduction methods. For the GFF-NN, the number of free parameters is reduced by 92.22% when PCA is used for input dimensionality reduction, with an overall accuracy of 98.61%.

Keywords

Multilayer Perceptron (MLP), Elman Neural Network (E-NN), Generalized Feed Forward Neural Network (GFF-NN), Seizure.

Introduction

Epilepsy is a brain disorder in which clusters of nerve cells, or neurons, in the brain sometimes signal abnormally. In epilepsy, the normal pattern of neuronal activity becomes disturbed, causing strange sensations, emotions and behavior, and sometimes loss of consciousness. Epilepsy is a disorder with many possible causes: anything that disturbs the normal pattern of neuronal activity, from illness to brain damage to abnormal brain development, may cause epilepsy. The EEG scan is a common diagnostic test for epilepsy and can detect abnormalities in the brain's electrical activity. People with epilepsy frequently have changes in their normal pattern of brain waves, even when they are not experiencing a seizure. The EEG therefore plays an important role in the diagnosis of epilepsy.
Because the traditional methods of analysis are time consuming and tedious, many computer-based diagnostic systems for epilepsy have been developed in recent years. Automated diagnostic systems for epilepsy have been built using different approaches such as fuzzy logic [1] and genetic algorithms [2]. In 1982, Gotman proposed a computerized system for detecting a variety of seizures [7]. Neural-network-based detection systems for epilepsy diagnosis have been proposed by several authors [10]-[17]. Szilagyi et al. recommended the recognition of epileptic waveforms using multi-resolution wavelet decomposition of the EEG signal [3]. Srinivasan et al. developed approximate-entropy-based Elman and probabilistic neural networks for the detection of epilepsy [4]. The method proposed by Sriraam et al. [10] uses a recurrent neural network classifier with wavelet entropy and spectral entropy features as input for the automated detection of epilepsy.
This paper explores methods by which a neural network can diagnose epilepsy from the EEG signal. The epilepsy diagnosis problem is modeled as a three-class classification problem, the three classes being 1) healthy subjects (normal EEG), 2) epileptic subjects during a seizure-free interval (interictal EEG) and 3) epileptic subjects during seizure activity (ictal EEG). MLP, GFF-NN and E-NN are employed for the decision support system. Input dimensionality reduction is obtained by using principal component analysis (PCA) and the FFT for an optimal design [18]-[20]. The performance measures of the neural networks with the different input dimensionality reduction methods are noted and compared. Such a system can reduce the time and cost required for diagnosis and is very useful in assisting doctors, who can then devote more of their attention to patients.
This paper is organized as follows. First, the data used for the experimentation are described. Then three different cases, namely Case-I, Case-II and Case-III, are described according to the input dimensionality reduction method used; [Table-1] gives the details of these three cases. Finally, the results and conclusions are discussed.

EEG Data Base

The EEG data considered for this work are taken from the University of Bonn EEG database, which is available in the public domain [9]. The complete database comprises five datasets, referred to as A-E. Each dataset contains 100 artifact-free single-channel EEG segments of 23.6 s duration. Sets A and B contain surface EEG recordings that were carried out on five healthy volunteers using a standardized electrode placement scheme, as shown in [Fig-1]. Sets C and D contain only activity measured during seizure-free intervals; segments in set D were recorded within the epileptogenic zone and those in set C from the hippocampal formation of the opposite hemisphere of the brain. Set E contains only seizure activity.
All signals were recorded with a 128-channel amplifier system using an average common reference. After 12-bit analog-to-digital conversion, the data were written continuously onto the disk of a data acquisition computer system at a sampling rate of 173.61 Hz. The band-pass filter settings were 0.53-40 Hz.
We have selected three sets of EEG data from the main database for further experimentation: set A for healthy subjects, set D for epileptic subjects during a seizure-free interval (interictal activity) and set E for seizure activity (ictal activity). The first 1000 sampling points of example EEGs for normal, interictal and ictal activity are magnified and displayed in [Fig-2].
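For readers who wish to reproduce this setup, the following is a minimal sketch, not the authors' code, of assembling the three-class dataset from the public Bonn archive. The folder names Z, F and S (the archive's names for sets A, D and E) and the plain-text, one-value-per-line file format are assumptions about the public download.

import numpy as np
from pathlib import Path

# Assumed layout: one folder per set, one 4097-sample segment per file.
SETS = {"Z": 0, "F": 1, "S": 2}  # set A (normal), set D (interictal), set E (ictal)

def load_bonn(root):
    segments, labels = [], []
    for folder, label in SETS.items():
        for path in sorted(Path(root, folder).glob("*.txt")):
            segments.append(np.loadtxt(path))  # one amplitude value per line
            labels.append(label)
    return np.asarray(segments), np.asarray(labels)

X, y = load_bonn("bonn_eeg")  # X.shape == (300, 4097): 23.6 s at 173.61 Hz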

Case – I

The feature vector is formed by using the three datasets corresponding to normal, interictal and ictal activity. All 100 segments of each dataset are used as input to the neural networks. Three different neural networks, namely the Multilayer Perceptron (MLP), the Generalized Feed Forward Neural Network (GFF-NN) and the Elman Neural Network (E-NN), are used one by one for the diagnosis of epilepsy. For the E-NN, the second topology is used; this configuration creates a memory trace from the first hidden layer, as proposed by Elman, and is shown in [Fig-3]. As there are 100 segments in each dataset, 100 processing elements (PEs) are used in the input layer, and three processing elements are used in the output layer for the normal, ictal and interictal outputs. The networks are trained three times with different random initializations of the connection weights to ensure true learning. Rigorous experiments are done by varying the percentage of data used for training, testing and cross validation (CV), the number of hidden layers, the number of PEs, the transfer functions, the learning rules and the step size to obtain the optimal neural network. The optimal parameters for the MLP, GFF-NN and E-NN are as follows.

A. MLP (100-10-03)

Tag data = 80% training, 10% testing and 10% CV
Input PEs = 100
Output PEs = 3
Exemplars = 9833
Number of hidden layers = 01
Hidden layer-1
Number of PEs = 10
Transfer function = Linear Tanh
Learning rule = Momentum
Step size = 0.1
Momentum = 0.7
Output layer
Number of PEs = 03
Transfer function = Linear Tanh
Learning rule = Momentum
Step size = 0.1
Momentum = 0.7
Number of epochs = 1000
Number of runs = 03
Termination is after 100 epochs without any improvement.
Time elapsed per epoch per exemplar = 0.0032ms
Number of free parameters (P) for MLP = 1043
Number of exemplars in training dataset = 9833
N/P ratio = 9.43
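As a rough modern analogue, not the simulator configuration actually used in this work, the MLP above maps naturally onto scikit-learn; every hyperparameter below mirrors a line of the list, and the train/test split is assumed to be done beforehand.

from sklearn.neural_network import MLPClassifier

# Sketch of the MLP (100-10-03): one tanh hidden layer of 10 PEs, gradient
# descent with momentum, stopping after 100 epochs without improvement.
mlp = MLPClassifier(
    hidden_layer_sizes=(10,),
    activation="tanh",        # Linear Tanh approximated here by tanh
    solver="sgd",
    learning_rate_init=0.1,   # step size = 0.1
    momentum=0.7,             # momentum = 0.7
    max_iter=1000,            # number of epochs = 1000
    early_stopping=True,
    validation_fraction=0.1,  # the 10% CV share of the tag data
    n_iter_no_change=100,     # terminate after 100 epochs without improvement
)
mlp.fit(X_train, y_train)     # X_train: rows of 100 input features
print(mlp.score(X_test, y_test))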

B. GFF-NN (100-09-03)

Tag data = 80% training, 10% testing and 10% CV
Input PEs = 100
Output PEs = 03
Exemplars = 9833
Number of hidden layers = 01
Hidden layer-1
Number of PEs = 09
Transfer function = Linear Tanh
Learning rule = Momentum
Step size = 0.1
Momentum = 0.7
Output layer
Number of PEs = 03
Transfer function = Linear Tanh
Learning rule = Momentum
Step size = 0.1
Momentum = 0.7
Number of epochs = 1000
Number of runs = 03
Termination is after 100 epochs without any improvement.
Time elapsed per epoch per exemplar = 0.002ms
Number of free parameters (P) for GFF-NN = 939
Number of exemplars in training dataset = 9833
N/P ratio = 10.47
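The free-parameter count P and the N/P ratio reported for each network follow from simple weight-and-bias arithmetic. A sketch, assuming a fully connected single-hidden-layer topology, reproduces several of the counts quoted above (e.g. 1043 for the MLP and 939 for the GFF-NN of this case); variants with context or jump connections count their extra weights separately.

# P = input-to-hidden weights + hidden biases + hidden-to-output weights
#     + output biases, for one hidden layer.
def free_parameters(n_in, n_hidden, n_out):
    return n_in * n_hidden + n_hidden + n_hidden * n_out + n_out

P = free_parameters(100, 9, 3)  # GFF-NN (100-09-03) -> 939
print(P, 9833 / P)              # 939, and N/P ratio of about 10.47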

C. E-NN (100-05-03)

Tag data = 80% training, 10% testing and 10% CV
Input PEs = 100
Output PEs = 3
Exemplars = 9833
Number of hidden layers = 01
Topology = Second
Context Layer
Time = 0.8
Transfer Function = Integrator Axon
Number of PEs = 05
Hidden layer-1
Number of PEs = 05
Transfer function = Linear Tanh
Learning rule = Momentum
Step size = 0.1
Momentum = 0.7
Output layer
Number of PEs = 03
Transfer function = Linear Tanh
Learning rule = Momentum
Step size = 0.1
Momentum = 0.7
Number of epochs = 1000
Number of runs = 03
Termination is after 100 epochs without any improvement.
Time elapsed per epoch per exemplar = 0.0008ms
Number of free parameters (P) for E-NN = 823
Number of exemplars in training data set = 9833
N/P ratio = 11.94
[Table-2] shows the performance parameters for the MLP, GFF-NN and E-NN with the 100-segment input.
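The context layer of the E-NN above can be pictured with a short NumPy sketch. This is a minimal illustration, assuming a leaky-integrator context with time constant 0.8 (matching the Integrator Axon setting), not a faithful reproduction of the simulator.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, tau = 100, 5, 3, 0.8

W_in = rng.normal(scale=0.1, size=(n_hid, n_in))    # input -> hidden
W_ctx = rng.normal(scale=0.1, size=(n_hid, n_hid))  # context -> hidden
W_out = rng.normal(scale=0.1, size=(n_out, n_hid))  # hidden -> output

def forward(sequence):
    context = np.zeros(n_hid)                 # memory trace of the hidden layer
    for x in sequence:                        # one 100-feature exemplar per step
        hidden = np.tanh(W_in @ x + W_ctx @ context)
        context = tau * context + (1.0 - tau) * hidden  # integrator update
    return W_out @ hidden

y_out = forward(rng.normal(size=(4, n_in)))   # toy 4-step sequence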

Case - II

The feature vector used in the above case includes 100 input features, which demands a large amount of computation. Reducing the input dimensionality reduces the computational complexity, and this reduction can be achieved by Principal Component Analysis (PCA). [Fig-4] shows the overall architecture of the proposed PCA-based DSS. PCA is a feature enhancement procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of uncorrelated variables called principal components (PCs). The number of principal components is less than the number of original variables, and the transformation is defined in such a way that the first principal component has the highest possible variance. PCA is performed using XLSTAT 2011. Experimentation is done with different rules, namely Pearson (n), Pearson (n-1), Covariance (n), Covariance (n-1) and Spearman; of these, the results with Spearman are observed to be the best, as shown in [Table-3]. To get an optimal network structure, an input feature space containing a number of PCs is fed to the network; the number of inputs is gradually increased, and the network performance is observed carefully in terms of testing and CV MSE and classification accuracy. From [Fig-5], it is observed that the CV MSE and testing MSE are minimum and the classification accuracy is maximum when five PCs are selected as the input feature space. The performance measures of the MLP, GFF-NN and E-NN with the PC inputs are as follows.
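A minimal sketch of this reduction step, assuming that XLSTAT's Spearman rule amounts to PCA on rank-transformed, standardized variables, could look like this:

import numpy as np
from scipy.stats import rankdata
from sklearn.decomposition import PCA

def spearman_pca(X, n_components=5):
    # Rank-transform each feature column, standardize the ranks, then PCA.
    ranks = np.apply_along_axis(rankdata, 0, X)
    ranks = (ranks - ranks.mean(axis=0)) / ranks.std(axis=0)
    return PCA(n_components=n_components).fit_transform(ranks)

X_pcs = spearman_pca(X, n_components=5)  # five PCs, where the MSE bottoms out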

A. MLP (05-12-03)

Tag data = 80% training, 10% testing and 10% CV
Input PEs = 05
Output PEs = 3
Exemplars = 9833
Number of hidden layers = 01
Hidden layer-1
Number of PEs = 12
Transfer function = Linear Tanh
Learning rule = Levenberg Marquardt
Step size = 0.1
Momentum = 0.7
Output layer
Number of PEs = 03
Transfer function = Linear Tanh
Learning rule = Levenberg Marquardt
Step size = 0.1
Momentum = 0.7
Number of epochs = 1000
Number of runs = 03
Termination is after 100 epochs without any improvement.
Time elapsed per epoch per exemplar = 0.036ms
Number of free parameters (P) for MLP = 111
Number of exemplars in training data set = 9833
N/P ratio = 88.58

B. GFF-NN (05-10-03)

Tag data = 80% training, 10% testing and 10% CV
Input PEs = 05
Output PEs = 03
Exemplars = 9833
Number of hidden layers = 01
Hidden layer-1
Number of PEs = 10
Transfer function = Tanh
Learning rule = Levenberg Marquardt
Step size = 0.1
Momentum = 0.7
Output layer
Number of PEs = 03
Transfer function = Tanh
Learning rule = Levenberg Marquardt
Step size = 0.1
Momentum = 0.7
Number of epochs = 30
Number of runs = 03
Termination is after 10 epochs without any improvement
Time elapsed per epoch per exemplar = 0.015ms
Number of free parameters (P) for GFF-NN = 73
Number of exemplars in training data set = 9833
N/P ratio = 134.69

C. E-NN (05-14-03)

Tag data = 80% training, 10% testing and 10% CV
Input PEs = 05
Output PEs = 3
Exemplars = 9833
Number of hidden layers = 01
Topology = Second
Context Layer
Time = 0.8
Transfer Function = Integrator Axon
Number of PEs = 14
Hidden layer-1
Number of PEs = 14
Transfer function = Tanh
Learning rule = Delta Bar Delta
Step size = 0.1
Additive = 0.01
Multiplicand = 0.10
Smoothing = 0.5
Output layer
Number of PEs = 03
Transfer function = Tanh
Learning rule = Delta Bar Delta
Step size = 0.1
Additive = 0.01
Multiplicand = 0.10
Smoothing = 0.5
Number of epochs = 1000
Number of runs = 03
Termination is after 100 epochs without any improvement
Time elapsed per epoch per exemplar = 0.00026ms
Number of free parameters (P) for E-NN = 129
Number of exemplars in training data set = 9833
N/P ratio = 76.22
[Table-4] shows the performance parameters for the MLP, GFF-NN and E-NN with the five principal components as input. [Fig-6] compares the N/P ratio and the time elapsed per epoch per exemplar for the MLP, GFF-NN and E-NN.
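The Delta Bar Delta rule that appears in the E-NN configuration above (and again in Case-III) keeps a separate adaptive step size per weight. The following is a compact sketch of one update under the stated settings (additive 0.01, multiplicand 0.10, smoothing 0.5), assuming the standard Jacobs (1988) formulation rather than the simulator's exact implementation.

import numpy as np

def delta_bar_delta_step(w, grad, lr, bar, kappa=0.01, phi=0.10, theta=0.5):
    # Grow a weight's step size additively when the current gradient agrees
    # in sign with the smoothed gradient; shrink it multiplicatively otherwise.
    lr = np.where(bar * grad > 0, lr + kappa, lr)
    lr = np.where(bar * grad < 0, lr * (1.0 - phi), lr)
    bar = (1.0 - theta) * grad + theta * bar  # exponentially smoothed gradient
    return w - lr * grad, lr, bar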

Case – III

The feature vector is obtained from the features extracted by the FFT together with 11 statistical features, namely standard deviation (STDV), minimum, maximum, mean, entropy, minima, maxima, power spectral density (PSD), approximate entropy (ApEn) and the number of peaks. The dataset is prepared for interictal, ictal and healthy subjects by using all 100 segments of sets D, E and A, respectively. All the features are extracted using MATLAB 2008 and Microsoft Office Excel 2007. Rigorous experimentation is done by varying the number of hidden layers, the number of PEs, the number of exemplars for training and CV, the transfer function, the learning rule, the step size and the momentum to obtain the optimized neural network. The optimal parameters for the MLP, GFF-NN and E-NN are as follows.
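The exact MATLAB feature set is not reproduced here, but a sketch of extracting features of the kind listed above from a single segment, with placeholder choices for window and bin sizes, might be:

import numpy as np
from scipy.signal import welch, find_peaks
from scipy.stats import entropy

FS = 173.61  # Bonn sampling rate in Hz

def segment_features(segment):
    spectrum = np.abs(np.fft.rfft(segment))       # FFT magnitude features
    _, psd = welch(segment, fs=FS)                # power spectral density
    hist, _ = np.histogram(segment, bins=32, density=True)
    return {
        "std": segment.std(),
        "min": segment.min(),
        "max": segment.max(),
        "mean": segment.mean(),
        "entropy": entropy(hist + 1e-12),         # amplitude-distribution entropy
        "psd_mean": psd.mean(),
        "n_peaks": len(find_peaks(segment)[0]),   # number of peaks
        "fft_low": spectrum[:64],                 # low-frequency FFT coefficients
    }
# Approximate entropy (ApEn) is omitted here for brevity.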

A. MLP (75-14-3)

Tag data = 80% training and 20% CV
Input PEs = 75
Output PEs = 3
Exemplars = 240
Number of hidden layers = 01
Hidden layer-1
Number of PEs = 14
Transfer function = Tanh
Learning rule = Delta Bar Delta
Step size = 0.1
Additive = 0.01
Multiplicand = 0.10
Smoothing = 0.5
Output layer
Number of PEs = 03
Transfer function = Tanh
Learning rule = Delta Bar Delta
Step size = 0.1
Additive = 0.01
Multiplicand = 0.10
Smoothing = 0.5
Number of epochs = 1000
Number of runs = 03
Termination is after 100 epochs without any improvement.
Time elapsed per epoch per exemplar = 0.004ms
Number of free parameters (P) for MLP = 1109
Number of exemplars in training data set = 240
N/P ratio = 0.21

B. GFF-NN (75-14-03)

Tag data = 70% training and 30% CV
Input PEs = 75
Output PEs = 03
Exemplars = 210
Number of hidden layers = 01
Hidden layer-1
Number of PEs = 14
Transfer function = Tanh
Learning rule = Delta Bar Delta
Step size = 0.1
Additive = 0.01
Multiplicand = 0.10
Smoothing = 0.5
Output layer
Number of PEs = 03
Transfer function = Tanh
Learning rule = Delta Bar Delta
Step size = 0.1
Additive = 0.01
Multiplicand = 0.10
Smoothing = 0.5
Number of epochs = 1000
Number of runs = 03
Termination is after 100 epochs without any improvement
Time elapsed per epoch per exemplar = 0.0045ms
Number of free parameters (P) for GFF-NN = 1109
Number of exemplars in training data set = 210
N/P ratio = 0.189

C. E-NN (75-17-03)

Tag data = 80% training and 20% CV
Input PEs = 75
Output PEs = 3
Exemplars = 240
Number of hidden layers = 01
Topology = Second
Context Layer
Time = 0.8
Transfer Function = Integrator Axon
Number of PEs = 17
Hidden layer-1
Number of PEs = 17
Transfer function = Tanh
Learning rule = Delta Bar Delta
Step size = 0.1
Additive = 0.01
Multiplicand = 0.10
Smoothing = 0.5
Output layer
Number of PEs = 03
Transfer function = Tanh
Learning rule = Delta Bar Delta
Step size = 0.1
Additive = 0.01
Multiplicand = 0.10
Smoothing = 0.5
Number of epochs = 1000
Number of runs = 03
Termination is after 100 epochs without any improvement.
Time elapsed per epoch per exemplar = 0.0019ms
Number of free parameters (P) for E-NN = 1346
Number of exemplars in training data set = 240
N/P ratio = 0.178
[Table-5] shows the performance parameters for the MLP, GFF-NN and E-NN with the FFT and statistical parameter inputs.

Results and Conclusion

The effects of input dimensionality reduction on the performance of automated epilepsy diagnosis based on the MLP, GFF-NN and E-NN are explored in this paper. The performance parameters of these neural networks with the different input dimensionality reduction methods are shown in [Table-2], [Table-4] and [Table-5]. PCA, the FFT and statistical parameters are used for the input dimensionality reduction. It is observed that the N/P ratio is highest for the GFF-NN and MLP when principal components (PCs) are used as the input, indicating the simplicity of these networks. Comparing all three cases, it is evident that in case-II all three NNs have a more compact architecture than in case-I and case-III. As shown in [Table-6], the percentage reduction in free parameters achieved in case-II is 89.36%, 92.22% and 84.32% for the MLP, GFF-NN and E-NN, respectively. In case-III, by contrast, the number of free parameters increases significantly, by 6.3%, 18.1% and 63.54% for the MLP, GFF-NN and E-NN, respectively. In case-II the percentage reduction in free parameters for the GFF-NN is very high (92.22%) compared with the MLP and E-NN, meaning that the GFF-NN has the most compact architecture. From [Table-6], it is inferred that the PCA dimensionality reduction method along with the GFF-NN is efficient for epilepsy diagnosis. The average classification accuracy of the proposed GFF-NN-based DSS is 98.67% and 98.69% on the testing and cross validation data, respectively.
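These percentages follow directly from the free-parameter counts listed in the three cases; a quick check, using the values as reported above (agreeing with Table-6 up to rounding), is:

# Case-I -> Case-II free-parameter counts, as listed in the text above.
for name, p_case1, p_case2 in [("MLP", 1043, 111), ("GFF-NN", 939, 73), ("E-NN", 823, 129)]:
    reduction = 100.0 * (p_case1 - p_case2) / p_case1
    print(f"{name}: {reduction:.2f}% fewer free parameters")  # 89.36 / 92.23 / 84.33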

List of Abbreviations

ApEn Approximate Entropy
CV Cross Validation
DSS Decision Support System
EEG Electroencephalogram
FFT Fast Fourier Transform
NN Neural Network
N Number of exemplars in the training data set
P Number of free parameters
PCA Principal Component Analysis
PCs Principal Components
PSD Power Spectral Density
STDV Standard Deviation
Ʈ Time elapsed per epoch per exemplar

References

[1] Harikumar R. and Sabarikumar Narayanan B. (2003) IEEE Conference on Convergent Technologies for Asia-Pacific Region (TENCON).

[2] Harikumar R., Raghavan S. and Sukanesh R. (2005) IEEE Region 10 Conference (TENCON).

[3] Szilagyi L., Benyo Z. and Szilagyi M. (2002) Proceedings of the Second Joint EMBS/BMES Conference.

[4] Srinivasan V., Eswaran C. and Sriraam N. (2007) IEEE Transactions on Information Technology in Biomedicine, 11(3).

[5] Saha S.P., Bhattachria S., Roy B.K., Basu A., Roy T., Maity B. and Das S.K. (2008) Neurology Asia, 13, 41-48.

[6] Seizures and Epilepsy: Hope Through Research (2004) National Institute of Neurological Disorders and Stroke (NINDS).

[7] Gotman J. (1982) Electroencephalography and Clinical Neurophysiology, Elsevier, 54, 530-540.

[8] Weng W. and Khorasani K. (1996) Neural Networks, Elsevier, 9(7), 1223-1240.

[9] Andrzejak R.G., Lehnertz K., Mormann F., Rieke C., David P. and Elger C.E. (2001) Physical Review E, American Physical Society, 64, 061907.

[10] Srinivasan V., Eswaran C. and Sriraam N. (2004) International Conference on Signal Processing and Communication.

[11] Pravin Kumar S., Sriraam N. and Benakop P.G. (2008) IEEE Region 10 Conference (TENCON).

[12] Elman J.L. (1990) Cognitive Science, 14, 179-211.

[13] Pradhan N., Sadasivan P.K. and Arunodaya G.R. (1996) Computers and Biomedical Research, 29, 303-313.

[14] Akin M., Arserim M.A., Kiymik M.K. and Turkoglu I. (2001) 23rd Annual EMBS International Conference.

[15] Shukla A., Tiwari R. and Kaur P. (2009) World Congress on Computer Science and Information Engineering, IEEE.

[16] Ramírez-Vélez M., Staba R., Barth D.S. and Meyer F.G. (2006) IEEE International Symposium on Biomedical Imaging.

[17] Ghosh-Dastidar S., Adeli H. and Dadmehr N. (2008) IEEE Transactions on Biomedical Engineering, 55.

[18] Xie S., Lawniczak A.T., Song Y. and Lió P. (2010) IEEE International Workshop on Machine Learning for Signal Processing.

[19] Xie S. and Krishnan S. (2011) IEEE International Conference on Complex Medical Engineering.

[20] Du X., Dua S., Acharya U.R. and Chua C.K. (2011) Journal of Medical Systems.

Images
Fig. 1- Scheme of the location of surface electrodes according to the international 10-20 system
Fig. 2- Sample EEG signals from set A, D and E (top to bottom)
Fig. 3- Topology proposed by Elman
Fig. 4- Overview of PCA based DSS
Fig. 5- Variations in MSE and classification accuracy with a number of PCs as inputs
Fig. 6- Variations in N/P and Ʈ for MLP, GFF-NN and E-NN for case-II
Table 1- Description of Case I, Case II and Case III
Table 2- Performance parameters of MLP, GFF-NN and E-NN with 100-segment input
Table 3- Performance parameters of NN with different PCA rules
Table 4- Performance parameters of MLP, GFF-NN and E-NN with 5 principal components as input
Table 5- Performance parameters of MLP, GFF-NN and E-NN with FFT and statistical parameter inputs
Table 6- Comparison of performance parameters for Case-I, Case-II and Case-III