Neural Net - Baseline classification above chance?

Hello all,
I very recently started using the Neural Network Toolbox, and I have a basic question about classification errors. I'm trying to sort my data into one of two categories, so chance-level classification should give 50% errors. I am performing classification to predict one of 2 stimulus features at a number of time points. Importantly, the time points before stimulus onset can NOT be predictive of the stimulus feature.
However, my average baseline classification errors have been only 30% in the pre-trial period ("confusion" function output "c" = .30).
1. Does this mean I should say my "empirical baseline" is 70% correct classification?
2. How could this be the case?
3. Are there better functions in the toolbox to prevent inflated estimates of classification accuracy?
4. Am I simply misunderstanding what the confusion error variable "c" means? I was under the impression that 1 - c = % correct classification (classifier accuracy).
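For reference, here is a minimal sketch of the call pattern I mean (the numbers are made-up placeholders; targets and outputs are in 1-of-N form, one column per sample):

  % Two-class example: rows = classes, columns = samples
  targets = [1 0 1 1 0;               % class 1 indicator
             0 1 0 0 1];              % class 2 indicator
  outputs = [0.9 0.2 0.6 0.4 0.1;     % network outputs (placeholder values)
             0.1 0.8 0.4 0.6 0.9];
  c = confusion(targets, outputs);    % fraction of samples misclassified
  accuracy = 1 - c;                   % fraction classified correctly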
I would appreciate any help. I realize my questions are lengthy, so feel free to just point me in the direction of some reading materials/resources so I can educate myself.
Thank you!

Accepted Answer

Greg Heath on 15 Aug 2014
Changing notation: if there are c classes with sizes Ni (i = 1:c) and N = sum(Ni, i = 1:c), then the a priori probabilities are Pi = Ni/N.
With no other information "prior" to making calculations and/or measurements, the Naïve Bayes classifier will assign all inputs to the class with the maximum prior probability. The corresponding percent classifier accuracy is 100*max(Pi).
If the classes are the same size, the percent classification accuracy is 100*(1/c) (50% for c = 2, 33.3% for c = 3, etc.).
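To make the arithmetic concrete, a quick sketch with made-up class sizes:

  Ni = [120 80];               % class sizes (placeholder numbers)
  N  = sum(Ni);                % total number of samples
  Pi = Ni / N;                 % a priori probabilities, Pi = Ni/N
  baselinePct = 100 * max(Pi)  % assign everything to the largest class:
                               % 60% here; 100*(1/c) = 50% for two
                               % equal-sized classes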
Hope this helps.
Thank you for formally accepting my answer
Greg
P.S. I probably have posted details in comp.ai.neural-nets and comp.soft-sys.matlab (and maybe even ANSWERS). Try searching on
greg prior (or priors)
  1 Comment
Kirsten on 15 Aug 2014
Hi Greg,
Thanks for your quick response! I appreciate your advice.
I had actually forgotten to take equal numbers of trials for the groups. I just reran the classifier with equal numbers of trials in the two groups (Pi = .50). However, I am still getting baseline classification above chance. This shouldn't be due to different priors for my trial labels. Could it be due to some sort of unexpected bias in my system? Or the particular classification algorithms and parameters I am using?
Thanks for any advice.


More Answers (0)
