
Thread Subject:
Neural Network with Binary Inputs

Subject: Neural Network with Binary Inputs

From: Jude

Date: 16 Jan, 2013 19:40:09

Message: 1 of 5

I am using the Neural Network Toolbox to prove a concept, and I would like to use binary inputs for learning. Is there a special learning algorithm available for binary inputs? Or should I change any arguments in this call to fit binary inputs: newff(xll',y_learn,[20],{'tansig','tansig'},'trainbfg','learngdm','msereg'); ?

I am using it as follows:
NETff = newff(xll',y_learn,[20],{'tansig','tansig'},'trainbfg','learngdm','msereg');
NETff.trainParam.epochs = 100000;
NETff.trainParam.goal = 0.00001;
 
NETff= train(NETff,xll',y_learn);
Yff = sim(NETff,xll');

Where xll' is a binary vector, e.g. 1010101010.

Thanks.
Jude

Subject: Neural Network with Binary Inputs

From: Greg Heath

Date: 17 Jan, 2013 00:12:11

Message: 2 of 5

"Jude" wrote in message <kd6vmo$93u$1@newscl01ah.mathworks.com>...
> I am using the Neural Network Toolbox to prove a concept, and I would like to use binary inputs for learning. Is there a special learning algorithm available for binary inputs? Or should I change any arguments in this call to fit binary inputs: newff(xll',y_learn,[20],{'tansig','tansig'},'trainbfg','learngdm','msereg'); ?

The only serious input recommendation I have is to use bipolar binary {-1,1} and 'tansig' (default) for the hidden layer. In addition, why not transpose xll once and for all instead of doing it in multiple commands?

For outputs, the transfer and training functions depend on the type of target:

Reals: 'purelin' and 'trainlm'(default)

Unipolar binary {0,1}: 'logsig' and 'trainscg' % use for classification with vec2ind/ind2vec

Bipolar binary {-1,1}: 'tansig' and 'trainscg'
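Putting the recommendations above together, a minimal sketch (variable names follow the thread; converting xll from {0,1} to {-1,1} is an assumption about its current encoding):

```matlab
% Transpose once and map unipolar {0,1} inputs to bipolar {-1,1}
% (assumes xll currently holds 0/1 values, one case per row)
x = 2*xll' - 1;
t = y_learn;                     % assumed bipolar {-1,1} targets

% 'tansig' hidden and output layers with 'trainscg' for bipolar targets
net = newff(x, t, 20, {'tansig','tansig'}, 'trainscg');
[net, tr, y, e] = train(net, x, t);
```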

> I m using as follows:
> NETff = newff(xll',y_learn,[20],{'tansig','tansig'},'trainbfg','learngdm','msereg');

Why are you using validation stopping (default) AND 'msereg'? Is it because H = 20 is definitely overfitting? Just use a more practical value for H; see below.

What size are your input and target matrices?

For an [I N] input matrix and an [O N] target matrix, you will have Neq = N*O equations to estimate Nw = (I+1)*H + (H+1)*O unknown weights. Without validation stopping or regularization, it is wise to keep Neq > r*Nw for r > 1, i.e.,

  H < (Neq/r - O) / (I+O+1)   % r > 1

I have successfully used H small enough so that roughly 2 <= r <= 8, or even r up to 20. For smaller values of r I recommend validation stopping or regularization. I feel better using this ratio as a guide rather than just using a very large value for H (like, um, 20?) and covering up by using both validation stopping and regularization.
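As a numerical sketch of that sizing rule (the problem sizes here are made up for illustration):

```matlab
% Hypothetical problem sizes: 10-dimensional input, scalar target, 200 cases
I = 10;  O = 1;  N = 200;
r = 4;                                   % safety ratio in the ~2..20 range
Neq  = N*O;                              % number of training equations
Hmax = floor((Neq/r - O)/(I + O + 1));   % largest H keeping Neq > r*Nw
Nw   = (I+1)*Hmax + (Hmax+1)*O;          % resulting weight count
```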

> NETff.trainParam.epochs = 100000;

What is wrong with the default?

> NETff.trainParam.goal = 0.00001;

MSEgoal ~ 0.01*mean(var(y_learn')) % or tighten 0.01 to 0.005
  
> NETff= train(NETff,xll',y_learn);

[ NETff tr Yff Eff ] = train(NETff,xll',y_learn);

> Yff = sim(NETff,xll');

Unnecessary: the train call above already returns Yff.
 
> Where xll' is a binary number, eg: 1010101010

Use bipolar binary

> Thanks.
> Jude

OKEY-DOKE

Greg

PS: try tr = tr and see all the goodies that are in that structure!
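For example (a sketch; the exact field names in the training record can vary by toolbox version):

```matlab
tr                % display the whole training record returned by train
tr.epoch(end)     % number of epochs actually run
tr.perf(end)      % final training-set performance (MSE)
```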

Subject: Neural Network with Binary Inputs

From: Jude

Date: 25 Jan, 2013 19:12:08

Message: 3 of 5

Thanks Greg! Lots to digest, but I will work through each of your inputs and update you with the results soon. Thanks again.
Jude

Subject: Neural Network with Binary Inputs

From: Jude

Date: 30 Jan, 2013 19:19:08

Message: 4 of 5

Hi Greg,

Thank you for your help. I incorporated most of your inputs.

"Greg Heath" <heath@alumni.brown.edu> wrote in message <kd7fkr$2h7$1@newscl01ah.mathworks.com>...
> [snip]
> Why are you using validation stopping (default) AND 'msereg' ? Because H = 20 is
> definitely overfitting? Just use a more practical value for H. See below.
>
How can I change this so that validation stopping does not kick in?
NETff = newff(xll',y_learn,[20],{'tansig','tansig'},'trainbfg','learngdm');
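One way to disable validation stopping entirely (my addition, not from the thread, and it assumes a toolbox version whose networks expose the divideFcn property) is to assign all data to the training set:

```matlab
NETff = newff(xll', y_learn, 20, {'tansig','tansig'}, 'trainbfg');
NETff.divideFcn = 'dividetrain';  % all samples train; no validation or test split
```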


> [snip]

My issues are:
Once I complete the NN training in the GUI (nntraintool), it shows 10 inputs (I have given 10 binary inputs). However, when I display my net at the command prompt (NETff), it shows only one input.

NETff =
    dimensions:
 
         numInputs: 1
         numLayers: 3
        numOutputs: 1
Does MATLAB automatically change my binary inputs to a scalar?

Thanks.

Subject: Neural Network with Binary Inputs

From: Greg Heath

Date: 31 Jan, 2013 14:40:09

Message: 5 of 5

IT IS CONSIDERED A HEINOUS CRIME TO TOP-POST.
PLEASE REFRAIN FROM WRITING REPLIES ABOVE PREVIOUS ENTRIES.
I HAVE MOVED YOUR REPLY TO THE END.

"Jude" wrote in message <kebrnc$cve$1@newscl01ah.mathworks.com>...
> [snip]
> My issues are:
> Once I complete the NN training in the GUI (nntraintool), it shows 10 inputs (I have given 10 binary inputs). However, when I display my net at the command prompt (NETff), it shows only one input.
>
> NETff =
> dimensions:
>
> numInputs: 1
> numLayers: 3
> numOutputs: 1
> Does MATLAB automatically change my binary inputs to a scalar?

I think you mean vector inputs to scalar.

No. The one input is 10-dimensional.

However, your binary inputs are changed to real values for computational purposes.

Hope this helps.

Greg

P.S. Unless you have a specific reason for doing otherwise, use only one hidden layer. One hidden layer is sufficient for a universal approximator.
