Code covered by the BSD License  

4.7 (16 ratings) | 321 downloads (last 30 days) | File size: 4.57 MB | File ID: #42853

Deep Neural Network

by Masayuki Tanaka

 

29 Jul 2013 (Updated 15 Aug 2014)

It provides deep learning tools for deep belief networks (DBNs).


File Information
Description

Run testDNN to try it!
Each function includes a description; please check it!
The toolbox provides deep learning tools for deep belief networks (DBNs) built from stacked restricted Boltzmann machines (RBMs). It includes the Bernoulli-Bernoulli RBM, the Gaussian-Bernoulli RBM, contrastive divergence learning for unsupervised pre-training, a sparsity constraint, back projection for supervised training, and the dropout technique.

Sample code for the MNIST dataset is included in the mnist folder; please see readme.txt there.
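A minimal pre-train / fine-tune sketch of the typical workflow (the layer sizes, the randDBN constructor name, and the trainDBN call signature below are assumptions; testDNN.m shows the exact calls):

IN  = rand(200, 32);                  % 200 samples, 32 visible units
OUT = double(rand(200, 2) > 0.5);     % 200 samples, 2 output units

nodes = [32 80 80 2];                 % visible -> hidden -> hidden -> output
dnn   = randDBN(nodes);               % assumed constructor name

opts.maxIter = 100;                   % number of training iterations
dnn = pretrainDBN(dnn, IN, opts);     % unsupervised layer-wise pre-training
dnn = trainDBN(dnn, IN, OUT, opts);   % supervised fine-tuning (assumed signature)

est = v2h(dnn, IN);                   % forward pass: visible units -> outputs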

Hinton et al., Improving neural networks by preventing co-adaptation of feature detectors, 2012.
Lee et al., Sparse deep belief net model for visual area V2, NIPS 2008.
http://read.pudn.com/downloads103/sourcecode/math/421402/drtoolbox/techniques/train_rbm.m__.htm

Modified the implementation of dropout.
Added the cross-entropy objective function for neural network training.

This toolbox includes an implementation of the following paper. If you use the toolbox, please cite:
Masayuki Tanaka and Masatoshi Okutomi, A Novel Inference of a Restricted Boltzmann Machine, International Conference on Pattern Recognition (ICPR2014), August 2014.

Required Products: MATLAB
MATLAB release: MATLAB 7.14 (R2012a)
Comments and Ratings (44)
29 Aug 2014 seoul

I have a question.
The RBM pre-training always works well, but sometimes back-propagation does not make any progress. Why is that?

For example:
RBM: 500 epochs, back-propagation: 2000 epochs
Node structure: [784 1568 1568 10]
iter 1 -> train MSE = 0.8
iter 2 -> MSE = 0.8
.
.
.
iter 1000 -> MSE = 0.8

29 Aug 2014 Liang ZOU  
25 Aug 2014 Masayuki Tanaka

Hi Seoul,

That is actually a great question.
The BBRBM is basically developed assuming binary inputs, but it can also handle real values between 0 and 1. That is one of the key points of my ICPR 2014 paper. If you are interested, please check the paper:

Masayuki Tanaka and Masatoshi Okutomi, A Novel Inference of a Restricted Boltzmann Machine, International Conference on Pattern Recognition (ICPR2014), August, 2014.

Thanks.

24 Aug 2014 seoul

Why does the BBRBM also work well when the visible neurons take real values between 0 and 1?

24 Aug 2014 seoul  
01 Aug 2014 Junxing  
18 Jul 2014 Masayuki Tanaka

Hi Andre Filipe,

Honestly, I could not tell what kind of network structure you want. But if you set [10, 2] when creating a DBN, it means 10 input nodes and 2 output nodes with no hidden nodes.

Thanks.

18 Jul 2014 Masayuki Tanaka

Hi Ari,

I have run testDNN, but I could not reproduce the error you mentioned.

Thanks.

18 Jul 2014 Masayuki Tanaka

Hi Alena,

readme.txt is in the mnist folder. Please check it!

Thanks.

09 Jul 2014 Alena

Could you please send me a detailed readme.txt? I cannot download the readme. Thanks very much!

23 Jun 2014 Andre Filipe

Hi,

One thing I am not understanding: let's say I have input=[rand(100,2)*0.1+0.3;rand(100,2)*0.1+0.6] and output=[zeros(100,1),ones(100,1);ones(100,1),zeros(100,1)], and I want to create a DBN with two layers (10 and 2 nodes, in that order). Shouldn't nodes then be equal to [10 2]? Currently, it gives an inner matrix dimension error at v2h. Please help.
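A small sketch of the node-size convention from Masayuki's reply above (the randDBN constructor name is an assumption): because this input has 2 columns, the first entry of nodes must be 2, e.g.

IN  = [rand(100,2)*0.1+0.3; rand(100,2)*0.1+0.6];              % 2-dimensional input
OUT = [zeros(100,1), ones(100,1); ones(100,1), zeros(100,1)];  % 2 output nodes
nodes = [2 10 2];       % 2 input nodes, one 10-node hidden layer, 2 output nodes
dnn = randDBN(nodes);   % with nodes = [10 2], the 10 would be treated as the input
                        % size, which is what triggers the inner matrix error in v2h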

16 Jun 2014 Ari  
12 Jun 2014 Ari

Hi,
Thanks for sharing this code. I downloaded the latest version and tried to run testDNN as recommended. I got these errors:
??? Error using ==> randperm
Too many input arguments.

Error in ==> pretrainRBM at 172
p = randperm(dimV, DropOutNum);

Error in ==> pretrainDBN at 88
dbn.rbm{i} = pretrainRBM(dbn.rbm{i}, X, opts);

Error in ==> testDNN at 21
dnn = pretrainDBN(dnn, IN, opts);

Thanks,
Ari
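A possible compatibility workaround for the error above (an assumption, since the two-argument form randperm(n,k) is missing in older MATLAB releases): draw a full permutation at pretrainRBM line 172 and keep the first DropOutNum indices.

p = randperm(dimV);     % full permutation of 1..dimV
p = p(1:DropOutNum);    % keep DropOutNum randomly chosen indices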

14 May 2014 hiba

Hello, can you send me the technical report for this program?

03 Apr 2014 ted p teng  
03 Apr 2014 siddhartha

When I run the code by Masayuki Tanaka in MATLAB, I train an RBM with real-valued input samples and binary hidden units.

Now I want to feed in a new input sample and find out its classification.

Which function in the toolbox should I use for this? Is it calRMSE?

Also, the values it gives are decimals, so how will I know which class my input sample has been classified into?

Example code:

rbm=randRBM( 3, 3, 'GBRBM' )

V=[0.5 -3 1;-0.5 2 0;-0.25 -0.44 1];

rbm=pretrainRBM(rbm, V)

Now, once trained, should I use
v2hall(rbm, [-0.5 -0.5 0])
on the new input vector?
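One hedged way to read a class from the real-valued activations described above (the use of v2h on a single RBM and the argmax step are assumptions about the intended usage, not a documented classification routine of the toolbox):

rbm = randRBM(3, 3, 'GBRBM');
V   = [0.5 -3 1; -0.5 2 0; -0.25 -0.44 1];
rbm = pretrainRBM(rbm, V);

H = v2h(rbm, [-0.5 -0.5 0]);    % real-valued hidden activations in [0,1]
[~, cls] = max(H, [], 2);       % take the most active hidden unit as the class index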

01 Apr 2014 MA

Hi Tanaka, I use this toolbox to train a GBRBM, but in your code h2v.m there is no Gaussian sampling:
h = bsxfun(@times, H * dnn.W', dnn.sig);
V = bsxfun(@plus, h, dnn.c );
I think there should be a Gaussian sampling step: normrnd(h+dnn.c, dnn.sig)
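A sketch of the Gaussian sampling MA suggests (an assumption about how it could be added, not the toolbox's actual h2v.m; it draws each visible unit from N(h + c, sig)):

h  = bsxfun(@times, H * dnn.W', dnn.sig);
mu = bsxfun(@plus, h, dnn.c);
V  = mu + bsxfun(@times, randn(size(mu)), dnn.sig);   % element-wise Gaussian sample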

01 Apr 2014 siddhartha

Hi, I just have one question.
I am using the GBRBM type of RBM.

I give it a training set, train it, and it returns an RBM with W, b, c, and sig.

Now, when I give it a new input, should the output be a binary vector whose length equals the number of hidden neurons, since it will indicate which hidden neuron corresponds most strongly to the input?

How do I feed in the new input, and is my approach correct?

19 Mar 2014 ling

Thanks for sharing this easy-to-use package.
It works very well.
I have a question: besides the setting opts.maxIter, I have not found any other criterion for stopping training. How do I decide when to stop training?
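A sketch of a manual stopping rule built on opts.maxIter (the trainDBN signature, the variable names, and the assumption that repeated calls continue from the current weights are all unverified here):

opts.maxIter = 50;                                     % train in short rounds
bestErr = inf;
for round = 1:20
    dnn = trainDBN(dnn, trainIN, trainOUT, opts);      % assumed signature
    err = mean(mean((v2h(dnn, valIN) - valOUT).^2));   % validation MSE
    if err >= bestErr, break; end                      % stop when it no longer improves
    bestErr = err;
end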

19 Mar 2014 ling

Sorry, I was wrong. It should be nrbm-1.

07 Mar 2014 TabZim

Thanks a lot for enhancing our understanding with this well-commented code. I have a query regarding the sparsity constraint imposed on the RBM, i.e. in the pretrainRBM function. When updating the hidden biases according to the sparsity constraint, why have you multiplied the gradients by 2?

dsW = dsW + SparseLambda * 2.0 * bsxfun(@times, (SparseQ-mH)', svdH)';

dsB = dsB + SparseLambda * 2.0 * (SparseQ-mH) .* sdH;

This does not match any update equation given by Lee et al. Could you please elaborate on this? Many thanks!
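One plausible reading of the factor of 2 (an assumption about the implementation, not a statement from the author): if the sparsity penalty is the squared deviation of the mean hidden activation from the target, lambda*(SparseQ - mean(H))^2, then its gradient with respect to any parameter carries a factor of 2, i.e. -2*lambda*(SparseQ - mean(H)) times the gradient of mean(H), which matches the SparseLambda * 2.0 * (SparseQ-mH) terms quoted above.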

04 Mar 2014 Sanjanaashree

Hello Sir, I am working on machine transliteration. Since I am a newbie to DBNs, I wish to know whether I will be able to use this code on a matrix created using a one-hot representation of the data.

27 Feb 2014 Masayuki Tanaka

Hi Yong Ho,

Thank you for your comment.
The linear mapping is just an option; you don't need to use it. But the training requires initial parameters, and I think the linear mapping is one candidate for setting them.
If you know a better initial parameter setting, please let me know.

Thanks.

26 Feb 2014 Yong Ho

Hello,

Your code is very helpful for me.

But while studying your code, I wondered why you use a linear mapping to calculate the weights between TrainLabels and the last hidden nodes?

Is there any advantage to using a linear mapping?

25 Jan 2014 Masayuki Tanaka

Hi Adel,

Thank you for your comment and the good rating!
But my code does not include the sparse RBM feature.

Thanks!

21 Jan 2014 Adel

Dear Prof.,

I used the library and found it very useful and easy to use. But when I read the paper

"Lee et al., Sparse deep belief net model for visual area V2, NIPS 2008"

I found that it uses a sparse RBM, and I want to know how I can apply and use a sparse RBM in my application as in that paper.

I would appreciate any help or advice about this.

Thanks.

13 Jan 2014 Masayuki Tanaka

Hi Usman,

I think you can use the Gaussian RBM instead of the Bernoulli RBM.

Thank you!

10 Jan 2014 ashkan abbasi

thanks for your generosity!

06 Jan 2014 Usman

Hi Xiaowang,

You just need to run output=v2h(bbdbn,TestImages) to get the output labels.
Match these with TestLabels to verify your output.
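A sketch of that matching step (the variable names follow the comment above; the argmax comparison assumes one-hot TestLabels, and the toolbox's CalcErrorRate presumably does something similar):

out = v2h(bbdbn, TestImages);
[~, pred]  = max(out, [], 2);          % predicted class per test image
[~, truth] = max(TestLabels, [], 2);   % true class from the one-hot labels
errRate = mean(pred ~= truth);         % fraction of misclassified test images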

27 Dec 2013 Usman

Hi,
I am using this toolbox with speech recognition features as the input/visible units. However, my features are both negative and positive, and their magnitudes are greater than 1. Can you please help me: can I use negative values as visible units, or do I have to normalize the features to between 0 and 1? Also, how should I handle zero-mean, unit-variance standardization, since standardization makes the data greater than 1, while normalizing makes it lose the zero-mean, unit-variance distribution?
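A sketch of the zero-mean, unit-variance standardization discussed above, for use with the Gaussian RBM (an assumption about preprocessing, not advice from the author; X is N samples by D features):

mu = mean(X, 1);
sg = std(X, 0, 1);
Xn = bsxfun(@rdivide, bsxfun(@minus, X, mu), sg);   % negative values are fine for a GBRBM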

27 Dec 2013 xiaowang

I have read your MNIST code and have some questions. You use only the train images and train labels to train the DNN, but I did not see the test images being used to compute the error rate against the test labels. I do not know DNNs well, so maybe I am wrong...

08 Dec 2013 Xin

It is a wonderful toolbox.

But could you please upload a tutorial-like file?

That would be more helpful for us.

Many thanks!!

05 Dec 2013 Masayuki Tanaka

Hi Arjun,

Thank you for your bug report!
I fixed the bug and updated the package. Please use DeepNeuralNetwork20131204.zip

Thank you!

03 Dec 2013 Arjun

There seems to be a bug on line 205 in trainDBN.m when using GBDBN.

18 Nov 2013 Ming-Ju

It works! Awesome implementation!

10 Nov 2013 eve

Thanks a lot Masayuki!!!! :-)

08 Nov 2013 Masayuki Tanaka

Hi Sigurd,

Thank you for your comment. I think the random inputs and outputs in testDNN produce such training results. If you train with the MNIST dataset, for example, I believe you will get a reasonable model.

You can get the MNIST dataset from:
http://yann.lecun.com/exdb/mnist/

Thanks!

08 Nov 2013 Sigurd

Hi,

Thanks for providing the code.

Running testDNN, the trained net doesn't actually appear to model the data: 1) v2h(dnn,IN) yields an output layer that is (nearly) constant for all inputs; 2) the RMSE is equal to (and sometimes greater than!) the standard deviation of the data; 3)

...just a little puzzled as to what is going on...

Cheers,
Sigurd

06 Nov 2013 altan  
05 Nov 2013 Masayuki Tanaka

Hi eve,

Thank you very much for your comment. That was a bug, and I have already fixed it.
Please use DeepNeuralNetwork20131105

Thank you again!

04 Nov 2013 eve

Hi,

I tried running testDNN.m, but I got an error in v2h.m at line 39 because the sizes of V and dnn.W don't match. Is that a new bug?
Thanks

23 Aug 2013 Masayuki Tanaka

Chong, thanks a lot!
I have fixed it and updated the code.

22 Aug 2013 chong

if( Object = OBJECTSQUARE )   <-- shouldn't this be == ?
der = derSgm .* der;
end

30 Jul 2013 Hwa-Chiang

Nice and Nice!

Updates
22 Aug 2013

Modified the implementation of dropout.
Added the cross-entropy objective function for neural network training.

23 Aug 2013

Debugged. Thank you, chong!

23 Aug 2013

Some bugs were fixed.

23 Sep 2013

Bug fixed in GetDroppedDBN.

24 Sep 2013

Modified testDNN.m

05 Nov 2013

Bug fix.

08 Nov 2013

The bug is fixed.

15 Nov 2013

Sample code for the MNIST dataset is included.

04 Dec 2013

Fixed the bug in trainDBN.m for GBDBN.

13 Dec 2013

CalcErrorRate is debugged.

13 Jan 2014

Added a sample for evaluating the MNIST test data.

15 Aug 2014

Added the implementation of the ICPR 2014 algorithm.
