Transfer learning shows no improvement in training runtime

When configuring my network for transfer learning, I "freeze" the layers by setting 'WeightLearnRateFactor' and 'BiasLearnRateFactor' to 0, which is my understanding of MATLAB's intended way to implement transfer learning. However, given the same number of iterations and epochs, training my network takes the same amount of time as standard training. Are there any additional steps that must be taken to implement transfer learning in a custom neural network?
  1 Comment
Catalytic on 26 Apr 2024
I don't know that freezing the transferred layers is such a good idea. The documentation page on transfer learning instead recommends increasing the learn rate factors on the new layers.
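A minimal sketch of that doc-recommended pattern: keep the transferred layers at their default learn rate factor of 1 and boost the factors on the newly added final layer. The layer sizes and class count below are illustrative placeholders, not from the original question.

```matlab
numClasses = 5;  % illustrative class count for the new task

layers = [
    imageInputLayer([224 224 3])
    convolution2dLayer(3,16,'Padding','same')   % transferred layers: leave
    reluLayer                                   % learn rate factors at 1
    fullyConnectedLayer(numClasses, ...
        'WeightLearnRateFactor',10, ...         % boost learning on the new
        'BiasLearnRateFactor',10)               % layer, as in the doc example
    softmaxLayer
    classificationLayer];
```

The factor of 10 matches the value used in the transfer learning examples in the Deep Learning Toolbox documentation; the new layer then adapts quickly to the new task while the pretrained features change slowly.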


Answers (1)

Matt J on 26 Apr 2024
Edited: Matt J on 26 Apr 2024
Freezing layers changes which parameters get updated, not how much work each iteration does. If you run the same number of iterations and epochs with the same data, then naturally training will take the same amount of time. To shorten training, run fewer epochs or use fewer training data samples.
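Concretely, the training duration is controlled through trainingOptions rather than through the learn rate factors. A sketch, with illustrative values (imdsTrain and layers are assumed placeholders):

```matlab
% Fewer epochs (and/or fewer training samples) shortens training time;
% setting learn rate factors to 0 alone does not.
options = trainingOptions('sgdm', ...
    'MaxEpochs',5, ...            % fewer passes over the data
    'MiniBatchSize',64, ...
    'InitialLearnRate',1e-4, ...
    'Verbose',false);

% net = trainNetwork(imdsTrain,layers,options);  % imdsTrain, layers assumed
```

Transfer learning typically converges in fewer epochs than training from scratch, which is where the time savings come from.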
