CNN architecture for producing an intensity map as output.

I am implementing a CNN for pore detection in fingerprints. The network takes a fingerprint image as input and outputs a pore intensity map. The labels are soft labels: each label is itself an image whose pixel values encode the distance from the nearest pore point. My sample code is attached below. I have two issues:
  1. In the paper I am following, the last layer is a convolution layer, but MATLAB gave an error saying the network does not have an output layer, so I added a regression layer as the output. Is this the correct way to do it? (See the minimal sketch after this question.)
  2. While training, the mini-batch loss and mini-batch RMSE are both reported as NaN. (A sketch of what I am trying is after my code below.)
Regards.
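
For question 1, here is a minimal sketch of the output pattern I have in mind (this is just my reduced test, not the paper's full architecture): a final single-filter convolution produces the one-channel intensity map, and a regression layer terminates the network; analyzeNetwork is only there to confirm MATLAB now accepts the network as having an output layer.
testLayers = [
imageInputLayer([80 80 1],'Normalization','none')
convolution2dLayer(3,64,'Padding','same')
reluLayer
convolution2dLayer(3,1,'Padding','same') % 1 filter -> single-channel intensity map
regressionLayer('Name','routput') % per-pixel regression output
];
analyzeNetwork(testLayers); % check that the network has a valid output layer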
%% Program: deep learning for fingerprint pore detection
clc;
close all;
clear;
%%define layers
% Custom output layer and custom-initialized conv layer.
% Note: these are defined here but not inserted into the `layers` array below,
% so they have no effect on the trained network as written.
layer = regressionLayer('Name','routput');
conv1 = convolution2dLayer(3,64,'Padding','same');
conv1.Weights = randn([3 3 1 64]) * 0.0001; % small random weights
conv1.Bias = randn([1 1 64]) * 0.00001 + 1; % biases near 1
layers = [
imageInputLayer([80 80 1],'Normalization','none') % size and channel
convolution2dLayer(3,64,'Padding','same') % size of filter, no of filter
batchNormalizationLayer
reluLayer
convolution2dLayer(3,64,'Padding','same') % size of filter, no of filter
batchNormalizationLayer
reluLayer
convolution2dLayer(3,64,'Padding','same') % size of filter, no of filter
batchNormalizationLayer
reluLayer
convolution2dLayer(3,64,'Padding','same') % size of filter, no of filter
batchNormalizationLayer
reluLayer
convolution2dLayer(3,64,'Padding','same') % size of filter, no of filter
batchNormalizationLayer
reluLayer
convolution2dLayer(3,64,'Padding','same') % size of filter, no of filter
batchNormalizationLayer
reluLayer
convolution2dLayer(3,64,'Padding','same') % size of filter, no of filter
batchNormalizationLayer
reluLayer
convolution2dLayer(3,64,'Padding','same') % size of filter, no of filter
batchNormalizationLayer
reluLayer
convolution2dLayer(3,64,'Padding','same') % size of filter, no of filter
batchNormalizationLayer
reluLayer
convolution2dLayer(3,1,'Padding','same') % 1 filter -> single-channel pore intensity map
regressionLayer % regression output so the network has an output layer
];
% Note: there is no need to construct a SeriesNetwork manually;
% trainNetwork below accepts the layer array directly and returns the trained network.
% net1 = SeriesNetwork(layers);
%% specify training options
options = trainingOptions('sgdm',...
'LearnRateSchedule','piecewise',...
'LearnRateDropFactor',0.2,...
'LearnRateDropPeriod',5,...
'MaxEpochs',100,...
'MiniBatchSize',5,...
'InitialLearnRate',0.001,...
'Plots','training-progress');
%%pass training data and labels
% load deep_data_pore.mat;
% load deep_data_pore_label_withimage.mat;
load deep_pore_images.mat;
load deep_pore_labels.mat;
% imds = imageDatastore('F:\data\deep_data\img_patch\*.jpg','FileExtensions','.jpg');
% images: 80x80x1xN input patches; Labels: 80x80x1xN distance-map responses
trainedNet = trainNetwork(images,Labels,layers,options);
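
For question 2, this is a sketch of the changes I am experimenting with to get rid of the NaN mini-batch RMSE; the normalization and the option values below are my own guesses, not taken from the paper, and they assume images and Labels are 80x80x1xN numeric arrays with uint8 image patches.
images = single(images)/255; % scale uint8 input patches to [0,1]
Labels = single(rescale(Labels)); % scale distance-map labels to [0,1]
options = trainingOptions('sgdm',...
'InitialLearnRate',1e-4,... % smaller initial learning rate
'GradientThreshold',1,... % clip gradients to avoid NaN blow-ups
'LearnRateSchedule','piecewise',...
'LearnRateDropFactor',0.2,...
'LearnRateDropPeriod',5,...
'MaxEpochs',100,...
'MiniBatchSize',5,...
'Plots','training-progress');
trainedNet = trainNetwork(images,Labels,layers,options);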
