This example shows the implementation of a traditional camera processing pipeline that renders an RGB image from a RAW Bayer-pattern color filter array (CFA) image. The example also shows an alternative RGB creation workflow using the RAW file format support functions in the toolbox, such as rawread, rawinfo, and raw2rgb.
Digital single-lens reflex (DSLR) cameras, and many modern phone cameras, can save data collected from the camera sensor directly to a RAW file. Each pixel of RAW data is the amount of light captured by the corresponding camera photosensor. The data depends on fixed characteristics of the camera hardware, such as the sensitivity of each photosensor to a particular range of wavelengths of the electromagnetic spectrum. The data also depends on camera acquisition settings, such as exposure time, and factors of the scene, such as the light source.
These are the main processing steps in a traditional camera processing pipeline:
Import the RAW file contents
Linearize the acquired CFA image
Scale the CFA data to a suitable range
Apply white-balance adjustment
Demosaic and rotate
Convert the CFA image to an RGB image
You can also apply additional postprocessing steps, such as denoising, highlight clipping, and contrast adjustment (see the sketch near the end of this example).
Cameras acquire images and create RAW files. These RAW files contain:
A CFA image recorded by the photosensor of the camera
Metadata, which contains all information needed to render an RGB image
While a RAW file can also contain a camera-generated JPEG preview image, or a JPEG thumbnail, you only need the CFA image and metadata to implement a camera pipeline.
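If you only want to inspect the camera-generated preview, recent toolbox releases provide a rawpreviewread function for this purpose. The following is a minimal sketch, assuming your release includes that function and that the file contains a preview image.
% Sketch: read and display the embedded preview image, if one exists.
% Assumes the rawpreviewread function is available in your toolbox release.
previewImage = rawpreviewread("colorCheckerTestImage.NEF");
imshow(previewImage)
title("Camera-Generated Preview Image")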
Read a CFA test image from a file using the rawread function and display it.
fileName = "colorCheckerTestImage.NEF"; cfaImage = rawread(fileName); whos cfaImage
  Name            Size                Bytes  Class     Attributes

  cfaImage      4012x6034          48416816  uint16
imshow(cfaImage,[])
title("Linear CFA Image")
Many cameras mask a portion of the photosensor, typically at the edges, to prevent those sections from capturing any light. This enables you to accurately determine the black level of the sensor. For such cameras, the number of pixels in the image is smaller than the number of pixels in the sensor.
For example, use the rawinfo function to retrieve the metadata from the test CFA image. In the metadata, note that the column value in the VisibleImageSize field is smaller than in the CFAImageSize field. The rawread function, by default, reads only the visible portion of the CFA.
fileInfo = rawinfo(fileName);
if fileInfo.CFASensorType ~= "Bayer"
    error("The input file %s has a CFA image created by a %s sensor. " + ...
        "The camera pipeline in this example works only for a CFA image created by a Bayer sensor.", ...
        fileName,fileInfo.CFASensorType);
end
disp(fileInfo.ImageSizeInfo)
CFAImageSize: [4012 6080]
VisibleImageSize: [4012 6034]
VisibleImageStartLocation: [1 1]
PixelAspectRatio: 1
ImageRotation: 0
RenderedImageSize: [4012 6034]
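To see how much of the sensor is masked, compare the two size fields in the metadata. The VisibleImageOnly name-value argument in the second statement below is an assumption about the rawread interface; check the rawread documentation for your release before relying on it.
% Number of masked sensor columns (6080 - 6034 = 46 for this test image)
numMaskedColumns = fileInfo.ImageSizeInfo.CFAImageSize(2) - fileInfo.ImageSizeInfo.VisibleImageSize(2);

% Assumed name-value argument: read the full sensor area, including masked pixels
fullCFA = rawread(fileName,"VisibleImageOnly",false);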
Many cameras apply nonlinear range compression to captured signals before storing them in RAW files. To generate linear data, in preparation for transforming the CFA data, you must reverse this nonlinear range compression. Cameras typically store this range compression as a lookup table (LUT). If the camera does not apply range compression, the LUT contains an identity mapping. The rawread function automatically applies this LUT to the CFA data and returns linearized light values.
Plot the values in the LinearizationTable field of the test image metadata.
linTable = fileInfo.ColorInfo.LinearizationTable;
plot(0:length(linTable)-1,linTable)
title("Linearization Table")
The RAW file metadata includes a black-level value, fileInfo.ColorInfo.BlackLevel, and a white-level value, fileInfo.ColorInfo.WhiteLevel. The range of pixel values in the CFA image is [fileInfo.ColorInfo.BlackLevel, fileInfo.ColorInfo.WhiteLevel]. You can use these values to scale the image.
RAW images do not have a true black value. Even with the shutter closed, current flowing through the photosensor produces nonzero pixel values. Cameras use the values of the masked pixels to compute the black level of the CFA image. Various RAW file formats report this black level differently. Some file formats specify the black level as one scalar value per channel of the CFA image. Other formats, such as DNG, specify the black level as an m-by-n region that repeats across the image, starting at the top-left corner of the visible portion of the CFA image.
To scale the image, subtract the black level from the CFA image. Use the provided helper functions to perform these calculations.
blackLevel = fileInfo.ColorInfo.BlackLevel;
disp(blackLevel)
0 0 0 0
if isvector(fileInfo.ColorInfo.BlackLevel)
    cfaMultiChannel = performPerChannelBlackLevelSub(cfaImage,fileInfo);
else
    cfa = performRegionBlackLevelSub(cfaImage,fileInfo);
    % Transform the interleaved CFA image into an (M/2)-by-(N/2)-by-4 array,
    % where each plane corresponds to a specific color channel.
    % This transformation simplifies the pipeline implementation.
    cfaMultiChannel = raw2planar(cfa);
end
Black-level subtraction can produce negative values for pixels darker than the black level. Clamp these values to 0.
cfaMultiChannel( cfaMultiChannel < 0 ) = 0;
RAW file metadata often represents the white level as the maximum value allowed by the data type. If this white-level value is much higher than the highest value in the image, using it for scaling results in an image that is darker than it should be. To avoid this, scale the CFA image using the maximum pixel value found in the image.
cfaMultiChannel = double(cfaMultiChannel);
whiteLevel = max(cfaMultiChannel(:));
disp(whiteLevel)
3366
scaledCFAMultiChannel = cfaMultiChannel ./ whiteLevel;
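For comparison, this sketch scales by the metadata white level instead of the image maximum. If fileInfo.ColorInfo.WhiteLevel is much larger than the brightest pixel in the image, this version looks darker than scaledCFAMultiChannel. The max call is an assumption that covers the case where the white level is stored per channel.
% Alternative scaling (sketch): divide by the metadata white level and clamp to [0, 1]
metaWhiteLevel = double(max(fileInfo.ColorInfo.WhiteLevel(:)));
altScaledCFAMultiChannel = min(cfaMultiChannel ./ metaWhiteLevel,1);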
White balance is the process of removing unrealistic color casts from a rendered image, such that it appears closer to how human eyes would see the subject.
First, get the white-balance value from the metadata.
camWB = fileInfo.ColorInfo.CameraAsTakenWhiteBalance;
Next, scale the multipliers so that the multiplier for the G channel is 1.
gLoc = strfind(fileInfo.CFALayout,"G");
gLoc = gLoc(1);
camWB = camWB/camWB(gLoc);
disp(camWB)
    1.9336    1.0000    1.0000    1.2656
wbMults = reshape(camWB,[1 1 numel(camWB)]);
wbCFAMultiChannel = scaledCFAMultiChannel .* wbMults;
Combine the channels to create an interleaved CFA image.
cfaWB = planar2raw(wbCFAMultiChannel);
Convert the interleaved CFA image to a 16-bit image.
cfaWB = im2uint16(cfaWB);
Convert the Bayer-encoded CFA image into a truecolor image by demosaicing and rotating it. This image is in linear camera space.
camspaceRGB = demosaic(cfaWB,fileInfo.CFALayout);
camspaceRGB = imrotate(camspaceRGB,fileInfo.ImageSizeInfo.ImageRotation);
imshow(camspaceRGB)
title("Rendered Image in Linear Camera Space")
You can convert a CFA image to an RGB image either by using the profile connection space (PCS) conversion functions or by using the conversion matrix returned in the RAW file metadata.
Convert the CFA image to an Adobe RGB 1998 output color space. This conversion consists of these steps:
Convert the image from the linear camera space into a profile connection space, such as XYZ.
Convert the image from the XYZ profile connection space to the Adobe RGB 1998 color space.
cam2xyz = computeXYZTransformation(fileInfo);
xyzImage = imapplymatrix(cam2xyz,im2double(camspaceRGB));
% The xyz2rgb function applies gamma correction for the Adobe RGB 1998 color space
adobeRGBImage = xyz2rgb(xyzImage,"ColorSpace","adobe-rgb-1998","OutputType","uint16");
imshow(adobeRGBImage)
title("Rendered RGB Image in Adobe RGB 1998 Color Space")

Use the transformation matrix in the fileInfo.ColorInfo.CameraTosRGB field of the CFA file metadata to convert the image from the linear camera space to the linear sRGB space.
% This transformation produces a linear sRGB image
srgbImageLinear = imapplymatrix(fileInfo.ColorInfo.CameraTosRGB,camspaceRGB,"uint16");
% Apply gamma correction for the sRGB color space
srgbImage = lin2rgb(srgbImageLinear);
imshow(srgbImage)
title("Rendered RGB Image in sRGB Color Space")

As this example shows, you can use general Image Processing Toolbox functions and RAW file metadata to convert a CFA image into an sRGB image. You can also perform this conversion by using the raw2rgb function. While the raw2rgb function does not provide the same flexibility as working with the metadata directly, the results are comparable. The raw2rgb function uses the LibRaw 0.20.0 RAW file processing library. Compare the result of the raw2rgb conversion to that of the PCS-based conversion for the Adobe RGB 1998 color space, and to that of the metadata-based conversion for the sRGB color space.
adobeRGBReference = raw2rgb(fileName,"ColorSpace","adobe-rgb-1998");
montage({adobeRGBReference,adobeRGBImage},"Size",[1 2])
title("Adobe RGB 1998 Images: Left Image: raw2rgb, Right Image: MATLAB Pipeline")

srgbReference = raw2rgb(fileName,"ColorSpace","srgb");
montage({srgbReference,srgbImage})
title("sRGB Images: Left Image: raw2rgb, Right Image: MATLAB Pipeline")
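To go beyond a visual comparison, you can also compute simple error metrics between the two sRGB renderings. This is a sketch only; it assumes both images have the same size and orientation, which might not hold if raw2rgb crops or rotates differently than the pipeline in this example.
% Quantitative comparison (sketch): only meaningful if the two renderings align
if isequal(size(srgbReference),size(srgbImage))
    fprintf("MSE: %g\n",immse(srgbReference,srgbImage));
    fprintf("PSNR: %g dB\n",psnr(srgbReference,srgbImage));
end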

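As noted at the start of the example, you can apply optional postprocessing to the rendered image. The following is a minimal sketch of denoising, highlight clipping, and contrast adjustment applied to the sRGB result; the filter and clipping settings are illustrative values, not tuned recommendations.
% Optional postprocessing sketch (illustrative parameter values)
postRGB = im2double(srgbImage);
postRGB = imgaussfilt(postRGB,0.7);                % mild Gaussian denoising
clipLimit = 0.98;                                  % illustrative highlight clipping threshold
postRGB = min(postRGB,clipLimit) ./ clipLimit;     % clip highlights and rescale to [0, 1]
postRGB = imadjust(postRGB,stretchlim(postRGB));   % stretch contrast per channel
imshow(postRGB)
title("Rendered sRGB Image After Optional Postprocessing")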
The performPerChannelBlackLevelSub function performs black-level subtraction when you specify the black level as a per-channel scalar value.
function cfa = performPerChannelBlackLevelSub(cfa,fileInfo)
    % Transform the interleaved CFA image into an (M/2)-by-(N/2)-by-4 array,
    % where each plane corresponds to a specific color channel.
    % This transformation simplifies the pipeline implementation.
    cfa = raw2planar(cfa);
    blackLevel = fileInfo.ColorInfo.BlackLevel;
    blackLevel = reshape(blackLevel,[1 1 numel(blackLevel)]);
    cfa = cfa - blackLevel;
end
The performRegionBlackLevelSub function performs black-level subtraction when the RAW file specifies the black level as an m-by-n region.
function cfa = performRegionBlackLevelSub(cfa,fileInfo)
    % The m-by-n black level must be repeated periodically across the entire image
    repeatDims = fileInfo.ImageSizeInfo.VisibleImageSize ./ size(fileInfo.ColorInfo.BlackLevel);
    blackLevel = repmat(fileInfo.ColorInfo.BlackLevel,repeatDims);
    cfa = cfa - blackLevel;
end
The computeXYZTransformation function scales the metadata matrix that specifies the transformation between the linear camera space and the XYZ profile connection space. Scaling this matrix avoids a strong, pink color cast in the rendered image.
function cam2xyz = computeXYZTransformation(fileInfo)
    % The CameraToXYZ matrix assumes an RGB channel order, that is,
    % [X, Y, Z]' = fileInfo.ColorInfo.CameraToXYZ .* [R, G, B]'.
    % However, the white-balance multipliers are ordered according to
    % fileInfo.CFALayout, so reorder them to ensure that each multiplier
    % scales the correct row of the CameraToXYZ matrix.
    wbIdx = strfind(fileInfo.CFALayout,"R");
    gidx = strfind(fileInfo.CFALayout,"G");
    wbIdx(2) = gidx(1);
    wbIdx(3) = strfind(fileInfo.CFALayout,"B");
    wbCoeffs = fileInfo.ColorInfo.D65WhiteBalance(wbIdx);
    cam2xyz = fileInfo.ColorInfo.CameraToXYZ ./ wbCoeffs;
end