Basics of colour correction with a colour passport
I am measuring the change of vegetation indices over time, so I am taking a picture every day. Unfortunately the light can't be the same every single time, so I would need to do some colour correction on the images before processing the indices. I have an image taken with a colour passport in it (attached) and I was wondering how to go about doing the colour correction. I have attached the data sheet for the colour passport as well. Would it be a case of identifying the region in the photo and finding the ratio between the expected RGB values from the data chart and the values in the photo?
Answers (1)
Image Analyst
on 25 Jun 2020
Edited: Image Analyst
on 25 Jun 2020
I do this all the time. Calibrated color measurement is my specialty as you might guess from my avatar.
It looks like you have the x-Rite Color Checker Classic, not the passport. And certainly not the DataColor chart from the PDF you attached - I mean, that one doesn't even have the same number and colors of chips on it!
Basically you have to identify the chart's chip locations. The best way is to just have the chart and camera on a jig where they are in the same location all the time. Then you can read the RGB values right from known row/column locations. If you can't do that and it moves all over each time, then you have to find the chips. First you have to convert to HSV color space and look for highly saturated regions. Now, to distinguish between chips and leaves, you'll have to exclude the leaves. You can do that by looking for the black frame of the chart and excluding everything outside of it. Then you'll have to do certain things to make sure you find the centroid of every chip. If you can assume the chart is fairly aligned with the image edges, then you can simply use kmeans() and then add a row for the neutral colored row, which doesn't show up in the thresholded saturation image. If it's tilted, you'll have to use fitPolynomialRANSAC().
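A minimal sketch of that chip-finding step (assuming the Image Processing Toolbox is available; the file name, saturation threshold, and blob-size limits are made-up values you would have to tune):
% Sketch: find candidate chip blobs from a saturation threshold.
rgbImage = imread('chart_photo.png'); % hypothetical file name
hsvImage = rgb2hsv(rgbImage);
mask = hsvImage(:, :, 2) > 0.4; % keep highly saturated pixels; 0.4 is a guess
mask = imfill(mask, 'holes');
mask = bwareafilt(mask, [500, 20000]); % keep blobs of plausible chip area (tune)
props = regionprops(mask, 'Centroid', 'BoundingBox');
centroids = vertcat(props.Centroid); % one row per candidate chip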
The next step is to read the RGB colors of the chips in the order that your reference table has them, which is not necessarily the order in which regionprops() finds them. So there is some reordering you will need to do.
So now you have the RGB values of the chips and you need to develop a transform to convert RGB values into LAB values. I'm attaching a Powerpoint tutorial on that. Basically you should convert the reference LAB to reference XYZ, then pick a model, like cross-channel quadratic, and use least squares to determine a transform to go from RGB to XYZ. You go to XYZ instead of LAB because if you have white in your image that is brighter than the white color chip, it won't predict the correct value if you go straight to LAB (long story, just trust me). Then you can use analytical equations to go from XYZ to LAB. Once you have the image in LAB color space, you can compare it to the time-zero image and get the Delta E color differences.
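In MATLAB terms, the least-squares fit might look like this (a sketch only; meanR/meanG/meanB are your 24 measured chip means and goldXYZ is a 24x3 matrix of reference XYZ values, both placeholder names):
% Sketch: fit a cross-channel quadratic RGB -> XYZ transform by least squares.
A = [ones(24,1), meanR, meanG, meanB, meanR.^2, meanG.^2, meanB.^2]; % design matrix
coeffX = A \ goldXYZ(:,1); % 7 coefficients for X; backslash does the least squares
coeffY = A \ goldXYZ(:,2);
coeffZ = A \ goldXYZ(:,3);
% Then any pixel maps as X = [1, R, G, B, R^2, G^2, B^2] * coeffX, and likewise for Y and Z.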
Note, you'll have to do a background correction before you do anything else. This is because the exposure will not be uniform. Not only might there be illumination variation over the field of view, but all cameras produce shading on the image, mostly due to the lens. For example, you may have only 80% the brightness at the corner as you do at the middle, and you have to correct for that. Don't do a background subtraction like novices will recommend to you. You need to do a background division, not subtraction. Why? Well, if the light at the corner is only 0.8 as bright as the middle, what do you need to do? You need to divide it by 0.8 to bring it up to what it should be, right?
Why do you need to do background correction? Well, you don't want different colors for an object depending on where it is in the field of view, do you? If you have the same color and just move it from the middle to the corner, it will have a different RGB value, and if you don't correct for that then you'd get a different calibrated LAB value.
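For example, a sketch of the division approach (grayImage would be a shot of a uniform gray sheet filling the field of view; the file names are hypothetical):
% Sketch: background (shading) correction by division, one color channel at a time.
grayImage = im2double(imread('gray_sheet.png'));
rawImage = im2double(imread('scene.png'));
correctedImage = zeros(size(rawImage));
for c = 1 : 3
    shading = grayImage(:,:,c) / max(max(grayImage(:,:,c))); % 1.0 at the brightest spot
    correctedImage(:,:,c) = rawImage(:,:,c) ./ shading; % divide, don't subtract
end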
Anyway, look over the attachment for more info. I've left out a lot of real world considerations, even in the attachment. To explain everything to you I'd need about a full day or even two. But this should get you started.
21 Comments
Simon Kirkman
on 25 Jun 2020
Thanks for your answer. The last 6 columns of the data chart are the same as the colour chart I have. The manufacturer just decided it would be a good idea to put the data for all their charts in the same document. I'll keep at it and see where I get.
Simon Kirkman
on 26 Jun 2020
I'm having a bit of trouble understanding the variables in the equations in the presentation. First of all, is it OK to use the built-in rgb2xyz then xyz2lab functions to get the Lab colour space image (or even the rgb2lab function)? After that, this is the equation in the RGB to Lab section of the presentation. Is the estimated X the L value from the colour data chart (for example, if I'm looking at chip 1F the L value is 47.12)?
The next part is, having found my chips, I can get an average R, G and B value for each chip in my image. But if this is a Lab space equation, why would I use values for R, G and B?
The presentation talks about RGB-RGB correction. Is this method less accurate?
Image Analyst
on 26 Jun 2020
Sorry, color science is confusing even for those of us in the field. Even experts can get confused, since it's a mixture of radiometry and psychology.
It's only OK to use the built-in rgb2xyz() or other built-in color space conversion functions if you don't want true, calibrated color values that will match your spectrophotometer. Those functions just give you "book formula" values. So if you measured some red object with your spectrophotometer and found that its LAB was (50, 20, 0), then with my formulas you will get those values to within 2-5 units. If you use the built-in book formulas, you may get (60, 30, 10), or values that could be off by 20 or 30 units or more. Whatever it gives you, it will be less accurate than if you had used the true values to do the calibration. This is because the book formula just goes one way -- you put in your RGB values and it gives you XYZ values according to some hard-coded formula, so those XYZ values might not be the actual, true values of your sample. With my way, the formula is not hard-coded -- it's determined from your data. So the values will be closer to the "true" values than just some book formula.
Look at it this way. The XYZ and LAB color values of a material are an intrinsic property of the material. The color of your sample does not change whether it's in a dark closet or the bright sunshine. It has a true color determined by its spectrum, like you can measure on a spectrophotometer. You do not want the sample's reported/estimated color values to change just because your illumination level changes. With the book formula, if you brighten your image by doubling your exposure time, then your XYZ values using the built-in formula will be like twice as bright (well, not exactly, but you get the idea). However, with my method you will always get an accurate XYZ out, because the formula adjusts. So in super simple terms, just imagine that X was 0.8*red (it's not, but just play along). If your scene gets brighter (but your physical object doesn't change), then the book formulas will give you twice the X value, whereas my formula will adjust so that now X = 0.4*red, and you will get the same X as before, which is the true X because that's what you trained the formulas with. Which is what should happen, because the color is an intrinsic property of the material, not something that changes with the brightness of your light source.
For the least squares method to determine what the formula is, you put in your measured RGB values from your chips, and the "true" XYZ values from the chart's specification sheet. This will give you the alpha, beta, and gamma coefficients of the equations. You then use those equations to put in any arbitrary RGB value to get out estimated XYZ values, which you then put into analytical formulas, like from http://www.easyrgb.com/en/math.php, to get estimated LAB values.
The presentation talks about 2 conversion schemes: color correction, and color calibration. Color correction is an RGB-to-RGB "repair" of your RGB values to match some "gold standard" RGB values, which can be some RGB values measured at time zero, or RGB values from a different system. This is generally only used if you want to compare two images side-by-side, like you have images taken with different exposure times (so one is brighter and one is darker) but you want to put those into a Powerpoint document and have them look the same brightness. It is not needed for doing scientific analysis of the color, like you want to know the Delta E color difference or the LAB color values.
I generally do not do RGB-to-RGB correction since I nearly always want to measure the color, not match the color to some reference color. So you should do the RGB-to-LAB color calibration, not the RGB-to-RGB color correction. It's not that one is more or less accurate than the other. It's just that they are used for different things. You can do both, but it's usually not necessary. If you want the true LAB color, then why bother going from one RGB image to another as an intermediate stepping stone to get to LAB? You don't need to. It will work fine, or probably better, if you just go from the original RGB to the LAB rather than doing a correction in between.
Simon Kirkman
on 26 Jun 2020
Ok. Complicated, but slowly but surely, haha. So just to be clear on the process:
- Background correct the image (is this stage necessary before colour calibrating? If so, I would have to take a grey image as well as the image with the colour chart?)
- Convert RGB to XYZ. Going by http://brucelindbloom.com/index.html?Eqn_RGB_to_XYZ.html, I would first inverse-compand. This would be done by first raising each r, g and b pixel to the power gamma (v = V^gamma). One thing I'm confused about at this stage is what RGB colour space I would be using (e.g. Adobe, CIE, sRGB?). The second part of this would be using the transform matrix to turn those values into X, Y and Z; this matrix is also dependent on the RGB colour space and the white point. Is it essential to use the right white point or is there a standard one (e.g. D65)?
- For the XYZ to Lab conversion I am a little confused where the white reference values Xr, Yr and Zr come from. Are there any tables like this http://brucelindbloom.com/index.html?WorkingSpaceInfo.html that would give those values?
Sorry for bothering you like this. If I can get these figured out it's a step in the right direction. I think the main confusion I have is what colour space and white point I am working in. I understand that D65 is midday daylight, but there isn't really a white point for a cloudy morning?
Image Analyst
on 27 Jun 2020
- Yes, you must do background correction to flatten the image before you calibrate the color (develop the RGB-to-XYZ transform).
- No. Again, you cannot use any "book formulas", whether from Bruce Lindbloom's site, easyrgb.com, or any other site. If you do, you will have lost any ability for your color calibration to compensate for (take into account) changing light levels, and will be throwing away any possibility of comparing results to instruments such as colorimeters or spectrophotometers. You cannot do that. Again, you need to use least squares to come up with the transform, not use one from a web site or book. I'd use D65/10. It's pretty much an industry standard (except in publishing, where D50 is popular).
- For converting from XYZ to LAB, you can use the reference chromaticities here: https://www.easyrgb.com/en/math.php
Observer       2° (CIE 1931)                 10° (CIE 1964)
Illuminant     X2       Y2       Z2          X10      Y10      Z10       Note
A              109.850  100.000   35.585     111.144  100.000   35.200   Incandescent/tungsten
B               99.0927 100.000   85.313      99.178  100.000   84.3493  Old direct sunlight at noon
C               98.074  100.000  118.232      97.285  100.000  116.145   Old daylight
D50             96.422  100.000   82.521      96.720  100.000   81.427   ICC profile PCS
D55             95.682  100.000   92.149      95.799  100.000   90.926   Mid-morning daylight
D65             95.047  100.000  108.883      94.811  100.000  107.304   Daylight, sRGB, Adobe-RGB
D75             94.972  100.000  122.638      94.416  100.000  120.641   North sky daylight
E              100.000  100.000  100.000     100.000  100.000  100.000   Equal energy
F1              92.834  100.000  103.665      94.791  100.000  103.191   Daylight Fluorescent
F2              99.187  100.000   67.395     103.280  100.000   69.026   Cool fluorescent
F3             103.754  100.000   49.861     108.968  100.000   51.965   White Fluorescent
F4             109.147  100.000   38.813     114.961  100.000   40.963   Warm White Fluorescent
F5              90.872  100.000   98.723      93.369  100.000   98.636   Daylight Fluorescent
F6              97.309  100.000   60.191     102.148  100.000   62.074   Lite White Fluorescent
F7              95.044  100.000  108.755      95.792  100.000  107.687   Daylight fluorescent, D65 simulator
F8              96.413  100.000   82.333      97.115  100.000   81.135   Sylvania F40, D50 simulator
F9             100.365  100.000   67.868     102.116  100.000   67.826   Cool White Fluorescent
F10             96.174  100.000   81.712      99.001  100.000   83.134   Ultralume 50, Philips TL85
F11            100.966  100.000   64.370     103.866  100.000   65.627   Ultralume 40, Philips TL84
F12            108.046  100.000   39.228     111.428  100.000   40.353   Ultralume 30, Philips TL83
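With the D65/10° white point from that table, the analytical XYZ-to-Lab step might look like this (a sketch of the standard CIE equations; the example xyz values are made up and assumed to be on the same 0-100 scale as the table):
% Sketch: analytical XYZ -> L*a*b* with the D65/10-degree reference white.
xyz = [20.0, 15.0, 10.0]; % hypothetical sample XYZ, 0-100 scale
white = [94.811, 100.000, 107.304]; % D65, 10-degree observer, from the table above
r = xyz ./ white; % normalize by the reference white
f = @(t) (t > (6/29)^3) .* t.^(1/3) + (t <= (6/29)^3) .* (t/(3*(6/29)^2) + 4/29);
fr = f(r);
L = 116*fr(2) - 16;
a = 500*(fr(1) - fr(2));
b = 200*(fr(2) - fr(3));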
Simon Kirkman
on 27 Jun 2020
When calculating vegetation indices such as NDVI, you use the red channel of the image in a formula, (NIR - red)/(NIR + red). If I were to convert to Lab space I couldn't apply this formula. Would I perform the transform in Lab and then convert back to RGB?
Image Analyst
on 27 Jun 2020
I'm not familiar with that index. It looks like it's the contrast between the red band and the NIR band, because it's basically the delta value divided by the average value. It appears to use the red channel, but I'm not sure what wavelength range. Was it developed for certain satellite wavelength bands? If so, you'd have to do some tricky things.
If it's just the general purpose red band, which of course is different for every digital camera because they use different sensors, then you'd have to get that red band after calibrating to LAB. So what I'd do in that case is to use the conversion from lab to sRGB using the built-in MATLAB function lab2rgb(). It's okay to use this because of the prior calibration process you went through to get calibrated LAB values.
Even though I/we don't know the spectral response of whatever "red" was used in the formula, it should be okay to use for comparing different NDVI values as long as you're always using the same camera, or at least the same model of camera.
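A sketch of that route (labImage and nirImage are placeholder names for the calibrated L*a*b* image and the registered monochrome NIR frame):
% Sketch: recover a red band from the calibrated Lab image, then compute NDVI.
rgbFromLab = lab2rgb(labImage); % the built-in conversion is OK after calibration
red = rgbFromLab(:, :, 1);
nir = im2double(nirImage);
ndvi = (nir - red) ./ (nir + red + eps); % eps guards against divide-by-zero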
Simon Kirkman
on 29 Jun 2020
It is used on satellites (Landsat 8 etc.) and in multispectral cameras on drones. My setup has two cameras: one monochrome with a visible-light cut filter and one RGB with an IR cut filter. The response of the Bayer filter for red light is below 10% in the green and blue channels, so I am pulling the red channel from the image and using it with the monochrome IR image in the formula. I don't want to use specific band filters on the RGB camera because I was hoping to use it to calculate other indices which use the blue and green channels instead.
For the background correction, could I use the values of the grey chip in the colour chart to correct the whole image, or would I need to capture a full grey image and a separate image of the scene with the colour chart in it?
Do you have a link to the source for the formulas you use in the presentation?
Image Analyst
on 29 Jun 2020
You can use your red, I'm just saying that if other people used a much more narrow red band than you, your NDVI index will not match theirs.
No, I don't have a source for the formulas. It's pretty much common sense. It's like asking for a source that says it's okay to fit a quadratic to your data. I guess you could cite the Wikipedia page on least squares fitting if you want.
You cannot use a gray chip to compute background non-uniformities because the gray chip obviously does not cover the entire image. If you know that the gray chip has a gray level of 140, how does that tell you what it might be in the corner, or edge, or middle? Nothing - all it tells you is what the intensity is in the small spot where the chip is. You need to put a uniform gray sheet that covers your entire field of view so you can find out the background intensity everywhere. I'm attaching my demo.
Simon Kirkman
on 29 Jun 2020
I've done the first bit and got a background-corrected image, and have managed to get the alphas, bethas and gammas for the RGB-to-RGB conversion. After doing the RGB-RGB conversion my image seems to be overexposed. Should I normalize the image before finding the alphas etc. (in which case I should normalize the gold standard as well)? I'll put my code for the first part in, and I've attached the two images I used for this. The grey image is a picture of a large grey chip on the reverse side of the calibration chart.
grey = imread("C:/Users/simon/Documents/ProjIM/Proj Im/ASI/colour_4.png");
colour = imread("C:/Users/simon/Documents/ProjIM/Proj Im/ASI/colour_7.png");
%% background correction
correctedImage = colour;
[rows, columns, numberOfColorChannels] = size(colour);
for colorIndex = 1 : 3
%% get the max value of the grey image
oneColorChannel = colour(:, :, colorIndex);
maxVal = max(max(grey(:,:, colorIndex)));
%% finding a percentage image by dividing the grey image by its max value
percentageIm = (single(grey(:,:,colorIndex)))/(single(maxVal));
%% dividing the original image by the percentage image
correctedChannel = uint8(single(oneColorChannel)./ percentageIm);
correctedImage(:,:, colorIndex) = correctedChannel;
end
clear numberOfColorChannels columns rows correctedChannel colorIndex percentageIm oneColorChannel maxVal grey
subplot(1,2,1);
imshow(colour)
subplot(1,2,2)
imshow(correctedImage)
%% find colour chips #1 is 1E - #24 is 6H, this code was taken from
%% a colour correction toolbox which allows you to draw a rectangle
%% around the colour checker and writes the roi of the chips
[RGB , roi] = checker2colors(correctedImage, [4,6], 'allowadjust', true);
%% initializing matrices for average RGB values in chips
meanR = zeros(24,1);
meanG = zeros(24,1);
meanB = zeros(24,1);
%% calculating the mean red, green and blue values by cropping
%% the roi out of the original image and finding the mean of the
%% cropped region.
for i = 1:24
x1 = roi(i,1);
x2 = roi(i,1)+ roi(i,3);
y1 = roi(i,2);
y2 = roi(i,2)+ roi(i,4);
image = imcrop(colour,[x1 y1 roi(i,3) roi(i,4)]);
meanR(i) = mean(mean(image(:,:,1)));
meanG(i) = mean(mean(image(:,:,2)));
meanB(i) = mean(mean(image(:,:,3)));
drawnow
end
clear y2 x2 y1 x1 i
%% building the design matrix A = [1, R, G, B, R^2, G^2, B^2] for the regression
A = [ones(24,1), meanR, meanG, meanB, meanR.^2, meanG.^2, meanB.^2];
%% least-squares regression for the alpha, betha and gamma coefficients
%% splitting gold values from data chart into R, G, B matrices
goldR = goldRGB(:,1);
goldG = goldRGB(:,2);
goldB = goldRGB(:,3);
%% backslash solves the least-squares problem more stably than inv(A'*A)*A'
Alphas = A \ goldR;
Bethas = A \ goldG;
Gammas = A \ goldB;
[rows,cols,channels] = size(correctedImage);
colourCorrectedImage = correctedImage;
%% colour correcting each pixel by using the formulas
%% red = α0 + α1R + α2G + α3B + α4R^2 + α5G^2 + α6B^2
%% green = β0 + β1R + β2G + β3B + β4R^2 + β5G^2 + β6B^2
%% blue = γ0 + γ1R + γ2G + γ3B + γ4R^2 + γ5G^2 + γ6B^2
for chan = 1:channels
for rws = 1:rows
for cls = 1:cols
if chan == 1
colourCorrectedImage(rws,cls,chan) = Alphas(1,1)+(Alphas(2,1)* correctedImage(rws,cls,1))...
+(Alphas(3,1)*correctedImage(rws,cls,2))+(Alphas(4,1)*correctedImage(rws,cls,3))...
+(Alphas(5,1)*(correctedImage(rws,cls,1))^2)+(Alphas(6,1)*(correctedImage(rws,cls,2))^2)...
+(Alphas(7,1)*(correctedImage(rws,cls,3))^2);
elseif chan == 2
colourCorrectedImage(rws,cls,chan) = Bethas(1,1)+(Bethas(2,1)* correctedImage(rws,cls,1))...
+(Bethas(3,1)*correctedImage(rws,cls,2))+(Bethas(4,1)*correctedImage(rws,cls,3))...
+(Bethas(5,1)*(correctedImage(rws,cls,1))^2)+(Bethas(6,1)*(correctedImage(rws,cls,2))^2)...
+(Bethas(7,1)*(correctedImage(rws,cls,3))^2);
elseif chan == 3
colourCorrectedImage(rws,cls,chan) = Gammas(1,1)+(Gammas(2,1)* correctedImage(rws,cls,1))...
+(Gammas(3,1)*correctedImage(rws,cls,2))+(Gammas(4,1)*correctedImage(rws,cls,3))...
+(Gammas(5,1)*(correctedImage(rws,cls,1))^2)+(Gammas(6,1)*(correctedImage(rws,cls,2))^2)...
+(Gammas(7,1)*(correctedImage(rws,cls,3))^2);
end
end
end
end
subplot(1,3,1);
imshow(colour)
subplot(1,3,2);
imshow(correctedImage)
subplot(1,3,3);
imshow(colourCorrectedImage)
Image Analyst
on 30 Jun 2020
I guess I forgot to say the first thing you should do, before anything else, is to white balance your camera. Your images seem to have a strong yellowish cast and are probably not white balanced. Your camera should have a procedure to do that. I'll see if I have time to run your code tomorrow.
Simon Kirkman
on 30 Jun 2020
Thanks for that, I have changed it up a bit since last night. I have normalised the corrected image by using im2double at line 31 and have normalized the gold standard values. I also found an error at line 47: when I was taking the mean values of the chips I was taking them from the original image instead of the corrected image. After normalizing I have a much better result for the RGB-RGB correction. I'll now try RGB-Lab. For this I have gotten the gold X, Y and Z values by using the equation from easyrgb with the data sheet values.
Image Analyst
on 30 Jun 2020
RGB-to-RGB correction is not needed unless you want to do something like show a gallery of a bunch of photos taken at different times in a Powerpoint presentation or something. It's not needed to determine Delta E color difference.
I'm attaching a little program that will take an image and alter it, then correct it, so you can see how rgb-to-rgb correction works.
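The gist of such a demo, as a rough standalone sketch (not the attachment itself; it fakes an exposure change with a gain and offset, then undoes it with a per-channel least-squares fit):
% Sketch: simulate an exposure change, then correct it back with an affine RGB-to-RGB fit.
reference = im2double(imread('peppers.png')); % demo image shipped with the Image Processing Toolbox
altered = min(1.4 * reference + 0.05, 1); % fake brightness gain and offset
corrected = zeros(size(reference));
for c = 1 : 3
    a = altered(:,:,c);
    r = reference(:,:,c);
    coeff = [a(:), ones(numel(a), 1)] \ r(:); % least squares: gain and offset per channel
    corrected(:,:,c) = coeff(1) * a + coeff(2);
end
subplot(1,3,1); imshow(reference);
subplot(1,3,2); imshow(altered);
subplot(1,3,3); imshow(corrected);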
Simon Kirkman
on 1 Jul 2020
The camera is using auto white balancing. If I did a white balancing in MATLAB using grey world (of the 6 neutral chips), would it affect the accuracy of the colour calibration? I think I have managed to code the RGB-XYZ calibration. In the presentation it says that the image should be background and colour corrected before doing the calibration. Would that mean the workflow would be:
- Background Correct (I would then white balance the image after this)
- Colour Correct RGB-RGB
- Calibrate RGB-XYZ
or could I leave out the RGB-RGB entirely? Below is my code from RGB to XYZ... It seems to be working OK.
grey = imread("C:/Users/simon/Documents/ProjIM/Proj Im/ASI/colour_4.png");
colour = imread("C:/Users/simon/Documents/ProjIM/Proj Im/ASI/colour_7.png");
%% background correction
correctedImage = colour;
for colorIndex = 1 : 3
%% get the max value of the grey image
oneColorChannel = colour(:, :, colorIndex);
maxVal = max(max(grey(:,:, colorIndex)));
%% finding a percentage image by dividing the grey image by its max value
percentageIm = (single(grey(:,:,colorIndex)))/(single(maxVal));
%% dividing the original image by the percentage image
correctedChannel = uint8(single(oneColorChannel)./ percentageIm);
correctedImage(:,:, colorIndex) = correctedChannel;
end
clear correctedChannel colorIndex percentageIm oneColorChannel maxVal grey
subplot(1,2,1);
imshow(colour)
subplot(1,2,2)
imshow(correctedImage)
%% find colour chips #1 is 1E - #24 is 6H, this code was taken from
%% a colour correction toolbox which allows you to draw a rectangle
%% around the colour checker and writes the roi of the chips
[RGB , roi] = checker2colors(correctedImage, [4,6], 'allowadjust', true,'mode', 'click');
%% initializing matrices for average RGB values in chips
meanR = zeros(24,1);
meanG = zeros(24,1);
meanB = zeros(24,1);
%% calculating the mean red, green and blue values by cropping
%% the roi out of the corrected image and finding the mean of the
%% cropped region.
for i = 1:24
x1 = roi(i,1);
y1 = roi(i,2);
image = imcrop(correctedImage,[x1 y1 roi(i,3) roi(i,4)]);
meanR(i) = mean(mean(image(:,:,1)));
meanG(i) = mean(mean(image(:,:,2)));
meanB(i) = mean(mean(image(:,:,3)));
drawnow
end
clear y1 x1 i
%% building the design matrix A = [1, R, G, B, R^2, G^2, B^2] for the regression
A = [ones(24,1), meanR, meanG, meanB, meanR.^2, meanG.^2, meanB.^2];
%% least-squares regression for the alpha, betha and gamma coefficients
load("goldXYZ_D65_10.mat","goldXYZ_D65_10")
goldX = goldXYZ_D65_10(:,1);
goldY = goldXYZ_D65_10(:,2);
goldZ = goldXYZ_D65_10(:,3);
clear goldXYZ_D65_10
%% backslash solves the least-squares problem more stably than inv(A'*A)*A'
Alphas = A \ goldX;
Bethas = A \ goldY;
Gammas = A \ goldZ;
clear goldX goldY goldZ
XYZCorrectedImage = rgb2xyz(correctedImage);
[rows,cols,channels] = size(XYZCorrectedImage);
%% pixel by pixel application of alphas, bethas, gammas
for chan = 1:channels
for col = 1:cols
for row = 1:rows
redPixel = correctedImage(row, col,1);
greenPixel = correctedImage(row,col,2);
bluePixel = correctedImage(row,col,3);
if chan == 1
XYZCorrectedImage(row,col,1) = Alphas(1,1)+ (Alphas(2,1)*redPixel) + (Alphas(3,1)* greenPixel)+(Alphas(4,1)*bluePixel)+(Alphas(5,1)*redPixel^2)+(Alphas(6,1)* greenPixel^2)+ (Alphas(7,1)*bluePixel^2);
elseif chan == 2
XYZCorrectedImage(row,col,2) = Bethas(1,1)+ (Bethas(2,1)*redPixel) + (Bethas(3,1)* greenPixel)+(Bethas(4,1)*bluePixel)+(Bethas(5,1)*redPixel^2)+(Bethas(6,1)* greenPixel^2)+ (Bethas(7,1)*bluePixel^2);
elseif chan == 3
XYZCorrectedImage(row,col,3) = Gammas(1,1)+ (Gammas(2,1)*redPixel) + (Gammas(3,1)* greenPixel)+(Gammas(4,1)*bluePixel)+(Gammas(5,1)*redPixel^2)+(Gammas(6,1)* greenPixel^2)+ (Gammas(7,1)*bluePixel^2);
end
end
end
end
clear rows cols channels
im2 = xyz2rgb(XYZCorrectedImage);
subplot(1,3,1);
imshow(XYZCorrectedImage)
subplot(1,3,2);
imshow(im2)
subplot(1,3,3)
imshow(colour)
Image Analyst
on 1 Jul 2020
You need to turn off auto white balancing if it's the kind of thing where it happens whenever the camera thinks it's needed. If you do white balancing, it must only happen when you tell it to, and that is when the entire field of view is white. If you just want color difference (Delta E) then you don't need RGB-to-RGB correction. Indeed, if you don't do any color correction, it might alert you to some problems that you should address, like intensity changes. It's best to control illumination as much as possible, but if your scene is subject to the weather (clouds, time of day, etc.) then there's not much you can do about that. As long as you have a color chart in every image, or in a separate image taken immediately before, the calibration should handle (compensate for) any illumination changes.
Simon Kirkman
on 2 Jul 2020
I have done an RGB-Lab conversion using the above code and calculated Delta E using the equation from easyrgb (I think it's called dE76) by taking the mean values from the Lab image and subtracting them from the gold standard. I have got a range of deltas from 5.06 to 43! Would you have any ideas where I'm going wrong?
gold standard = (CIE-L*1, CIE-a*1, CIE-b*1) // Color #1 CIE-L*ab values
mean from image = (CIE-L*2, CIE-a*2, CIE-b*2) // Color #2 CIE-L*ab values
Delta E* = sqrt((CIE-L*1 - CIE-L*2)^2 + (CIE-a*1 - CIE-a*2)^2 + (CIE-b*1 - CIE-b*2)^2)
%% gold standard
load("goldLab_D65_10.mat","goldLAB_D65_10")
goldL = goldLAB_D65_10(:,1);
golda = goldLAB_D65_10(:,2);
goldb = goldLAB_D65_10(:,3);
clear goldLAB_D65_10
meanL = zeros(24,1);
meana = zeros(24,1);
meanb = zeros(24,1);
% mean for each chip
for i = 1:24
x1 = roi(i,1);
y1 = roi(i,2);
image = imcrop(LabCorrectedImage,[x1 y1 roi(i,3) roi(i,4)]);
meanL(i) = mean(mean(image(:,:,1)));
meana(i) = mean(mean(image(:,:,2)));
meanb(i) = mean(mean(image(:,:,3)));
drawnow
end
deltaE = zeros(24,1);
% calculating delta E for each chip
for rw = 1:24
tL = (goldL(rw)-meanL(rw))^2;
ta = (golda(rw)- meana(rw))^2;
tb = (goldb(rw)- meanb(rw))^2;
deltaE(rw) = sqrt(tL+ta+tb);
end
Image Analyst
on 2 Jul 2020
Yes, it's possible to get high delta Es for pixels that have drastically different colors.
Don't use image as the name of your variable, since it's the name of a built-in function.
You need to attach your images, mat files, and functions if I'm to replicate your situation.
Simon Kirkman
on 2 Jul 2020
I'll attach three files. One does RGB-RGB then RGB to XYZ, one does RGB to Lab directly and calculates Delta E, and one does RGB to XYZ. The checker2colors function asks you to draw a boundary rectangle around the colour chart by clicking at the four corners. The first corner should be at the corner of the chart where the neutral white square is. This will ensure that the chips are in the correct order in relation to the matrices holding their colour information. When prompted for the number of rows and columns, this example has 4 rows and 6 columns. goldLab, goldRGB and goldXYZ all hold the chip colours from the chart.
Simon Kirkman
on 7 Jul 2020
I have been trying to look for ways to be more accurate, and I was wondering if you think iterating the process would make any difference. For example, running through the process once and then, using the corrected image, running through it again to reduce the delta? Thank you for all your help. Did you manage to take a look at my code?
María da Fonseca
on 26 Aug 2022
Hi!
Does anyone know the selection criteria for the 24 colors of the x-Rite Color Checker Classic?
María
Image Analyst
on 27 Aug 2022
They picked a gray scale selection, for obvious reasons. And then they wanted colors out near the extremes of the color gamut, so that's why they have the "pure" RGBCMY colors, which are as vivid and saturated as they can get. For the rest they tried to pick colors from natural scenes and photographs that sort of evenly sampled the 3-D gamut. By the way, x-rite sold the color chart product line to Calibrite.com.