Color correction between 35 different video clips.
Hi.
So I have to determine the mixing efficiency of some silicone mixers, and that's going to be done through color analysis of the mixed material. We have red and yellow components to be mixed, so the final color should be an orange color. I already have a script that analyzes the RGB values of each frame in a cropped section of the video, determines the mean RGB, and compares it to the goal RGB values. I have a color chart to make sure that each mixing video isn't affected by differences in lighting and whatnot.
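The per-frame analysis is roughly like this (the file name, crop rectangle, and goal color below are just placeholder values):
v = VideoReader('mixing_clip.mp4');              % one of the 35 clips - placeholder name
goalRGB = [230 140 40];                          % target orange - placeholder values
cropRect = [100 100 200 200];                    % [x y width height] of the mixed region - placeholder
meanRGB = [];
rgbError = [];
while hasFrame(v)
    frame = readFrame(v);
    roi = double(imcrop(frame, cropRect));
    thisMean = squeeze(mean(mean(roi, 1), 2))';  % mean [R G B] over the cropped region
    meanRGB(end+1, :) = thisMean;
    rgbError(end+1) = norm(thisMean - goalRGB);  % distance from the goal color for this frame
end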
I guess I'm just not sure how to standardize the process with the color chart. My initial thought is to read the RGB values of the white and/or black patches, find the factors I need to multiply the readings by so they match the true white or black RGB values, and then apply those factors to every pixel in each frame. Am I off base with this, or is there some other more established or simpler method?
Thanks
Answers (1)
Image Analyst
on 23 Jul 2016
Oh my gosh. This is a complicated subject. Probably way more than you realize. I have a 2-hour seminar on just this topic and obviously I can't get into all that here, so I'll try to simplify it. Basically what you need to do depends on 3 things: (1) what your final goal is, (2) how your illumination and exposure change, and (3) how accurate you need it to be.
Let's start with #2. Does your light change spectrum (color), or does it just go up and down in intensity? The simplest is if it just changes intensity and not color. In that case, you can just measure some standard white chip and multiply the image by some factor to make the intensity what it needs to be. So if the intensity is 210 gray levels and it needs to be 200, just multiply the image by 200/210. This is probably okay as long as you don't need it super accurate (like I mentioned in #3). If you do need it super accurate, you're going to have to take into account the opto-electronic conversion function (gamma) and use several gray chips of different intensity that fill the field of view.
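In the intensity-only case, a minimal sketch would be something like this (the file name, chip location, and target level are all just assumptions you'd replace with your own values):
% Intensity-only correction against a white chip
frame = double(imread('frame0001.png'));        % one video frame - placeholder name
whiteRegion = frame(50:80, 50:80, :);           % pixels covering the white chip - adjust to your chart
measuredWhite = mean(whiteRegion(:));           % e.g. 210 gray levels
targetWhite = 200;                              % what the white chip should read
corrected = frame * (targetWhite / measuredWhite);
corrected = uint8(min(corrected, 255));         % clip and convert back for display/saving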
Now, if your light changes color also, then it's a lot more complicated so I won't get into it unless you tell me that's the case.
Now for #1 - what you're going to do. You can just standardize color in RGB color space, stop there, scale the image, and measure color differences in RGB space. However, if you want more accurate colorimetric color differences, like what you'd get from a colorimeter or spectrophotometer, then you need to convert to a human-vision-relevant color space (like my avatar to the left) such as CIE LAB. The X-Rite ColorChecker chart comes with a table that gives the "true" LAB values for each chip. So now you can do a regression, according to some model, from RGB colors into LAB colors. Once you're in CIE LAB color space, you can compute the Delta E (color difference). This is better than what you'd get in RGB space. However, the regression going from RGB to LAB is a bit tricky. You need to decide on a model, like cross-channel cubic, cross-channel quadratic, or whatever. If you're doing this, which is the most accurate way, then you don't need to produce a standardized (matched) RGB image as an interim image. You can just skip that part and go right to LAB. Doing this with an 8-bit system will let you measure LAB colors accurate to within about 3 units of Delta E (on average) over the color gamut, and make comparisons down to less than 1 Delta E (about the limit that people can discern).
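As a rough sketch of the regression, assuming you've already measured the mean RGB of each chart chip and have the chart's reference LAB values (every number below is a placeholder), a cross-channel quadratic model can be fit with backslash:
% Fit a cross-channel quadratic RGB-to-LAB transform from the chart chips.
% measuredRGB and refLAB are placeholders - substitute your 24 measured chips
% and the LAB values from the chart's data sheet.
measuredRGB = rand(24, 3) * 255;
refLAB = [rand(24, 1) * 100, rand(24, 2) * 200 - 100];
R = measuredRGB(:, 1); G = measuredRGB(:, 2); B = measuredRGB(:, 3);
X = [ones(24, 1), R, G, B, R.*G, R.*B, G.*B, R.^2, G.^2, B.^2];  % quadratic terms with cross products
coeffs = X \ refLAB;                    % least-squares fit; one column each for L, a, b
% Apply the transform to a measured color and compare to the goal in LAB.
mixRGB = [220 150 60];                  % mean RGB of the mixed material (placeholder)
goalLAB = [65 30 60];                   % goal color in LAB (placeholder)
x = [1, mixRGB(1), mixRGB(2), mixRGB(3), mixRGB(1)*mixRGB(2), mixRGB(1)*mixRGB(3), ...
     mixRGB(2)*mixRGB(3), mixRGB(1)^2, mixRGB(2)^2, mixRGB(3)^2];
mixLAB = x * coeffs;
deltaE = norm(mixLAB - goalLAB);        % CIE76 color difference
If you don't need that level of accuracy, rgb2lab will convert under an assumed sRGB encoding, but the chart-based regression will track your actual camera and lighting better.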
Now for #3. All lenses have shading. That means the image will be darker at the edges than in the middle. There are lots of reasons for this, but I'll skip those. Common lenses like Nikon and Canon are pretty bad. You'll have much flatter fields with small-scale, high-quality manufacturers like Schneider or Linos. We use Schneider lenses - they're exceptional. So how does shading affect your image? Well, if you put a chip at the center, it might be 230 gray levels, but put it in the corner and it might now be 190 gray levels. The chip didn't change but the detected brightness did, due to lens shading. And that doesn't even count chromatic shifts that get worse as you move away from the center. Bottom line: you'll get different measured colors as you move your subject around in the field of view. Obviously you don't want that, so you have to correct for it. And that's another hour-long seminar. Let me just simplify it by saying that you need to develop a model (a 2-D shape) for the shading and divide your image by it. So if the image is 230 in the middle but 190 at the corner, you divide by 190 and multiply by 230 there to bring it up to what it should be. And if it's 210 at the top edge, divide it there by 210 and multiply by 230. So you can get a model image that gives the percentage darker that every pixel is, and divide your actual image by that model image. You might find John D'Errico's polyfitn useful for this modeling. This will do flat-field correction. It needs to be done before the measurement of chip colors and derivation of the RGB-to-LAB transform. I'm attaching a background correction demo.
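Here's a minimal flat-field sketch, assuming you can grab a frame of a uniformly lit flat target (the file names and smoothing amount are placeholders; fitting a polynomial surface with polyfitn would be the more rigorous version of the blurred model used here):
% Flat-field (shading) correction using an image of a uniformly lit flat target.
flat  = double(imread('uniform_target.png'));    % frame of the flat target - placeholder name
frame = double(imread('frame0001.png'));         % frame to be corrected - placeholder name
corrected = zeros(size(frame));
for c = 1:3
    model = imgaussfilt(flat(:, :, c), 25);      % smooth shading model; polyfitn could fit a surface instead
    corrected(:, :, c) = frame(:, :, c) ./ model * max(model(:));  % divide out shading, rescale to the peak level
end
corrected = uint8(min(corrected, 255));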
Now there are other issues involving accuracy but I don't want to get into those because they take even longer to explain and understand, and you may have quite a mouthful just trying to digest what I've told you so far. So start with this and see how it goes.