Suppose we have L, a, b values for 5 circles inside an image. These values are calculated using OpenCV.
We take 100 random pixels from every circle and compute the plain average LAB value for each circle (which I am not sure is the right way to do it).
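For what it's worth, that sampling-and-averaging step could be sketched as follows; the mean_lab_of_circle helper, its arguments, and the disc-sampling scheme are my own illustrative assumptions, not the actual code:

```python
import numpy as np

def mean_lab_of_circle(lab_image, center, radius, n_samples=100, seed=0):
    """Average the LAB values of n_samples random pixels inside a circle.

    lab_image : (H, W, 3) float array of L, a, b values
    center    : (row, col) of the circle centre -- assumed known from detection
    radius    : circle radius in pixels
    """
    rng = np.random.default_rng(seed)
    # Sample uniformly inside the disc: uniform angle, sqrt-distributed radius.
    theta = rng.uniform(0.0, 2.0 * np.pi, n_samples)
    r = radius * np.sqrt(rng.uniform(0.0, 1.0, n_samples))
    rows = np.clip((center[0] + r * np.sin(theta)).astype(int), 0, lab_image.shape[0] - 1)
    cols = np.clip((center[1] + r * np.cos(theta)).astype(int), 0, lab_image.shape[1] - 1)
    return lab_image[rows, cols].mean(axis=0)
```

Averaging is a reasonable location estimate for a uniform patch, though a median would be more robust to outliers such as specular highlights.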
The values are stored in an np.array similar to the following:

LAB Measured Colors Values =
[[ 27.553 -26.39   7.13 ]
 [ 28.357 -27.08   7.36 ]
 [ 28.365 -27.01   7.21 ]
 [ 29.749 -27.78   7.42 ]
 [ 28.478 -26.81   7.14 ]]
Those circles are also measured using a colorimeter, which generates reference values.
LAB Reference Colors Values =
[[35.07, -24.95, 3.12],
 [35.09, -24.95, 3.18],
 [35.0,  -25.6,  3.21],
 [34.97, -25.76, 3.36],
 [35.38, -24.55, 2.9]]
Let's call the LAB Measured Colors Values m1, and the LAB Reference Colors Values m2.
We have the measured values and the reference values.
How can we calculate the CCM – Color Correction Matrix?
I do that using the following:
def first_order_colour_fit(m_1, m_2, rcond=None):
    """
    Colour Fitting
    ==============

    Performs a first order colour fit from the given :math:`m_1` colour array
    to the :math:`m_2` colour array. The resulting colour fitting matrix is
    computed using multiple linear regression. The purpose is, for example,
    matching two *ColorChecker* colour rendition charts together.

    Parameters
    ----------
    m_1 : array_like, (n, 3)
        Test array :math:`m_1` to fit onto array :math:`m_2`.
    m_2 : array_like, (n, 3)
        Reference array that :math:`m_1` will be colour fitted against.
    """
    print('CCM - Color Correction Matrix = ')
    # np.linalg.lstsq returns (solution, residuals, rank, singular_values);
    # keep only the solution.
    ColorCorrectionMatrix = np.transpose(np.linalg.lstsq(m_1, m_2, rcond=rcond)[0])
    print(ColorCorrectionMatrix)
    return ColorCorrectionMatrix
CCM - Color Correction Matrix =
[[-0.979 -2.998 -2.434]
 [ 0.36   1.467  0.568]
 [ 0.077  0.031  0.241]]
After getting the CCM, I want to apply it to m1 (the LAB Measured Colors) to correct them.
How can we do that ?
I am doing the following, however the results don't seem right:

def CorrectedMeasuredLABValues(measured_colors_by_app, ColorCorrectionMatrix):
    CorrectedMeasured_LAB_Values = np.zeros_like(measured_colors_by_app)
    print('Corrected Measured LAB Values Matrix = ')
    # Apply the CCM row by row (one LAB triplet per circle).
    for i in range(measured_colors_by_app.shape[0]):
        CorrectedMeasured_LAB_Values[i] = ColorCorrectionMatrix.dot(measured_colors_by_app[i])
        print(CorrectedMeasured_LAB_Values[i])
    return CorrectedMeasured_LAB_Values
We get the following:
Corrected Measured LAB Values Matrix =
[[34.766 -24.742  3.033]
 [35.487 -25.334  3.129]
 [35.635 -25.314  3.096]
 [36.076 -25.825  3.23 ]
 [35.095 -25.019  3.094]]
If you do

    ColorCorrectionMatrix = np.linalg.lstsq(m_1, m_2, rcond=None)[0]
    m_3 = np.matmul(m_1, ColorCorrectionMatrix)

then m_3 should be an array close to m_2. That is, the first line solves the equation m_1 x = m_2 in the least-squares sense (note that np.linalg.lstsq returns a tuple whose first element is the solution x), and therefore a simple matrix multiplication of m_1 with the x found by np.linalg.lstsq should approximate m_2. This means you should remove the transpose in your calculation of ColorCorrectionMatrix.
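A self-contained sketch of that fit, using the five patch values quoted in the question (rcond=None simply silences NumPy's deprecation warning):

```python
import numpy as np

# Measured (m1) and colorimeter reference (m2) LAB values from the question.
m1 = np.array([[27.553, -26.39, 7.13],
               [28.357, -27.08, 7.36],
               [28.365, -27.01, 7.21],
               [29.749, -27.78, 7.42],
               [28.478, -26.81, 7.14]])
m2 = np.array([[35.07, -24.95, 3.12],
               [35.09, -24.95, 3.18],
               [35.00, -25.60, 3.21],
               [34.97, -25.76, 3.36],
               [35.38, -24.55, 2.90]])

# lstsq returns (solution, residuals, rank, singular_values); the solution
# is the 3x3 CCM -- note: no transpose.
ccm = np.linalg.lstsq(m1, m2, rcond=None)[0]

# Applying the correction is then a single matrix multiplication.
m3 = m1 @ ccm
print(np.round(m3, 3))
```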
But! This correction applies a transformation to the colors that misses a translation. The plane in the Lab space spanned by a and b is the chromaticity plane. The point at the origin of this plane represents white/grey (colorless). If a picture needs white point adjustment (white balancing), it means that what is true white is not at the origin of this plane. A translation is needed to move it there, no amount of multiplications will be able to accomplish this.
The equation that needs to be solved is

    m_1 x + y = m_2

(where y is the whitepoint correction). This can be rewritten as a single matrix multiplication if we add a column of ones to m_1, so that each row becomes [L, a, b, 1]. This is called homogeneous coordinates; see the Wikipedia article on homogeneous coordinates for an idea of what this looks like.
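A sketch of this affine (homogeneous-coordinates) variant, reusing the patch values from the question; the variable names are illustrative:

```python
import numpy as np

m1 = np.array([[27.553, -26.39, 7.13],
               [28.357, -27.08, 7.36],
               [28.365, -27.01, 7.21],
               [29.749, -27.78, 7.42],
               [28.478, -26.81, 7.14]])
m2 = np.array([[35.07, -24.95, 3.12],
               [35.09, -24.95, 3.18],
               [35.00, -25.60, 3.21],
               [34.97, -25.76, 3.36],
               [35.38, -24.55, 2.90]])

# Homogeneous coordinates: each measured row becomes [L, a, b, 1].
m1_h = np.hstack([m1, np.ones((m1.shape[0], 1))])

# Solve m1_h @ M = m2. M is 4x3: the top 3x3 block is the linear part x,
# and the last row is the translation y (the whitepoint correction).
M = np.linalg.lstsq(m1_h, m2, rcond=None)[0]
corrected = m1_h @ M
```

Because the purely linear model is a special case of the affine one (translation fixed at zero), the affine fit can never have a larger least-squares residual than the 3x3 fit.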
When computing color correction in RGB space, this problem does not occur. In RGB, the origin never moves: black is black. RGB values are always positive. White balancing is accomplished with a multiplication.
I would recommend that you convert your colorimeter reference values to RGB, instead of converting your image pixels to Lab, and perform the color correction in RGB space. Do make sure that the image you record is in linear RGB space, not sRGB, which is non-linear (you'll find conversion equations online if it turns out your images are saved as sRGB).
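For reference, the standard sRGB transfer curve (IEC 61966-2-1) and its inverse can be sketched as follows for values scaled to [0, 1]; the function names here are my own:

```python
import numpy as np

def srgb_to_linear(c):
    """sRGB in [0, 1] -> linear RGB, using the piecewise sRGB transfer curve."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    """Linear RGB in [0, 1] -> sRGB (inverse of srgb_to_linear)."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)
```

8-bit pixel values should be divided by 255 before applying the conversion (and rescaled afterwards if needed).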
In linear RGB space it is perfectly fine to average pixel values in the same way you did in Lab space.
Answered By – Cris Luengo