Tuesday, August 4, 2009

Activity 11 - Color Camera Processing

Color cameras have different white balancing settings, allowing users to select which setting is appropriate for the conditions. A white-balanced image renders colors properly, where white is white, red is red, blue is blue, and so on [1].

In this activity, we implement two algorithms used for white balancing images, which are likely the kind of processing a camera performs automatically under its Auto White Balance setting.

White Patch Algorithm

First, a white patch is selected from the image, and the whole image is normalized with respect to this patch. The image might contain saturated pixels (pixels with R, G, or B values equal to 1), and these should not be part of the white patch.

To determine whether a certain portion of the image is saturated, the pixels with R, G, or B values equal to 1 are displayed.
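As a side note, here is a minimal sketch of how the saturated pixels could be flagged. I am assuming a Python/numpy setup and a hypothetical filename roses.jpg; this is only an illustration, not necessarily how it was done in the activity.

```python
import imageio.v3 as iio
import numpy as np

# Read the image and scale the values to [0, 1];
# saturated pixels then have a channel value of exactly 1.
img = iio.imread("roses.jpg").astype(float) / 255.0   # hypothetical filename

# A pixel is flagged as saturated if any of its R, G, or B values equals 1.
saturated = np.any(img >= 1.0, axis=-1)

print("fraction of saturated pixels:", saturated.mean())
```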


I then select a patch that does not belong to any of these areas. In all of the images, the white patch I used came from the white rose. This white patch has three channels: Rpatch, Gpatch, and Bpatch. Each channel of the original image (Rimage, Gimage, and Bimage) is then divided by the mean of the corresponding channel of the patch, which gives the new pixel values of each channel in the white-balanced image. To make things clear, the equations are shown below:

Rnew = Rimage./mean(Rpatch)
Gnew = Gimage./mean(Gpatch)
Bnew = Bimage./mean(Bpatch)

The image with the new RGB channel values is now our white balanced image.
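To make the procedure concrete, here is a minimal numpy sketch of the White Patch Algorithm as described above. The patch coordinates are placeholders for whatever region of the white rose was selected, and clipping the result to [0, 1] is my own choice for display.

```python
import numpy as np

def white_patch(img, row0, row1, col0, col1):
    """White Patch balancing: divide each channel of the image by the mean
    of that channel inside a user-selected white patch."""
    patch = img[row0:row1, col0:col1, :]
    # Per-channel means of the patch are the normalizing factors.
    factors = patch.reshape(-1, 3).mean(axis=0)
    balanced = img / factors
    # Division can push values above 1, so clip back to the displayable range.
    return np.clip(balanced, 0.0, 1.0)

# Example call with hypothetical patch coordinates taken from the white rose:
# balanced = white_patch(img, 100, 140, 200, 240)
```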

Gray World Algorithm

The gray world algorithm assumes that the color of the world, on average, is gray [1]. In this algorithm, the normalizing factor of each channel is the average value of that channel. The equations are shown below.

Rnew = Rimage./mean(Rimage)
Gnew = Gimage./mean(Gimage)
Bnew = Bimage./mean(Bimage)

Of course, the above equations assume that no part of the image is saturated.
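A corresponding sketch of the Gray World Algorithm, under the same numpy assumptions as the earlier snippet, simply uses the whole image to compute the normalizing factors:

```python
import numpy as np

def gray_world(img):
    """Gray World balancing: divide each channel by its own mean over the
    whole image, assuming the scene averages out to gray."""
    factors = img.reshape(-1, 3).mean(axis=0)
    return np.clip(img / factors, 0.0, 1.0)
```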

Shown below are the results of the White Patch Algorithm and the Gray World Algorithm (assuming the image has no saturated pixels). For the white patch, I again used a portion of the white rose in each image.



The rows represent different white balancing settings of the camera. For the first row, it seems that the correct setting was used for the capturing condition. Not much changed after white balancing, which means that the image was already nearly white balanced. Upon checking for the presence of saturated pixels (pixels with RGB values equal to 1), a large number of saturated pixels remain on the tablecloth area after the White Patch Algorithm (WPA) was applied, while only a small part remains saturated after the Gray World Algorithm (GWA).

For the second row, the tablecloth area appears to have a yellowish tint in the original image. After the WPA, the yellowish tint was replaced by a very light bluish tint. The GWA, on the other hand, seems to have rendered the white correctly.

The original image in the third row has a very obvious bluish tint. After WPA, the tablecloth area appears almost white, with a very faint yellowish tint, which I think is already acceptable. The result is almost the same for the GWA.

An even more obvious bluish tint is seen in the original image in the fourth row. After the WPA, the image is better since the tablecloth now appears white, whereas in the GWA result the tablecloth appears a little yellowish.

Note that the discussion above refers to an implementation of the Gray World Algorithm that assumes the image has no saturated pixels. However, this assumption does not hold: recall that upon checking, all of the images have saturated pixels. To correctly implement the GWA, the saturated pixels must be excluded from the computation; only pixels with values not equal to 1 are used when computing the normalizing factors (a sketch of this masked version follows). The results are shown after the sketch; only the last column differs from the previous figure.
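Under the same assumptions as the earlier snippets, the masked version could look like the following; the only change from the plain Gray World function is that saturated pixels are dropped before taking the channel means.

```python
import numpy as np

def gray_world_unsaturated(img):
    """Gray World balancing with saturated pixels (any channel equal to 1)
    excluded from the computation of the normalizing factors."""
    saturated = np.any(img >= 1.0, axis=-1)
    valid = img[~saturated]                # (N, 3) array of unsaturated pixels
    factors = valid.mean(axis=0)           # per-channel means over valid pixels
    return np.clip(img / factors, 0.0, 1.0)
```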



For the first row, the WPA produced a better result than the GWA because the latter appears a little bluish. For the second row, the GWA is better because the WPA produced a bluish image. For both the third and fourth rows, the GWA maintained the bluish tint present in the original image.

Another observation: among the three images (original, WPA, GWA), the color of the roses is most vivid in the original image.

In the first implementation of the GWA, I made a mistake, resulting in the inclusion of the saturated pixels in the computation. However, after implementing the correct method which excluded the saturated pixels, I noticed some differences.



It seems that including the saturated pixels actually produced better results than excluding them.

So far, the WPA seems better in white balancing.

Next, objects of the same hue are arranged against a white background. The camera white balance settings used were daylight (row 1) and tungsten (row 2), neither of which is appropriate for the capturing condition, which was fluorescent lighting. In the images below, the first column is the original image, the second column is the white-balanced image using the WPA, and the third column is the white-balanced image using the GWA.





The White Patch Algorithm produced better white balancing results than the Gray World Algorithm. In the images above, the GWA produced images with a pinkish/magenta-ish cast. For images containing objects of the same hue, in this case green, the green channel average is higher than those of the red and blue channels, so the divisor for the green channel is larger. After dividing each channel by its corresponding divisor, the white areas of the image end up with higher values in the red and blue channels, resulting in a magenta-ish cast.
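A toy calculation (the numbers here are made up purely for illustration) shows why this happens: if the green channel dominates the image average, a formerly white pixel comes out with relatively more red and blue after the Gray World division.

```python
import numpy as np

# Made-up channel means for a mostly green scene (R, G, B).
means = np.array([0.30, 0.60, 0.35])

# A white pixel before balancing:
white = np.array([1.0, 1.0, 1.0])

# Gray World divides by the channel means, so green is suppressed the most
# and the former white ends up stronger in R and B, i.e. magenta-ish.
print(white / means)   # -> [3.33  1.67  2.86]
```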

The WPA is better because only a small white patch is selected, and this patch carries the cast or tint information, which is what makes the correction work. For example, if the image has a bluish cast, as in the original image with the tungsten white balance setting (2nd row, 1st column), the white patch will have high values in the blue channel. The divisor for the blue channel of the whole image is therefore higher than for the red or green channels, so after processing, the cast on the image is removed. The white areas properly appear white, and the other colors are properly rendered.

I give myself 10 points for this activity. I implemented both algorithms on the images and was able to observe the differences between them. The White Patch Algorithm proved to be better at white balancing, based on the results presented above.

I would like to thank Ate Cherry Palomero for letting us use her camera for this activity. Credits also go to Orly Tarun and Miguel Sison for their helpful comments and suggestions.

[1] M. Soriano, "A11 - Color Camera Processing," 2009.
