What kind of data is RAW data?
This section explains its structure and how it works.
Before explaining the structure of RAW data, it helps to have a basic understanding of how image sensors convert light into data.
Image sensors can measure the intensity of light, but they cannot identify its color.
Color is produced by combining light of the three primary colors, Red, Green, and Blue, in different proportions. An image sensor, however, cannot detect this RGB balance; it records only brightness.
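The idea that color is a combination of the three primaries can be sketched in a few lines. This is a minimal illustration using standard 8-bit sRGB-style values, not anything specific to RAW data or SILKYPIX; the `mix` helper is a hypothetical name for additive light mixing.

```python
# Colors represented as (R, G, B) intensities, 0-255 per primary.
red   = (255, 0, 0)
green = (0, 255, 0)
blue  = (0, 0, 255)

def mix(a, b):
    """Additive light mixing: sum each primary, clipped to 8 bits."""
    return tuple(min(ca + cb, 255) for ca, cb in zip(a, b))

print(mix(red, green))       # red + green light appears yellow
print(mix(mix(red, green), blue))  # all three primaries appear white
```

Different balances of the three intensities produce every color the sensor's output can describe, which is why recovering that balance is the central problem.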
If image sensors can only record brightness, how is color created?
No matter how good the sensor is, it cannot reproduce accurate color without color information.
What happens, then, when only Red light (or only Green or Blue light) reaches a pixel?
The intensity received by that pixel can be treated as the intensity of Red light.
Likewise, when other pixels receive only Green or only Blue light, the intensity of each primary color can be measured separately, and color can be expressed by combining these measurements.
To achieve this, each pixel of an actual image sensor is covered by a cellophane-like color filter, so that each pixel senses only one primary color.
Most common image sensors use the arrangement shown in the picture above. This is known as the Bayer arrangement.
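The filtering described above can be sketched in code: each pixel of the mosaic keeps only the one intensity its filter passes. This is a hypothetical illustration, not SILKYPIX's actual processing; it assumes the common RGGB layout of the Bayer arrangement (rows alternating R G R G … and G B G B …), and the function name `bayer_mosaic` is invented for this sketch.

```python
def bayer_mosaic(rgb):
    """Sample a full-RGB image (H x W grid of (r, g, b) tuples) into a
    Bayer mosaic: a single-intensity value per pixel, as the sensor
    records it.  RGGB layout assumed:
        row 0: R G R G ...
        row 1: G B G B ...
    """
    h, w = len(rgb), len(rgb[0])
    mosaic = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r, g, b = rgb[y][x]
            if y % 2 == 0 and x % 2 == 0:
                mosaic[y][x] = r      # red filter site
            elif y % 2 == 1 and x % 2 == 1:
                mosaic[y][x] = b      # blue filter site
            else:
                mosaic[y][x] = g      # green filter sites (twice as many)
    return mosaic
```

Note that green sites occur twice as often as red or blue, matching the Bayer pattern's extra weight on green, the primary to which human vision is most sensitive.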
As the picture shows, each pixel records only one color. To reproduce accurate color, the Bayer-arranged RGB data must therefore be processed into an image that humans can recognize.
This is the "Development" process performed by SILKYPIX.
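A toy version of this development step can be sketched as simple interpolation: the two primaries missing at each pixel are estimated by averaging the nearest same-color neighbors. This is only a minimal sketch under the assumed RGGB layout; real RAW converters such as SILKYPIX use far more sophisticated demosaicing, and the function names here are invented for illustration.

```python
def channel_at(y, x):
    """Which primary the filter passes at this site (RGGB assumed):
    even row/even col = R (0), odd row/odd col = B (2), else G (1)."""
    if y % 2 == 0 and x % 2 == 0:
        return 0
    if y % 2 == 1 and x % 2 == 1:
        return 2
    return 1

def demosaic_bilinear(mosaic):
    """Recover an (r, g, b) tuple at every pixel by averaging the
    same-color samples inside each pixel's 3x3 neighborhood."""
    h, w = len(mosaic), len(mosaic[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sums, counts = [0, 0, 0], [0, 0, 0]
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        c = channel_at(ny, nx)
                        sums[c] += mosaic[ny][nx]
                        counts[c] += 1
            out[y][x] = tuple(s // n for s, n in zip(sums, counts))
    return out
```

For a uniform scene the averages reproduce the original intensities exactly; around edges and fine detail, simple averaging blurs and produces color artifacts, which is precisely where a dedicated development tool earns its keep.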