In other words say we make an image more blocky...how does this make a file smaller data wise as opposed to an image that is less blocky but has the same dimensions...
Through this thread I see several subjects mashed together, though in these last few posts the discussion has begun to focus on a question like this one.
The answers appear to be correct. You're missing details and not “getting” the answers because the subject requires at least 3 or 4 books, yet that knowledge is being compressed, chopped up, and expressed in a combined description which leaves you puzzled and, I sense, not quite sure what questions to ask or how to frame them.
To many of us, after years of study and experiment, all of it seems like one subject, but in reality a student must follow the path from one end to the other, not in a combined salad of everything.
Color theory, which has been part of the discussion (RGB in this case), relates to the construction of the human eye and its relationship to the light spectrum, and from there winds through a number of realizations that have nothing to do with image resolution.
Image resolution is actually a bit simpler, but you've asked questions about "data wise" relationships to image dimensions. The data storage required for an image depends on, among other things, the color representation used to describe it. These points have been included in the answers, but I then see you ask the questions again, because the subject is vast, you're at the beginning of this study, and these posts are short by format limitation.
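To put a rough number on the "data wise" side, here is a minimal sketch, assuming a simple uncompressed format with one byte per channel and ignoring file headers and compression entirely:

```python
# Rough storage estimate for an uncompressed image: width x height x bytes per pixel.
# Illustrative only; real file formats add headers and usually apply compression.

def raw_image_bytes(width, height, channels=3, bits_per_channel=8):
    """Bytes needed to store every pixel explicitly, with no compression."""
    bytes_per_pixel = channels * bits_per_channel // 8
    return width * height * bytes_per_pixel

print(raw_image_bytes(1920, 1080))               # 6,220,800 bytes (~6 MB) for 24-bit RGB
print(raw_image_bytes(1920, 1080, channels=1))   # 2,073,600 bytes (~2 MB) for 8-bit grayscale
```

Dimensions alone don't fix the size: the color representation (bits per pixel), and whether compression is applied, matter just as much.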
Some of us approach the subject after having been through algebra, trigonometry, linear algebra, quaternion algebra (a kind of "lost" algebra before the 1970s), and likely some calculus. The language of graphics is the language of math and science.
The subject also involves some of the physics of light, especially for color theory.
Whenever a computer touches reality, the means by which science represents reality through math is involved.
I sense it becomes important for those of us answering these questions to insist on separating the subject components.
I'll treat color theory here, as that has been lacking so far, and treat images separately in another post.
Color theory applies to each pixel of an image. This is similar, though not identical in all respects, whether we discuss a physical pixel on a display monitor, or a more theoretical pixel in an image not yet on display. It begins, however, outside the computer, with the human eye. We are, after all, modeling human image perception in an attempt to appropriately stimulate the nerve sensors in the retina.
The color of light is due to the frequency of its vibration. The color spectrum of visible light, which you can find through Google, is in the order of the rainbow, and not by coincidence: the same ordering appears when light passes through a prism or through an atmosphere full of cloudy mist. Red light is the lowest frequency in the visible spectrum, corresponding to musical notes from a bass guitar. As the frequency increases slightly, the color shifts toward yellow, then green, then blue. Violet, at the highest end we can see, is like the sound of cymbals. These colors can be represented by single frequencies (usually described by wavelength), corresponding to a single musical note from one instrument, perhaps one key on a piano. However, human eyes are only able to sense red, green and blue colors. We have no receptors for yellow or violet. We are blind to frequencies above and below these limits just as we are deaf to extremely low and high sound frequencies.
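Since spectra are usually labeled by wavelength while the description above speaks of frequency, here is a minimal sketch of the conversion; the wavelengths below are approximate, commonly cited values:

```python
# Converting a wavelength (how spectra are usually labeled) to a frequency.
# Wavelengths are approximate; the visible band is roughly 400-700 nm.

C = 299_792_458  # speed of light in m/s

def frequency_hz(wavelength_nm):
    return C / (wavelength_nm * 1e-9)

print(f"red    (~700 nm): {frequency_hz(700):.2e} Hz")  # lowest visible frequency, ~4.3e14 Hz
print(f"green  (~530 nm): {frequency_hz(530):.2e} Hz")
print(f"violet (~400 nm): {frequency_hz(400):.2e} Hz")  # highest visible frequency, ~7.5e14 Hz
```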
Curiously, there is no pure color in the spectrum for white. White isn't a color of light. It is a human perception of multiple colors at once, much like a chord in music, where multiple notes are combined.
As important as white is in human perception, the fact that it doesn't exist in the light spectrum matters for how we model color. We see white when all 3 sensors at a retinal pixel location (yes, the eye has these pixels) are equally excited at once. It is when one or two of these sensors receives less stimulation than the others that light begins to take on color in our perception.
This is the reason color is modeled as RGB pixels in a computer. The monitor must stimulate these three receptors simultaneously to full intensity to produce the perception of white in an image, while every other color involves only one (red, green or blue) sensor in the eye, or a carefully balanced blend of two or three of these sensors at any one pixel location. The human retina can detect changes of light intensity down to about 1% steps (some of us can sense changes as fine as 0.5%).
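As a concrete sketch, assuming the common 8-bit-per-channel convention where each of R, G and B ranges from 0 to 255:

```python
# RGB triplets under the common 8-bit-per-channel convention (0..255 per channel).
# White and the grays are the "equal stimulation" cases; color appears as soon
# as the three channels become unequal.

white    = (255, 255, 255)   # all three receptors driven equally at full intensity
mid_gray = (128, 128, 128)   # equal but reduced stimulation -- still colorless
pure_red = (255, 0, 0)       # only the red receptor stimulated

# 256 levels per channel means steps of about 0.4% of full intensity,
# finer than the roughly 1% change described above.
print(f"smallest 8-bit step: {1 / 255:.2%}")   # ~0.39%
```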
Yellow is an interesting color to explore in this reality. First, there is a pure yellow color in the light spectrum, represented by a single frequency (or a narrow range of frequencies). How do we perceive yellow, then, when we have no receptor for yellow in our eyes? How do we see yellow in an image from a device which, itself, does not emit yellow light?
First, consider how the light spectrum gradually shifts through the colors of the rainbow as the frequency of the light increases. Every position between two colors has a frequency, and there is a potentially infinite number of frequencies between any two colors.
Each sensor in the eye is sensitive to a particular frequency, but it is not a sharp cutoff. The red sensor, in reality, is most sensitive to red light, but still responds with reduced sensitivity as the frequency increases toward yellow. However, by the point at which the frequency approaches green, the red sensor's sensitivity is reduced to nearly zero. Similarly, a pure red light is not sensed by a green retinal sensor, but as the frequency increases toward green, the green sensor does begin to respond at reduced sensitivity, reaching its most sensitive response at the green light frequency.
So, when we perceive yellow it is because both the red and green sensors respond partially to the yellow light.
What is curious is that if we look at a light source that is pure yellow, and we compare that to a light source which contains no yellow at all, only two frequencies at reduced intensity, one red and one green, we perceive the exact same color. We are blind to the fact that these are two completely different light spectra.
This is why a device which can’t emit yellow can produce the sense of yellow color, because it mathematically emulates the function of the retinal sensors in the human eye.
Violet (or purple hues in darker shades) is another curious version of this. It so happens that violet light is beyond our blue sensor's peak range. The blue sensors respond with reduced sensitivity to violet, not nearly as strongly as to blue light (which is a lower frequency than violet). Why, then, do we perceive violet as its own color? It so happens that violet light sits at roughly double the frequency of red light, so our red sensors can pick up a faint hint of this "harmonic" of violet light. If they didn't, our perception of violet would merely be that of a less intense blue. Our RGB modeling of light color therefore adds a touch of red to a blue light to emulate this dual sensor stimulation, even though the monitor device itself can't really emit light at those higher frequencies.
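In RGB terms this mixing looks like the sketch below; the specific violet triplet is just one illustrative choice:

```python
# Colors the monitor never emits as single frequencies, emulated by mixing the
# three primaries it can emit (8-bit values, 0..255).

yellow = (255, 255, 0)   # drive red and green receptors together, no blue at all
violet = (148, 0, 211)   # mostly blue plus a touch of red, mimicking the faint
                         # red-receptor response to violet light (this particular
                         # triplet is only an illustrative choice)
```

A display showing either triplet emits no light at the actual yellow or violet frequencies; it only reproduces the pattern of receptor stimulation those frequencies would cause in the eye.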
I cover this because without this acknowledgment, the theory behind RGB color values seems rather arbitrary. One should recognize that what we are really doing is mathematically describing the measurement of a physical reaction in the human eye, at each pixel, to real imagery from the real world.