I've noticed that adding contrast and darkening warm hues tend to pull colors towards red. I love dark red hues, but I've been wondering what causes this. I had my doubts whether it was a visual illusion, so I put it to a test in Photoshop.
Lo and behold: according to Photoshop's color science, hues do pull towards red when darkened.
This test is by no means scientific, and image compression as well as web compression will certainly make the results even less so. But visually the results are pretty close.
What are your theories? Limited color space with automatic gamut remapping?
Edited by Will J. Løkken, 02 January 2015 - 12:17 PM.
This is why, if I work in PS or After Effects in RGB mode, I use transfer/blending modes, so that I can change the contrast without affecting the colors, much like doing a luma adjustment, or using HSL instead of RGB.
Does it really turn reddish if the shade starts off greenish or bluish? I don't have PS installed on this computer right now, so I can't really check. Anyway, I no longer use curves/levels/contrast if I need to keep the hues correct.
Edited by Karl Eklund, 02 January 2015 - 03:56 PM.
You've got a few considerations to define if you want to discuss this succinctly, let alone develop the code that achieves the manipulations.
I mentioned this on another thread: software is very opaque about what it is doing under the hood, to the point of internal inconsistency in the user interface, leading to all sorts of ingrained misconceptions among lay people about how images are composed as data (e.g. Photoshop's use of 'ppi' (pixels per inch)).
All I can suggest is that people think more, which it appears the OP is doing.
Anyway, 99%+ of the time it's not an issue: if it looks good, it is good. I guess it might be relevant when developing your own pipelines, or when translating to different representations and applications...
I've noticed that adding contrast and darkening warm hues tend to pull colors towards red. I love dark red hues, but I've been wondering what causes this.
One simple explanation is ill-designed darkening filters: filters that do not take into account how the video is to be decoded and displayed. For example, if a video is encoded full range R'G'B', those values generally go through a gamma exponentiation or some other transformation to become the RGB that drive the display's primaries. If straight gamma is how, then the darkening filter should simply multiply each of the R'G'B' by some number. If simple log is how, then the darkening filter should simply subtract some number from each of the R'G'B'. If it's an unsimple transform, then the darkening filter must also be unsimple.
For example, in [0,1] scaling, suppose R'=0.65 and G'=B'=0.40, and suppose the intent is gamma=2, so a ruddy brown color will be displayed made of: 0.42 units of the red primary and 0.16 units of the other two primaries. But what if the darkening filter is designed for log coding, and subtracts 0.15 from each, making R'=0.50 and G'=B'=0.25? That will be displayed made of: 0.25 units of the red primary and just 0.06 units of the other two. The result is dramatically redder than before the darkening.
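The arithmetic above can be checked with a short sketch (the values and function names are purely illustrative, not any real software's code):

```python
# Sketch of the mismatch described above: a full-range R'G'B' pixel,
# displayed through gamma 2, darkened by a log-style subtraction.
# All names and values here are illustrative assumptions.

GAMMA = 2.0

def displayed(rgb_encoded):
    """Encoded [0,1] R'G'B' -> linear light driving the primaries."""
    return tuple(round(c ** GAMMA, 2) for c in rgb_encoded)

ruddy_brown = (0.65, 0.40, 0.40)
print(displayed(ruddy_brown))      # (0.42, 0.16, 0.16)

# A darkening filter built for log coding subtracts a constant
# instead of multiplying -- wrong for this gamma-coded signal:
darkened = tuple(c - 0.15 for c in ruddy_brown)
print(displayed(darkened))         # (0.25, 0.06, 0.06)

# The red/green ratio of displayed light jumps from about 2.6x to 4x:
# dramatically redder after darkening, exactly as described.
```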
One can also imagine a stupid darkening filter for Y'CbCr video that just changes the Y' without changing the Cb and Cr. Those who believe that the chroma channels carry the chromatic information independent of the luma channel are asking for that one!
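Because the Y'CbCr-to-R'G'B' decode is linear in Y', that "stupid" filter amounts to subtracting the same constant from each of R', G', and B', with the same reddening once the display gamma is applied. A minimal sketch, assuming BT.709 coefficients and illustrative values (none of this is from the original posts):

```python
# Darken only Y' in BT.709 Y'CbCr, leave Cb/Cr alone, decode back.
# Kr/Kb are the standard BT.709 luma coefficients.

Kr, Kb = 0.2126, 0.0722
Kg = 1 - Kr - Kb

def encode(r, g, b):
    y = Kr * r + Kg * g + Kb * b
    cb = (b - y) / (2 * (1 - Kb))
    cr = (r - y) / (2 * (1 - Kr))
    return y, cb, cr

def decode(y, cb, cr):
    r = y + 2 * (1 - Kr) * cr
    b = y + 2 * (1 - Kb) * cb
    g = (y - Kr * r - Kb * b) / Kg
    return r, g, b

y, cb, cr = encode(0.65, 0.40, 0.40)

# The "stupid" filter: lower Y' by 0.15, don't touch Cb/Cr.
r, g, b = decode(y - 0.15, cb, cr)

# Result: R'=0.50, G'=B'=0.25 -- every encoded channel dropped by
# exactly 0.15, the uniform subtraction from the paragraph above.
print(round(r, 2), round(g, 2), round(b, 2))
```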
My playback and display are set up for legal range BT.709 with simple gamma 2.35. I just now made an appropriate 50% darkening filter as a 1D LUT. It doesn't redden reddish things at all.
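Such a LUT might be sketched like this, assuming 8-bit legal range (16–235) and a pure-power gamma of 2.35 (an assumed setup; the post doesn't show the actual LUT):

```python
# Rough sketch of a gamma-aware 50% darkening 1D LUT for 8-bit
# legal-range video with a pure-power display gamma of 2.35.
# Range and gamma are assumptions, not the poster's actual settings.

GAMMA = 2.35
BLACK, WHITE = 16, 235

# Halving displayed light means multiplying LINEAR values by 0.5,
# which in gamma-encoded code values is a multiply by 0.5**(1/GAMMA).
scale = 0.5 ** (1 / GAMMA)          # ~0.745

lut = []
for code in range(256):
    v = (code - BLACK) / (WHITE - BLACK)   # normalize to [0,1]
    v = max(0.0, min(1.0, v))              # clip out-of-range codes
    v *= scale                             # gamma-correct darkening
    lut.append(round(BLACK + v * (WHITE - BLACK)))

# Every channel is scaled by the same factor, so displayed channel
# ratios (ratio**GAMMA) are unchanged: nothing gets redder.
print(lut[BLACK], lut[WHITE])
```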
Remember how underexposing color transparency film made flesh colors very reddish? It happened so often that you might come to believe it as a fact about color.
It seems logical that a warm color darkened would look redder. This is one reason caucasian faces can look red in underexposed light. ...
No, it was merely because of the RGB emulsions' toes. Normally exposed Caucasian flesh in transparencies makes RGB exposures on toes, R less so than B and G. With underexposure all three are climbing up off the toe, making the gap between R density and the others increase. Red faces. This is an example of the darkening function not according with the recording function (which for transparency film is intimate with the display function).
I fully agree with Chris Millar that "software is very opaque about what it is doing under the hood". Film was not that way. Software and digital cameras will not be that way if cinematographers demand otherwise.
My previous post was about "darkening". I see now in Will Løkken's figure that what he was calling "darkening" was actually gamma increase. Increasing gamma does darken an image, specifically the middle tones as Will notes in the figure, but it does so much more than this that the concepts are incomparable.
Very simply, increasing gamma increases color saturation by exaggerating any inequalities between the R, G, and B channels. The reason is in the numbers. Suppose the R value going to the monitor used to be 1.5× the G and B values. Then if you add a gamma 2 filter the 1.5× factor gets squared. The R value going to the monitor is now 2.25× the G and B values, so saturation has increased.
Darkening applies to the image itself. Gamma does not. Gamma applies to particular channels by which the image is conveyed. If you convey the same image by equivalent channels, like YUV instead of RGB, and apply the gamma to the YUV instead of to the RGB, the gamma effect on the image is different.
"Darken" and "gamma" have pretty well accepted meanings while "contrast" does not. To munge all three concepts together is, well, very messy.