I know this is not exactly a contemporary question that many modern filmmakers would be able to answer, but I'm posting it in the hope of reaching some more experienced cinematographers who have been active in the industry over the past decades, or perhaps someone with a historical interest in old motion picture processes. Anyway, here goes:
Can someone explain why so many shots composited in an optical printer (transitions, blue-screen composites, etc.), up until around the 1980s, show a pronounced Mackie line effect?
I guess you could say I know the basic theory behind what causes this effect: the diffusion of development inhibitors around heavily developed areas of the image. But I'm not sure that answers my question. Why optical effects shots specifically? Is it because of the increased number of generations of copying?
What confuses me more is that some older movies show this effect throughout the entire film, not just in the opticals. There is also sometimes a directionality to the effect. If I remember correctly, that has something to do with the direction the film travels through the processing machine.
To make matters more complicated: I'm not 100% sure about this, but I think I remember seeing the effect quite pronounced even into the '90s at the ends of projection reels, just before or after a reel change. It's as if the reel is not processed evenly along its length. Correct me if I'm imagining this part.
Any knowledge you can share about this would be most welcome.