Removing saturation in camera is one of those things I never really questioned or understood until recently. What's the reason behind doing it? Wouldn't adding saturation in post rather than in camera introduce more banding and similar artifacts? I understand it for raw and 10-bit 4:2:2 cameras, where you have a lot of leeway and are using LUTs, but on an 8-bit 4:2:0 AVCHD image I'd assume it really isn't that good an idea.
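Here's a rough toy sketch of what I mean by the banding concern (just an assumption on my part that a post saturation boost behaves like scaling Cb/Cr away from neutral 128 and re-quantizing to 8 bits):

```python
import numpy as np

# Toy illustration of the concern: a smooth 8-bit chroma ramp, boosted in post
# by scaling Cb/Cr away from neutral (128), keeps the same number of code
# values but spreads them over a wider range -> neighboring steps get bigger.
cb = np.arange(100, 160, dtype=np.uint8)           # smooth, low-saturation ramp
boost = 2.0                                         # hypothetical 2x saturation boost
boosted = np.clip((cb.astype(np.float32) - 128) * boost + 128, 0, 255)
boosted = np.round(boosted).astype(np.uint8)        # re-quantize to 8 bits

print("input levels: ", len(np.unique(cb)))         # 60 distinct values, spaced 1 apart
print("output levels:", len(np.unique(boosted)))    # still 60 values, now spaced 2 apart
print("step sizes:   ", np.diff(np.unique(boosted.astype(int)))[:5])
```

If the camera applied that same boost before quantizing, the in-between code values would get used, which is why I'd expect the in-camera route to band less on 8-bit footage. Happy to be corrected if that logic is wrong.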
Would love to hear the reasoning behind doing this.